You are currently browsing the category archive for the ‘Papers’ category.

2016 – As of now, a total of 1,567 citations over 74 works (including 3 books) on *Google Scholar* (https://scholar.google.com/citations?user=gSyQ-g8AAAAJ&hl=en) [with a *Hirsch* h-index of 19, and an average of 160.2 citations for each work in my top five], plus 900 citations over 57 works on the new *ResearchGate* site (https://www.researchgate.net/profile/Vitorino_Ramos).

Refs.: *Science*, *Artificial Intelligence*, *Swarm Intelligence*, *Data-Mining*, *Big-Data*, *Evolutionary Computation*, *Complex Systems*, *Image Analysis*, *Pattern Recognition*, *Data Analysis*.

Figure – Neural circuit controller of the virtual ant (page 3, fig. 2). [URL: http://arxiv.org/abs/1507.08467 ]

Intelligence and decision-making in foraging ants: individual or collective? Internal or external? What is the right balance between the two? Can one have internal intelligence without external intelligence? Can one take examples from nature to build, in silico, artificial lives that present us with interesting patterns? We explore a model of foraging ants in this paper, to be presented in early September in Exeter, UK, at UKCI 2015 (available on arXiv [PDF] and ResearchGate).

Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos; “**A Model for Foraging Ants, Controlled by Spiking Neural Networks and Double Pheromones**”, UKCI 2015 Computational Intelligence – University of Exeter, UK, September 2015.

**Abstract**: A model of an Ant System where ants are controlled by a spiking neural circuit and a second-order pheromone mechanism in a foraging task is presented. A neural circuit is trained for individual ants and subsequently the ants are exposed to a virtual environment where a swarm of ants performs a resource foraging task. The model comprises an associative and unsupervised learning strategy for the neural circuit of the ant. The neural circuit adapts to the environment by means of classical conditioning. The initially unknown environment includes different types of stimuli representing food (rewarding) and obstacles (harmful) which, when they come in direct contact with the ant, elicit a reflex response in the motor neural system of the ant: moving towards or away from the source of the stimulus. The spiking neural circuit of the ant is trained to identify food and obstacles and to move towards the former and avoid the latter. The ants are released on a landscape with multiple food sources where one ant alone would have difficulty harvesting the landscape to maximum efficiency. In this case the introduction of a double pheromone mechanism (positive and negative reinforcement feedback) yields better results than traditional ant colony optimization strategies. Traditional ant systems include mainly a positive reinforcement pheromone. This approach uses a second pheromone that acts as a marker for forbidden paths (negative feedback). This blockade is not permanent and is controlled by the evaporation rate of the pheromones. The combined action of both pheromones acts as a collective stigmergic memory of the swarm, which reduces the search space of the problem. This paper explores how the adaptation and learning abilities observed in biologically inspired cognitive architectures are synergistically enhanced by swarm optimization strategies.
The model portrays two forms of artificial intelligent behaviour: at the individual level the spiking neural network is the main controller, and at the collective level the pheromone distribution is a map towards the solution that emerges from the colony. The presented model is an important pedagogical tool, as it is also an easy-to-use library that gives access to the spiking neural network paradigm from inside NetLogo, a language used mostly in agent-based modelling and experimentation with complex systems.
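For readers who want to play with the core idea, here is a minimal sketch (in Python, not the paper's NetLogo library) of the double-pheromone mechanism described above: a positive map reinforces rewarding cells, a negative map marks forbidden ones, and evaporation makes the blockade temporary. All names, cells and rates here are illustrative assumptions, not values from the paper.

```python
# Sketch (assumed values) of a double-pheromone grid with evaporation.
EVAPORATION = 0.05  # assumed evaporation rate; the paper tunes this


def evaporate(grid):
    """Decay every cell's pheromone level by the evaporation rate."""
    return {cell: level * (1.0 - EVAPORATION) for cell, level in grid.items()}


def deposit(grid, cell, amount):
    """Add pheromone (in a positive or negative map) at a visited cell."""
    grid = dict(grid)
    grid[cell] = grid.get(cell, 0.0) + amount
    return grid


def desirability(pos_grid, neg_grid, cell):
    """Ants prefer positively marked cells and avoid negatively marked ones."""
    return pos_grid.get(cell, 0.0) - neg_grid.get(cell, 0.0)


# One step: an ant that found food reinforces its cell, while an ant on an
# unrewarding path drops the 'no entry' marker instead; both then evaporate.
pos, neg = {}, {}
pos = deposit(pos, (2, 3), 1.0)   # food trail
neg = deposit(neg, (5, 1), 1.0)   # forbidden path
pos, neg = evaporate(pos), evaporate(neg)
```

Because the negative mark also evaporates, the "forbidden" label is temporary, exactly the property that keeps the blockade from becoming permanent.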

**References**:

[**1**] C. G. Langton, “Studying artificial life with cellular automata,” Physica D: Nonlinear Phenomena, vol. 22, no. 1–3, pp. 120–149, 1986, proceedings of the Fifth Annual International Conference. [Online]. Available: http://www.sciencedirect.com/science/article/pii/016727898690237X

[**2**] A. Abraham and V. Ramos, “Web usage mining using artificial ant colony clustering and linear genetic programming,” in Proceedings of the Congress on Evolutionary Computation. Australia: IEEE Press, 2003, pp. 1384–1391.

[**3**] V. Ramos, F. Muge, and P. Pina, “Self-organized data and image retrieval as a consequence of inter-dynamic synergistic relationships in artificial ant colonies,” Hybrid Intelligent Systems, vol. 87, 2002.

[**4**] V. Ramos and J. J. Merelo, “Self-organized stigmergic document maps: Environment as a mechanism for context learning,” in Proceedings of the AEB, Mérida, Spain, February 2002.

[**5**] D. Sousa-Rodrigues and V. Ramos, “Traversing news with ant colony optimisation and negative pheromones,” in European Conference in Complex Systems, Lucca, Italy, Sep 2014.

[**6**] E. Bonabeau, G. Theraulaz, and M. Dorigo, Swarm Intelligence: From Natural to Artificial Systems, 1st ed., ser. Santa Fe Institute Studies In The Sciences of Complexity. 198 Madison Avenue, New York: Oxford University Press, USA, Sep. 1999.

[**7**] M. Dorigo and L. M. Gambardella, “Ant colony system: A cooperative learning approach to the traveling salesman problem,” Université Libre de Bruxelles, Tech. Rep. TR/IRIDIA/1996-5, 1996.

[**8**] M. Dorigo, G. Di Caro, and L. M. Gambardella, “Ant algorithms for discrete optimization,” Artif. Life, vol. 5, no. 2, pp. 137–172, Apr. 1999. [Online]. Available: http://dx.doi.org/10.1162/106454699568728

[**9**] L. M. Gambardella and M. Dorigo, “Ant-q: A reinforcement learning approach to the travelling salesman problem,” in Proceedings of the ML-95, Twelfth Intern. Conf. on Machine Learning, M. Kaufman, Ed., 1995, pp. 252–260.

[**10**] A. Gupta, V. Nagarajan, and R. Ravi, “Approximation algorithms for optimal decision trees and adaptive tsp problems,” in Proceedings of the 37th international colloquium conference on Automata, languages and programming, ser. ICALP’10. Berlin, Heidelberg: Springer-Verlag, 2010, pp. 690–701. [Online]. Available: http://dl.acm.org/citation.cfm?id=1880918.1880993

[**11**] V. Ramos, D. Sousa-Rodrigues, and J. Louçã, “Second order swarm intelligence,” in HAIS’13, 8th International Conference on Hybrid Artificial Intelligence Systems, ser. Lecture Notes in Computer Science, J.-S. Pan, M. Polycarpou, M. Woźniak, A. Carvalho, H. Quintián, and E. Corchado, Eds. Salamanca, Spain: Springer Berlin Heidelberg, Sep 2013, vol. 8073, pp. 411–420.

[**12**] W. Maass and C. M. Bishop, Pulsed Neural Networks. Cambridge, Massachusetts: MIT Press, 1998.

[**13**] E. M. Izhikevich, “Simple model of spiking neurons,” IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569–1572, 2003. [Online]. Available: http://www.ncbi.nlm.nih.gov/pubmed/18244602

[**14**] C. Liu and J. Shapiro, “Implementing classical conditioning with spiking neurons,” in Artificial Neural Networks – ICANN 2007, ser. Lecture Notes in Computer Science, J. M. de Sá, L. Alexandre, W. Duch, and D. Mandic, Eds. Springer Berlin Heidelberg, 2007, vol. 4668, pp. 400–410. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-74690-4_41

[**15**] J. Haenicke, E. Pamir, and M. P. Nawrot, “A spiking neuronal network model of fast associative learning in the honeybee,” Frontiers in Computational Neuroscience, no. 149, 2012. [Online]. Available: http://www.frontiersin.org/computational_neuroscience/10.3389/conf.fncom.2012.55.00149/full

[**16**] L. I. Helgadottir, J. Haenicke, T. Landgraf, R. Rojas, and M. P. Nawrot, “Conditioned behavior in a robot controlled by a spiking neural network,” in International IEEE/EMBS Conference on Neural Engineering, NER, 2013, pp. 891–894.

[**17**] A. Cyr and M. Boukadoum, “Classical conditioning in different temporal constraints: an STDP learning rule for robots controlled by spiking neural networks,” pp. 257–272, 2012.

[**18**] X. Wang, Z. G. Hou, F. Lv, M. Tan, and Y. Wang, “Mobile robots’ modular navigation controller using spiking neural networks,” Neurocomputing, vol. 134, pp. 230–238, 2014.

[**19**] C. Hausler, M. P. Nawrot, and M. Schmuker, “A spiking neuron classifier network with a deep architecture inspired by the olfactory system of the honeybee,” in 2011 5th International IEEE/EMBS Conference on Neural Engineering, NER 2011, 2011, pp. 198–202.

[**20**] U. Wilensky, “Netlogo,” Evanston IL, USA, 1999. [Online]. Available: http://ccl.northwestern.edu/netlogo/

[**21**] C. Jimenez-Romero and J. Johnson, “Accepted abstract: Simulation of agents and robots controlled by spiking neural networks using netlogo,” in International Conference on Brain Engineering and Neuro-computing, Mykonos, Greece, Oct 2015.

[**22**] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge: Cambridge University Press, 2002.

[**23**] W. Gerstner, R. Kempter, J. L. van Hemmen, and H. Wagner, “A neuronal learning rule for sub-millisecond temporal coding,” Nature, vol. 386, pp. 76–78, 1996.

[**24**] I. P. Pavlov, “Conditioned reflexes: An investigation of the activity of the cerebral cortex,” New York, 1927.

[**25**] E. J. H. Robinson, D. E. Jackson, M. Holcombe, and F. L. W. Ratnieks, “Insect communication: ‘no entry’ signal in ant foraging,” Nature, vol. 438, no. 7067, p. 442, Nov. 2005. [Online]. Available: http://dx.doi.org/10.1038/438442a

[**26**] E. J. Robinson, D. Jackson, M. Holcombe, and F. L. Ratnieks, “No entry signal in ant foraging (hymenoptera: Formicidae): new insights from an agent-based model,” Myrmecological News, vol. 10, no. 120, 2007.

[**27**] D. Sousa-Rodrigues, J. Louçã, and V. Ramos, “From standard to second-order swarm intelligence phase-space maps,” in 8th European Conference on Complex Systems, S. Thurner, Ed., Vienna, Austria, Sep 2011.

[**28**] V. Ramos, D. Sousa-Rodrigues, and J. Louçã, “Spatio-temporal dynamics on co-evolved stigmergy,” in 8th European Conference on Complex Systems, S. Thurner, Ed., Vienna, Austria, Sep 2011.

[**29**] S. Tisue and U. Wilensky, “Netlogo: A simple environment for modeling complexity,” in International conference on complex systems. Boston, MA, 2004, pp. 16–21.

Figure – Two *simplices* *a* and *b* connected by the 2-dimensional face, the triangle **{1;2;3}**.

In the analysis of the time-line of *The Guardian* newspaper (link), the system used feature vectors based on frequency of words and then computed similarity between documents based on those feature vectors. This is a purely statistical approach that requires great computational power and that is difficult for problems with large feature vectors and many documents. Feature vectors with 100,000 or more items are common, and computing similarities between such documents becomes cumbersome. Instead of computing distance (or similarity) matrices between documents from feature vectors, the present approach explores the possibility of inferring the distance between documents from the *Q-analysis* description. Q-analysis provides a very natural notion of connectivity between the *simplices* of the structure; in the relation studied, documents are connected to each other through shared sets of tags entered by the journalists. Also in this framework, eccentricity is defined as a measure of the relatedness of one simplex in relation to another [7].
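To make the notion concrete, here is a hedged sketch of the document-tag relation as simplices: each document is a simplex whose vertices are its keywords, two documents are q-connected when they share a face of dimension q (i.e. q+1 common tags), and Atkin's classical eccentricity formula, ecc = (q̂ − q̌)/(q̌ + 1), measures how much a simplex "sticks out" of the complex (q̂ being its own dimension, q̌ the largest q at which it meets another simplex). The tag sets below are invented examples, not data from the study.

```python
# Sketch of Q-analysis connectivity over a document-tag bipartite relation.

def shared_face_dim(doc_a, doc_b):
    """Dimension of the shared face: number of common tags minus one."""
    return len(doc_a & doc_b) - 1


def eccentricity(doc, others):
    """Atkin eccentricity of one document-simplex within the complex."""
    q_hat = len(doc) - 1                                      # own dimension
    q_check = max(shared_face_dim(doc, other) for other in others)
    return (q_hat - q_check) / (q_check + 1)


# Invented example documents, each a set of journalist-entered tags.
news_a = {"politics", "economy", "europe", "elections"}
news_b = {"politics", "economy", "europe"}
news_c = {"sports", "football"}
```

Here `news_a` and `news_b` share a triangle of three tags (a 2-dimensional face), while `news_a` and `news_c` are disconnected, which is exactly the kind of structural similarity the feature-vector approach pays heavily to compute.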

David M.S. Rodrigues and Vitorino Ramos, “*Traversing News with Ant Colony Optimisation and Negative Pheromones*” [PDF], accepted as preprint for oral presentation at the *European Conference on Complex Systems* – **ECCS’14**, Lucca, Italy, Sept. 22-26, 2014.

**Abstract**: The past decade has seen the rapid development of the online newsroom. News published online are the main outlet of news, surpassing traditional printed newspapers. This poses challenges to the production and to the consumption of those news. With so many sources of information available, it is important to find ways to cluster and organise the documents if one wants to understand this new system. Traditional approaches to the problem of clustering documents usually embed the documents in a suitable similarity space. Previous studies have reported on the impact of the similarity measures used for clustering of *textual corpora* [1]. These similarity measures are usually calculated for bag-of-words representations of the documents. This makes the final document-word matrix high dimensional. Feature vectors with more than 10,000 dimensions are common, and algorithms have severe problems with the high dimensionality of the data. A novel bio-inspired approach to the problem of traversing the news is presented. It finds *Hamiltonian cycles* over documents published by the newspaper *The Guardian*. A *Second Order Swarm Intelligence algorithm* based on *Ant Colony Optimisation* was developed [2, 3] that uses a *negative pheromone* to mark unrewarding paths with a “no-entry” signal. This approach follows recent findings of *negative pheromone* usage in real ants [4].

In this case study the corpus of data is represented as a bipartite relation between documents and keywords entered by the journalists to characterise the news. A new similarity measure between documents is presented, based on the **Q-analysis** description [5, 6, 7] of the *simplicial* complex formed between documents and keywords. The eccentricity between documents (two *simplices*) is then used as a novel measure of similarity between documents. The results show that the *Second Order Swarm Intelligence algorithm* performs better in benchmark instances of the travelling salesman problem, with faster convergence and optimal results. The addition of the *negative pheromone* as a *no-entry signal* improves the quality of the results. The application of the algorithm to the corpus of news of *The Guardian* creates a coherent navigation system among the news. This allows users to navigate the news published during a certain period of time in a semantic sequence instead of a time sequence. This work has broader application, as it can be applied to many cases where the data is mapped to bipartite relations (e.g. protein expressions in cells, sentiment analysis, brand awareness in social media, routing problems), as it highlights the connectivity of the underlying complex system.

**Keywords**: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Hamiltonian cycles, Text Mining, Textual Corpora, Information Retrieval, Knowledge Discovery, Sentiment Analysis, Q-Analysis, Data Mining, Journalism, *The Guardian*.

**References**:

[**1**] Alexander Strehl, Joydeep Ghosh, and Raymond Mooney. **Impact of similarity measures on web-page clustering**. In *Workshop on Artificial Intelligence for Web Search* (AAAI 2000), pages 58-64, 2000.

[**2**] David M. S. Rodrigues, Jorge Louçã, and Vitorino Ramos. **From standard to second-order Swarm Intelligence phase-space maps**. In Stefan Thurner, editor, *8th European Conference on Complex Systems*, Vienna, Austria, Sep 2011.

[**3**] Vitorino Ramos, David M. S. Rodrigues, and Jorge Louçã. **Second order Swarm Intelligence**. In Jeng-Shyang Pan, Marios M. Polycarpou, Michał Woźniak, André C.P.L.F. Carvalho, Hector Quintián, and Emilio Corchado, editors, *HAIS’13, 8th International Conference on Hybrid Artificial Intelligence Systems*, volume 8073 of Lecture Notes in Computer Science, pages 411-420. Springer Berlin Heidelberg, Salamanca, Spain, Sep 2013.

[**4**] Elva J.H. Robinson, Duncan Jackson, Mike Holcombe, and Francis L.W. Ratnieks. **No entry signal in ant foraging (Hymenoptera: Formicidae): new insights from an agent-based model**. *Myrmecological News*, 10(120), 2007.

[**5**] Ronald Harry Atkin. **Mathematical Structure in Human Affairs**. Heinemann Educational Publishers, 48 Charles Street, London, 1st edition, 1974.

[**6**] J. H. Johnson. **A survey of Q-analysis, part 1: The past and present**. In *Proceedings of the Seminar on Q-analysis and the Social Sciences*, University of Leeds, Sep 1983.

[**7**] David M. S. Rodrigues. **Identifying news clusters using Q-analysis and modularity**. In Albert Diaz-Guilera, Alex Arenas, and Alvaro Corral, editors, *Proceedings of the European Conference on Complex Systems 2013*, Barcelona, Sep 2013.

In order to solve hard combinatorial optimization problems (e.g. optimally scheduling students and teachers along a week plan over several different classes and classrooms), one way is to computationally mimic how ants forage the vicinity of their habitats in search of food. Among a myriad of possibilities for finding the optimal route (minimizing the travel distance), ants collectively emerge the solution by using stigmergic signal traces, or *pheromones*, which also change dynamically under evaporation.

Current algorithms, however, make use of only a positive-feedback type of *pheromone* along their search: if the ants collectively visit a good low-distance route (a minimal pseudo-solution to the problem) they tend to reinforce that signal for their colleagues. Nothing wrong with that, on the contrary, but no one knows whether a lower-distance alternative route is also there, just around the corner. In this global search endeavour, like a snowballing effect, positive feedbacks tend to give credit to the exploitation of solutions, but not to the – also useful – exploration side. The upcoming potential solutions can thus get crystallized, and freeze, while a small change in some parts of the whole route could, on the other hand, successfully improve the global result.

Figure – Influence of **negative pheromone** on the *kroA100.tsp* problem (fig. 1 – page 6); values on lines represent 1-ALPHA. A typical standard ACS (*Ant Colony System*) is represented here by the line with value 0.0, while better results could be found with our approach, using positive feedbacks (0.95) along with negative feedbacks (0.05). Not only did we obtain better results, we also found them earlier.

There is, however, an advantage when a second type of *pheromone* (a negative-feedback one) **co-evolves** with the first type, and we decided to research its impact. What we found out is that by using a second type of global feedback, we can indeed achieve a faster search while reaching better results. **In a way, it’s like using two different types of evaporative traffic lights, in green and red, co-evolving together**. As a conclusion, we should indeed use a negative no-entry signal pheromone – in small amounts (0.05), but use it. Not only does this prevent the whole system from freezing on some solutions too soon, it also promotes a better compromise over the search space of potential routes. The pre-print article is available here at arXiv. The abstract and keywords follow:
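A hedged sketch of how such a negative trail could enter the usual ACS edge-selection rule follows (this is an illustrative reading in Python, not our published implementation): the positive pheromone is weighted by 1-ALPHA and the negative one subtracts with weight ALPHA, e.g. 0.05 as in the figure above. The distance exponent `beta` and the roulette-wheel selection are the standard ACO ingredients.

```python
# Illustrative sketch: a second, negative pheromone inside edge selection.
import random


def edge_weight(tau_pos, tau_neg, dist, alpha=0.05, beta=2.0):
    """Combined desirability of an edge under both pheromone types."""
    trail = (1.0 - alpha) * tau_pos - alpha * tau_neg
    return max(trail, 0.0) * (1.0 / dist) ** beta


def choose_next(city, unvisited, tau_pos, tau_neg, dist):
    """Roulette-wheel selection over the remaining cities (lists/dicts)."""
    weights = [edge_weight(tau_pos[city][j], tau_neg[city][j], dist[city][j])
               for j in unvisited]
    total = sum(weights)
    if total == 0.0:                       # every edge blocked: pick at random
        return random.choice(unvisited)
    r, acc = random.uniform(0.0, total), 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]
```

Note how a heavily negative-marked edge drops to zero desirability but, since both trails evaporate, the blockade lifts again over time.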

*Vitorino Ramos*, *David M. S. Rodrigues*, *Jorge Louçã*, “**Second Order Swarm Intelligence**” [PDF], in *Hybrid Artificial Intelligent Systems*, Lecture Notes in Computer Science, Springer-Verlag, Volume 8073, pp. 411-420, 2013.

**Abstract**: An artificial *Ant Colony System* (ACS) algorithm to solve general-purpose *combinatorial Optimization Problems* (COP), extending previous AC models [21] by the inclusion of a negative pheromone, is here described. Several *Travelling Salesman Problems* (TSP) were used as benchmarks. We show that by using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedbacks achieves better results than single positive-feedback systems. The algorithm was tested against known NP-complete combinatorial Optimization Problems, running on symmetrical TSPs. We show that the new algorithm compares favourably against these benchmarks, in accordance with recent biological findings by *Robinson* [26,27] and *Grüter* [28], where “no entry” signals and negative feedback allow a colony to quickly reallocate the majority of its foragers to superior food patches. This is the first time an extended ACS algorithm is implemented with these successful characteristics.

**Keywords**: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Travelling Salesman Problems (TSP).

Photo – “*O caos é uma ordem por decifrar*” (Portuguese), that is… “**Chaos is an order yet to be deciphered**”, a quote from *José Saramago*, Nobel Prize in Literature (1998) [Lisbon, V. Ramos, 2013].

In 1990 (*), in one of his now famous works, *Christopher Langton* (link) decided to ask an important question. In order for computation to emerge spontaneously and become an important factor in the dynamics of a system, the material substrate must support the primitive functions required for computation: the **transmission**, **storage**, and **modification** of information. He then asked: **under what conditions might we expect physical systems to support such computational primitives**?

Naturally, the question is difficult to address directly. Instead, he decided to reformulate it in the context of a class of formal abstractions of physical systems: cellular automata (CAs). First, he introduced cellular automata and a simple scheme for parametrising (the **lambda parameter, λ**) the space of all possible CA rules. Then he applied this parametrisation scheme to the space of possible one-dimensional CAs in a qualitative survey of the different dynamical regimes existing in CA rule space and their relationship to one another.

By presenting a quantitative picture of these structural relationships, using data from an extensive survey of two-dimensional CAs, he finally reviewed the observed relationships among dynamical regimes, discussing their implications for the more general question raised above. *Langton* found that for a 2-state, r=1 neighbourhood, 1D cellular automaton the optimal λ value is close to **0.5**. For a 2-state, Moore-neighbourhood, 2D cellular automaton, like Conway’s Life, the λ value is **0.273**.
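For a finite rule table, λ is simple to compute: it is the fraction of transitions that do *not* map to the quiescent state. A small sketch (using elementary rule 110 as an illustrative example, not one of Langton's surveyed rules):

```python
# Langton's lambda: fraction of rule-table entries leading away from
# the quiescent state (here state 0).

def langton_lambda(rule_table, quiescent=0):
    """Return lambda for a CA rule table given as a list of output states."""
    non_quiescent = sum(1 for state in rule_table if state != quiescent)
    return non_quiescent / len(rule_table)


# Elementary rule 110, written as outputs for neighbourhoods 111 .. 000:
rule_110 = [0, 1, 1, 0, 1, 1, 1, 0]
```

For rule 110 this gives λ = 5/8 = 0.625, not far from the ~0.5 value Langton reports as optimal for 2-state, r=1, 1D automata.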

By selecting an appropriate parametrisation of the space of CAs, one then observes a phase transition between highly ordered and highly disordered dynamics, analogous to the phase transition between the solid and fluid states of matter. Furthermore, *Langton* observed that CAs exhibiting the most complex behaviour – both qualitatively and quantitatively – **are found generically in the vicinity of this phase transition**. Most importantly, he observed that **CAs in the transition region have the greatest potential for the support of information storage, transmission, and modification, and therefore for the emergence of computation**. He concludes:

(…) *These observations suggest that there is a fundamental connection between phase transitions and computation, leading to the following hypothesis concerning the emergence of computation in physical systems: Computation may emerge spontaneously and come to dominate the dynamics of physical systems when those systems are at or near a transition between their solid and fluid phases, especially in the vicinity of a second-order or “critical” transition.* (…)

Moreover, he observes surprising similarities between the behaviours of computations and systems near phase transitions, finding analogs of computational complexity classes and the halting problem (*Turing*) within the phenomenology of phase transitions. *Langton* concludes that there is a fundamental connection between computation and phase transitions, especially second-order or “critical” transitions, discussing some of the implications for our understanding of nature if such a connection is borne out.

The full paper (*), **Christopher G. Langton. “Computation at the edge of chaos”. Physica D, 42, 1990**, is available online, here [PDF].

Figure – *Hybrid Artificial Intelligent Systems* new LNAI (*Lecture Notes on Artificial Intelligence*) series volume 8073, Springer-Verlag Book [original photo by my colleague David M.S. Rodrigues].

New work, new book. Last week one of our latest works came out, published by Springer. Edited by *Jeng-Shyang Pan*, *Marios M. Polycarpou*, *Emilio Corchado* et al., “**Hybrid Artificial Intelligent Systems**” comprises a full set of new papers on this hybrid area of Intelligent Computing (check the full article list at Springer). Our new paper “**Second Order Swarm Intelligence**” (pp. 411-420, Springer books link) was published in the *Bio-inspired Models and Evolutionary Computation* section.

Four different snapshots (click to enlarge) from one of my latest books, recently published in Japan: *Ajith Abraham*, *Crina Grosan*, *Vitorino Ramos* (Eds.), “*Swarm Intelligence in Data Mining*” (**群知能と データマイニング**), *Tokyo Denki University* press [TDU], Tokyo, Japan, July 2012.

Figure – **Attractor basins** (fig. 2, p. 6 in Mária Ercsey-Ravasz and Zoltán Toroczkai, “*Optimization hardness as transient chaos in an analog approach to constraint satisfaction*”, Nature Physics, vol. 7, pp. 966-970, 2011).

*Mária Ercsey-Ravasz* and *Zoltán Toroczkai* have proposed a way of mapping satisfiability problems to differential equations and a deterministic algorithm that solves them in polynomial continuous time at the expense of exponential energy functions (so the discrete approximation of the algorithm does not run in polynomial time, and an analogue system would need exponential resources).

The map assigns a phase space to a problem; the algorithm chooses random initial conditions from within that phase space. In the graphs above and below, they pick a 2-d subspace of the phase space and for each initial point in that space they illustrate 1) the particular solution the algorithm finds, 2) the corresponding “solution cluster”, an equivalence class of solutions that identifies two solutions if they differ in exactly one variable assignment, and 3) the time it takes to solve the problem. Each row adds another clause to satisfy.

The especially interesting part of the paper is the notion of an **escape rate**, the proportion of the trajectories still searching for a solution after a time *t*. In a companion paper, they show that the escape rate for *Sudoku* combinatorial instances (*The Chaos Within Sudoku*, *Nature*, August 2012) correlates strongly with human judgements of hardness. This *escape rate* is similar to *Kolmogorov complexity* in that it gives a notion of hardness to individual problem instances rather than to classes of problems. The full paper can be retrieved from arXiv: *Mária Ercsey-Ravasz* and *Zoltán Toroczkai*, “**Optimization hardness as transient chaos in an analog approach to constraint satisfaction**”, Nature Physics, vol. 7, pp. 966-970, 2011 (at arXiv since August 2012).
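The escape-rate measurement itself is easy to reproduce in outline: start many trajectories from random initial conditions and count the fraction still searching after time *t*. The sketch below uses a toy exponentially distributed solve-time model (the signature of transient chaos, p(t) ~ exp(−κt)), not the authors' actual dynamical system; all names are mine.

```python
# Monte Carlo estimate of an escape rate: fraction of runs still
# unsolved after time t, for an assumed solve-time distribution.
import random


def escape_rate(solve_time, n_runs, t, seed=42):
    """Fraction of runs whose solving time exceeds t (still searching)."""
    rng = random.Random(seed)
    unsolved = sum(1 for _ in range(n_runs) if solve_time(rng) > t)
    return unsolved / n_runs


# Toy model: exponential solve times with kappa = 1, so the fraction
# still searching after t = 1 should be close to exp(-1) ~ 0.37.
rate = escape_rate(lambda rng: rng.expovariate(1.0), 10_000, 1.0)
```

For a genuinely hard instance the measured decay constant κ would be small (trajectories linger), which is exactly what correlates with human hardness judgements in the Sudoku study.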

Figure – **Attractor basins for 3-XORSAT** (fig. 8, p. 18 in Mária Ercsey-Ravasz and Zoltán Toroczkai, “*Optimization hardness as transient chaos in an analog approach to constraint satisfaction*”, Nature Physics, vol. 7, pp. 966-970, 2011).

Fig. 1 – (click to enlarge) The optimal shortest path among *N*=1265 points depicting a Portuguese *Navalheira* crab, as a result of one of our latest Swarm-Intelligence based algorithms. The problem of finding the shortest path among *N* different points in space is *NP-hard*, known as the *Travelling Salesman Problem* (*TSP*), and is one of the major and hardest benchmarks in Combinatorial Optimization (link) and Artificial Intelligence. (*V. Ramos*, *D. Rodrigues*, 2012)

This summer my kids grabbed a tiny Portuguese *Navalheira* crab on the shore. After a small photo-session and some baby-sitting with a lettuce leaf, it was time to release it again into the ocean. Not only did he survive my kids, he is now entitled to a new World Wide Web on-line life. After the *Shortest path Sardine* (link) with 1084 points, here is the *Crab* with 1265 points. The algorithm ran for as little as 110 iterations.

Fig. 2 – (click to enlarge) Our 1265 initial points depicting a TSP Portuguese *Navalheira* crab. Could you already envision a minimal tour between all these points?

As usual in *Travelling Salesman Problems* (*TSP*) we start with a set of points, in our case 1265 points or cities (fig. 2). Given a list of cities and their pairwise distances, the task is to find the *shortest possible tour* that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods.
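The swarm algorithm itself is not listed in the post, but for orientation here is the classic greedy baseline on such point sets, a nearest-neighbour tour construction (an illustrative sketch only; our ant-based method works quite differently and typically finds shorter tours):

```python
# Greedy nearest-neighbour baseline for TSP point sets.
import math


def tour_length(points, tour):
    """Total Euclidean length of a closed tour over point indices."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))


def nearest_neighbour_tour(points, start=0):
    """Build a tour by always visiting the closest unvisited city next."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(sorted(unvisited), key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour


# Four corners of a unit square: the greedy tour happens to be optimal here.
square = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
tour = nearest_neighbour_tour(square)
```

On structured point clouds like the crab's contour, greedy construction tends to leave long "return" edges, which is precisely where pheromone-based collective search pays off.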

Fig. 3 – (click to enlarge) Again the shortest path *Navalheira* crab, where the optimal contour path (in black: first fig. above) with 1265 points (or cities) was filled in dark orange.

TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder.

What follows (fig. 4) is the original crab photo after image segmentation and just before adding *Gaussian* noise in order to retrieve several data points for the initial TSP problem. The algorithm was then fed the extracted *x*,*y* coordinates of these data points (fig. 2) in order to discover the minimal path, in just 110 iterations. For extra details, pay a visit to the *Shortest path Sardine* (link) done earlier.

Fig. 4 – (click to enlarge) The original crab photo after some image processing as well as segmentation and just before adding *Gaussian* noise in order to retrieve several data points for the initial TSP problem.

Figure (click to enlarge) – Cover from one of my books published last month (10 July 2012), “*Swarm Intelligence in Data Mining*”, recently translated and edited in Japan (by *Tokyo Denki University press* [TDU]). Cover image from Amazon.co.jp (url). The title was translated into **群知能と データマイニング**. It is also funny to see my own name translated into Japanese for the first time – I wonder if it’s Kanji. A brief synopsis follows:

(…) **Swarm Intelligence** (SI) is an innovative distributed intelligent paradigm for solving optimization problems that originally took its inspiration from biological examples of swarming, flocking and herding phenomena in vertebrates. *Particle Swarm Optimization* (PSO) incorporates swarming behaviours observed in flocks of birds, schools of fish, or swarms of bees, and even human social behaviour, from which the idea emerged. *Ant Colony Optimization* (ACO) deals with artificial systems inspired by the foraging behaviour of real ants, which are used to solve discrete optimization problems. Historically, the notion of finding useful patterns in data has been given a variety of names, including data mining, knowledge discovery, information extraction, etc. Data Mining is an analytic process designed to explore large amounts of data in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. In order to achieve this, data mining uses computational techniques from statistics, machine learning and pattern recognition. Data mining and swarm intelligence may seem not to have many properties in common. However, recent studies suggest that they can be used together for several real-world data mining problems, especially when other methods would be too expensive or difficult to implement. This book deals with the application of swarm intelligence methodologies in data mining. Addressing the various issues of swarm intelligence and data mining using different intelligent approaches is the novelty of this edited volume. This volume comprises 11 chapters, including an introductory chapter giving the fundamental definitions and some important research challenges. Chapters were selected on the basis of fundamental ideas/concepts rather than the thoroughness of techniques deployed. (…) (more)

Figure (click to enlarge) – Applying P(0)=0.6; r=4; N=100000; for(n=0;n<=N;n++) { P(n+1)=r*P(n)*(1-P(n)); } – *Robert May*’s population dynamics equation [1974-76] (do check on Logistic maps) for several iterations (generations). After 780 iterations, *P* is attracted to 1 (max. population), and then suddenly, for the next generations, the very same population is almost extinguished.
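The update rule in the caption can be run directly. A minimal sketch, using the same P(0)=0.6 and r=4 as above (the fully chaotic regime):

```python
# Logistic map from the figure caption: P(n+1) = r * P(n) * (1 - P(n)),
# with P(0) = 0.6 and r = 4.

def logistic_orbit(p0=0.6, r=4.0, n=100):
    orbit = [p0]
    for _ in range(n):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

orbit = logistic_orbit()
# For r = 4 every value stays inside the unit interval [0, 1],
# yet successive generations jump erratically rather than settling.
print(orbit[:3])  # [0.6, 0.96, 0.1536...]
```

Plotting `orbit` against the generation index reproduces the kind of boom-and-near-extinction swings described in the caption.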

“**Not only in research, but also in the everyday world of politics and economics, we would all be better off if more people realised that simple non-linear systems do not necessarily possess simple dynamical properties**.” ~ *Robert M. May*, “Simple Mathematical Models with Very Complicated Dynamics”, *Nature*, Vol. 261, p. 459, June 10, 1976.

(…) The fact that the simple and deterministic equation (1) can possess dynamical trajectories which look like some sort of random noise has disturbing practical implications. It means, for example, that apparently erratic fluctuations in the census data for an animal population need not necessarily betoken either the vagaries of an unpredictable environment or sampling errors: they may simply derive from a rigidly deterministic population growth relationship such as equation (1). This point is discussed more fully and carefully elsewhere [1]. Alternatively, it may be observed that in the chaotic regime arbitrarily close initial conditions can lead to trajectories which, after a sufficiently long time, diverge widely. This means that, even if we have a simple model in which all the parameters are determined exactly, long term prediction is nevertheless impossible. In a meteorological context, *Lorenz* [15] has called this general phenomenon the “**butterfly effect**”: even if the atmosphere could be described by a deterministic model in which all parameters were known, the fluttering of a butterfly’s wings could alter the initial conditions, and thus (in the chaotic regime) alter the long term prediction. Fluid turbulence provides a classic example where, as a parameter (the *Reynolds* number) is tuned in a set of deterministic equations (the *Navier-Stokes* equations), the motion can undergo an abrupt transition from some stable configuration (for example, laminar flow) into an apparently stochastic, chaotic regime. Various models, based on the *Navier-Stokes* differential equations, have been proposed as mathematical metaphors for this process [15,40,41]. In a recent review of the theory of turbulence, *Martin* [42] has observed that the one-dimensional difference equation (1) may be useful in this context. Compared with the earlier models [15,40,41] it has the disadvantage of being even more abstractly metaphorical, and the advantage of having a spectrum of dynamical behaviour which is more richly complicated yet more amenable to analytical investigation. A more down-to-earth application is possible in the use of equation (1) to fit data [1,2,3,38,39,43] on biological populations with discrete, non-overlapping generations, as is the case for many temperate zone arthropods. (…) in pp. 13-14, *Robert M. May*, “Simple Mathematical Models with Very Complicated Dynamics”, *Nature*, Vol. 261, p. 459, June 10, 1976 [PDF link].
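May's point about arbitrarily close initial conditions diverging can be checked numerically. The sketch below is my own illustration (not from the paper): it perturbs the starting population by one part in a billion and watches the two orbits of the same logistic map separate.

```python
# Sensitive dependence on initial conditions in the chaotic regime (r = 4):
# two populations starting 1e-9 apart become macroscopically different.

def orbit(p0, r=4.0, n=80):
    xs = [p0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = orbit(0.3)
b = orbit(0.3 + 1e-9)

# The tiny initial perturbation is amplified roughly exponentially,
# so the largest gap between the two trajectories is of order one.
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap)
```

This is exactly why, as May notes, exact parameters do not buy long-term prediction: the initial condition can never be known to infinite precision.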

Video – **Water has Memory** (from Oasis HD, Canada; link): just a liquid or much more? Many researchers are convinced that water is capable of “memory”, storing information and retrieving it. The possible applications are innumerable: limitless retention and storage capacity, and the key to discovering the origins of life on our planet. Research into water is just beginning.

Water capable of processing information, as well as acting as a huge possible “*container*” for data media – that is something remarkable. This theory was first proposed by the late French immunologist *Jacques Benveniste*, in a controversial article published in 1988 in *Nature*, as a way of explaining how homeopathy works (link). *Benveniste*’s theory has continued to be championed by some and disputed by others. The video clip above, from the Oasis HD Channel, shows some fascinating recent experiments with water “memory” from the *Aerospace Institute of the University of Stuttgart* in Germany. The results with the different types of flowers immersed in water are particularly evocative.

This line of research also reminds me of an old and quite interesting paper by a colleague, *Chrisantha Fernando*. Together with *Sampsa Sojakka*, both proved that waves produced on the surface of water can be used as the medium for a *Wolfgang Maass* “Liquid State Machine” (link) that pre-processes inputs, allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Amazingly, water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. What follows is the abstract of their paper, entitled “**Pattern Recognition in a Bucket**”, as well as a PDF link to it:

Figure – Typical wave patterns for the XOR task. Top-Left: [0 1] (right motor on), Top-Right: [1 0] (left motor on), Bottom-Left: [1 1] (both motors on), Bottom-Right: [0 0] (still water). Sobel filtered and thresholded images on right. (from Fig. 3 in Chrisantha Fernando and Sampsa Sojakka, “*Pattern Recognition in a Bucket*”, ECAL proc., European Conference on Artificial Life, 2003.)

[…] **Abstract**. This paper demonstrates that the waves produced on the surface of water can be used as the medium for a “Liquid State Machine” that pre-processes inputs so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Interference between waves allows non-linear parallel computation upon simultaneous sensory inputs. Temporal patterns of stimulation are converted to spatial patterns of water waves upon which a linear discrimination can be made. Whereas Wolfgang Maass’ Liquid State Machine requires fine tuning of the spiking neural network parameters, water has inherent self-organising properties such as strong local interactions, time-dependent spread of activation to distant areas, inherent stability to a wide variety of inputs, and high complexity. Water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. An analogy is made between water molecules and neurons in a recurrent neural network. […] in *Chrisantha Fernando* and *Sampsa Sojakka*, “*Pattern Recognition in a Bucket*”, ECAL proc., European Conference on Artificial Life, 2003. [PDF link]
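The core idea of the paper, a fixed nonlinear “liquid” turning XOR into a linearly separable problem for a simple perceptron readout, can be sketched without any water at all. In the sketch below the two threshold units of the `liquid` function are hand-picked stand-ins for the wave interference patterns (purely illustrative, not the authors' setup); only the linear readout is trained.

```python
# Sketch of the Liquid State Machine idea behind "Pattern Recognition in a
# Bucket": a fixed, untrained nonlinear medium maps inputs into a space
# where a plain perceptron readout can solve XOR. The two threshold units
# stand in, very loosely, for interference patterns on the water surface.

def liquid(x1, x2):
    h1 = 1 if x1 + x2 > 0.5 else 0   # "at least one input active"
    h2 = 1 if x1 + x2 > 1.5 else 0   # "both inputs active"
    return (h1, h2)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 0]               # XOR truth table
features = [liquid(x1, x2) for x1, x2 in inputs]

# Train only the linear readout with the classic perceptron rule.
w1 = w2 = b = 0.0
for _ in range(10):
    for (h1, h2), t in zip(features, targets):
        y = 1 if w1 * h1 + w2 * h2 + b > 0 else 0
        w1 += (t - y) * h1
        w2 += (t - y) * h2
        b += (t - y)

outputs = [1 if w1 * h1 + w2 * h2 + b > 0 else 0 for h1, h2 in features]
print(outputs)  # the readout reproduces XOR: [0, 1, 1, 0]
```

In the raw input space XOR is not linearly separable; after the fixed projection it is, which is all the perceptron readout needs, exactly the division of labour the bucket exploits.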

Fig. – (**Organizational Complexity through History**) Four forms behind the organization and evolution of all societies (*David Ronfeldt*’s TIMN). Each form also seems to be triggered by major societal changes in communications and language. Oral speech enabled tribes (T), the written word enabled institutions (I), the printed word fostered regional and global markets (M), and the electric (digital) word is empowering worldwide networks (N). [in David Ronfeldt, “*Tribes, Institutions, Markets, Networks: A Framework About Societal Evolution*”, RAND Corporation, Document Number: P-7967, (1996). PDF link]

[…] Organizational complexity is defined as the amount of differentiation that exists within different elements constituting the organization. This is often operationalized as the number of different professional specializations that exist within the organization. For example, a school would be considered a less complex organization than a hospital, since a hospital requires a large diversity of professional specialties in order to function. Organizational complexity can also be observed via differentiation in structure, authority and locus of control, and attributes of personnel, products, and technologies. Contingency theory states that an organization structures itself and behaves in a particular manner as an attempt to fit with its environment. Thus organizations are more or less complex as a reaction to environmental complexity. An organization’s environment may be complex because it is turbulent, hostile, diverse, technologically complex, or restrictive. An organization may also be complex as a result of the complexity of its underlying technological core. For example, a nuclear power plant is likely to have a more complex organization than a standard power plant because the underlying technology is more difficult to understand and control. There are numerous consequences of environmental and organizational complexity. Organizational members, faced with overwhelming and/or complex decisions, omit, tolerate errors, queue, filter, abstract, use multiple channels, escape, and chunk in order to deal effectively with the complexity. At an organizational level, an organization will respond to complexity by building barriers around its technical core; by smoothing input and output transactions; by planning and predicting; by segmenting itself and/or becoming decentralized; and by adopting rules.

Complexity science offers a broader view of organizational complexity – it maintains that all organizations are relatively complex, and that complex behavior is not necessarily the result of complex action on the behalf of a single individual’s effort; rather, complex behavior of the whole can be the result of loosely coupled organizational members behaving in simple ways, acting on local information. Complexity science posits that most organizational behavior is the result of numerous events occurring over extended periods of time, rather than the result of some smaller number of critical incidents. […] in Dooley, K. (2002), “*Organizational Complexity*”, *International Encyclopedia of Business and Management*, M. Warner (ed.), London: Thompson Learning, pp. 5013-5022. (PDF link)

The Internet has given us a glimpse of the power of networks. We are just beginning to realize how we can use networks as our primary form of living and working. *David Ronfeldt* has developed the TIMN framework to explain this – **Tribal** (T); **Institutional** (I); **Markets** (M); **Networks** (N). The TIMN framework shows how we have evolved as a civilization. It has not been a clean progression from one organizing mode to the next but rather each new form built upon and changed the previous mode. He sees the network form not as a modifier of previous forms, but a form in itself that can address issues that the three other forms could not address. This point is very important when it comes to things like implementing social business (a network mode) within corporations (institutional + market modes). Real network models (e.g. *wirearchy*) are new modes, not modifications of the old ones.

Another key point of this framework is that Tribes exist within Institutions, Markets and Networks. We never lose our affinity for community groups or family, but each mode brings new factors that influence our previous modes. For example, tribalism is alive and well in online social networks. It’s just not the same tribalism of several hundred years ago. Each transition also has its hazards. For instance, while tribal societies may result in nepotism, networked societies can lead to deception. *Ronfeldt* states that the initial tribal form informs the other modes and can have a profound influence as they evolve:

*Balanced combination is apparently imperative: Each form (and its realm) builds on its predecessor(s). In the progression from T through T+I+M+N, the rise of a new form depends on the successes (and failures) achieved through the earlier forms. For a society to progress optimally through the addition of new forms, no single form should be allowed to dominate any other, and none should be suppressed or eliminated. A society’s potential to function well at a given stage, and to evolve to a higher level of complexity, depends on its ability to integrate these inherently contradictory forms into a well-functioning whole. A society can constrain its prospects for evolutionary growth by elevating a single form to primacy — as appears to be a tendency at times in market-mad America*. [in David Ronfeldt, “*Tribes, Institutions, Markets, Networks: A framework about Societal Evolution*“, RAND Corporation, Document Number: P-7967, (1996). PDF link]

Finally, on these areas (far beyond the strict topic of organizational topology and complex networks), let me add two books. One is from *José Fonseca*, a friend and researcher I first met in 2001, during a joint interview for the Portuguese *Idéias & Negócios* magazine, on its 5th anniversary (old link) embracing innovation in Portugal. His book, entitled “**Complexity & Innovation in Organizations**” (above), was published in December of that year, 2001, by *Routledge*. The other one is more recent and from *Ralph Stacey*: “**Complexity and Organizational Reality: Uncertainty and the Need to Rethink Management After the Collapse of Investment Capitalism**” (below), Routledge, 2010 – even though Ralph has many other past seminal books on this topic. Both have worked together at the University of *Hertfordshire*.

Fig. – A Symbolical Head (phrenological chart) illustrating the natural language of the faculties. At the Society pages / Economic Sociology web page.

You have most probably noticed by now how Scoop.it is emerging as a powerful platform for those collecting interesting research papers. There are several good examples, but let me stress one entitled “**Bounded Rationality and Beyond**” (scoop.it web page), curated by *Alessandro Cerboni* (blog). On a difficult research theme, *Alessandro* is doing a great job collecting nice essays and wonderful articles whenever he finds them. One of those articles I really appreciated was *John Conlisk*’s “*Why Bounded Rationality?*”, delivering into the field several important clues for those who (like me) work in the area. What follows is an excerpt from the article, as well as part of his introductory section. The full (PDF) paper can be retrieved here:

*In this survey, four reasons are given for incorporating bounded rationality in economic models. First, there is abundant empirical evidence that it is important. Second, models of bounded rationality have proved themselves in a wide range of impressive work. Third, the standard justifications for assuming unbounded rationality are unconvincing; their logic cuts both ways. Fourth, deliberation about an economic decision is a costly activity, and good economics requires that we entertain all costs. These four reasons, or categories of reasons, are developed in the following four sections. Deliberation cost will be a recurring theme.*

**Why bounded rationality?** In four words (one for each section above): evidence, success, methodology, and scarcity. In more words: Psychology and economics provide wide-ranging evidence that bounded rationality is important (Section I). Economists who include bounds on rationality in their models have excellent success in describing economic behavior beyond the coverage of standard theory (Section II). The traditional appeals to economic methodology cut both ways; the conditions of a particular context may favor either bounded or unbounded rationality (Section III). Models of bounded rationality adhere to a fundamental tenet of economics, respect for scarcity. Human cognition, as a scarce resource, should be treated as such (Section IV). The survey stresses throughout that an appropriate rationality assumption is not something to decide once for all contexts. In principle, we might suppose there is an encompassing single theory which takes various forms of bounded and unbounded rationality as special cases. As with other model ingredients, however, we in practice want to work directly with the most convenient special case which does justice to the context. The evidence and models surveyed suggest that a sensible rationality assumption will vary by context, depending on such conditions as deliberation cost, complexity, incentives, experience, and market discipline. Beyond the four reasons given, there is one more reason for studying bounded rationality. It is simply a fascinating thing to do. We can mix some Puck with our Hamlet.

“*Well summed up*”, says Hélder Barbosa over *Twitter* about this cartoon (@HelSimao). I agree. It’s quite easy to “translate” from *apples* to *oranges*. However, to do something really new first requires know-how, a well-heeled technique, absolute reconnaissance of the state-of-the-art, and then the hardest – *soul*; … imagination and courage. As well as to test the new idea, indefinitely, for months if necessary, with great patience. At this point, ironically, science needs inner faith, or the courage to drop everything into the trash can if it’s not good enough, starting all over again. Not all scientists share all these features. Some are doomed to produce low-impact papers all their life, translating the work of others from *apples* to *oranges*.

[Cartoon is from VADLO, a growing search engine in Biology, as well as in all branches of life sciences, including Molecular Biology, Cell Biology, Structural Biology, Evolutionary Biology, Genetics, Genomics, Proteomics, Botany, Zoology, Biochemistry, Biophysics, Biotechnology, Biostatistics, Pharmacology and Biomedical research.]

Ever tried to solve a problem where the problem statement itself is constantly changing? Have a look at our approach:

*Vitorino Ramos*, *David M.S. Rodrigues*, *Jorge Louçã*, “**Spatio-Temporal Dynamics on Co-Evolved Stigmergy**”, in *European Conference on Complex Systems*, ECCS’11, Vienna, Austria, Sept. 12-16, 2011.

**Abstract**: Research over hard **NP**-complete *Combinatorial Optimization Problems* (**COP**’s) has been focused in recent years on several robust bio-inspired meta-heuristics, like those involving *Evolutionary Computation* (**EC**) algorithmic paradigms. One particularly successful and well-known meta-heuristic approach is based on *Swarm Intelligence* (**SI**), i.e., the self-organized stigmergic-based property of a complex system whereby the collective behaviours of (unsophisticated) entities interacting locally with their environment cause coherent functional global patterns to emerge. This line of research, recognized as *Ant Colony Optimization* (**ACO**), uses a set of stochastic cooperating ant-like agents to find good solutions, using self-organized stigmergy as an indirect form of communication mediated by artificial pheromone, whereby agents deposit pheromone-signs on the edges of the problem-related graph complex network, encompassing a family of successful algorithmic variations such as: *Ant Systems* (**AS**), *Ant Colony Systems* (**ACS**), *Max-Min Ant Systems* (**Max-Min AS**) and **Ant-Q**.

Albeit being extremely successful, these algorithms mostly rely on positive feedbacks, causing excessive algorithmic exploitation over the entire combinatorial search space. This is particularly evident over well-known benchmarks such as the symmetrical *Traveling Salesman Problem* (**TSP**). These systems being comprised of a large number of frequently similar components or events, the principal challenge is to understand how the components interact to produce a complex pattern feasible solution (in our case study, an optimal robust solution for hard **NP**-complete dynamic **TSP**-like combinatorial problems). A suitable approach is to first understand the role of two basic modes of interaction among the components of Self-Organizing (**SO**) Swarm-Intelligent-like systems: positive and negative feedback. While positive feedback promotes a snowballing auto-catalytic effect (e.g. trail pheromone upgrading over the network; exploitation of the search space), taking an initial change in a system and reinforcing that change in the same direction as the initial deviation (self-enhancement and amplification), allowing the entire colony to exploit some past and present solutions (environmental dynamic memory), negative feedback such as pheromone evaporation ensures that the overall learning system does not stabilize or freeze itself on a particular configuration (innovation; search space exploration). Although this kind of (global) delayed negative feedback is important (evaporation), for the many reasons given above, there are however strong indications that other negative feedbacks are present in nature, which could also play a role over increased convergence, namely implicit-like negative feedbacks. As in the case of positive feedbacks, there is no reason not to explore increasingly distributed and adaptive algorithmic variations where negative feedback is also imposed implicitly (not only explicitly) over each network edge, while the entire colony seeks better answers in due time.

In order to overcome this hard search-space exploitation-exploration compromise, our present algorithmic approach follows the route of very recent biological findings showing that forager ants lay attractive trail pheromones to guide nest mates to food, but where the effectiveness of foraging networks is improved if pheromones can also be used to repel foragers from unrewarding routes. Increasing empirical evidence for such a negative trail pheromone exists, deployed by Pharaoh’s ants (*Monomorium pharaonis*) as a ‘*no entry*’ signal to mark unrewarding foraging paths. The new algorithm comprises a second-order approach to Swarm Intelligence, as pheromone-based ‘no entry’ signal cues were introduced, co-evolving with the standard pheromone distributions (collective cognitive maps) in the aforementioned known algorithms.
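One way to picture the mechanism: a repellent pheromone lowers an edge's selection probability while the usual attractive trail raises it, and evaporation keeps the blockade temporary. The sketch below is my own minimal reading of such a rule, not the paper's actual implementation; the exponents `alpha`, `gamma` and the evaporation rate `rho` are illustrative choices.

```python
# Sketch of a second-order (double pheromone) edge-selection rule: the
# attractive trail pheromone tau_plus raises an edge's probability, while
# a repellent "no entry" pheromone tau_minus lowers it. Parameters are
# illustrative, not taken from the paper.

def choice_probabilities(tau_plus, tau_minus, alpha=1.0, gamma=1.0):
    """Probability of each candidate edge: attraction divided by repulsion."""
    scores = [tp ** alpha / (1.0 + tm) ** gamma
              for tp, tm in zip(tau_plus, tau_minus)]
    total = sum(scores)
    return [s / total for s in scores]

def evaporate(tau, rho=0.1):
    """Both pheromone fields decay, so a 'no entry' blockade is temporary."""
    return [(1.0 - rho) * t for t in tau]

# Three candidate edges with equal attractive trail; edge 0 carries
# repellent pheromone and becomes the least likely choice.
p = choice_probabilities([1.0, 1.0, 1.0], [4.0, 0.0, 0.0])
print(p)
```

Repeated rounds of deposit and `evaporate` on both fields give the combined stigmergic memory described above: marked-bad edges are avoided now, but re-enter the search once their repellent pheromone fades.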

To exhaustively test its adaptive response and robustness, we resorted to different dynamic optimization problems. Medium-sized and large-sized dynamic **TSP** problems were created. Settings and parameters such as environmental upgrade frequencies, landscape-changing or network topological speed severity, and types of dynamics were tested. Results prove that the present co-evolved two-type pheromone swarm intelligence algorithm is able to quickly track increasingly swift changes on the dynamic **TSP** complex network, compared to standard algorithms.

**Keywords**: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Dynamical Symmetrical Traveling Salesman Problems (TSP).

Fig. – Recovery times over several dynamical stress tests at the fl1577 TSP problem (1577 node graph) – 460 iter max – Swift changes at every 150 iterations (20% = 314 nodes, 40% = 630 nodes, 60% = 946 nodes, 80% = 1260 nodes, 100% = 1576 nodes). [click to enlarge]

*David M.S. Rodrigues*, *Jorge Louçã*, *Vitorino Ramos*, “*From Standard to Second Order Swarm Intelligence Phase-space Maps*”, in *European Conference on Complex Systems*, ECCS’11, Vienna, Austria, Sept. 12-16, 2011.

**Abstract**: Standard *Stigmergic* approaches to *Swarm Intelligence* encompass the use of a set of stochastic cooperating ant-like agents to find optimal solutions, using self-organized *Stigmergy* as an indirect form of communication mediated by a singular artificial pheromone. Agents deposit pheromone-signs on the edges of the problem-related graph, giving rise to a family of successful algorithmic approaches such as *Ant Systems* (**AS**) and *Ant Colony Systems* (**ACS**), among others. These mainly rely on positive feedbacks to search for an optimal solution in a large combinatorial space. The present work shows how, using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedbacks achieves better results than single positive feedback systems. This follows the route of very recent biological findings showing that forager ants, while laying attractive trail pheromones to guide nest mates to food, also gained foraging effectiveness by the use of pheromones that repelled foragers from unrewarding routes. The algorithm presented here takes inspiration precisely from this biological observation.

The new algorithm was exhaustively tested on a series of well-known benchmarks over hard NP-complete *Combinatorial Optimization Problems* (**COP**’s), running on symmetrical *Traveling Salesman Problems* (**TSP**). Different network topologies and stress tests were conducted over small-sized TSP’s (eil51.tsp; eil78.tsp; kroA100.tsp), medium-sized (d198.tsp; lin318.tsp; pcb442.tsp; att532.tsp; rat783.tsp) as well as large-sized ones (fl1577.tsp; d2103.tsp) [numbers here referring to the number of nodes in the network]. We show that the new co-evolved stigmergic algorithm compared favorably against the benchmarks. The algorithm was able to equal or significantly improve every instance of those standard algorithms, not only in the realm of the Swarm Intelligent **AS** and **ACS** approaches, but also in other computational paradigms like *Genetic Algorithms* (**GA**), *Evolutionary Programming* (**EP**), as well as **SOM** (*Self-Organizing Maps*) and **SA** (*Simulated Annealing*). In order to deeply understand how a second co-evolved pheromone was useful to drive the collective system to such results, a refined phase-space map was produced, mapping the pheromone ratio between a pure *Ant Colony System* (where no negative feedback besides pheromone evaporation is present) and the present second-order approach. The evaporation rate between different pheromones was also studied, and its influence on the outcomes of the algorithm is shown. A final discussion on the phase-map is included. This work has implications for the way large combinatorial problems are addressed, as the double feedback mechanism shows improvements over single-positive-feedback mechanisms in terms of convergence speed and in major results.
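The pheromone-ratio idea behind the phase-space map can be illustrated with a toy decay calculation. The sketch below is purely illustrative (the evaporation rates are arbitrary, not the values studied in the paper): it shows how two fields evaporating at different rates change their ratio over time, which is the kind of quantity such a map tracks.

```python
# Toy illustration of a two-pheromone ratio under different evaporation
# rates. Rates are illustrative only, not the paper's settings.

def decay_series(tau0=1.0, rho=0.1, steps=20):
    """Levels of a pheromone field under repeated evaporation tau <- (1-rho)*tau."""
    levels = [tau0]
    for _ in range(steps):
        levels.append((1.0 - rho) * levels[-1])
    return levels

positive = decay_series(rho=0.05)   # attractive trail, slow evaporation
negative = decay_series(rho=0.20)   # repellent marks, fast evaporation

# The ratio of repellent to attractive pheromone shrinks over time:
# "no entry" blockades fade faster than the attractive trail.
ratios = [n / p for n, p in zip(negative, positive)]
print(ratios[0], ratios[-1])
```

In the full algorithm, deposits by the ants continually refill both fields, so the observed ratio reflects the balance between exploitation and the temporary blockades rather than a pure decay; the decay term alone is what this toy isolates.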

**Keywords**: Stigmergy, Co-Evolution, Self-Organization, Swarm Intelligence, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Traveling Salesman Problems (TSP), phase-space.

Fig. – Comparing convergence results between Standard algorithms vs. Second Order Swarm Intelligence, over TSP fl1577 (click to enlarge).
