You are currently browsing the tag archive for the ‘Bio-inspired Computation’ tag.


2016 – As of now, a total of 1567 citations across 74 works (including 3 books) on GOOGLE SCHOLAR (https://scholar.google.com/citations?user=gSyQ-g8AAAAJ&hl=en) [with a Hirsch h-index = 19, and an average of 160.2 citations for each of my top five works] + 900 citations across 57 works on the new RESEARCH GATE site (https://www.researchgate.net/profile/Vitorino_Ramos).

Refs.: Science, Artificial Intelligence, Swarm Intelligence, Data-Mining, Big-Data, Evolutionary Computation, Complex Systems, Image Analysis, Pattern Recognition, Data Analysis.

Figure – Neural circuit controller of the virtual ant (page 3, fig. 2), from Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos. [URL: http://arxiv.org/abs/1507.08467 ]

Intelligence and decision-making in foraging ants: individual or collective? Internal or external? What is the right balance between the two? Can one have internal intelligence without external intelligence? Can one take examples from nature to build in silico artificial lives that present us with interesting patterns? We explore a model of foraging ants in this paper, to be presented in early September in Exeter, UK, at UKCI 2015 (available on arXiv [PDF] and ResearchGate).

Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos; “A Model for Foraging Ants, Controlled by Spiking Neural Networks and Double Pheromones”, UKCI 2015 Computational Intelligence – University of Exeter, UK, September 2015.

Abstract: A model of an Ant System where ants are controlled by a spiking neural circuit and a second-order pheromone mechanism in a foraging task is presented. A neural circuit is trained for individual ants, and subsequently the ants are exposed to a virtual environment where a swarm of ants performs a resource foraging task. The model comprises an associative and unsupervised learning strategy for the neural circuit of the ant. The neural circuit adapts to the environment by means of classical conditioning. The initially unknown environment includes different types of stimuli representing food (rewarding) and obstacles (harmful) which, when they come in direct contact with the ant, elicit a reflex response in the motor neural system of the ant: moving towards or away from the source of the stimulus. The spiking neural circuit of the ant is trained to identify food and obstacles, moving towards the former and avoiding the latter. The ants are released on a landscape with multiple food sources, where one ant alone would have difficulty harvesting the landscape to maximum efficiency. In this case the introduction of a double pheromone mechanism (positive and negative reinforcement feedback) yields better results than traditional ant colony optimization strategies. Traditional ant systems include mainly a positive reinforcement pheromone. This approach uses a second pheromone that acts as a marker for forbidden paths (negative feedback). This blockade is not permanent and is controlled by the evaporation rate of the pheromones. The combined action of both pheromones acts as a collective stigmergic memory of the swarm, which reduces the search space of the problem. This paper explores how the adaptation and learning abilities observed in biologically inspired cognitive architectures are synergistically enhanced by swarm optimization strategies.
The model portrays two forms of artificial intelligent behaviour: at the individual level, the spiking neural network is the main controller, and at the collective level, the pheromone distribution is a map towards the solution that emerges from the colony. The presented model is an important pedagogical tool, as it is also an easy-to-use library that gives access to the spiking neural network paradigm from inside NetLogo, a language used mostly in agent-based modelling and experimentation with complex systems.
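As a rough illustration of the individual-level controller, the sketch below implements a single leaky integrate-and-fire neuron in Python. This is an illustrative stand-in, not the circuit from the paper: the neuron model, constants, and input current are all assumptions.

```python
# A minimal leaky integrate-and-fire (LIF) neuron sketch -- illustrative only.
from dataclasses import dataclass

@dataclass
class LIFNeuron:
    v: float = 0.0          # membrane potential
    threshold: float = 1.0  # firing threshold
    decay: float = 0.9      # leak factor applied each time step

    def step(self, current: float) -> bool:
        """Leak, integrate the input, and fire when the threshold is crossed."""
        self.v = self.v * self.decay + current
        if self.v >= self.threshold:
            self.v = 0.0    # reset after the spike
            return True
        return False

# A food stimulus drives the sensor until it spikes (eliciting "move towards");
# classical conditioning in the paper adapts the weights feeding such neurons.
food_sensor = LIFNeuron()
spike_times = [t for t in range(5) if food_sensor.step(0.4)]
print(spike_times)   # [2]
```

The reflex behaviour described in the abstract would connect such sensory neurons to motor neurons; here a single neuron suffices to show the integrate-leak-fire cycle.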

References:

[1] C. G. Langton, “Studying artificial life with cellular automata,” Physica D: Nonlinear Phenomena, vol. 22, no. 1–3, pp. 120–149, 1986, proceedings of the Fifth Annual International Conference. [Online]. Available: http://www.sciencedirect.com/science/article/pii/016727898690237X
[2] A. Abraham and V. Ramos, “Web usage mining using artificial ant colony clustering and linear genetic programming,” in Proceedings of the Congress on Evolutionary Computation. Australia: IEEE Press, 2003, pp. 1384–1391.
[3] V. Ramos, F. Muge, and P. Pina, “Self-organized data and image retrieval as a consequence of inter-dynamic synergistic relationships in artificial ant colonies,” Hybrid Intelligent Systems, vol. 87, 2002.
[4] V. Ramos and J. J. Merelo, “Self-organized stigmergic document maps: Environment as a mechanism for context learning,” in Proceedings of the AEB, Mérida, Spain, February 2002.
[5] D. Sousa-Rodrigues and V. Ramos, “Traversing news with ant colony optimisation and negative pheromones,” in European Conference in Complex Systems, Lucca, Italy, Sep 2014.
[6] E. Bonabeau, G. Theraulaz, and M. Dorigo, Swarm Intelligence: From Natural to Artificial Systems, 1st ed., ser. Santa Fe Institute Studies In The Sciences of Complexity. 198 Madison Avenue, New York: Oxford University Press, USA, Sep. 1999.
[7] M. Dorigo and L. M. Gambardella, “Ant colony system: A cooperative learning approach to the traveling salesman problem,” Université Libre de Bruxelles, Tech. Rep. TR/IRIDIA/1996-5, 1996.
[8] M. Dorigo, G. Di Caro, and L. M. Gambardella, “Ant algorithms for discrete optimization,” Artif. Life, vol. 5, no. 2, pp. 137–172, Apr. 1999. [Online]. Available: http://dx.doi.org/10.1162/106454699568728
[9] L. M. Gambardella and M. Dorigo, “Ant-q: A reinforcement learning approach to the travelling salesman problem,” in Proceedings of the ML-95, Twelfth Intern. Conf. on Machine Learning, M. Kaufman, Ed., 1995, pp. 252–260.
[10] A. Gupta, V. Nagarajan, and R. Ravi, “Approximation algorithms for optimal decision trees and adaptive tsp problems,” in Proceedings of the 37th international colloquium conference on Automata, languages and programming, ser. ICALP’10. Berlin, Heidelberg: Springer-Verlag, 2010, pp. 690–701. [Online]. Available: http://dl.acm.org/citation.cfm?id=1880918.1880993
[11] V. Ramos, D. Sousa-Rodrigues, and J. Louçã, “Second order swarm intelligence,” in HAIS’13. 8th International Conference on Hybrid Artificial Intelligence Systems, ser. Lecture Notes in Computer Science, J.-S. Pan, M. Polycarpou, M. Woźniak, A. Carvalho, H. Quintián, and E. Corchado, Eds. Salamanca, Spain: Springer Berlin Heidelberg, Sep 2013, vol. 8073, pp. 411–420.
[12] W. Maass and C. M. Bishop, Pulsed Neural Networks. Cambridge, Massachusetts: MIT Press, 1998.
[13] E. M. Izhikevich, “Simple model of spiking neurons,” IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569–1572, 2003. [Online]. Available: http://www.ncbi.nlm.nih.gov/pubmed/18244602
[14] C. Liu and J. Shapiro, “Implementing classical conditioning with spiking neurons,” in Artificial Neural Networks ICANN 2007, ser. Lecture Notes in Computer Science, J. de Sá, L. Alexandre, W. Duch, and D. Mandic, Eds. Springer Berlin Heidelberg, 2007, vol. 4668, pp. 400–410. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-74690-4_41
[15] J. Haenicke, E. Pamir, and M. P. Nawrot, “A spiking neuronal network model of fast associative learning in the honeybee,” Frontiers in Computational Neuroscience, no. 149, 2012. [Online]. Available: http://www.frontiersin.org/computational neuroscience/10.3389/conf.fncom.2012.55.00149/full
[16] L. I. Helgadottir, J. Haenicke, T. Landgraf, R. Rojas, and M. P. Nawrot, “Conditioned behavior in a robot controlled by a spiking neural network,” in International IEEE/EMBS Conference on Neural Engineering, NER, 2013, pp. 891–894.
[17] A. Cyr and M. Boukadoum, “Classical conditioning in different temporal constraints: an STDP learning rule for robots controlled by spiking neural networks,” pp. 257–272, 2012.
[18] X. Wang, Z. G. Hou, F. Lv, M. Tan, and Y. Wang, “Mobile robots’ modular navigation controller using spiking neural networks,” Neurocomputing, vol. 134, pp. 230–238, 2014.
[19] C. Hausler, M. P. Nawrot, and M. Schmuker, “A spiking neuron classifier network with a deep architecture inspired by the olfactory system of the honeybee,” in 2011 5th International IEEE/EMBS Conference on Neural Engineering, NER 2011, 2011, pp. 198–202.
[20] U. Wilensky, “Netlogo,” Evanston IL, USA, 1999. [Online]. Available: http://ccl.northwestern.edu/netlogo/
[21] C. Jimenez-Romero and J. Johnson, “Accepted abstract: Simulation of agents and robots controlled by spiking neural networks using netlogo,” in International Conference on Brain Engineering and Neuro-computing, Mykonos, Greece, Oct 2015.
[22] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge: Cambridge University Press, 2002.
[23] W. Gerstner, R. Kempter, J. L. van Hemmen, and H. Wagner, “A neuronal learning rule for sub-millisecond temporal coding,” Nature, vol. 386, pp. 76–78, 1996.
[24] I. P. Pavlov, “Conditioned reflexes: An investigation of the activity of the cerebral cortex,” New York, 1927.
[25] E. J. H. Robinson, D. E. Jackson, M. Holcombe, and F. L. W. Ratnieks, “Insect communication: ‘no entry’ signal in ant foraging,” Nature, vol. 438, no. 7067, pp. 442–442, 11 2005. [Online]. Available: http://dx.doi.org/10.1038/438442a
[26] E. J. Robinson, D. Jackson, M. Holcombe, and F. L. Ratnieks, “No entry signal in ant foraging (hymenoptera: Formicidae): new insights from an agent-based model,” Myrmecological News, vol. 10, no. 120, 2007.
[27] D. Sousa-Rodrigues, J. Louçã, and V. Ramos, “From standard to second-order swarm intelligence phase-space maps,” in 8th European Conference on Complex Systems, S. Thurner, Ed., Vienna, Austria, Sep 2011.
[28] V. Ramos, D. Sousa-Rodrigues, and J. Louçã, “Spatio-temporal dynamics on co-evolved stigmergy,” in 8th European Conference on Complex Systems, S. Thurner, Ed., Vienna, Austria, 9 2011.
[29] S. Tisue and U. Wilensky, “Netlogo: A simple environment for modeling complexity,” in International conference on complex systems. Boston, MA, 2004, pp. 16–21.

Figure – Two simplices a and b connected by the 2-dimensional face, the triangle {1;2;3}. In the analysis of the time-line of The Guardian newspaper (link), the system used feature vectors based on word frequencies and then computed similarity between documents based on those feature vectors. This is a purely statistical approach that requires great computational power and is difficult for problems with large feature vectors and many documents. Feature vectors with 100,000 or more items are common, and computing similarities between such documents becomes cumbersome. Instead of computing distance (or similarity) matrices between documents from feature vectors, the present approach explores the possibility of inferring the distance between documents from the Q-analysis description. Q-analysis provides a very natural notion of connectivity between the simplices of the structure; in the relation studied, documents are connected to each other through shared sets of tags entered by the journalists. Within this framework, eccentricity is defined as a measure of the relatedness of one simplex in relation to another [7].

David M.S. Rodrigues and Vitorino Ramos, “Traversing News with Ant Colony Optimisation and Negative Pheromones” [PDF], accepted as a preprint for oral presentation at the European Conference on Complex Systems, ECCS’14, Lucca, Italy, Sept. 22-26, 2014.

Abstract: The past decade has seen the rapid development of the online newsroom. News published online is now the main outlet of news, surpassing traditional printed newspapers. This poses challenges to both the production and the consumption of those news. With so many sources of information available, it is important to find ways to cluster and organise the documents if one wants to understand this new system. Traditional approaches to the problem of clustering documents usually embed the documents in a suitable similarity space. Previous studies have reported on the impact of the similarity measures used for clustering of textual corpora [1]. These similarity measures are usually calculated for bag-of-words representations of the documents. This makes the final document-word matrix high-dimensional. Feature vectors with more than 10,000 dimensions are common, and algorithms have severe problems with the high dimensionality of the data. A novel bio-inspired approach to the problem of traversing the news is presented. It finds Hamiltonian cycles over documents published by the newspaper The Guardian. A Second Order Swarm Intelligence algorithm based on Ant Colony Optimisation was developed [2, 3] that uses a negative pheromone to mark unrewarding paths with a “no-entry” signal. This approach follows recent findings of negative pheromone usage in real ants [4].

In this case study the corpus of data is represented as a bipartite relation between documents and keywords entered by the journalists to characterise the news. A new similarity measure between documents is presented, based on the Q-analysis description [5, 6, 7] of the simplicial complex formed between documents and keywords. The eccentricity between documents (two simplices) is then used as a novel measure of similarity between documents. The results show that the Second Order Swarm Intelligence algorithm performs better in benchmark instances of the travelling salesman problem, with faster convergence and optimal results. The addition of the negative pheromone as a no-entry signal improves the quality of the results. The application of the algorithm to the corpus of news of The Guardian creates a coherent navigation system among the news. This allows users to navigate the news published during a certain period of time in a semantic sequence instead of a time sequence. This work has broader applications, as it can be applied to many cases where the data is mapped to bipartite relations (e.g. protein expressions in cells, sentiment analysis, brand awareness in social media, routing problems), since it highlights the connectivity of the underlying complex system.
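To make the Q-analysis notions concrete, here is a minimal Python sketch of shared-face dimension and an Atkin-style pairwise eccentricity between two documents represented as keyword simplices. The toy tag sets and the exact normalisation used here are illustrative assumptions, not taken from the paper.

```python
# Documents as simplices: each vertex is a keyword (tag) entered by a journalist.
def shared_face_dim(doc_a: set, doc_b: set) -> int:
    """Dimension of the shared face: |intersection| - 1 (-1 means disconnected)."""
    return len(doc_a & doc_b) - 1

def eccentricity(doc_a: set, doc_b: set) -> float:
    """Atkin-style pairwise eccentricity of doc_a relative to doc_b."""
    q_hat = len(doc_a) - 1                 # dimension of doc_a's simplex
    q_low = shared_face_dim(doc_a, doc_b)  # dimension of the shared face
    return (q_hat - q_low) / (q_low + 1)

a = {"politics", "economy", "brexit", "eu"}   # toy tag sets (assumed)
b = {"politics", "economy", "brexit"}
print(shared_face_dim(a, b))   # 2 -> connected through a triangle, as in the figure
print(eccentricity(a, b))      # 0.333... -> one vertex of a lies outside the shared face
```

Note the asymmetry: eccentricity(b, a) is 0, since b lies entirely inside the shared face, which is what makes eccentricity a measure of relatedness rather than a metric distance.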

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Hamiltonian cycles, Text Mining, Textual Corpora, Information Retrieval, Knowledge Discovery, Sentiment Analysis, Q-Analysis, Data Mining, Journalism, The Guardian.

References:

[1] Alexander Strehl, Joydeep Ghosh, and Raymond Mooney. Impact of similarity measures on web-page clustering. In Workshop on Artificial Intelligence for Web Search (AAAI 2000), pages 58-64, 2000.
[2] David M. S. Rodrigues, Jorge Louçã, and Vitorino Ramos. From standard to second-order Swarm Intelligence  phase-space maps. In Stefan Thurner, editor, 8th European Conference on Complex Systems, Vienna, Austria,  9 2011.
[3] Vitorino Ramos, David M. S. Rodrigues, and Jorge Louçã. Second order Swarm Intelligence. In Jeng-Shyang Pan, Marios M. Polycarpou, Michał Woźniak, André C.P.L.F. Carvalho, Héctor Quintián, and Emilio Corchado, editors, HAIS’13. 8th International Conference on Hybrid Artificial Intelligence Systems, volume 8073 of Lecture Notes in Computer Science, pages 411-420. Springer Berlin Heidelberg, Salamanca, Spain, 9 2013.
[4] Elva J.H. Robinson, Duncan Jackson, Mike Holcombe, and Francis L.W. Ratnieks. No entry signal in ant  foraging (hymenoptera: Formicidae): new insights from an agent-based model. Myrmecological News, 10(120), 2007.
[5] Ronald Harry Atkin. Mathematical Structure in Human Affairs. Heinemann Educational Publishers, 48 Charles Street, London, 1st edition, 1974.
[6] J. H. Johnson. A survey of Q-analysis, part 1: The past and present. In Proceedings of the Seminar on Q-analysis and the Social Sciences, University of Leeds, 9 1983.
[7] David M. S. Rodrigues. Identifying news clusters using Q-analysis and modularity. In Albert Díaz-Guilera, Alex Arenas, and Álvaro Corral, editors, Proceedings of the European Conference on Complex Systems 2013, Barcelona, 9 2013.

In order to solve hard combinatorial optimization problems (e.g. optimally scheduling students and teachers over a week plan across several different classes and classrooms), one way is to computationally mimic how ants forage the vicinity of their habitats searching for food. Among a myriad of possibilities for finding the optimal route (minimizing the travel distance), ants collectively emerge the solution by using stigmergic signal traces, or pheromones, which also dynamically change under evaporation.

Current algorithms, however, make use only of a positive-feedback type of pheromone along their search; that is, if they collectively visit a good low-distance route (a minimal pseudo-solution to the problem), they tend to reinforce that signal for their colleagues. Nothing wrong with that, on the contrary, but no one knows whether a lower-distance alternative route also lies just around the corner. In this global search endeavour, like a snowballing effect, positive feedback tends to credit the exploitation of solutions but not the – also useful – exploration side. The upcoming potential solutions can thus crystallize and freeze, while a small change in some parts of the whole route could, on the other hand, successfully improve the global result.

Figure – Influence of negative pheromone on the kroA100.tsp problem (fig. 1, page 6); values on the lines represent 1-ALPHA. A typical standard ACS (Ant Colony System) is represented here by the line with value 0.0, while better results could be found by our approach when using positive feedback (0.95) along with negative feedback (0.05). Not only do we obtain better results, we also find them earlier.

There is, however, an advantage when a second type of pheromone (a negative-feedback one) co-evolves with the first type, and we decided to research its impact. What we found is that by using a second type of global feedback, we can indeed achieve a faster search while obtaining better results. In a way, it is like using two different types of evaporative traffic lights, green and red, co-evolving together. As a conclusion, we should indeed use a negative no-entry signal pheromone – in small amounts (0.05), but use it. Not only does this prevent the whole system from freezing on some solutions too soon, it also strikes a better compromise across the search space of potential routes. The pre-print article is available here at arXiv. The abstract and keywords follow:

Vitorino Ramos, David M. S. Rodrigues, Jorge Louçã, “Second Order Swarm Intelligence” [PDF], in Hybrid Artificial Intelligent Systems, Lecture Notes in Computer Science, Springer-Verlag, Volume 8073, pp. 411-420, 2013.

Abstract: An artificial Ant Colony System (ACS) algorithm to solve general-purpose Combinatorial Optimization Problems (COP), extending previous ACS models [21] by the inclusion of a negative pheromone, is here described. Several Travelling Salesman Problems (TSP) were used as benchmarks. We show that by using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedback achieves better results than single positive-feedback systems. The algorithm was tested against known NP-complete Combinatorial Optimization Problems, running on symmetrical TSPs. We show that the new algorithm compares favourably against these benchmarks, in accordance with recent biological findings by Robinson [26,27] and Grüter [28], where “No entry” signals and negative feedback allow a colony to quickly reallocate the majority of its foragers to superior food patches. This is the first time an extended ACS algorithm is implemented with these successful characteristics.

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Travelling Salesman Problems (TSP).
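A minimal sketch of how a second, negative pheromone can bias edge choice, assuming a convex combination weighted by ALPHA (0.95 positive / 0.05 negative, the best setting reported in the figure above). All names, the toy edges, and the evaporation scheme are illustrative assumptions, not the paper's implementation.

```python
ALPHA = 0.95   # weight of the positive (reward) pheromone
RHO = 0.1      # evaporation rate, shared by both pheromone fields

def evaporate(tau_pos, tau_neg):
    """Both pheromone fields decay each iteration -- the 'no entry' blockade
    is not permanent, exactly because it evaporates."""
    for e in tau_pos:
        tau_pos[e] *= (1 - RHO)
        tau_neg[e] *= (1 - RHO)

def desirability(e, tau_pos, tau_neg):
    """Positive pheromone attracts; the negative 'no entry' trail repels."""
    return ALPHA * tau_pos[e] - (1 - ALPHA) * tau_neg[e]

tau_pos = {("A", "B"): 1.0, ("A", "C"): 1.0}
tau_neg = {("A", "B"): 0.0, ("A", "C"): 5.0}   # ("A","C") was marked unrewarding
evaporate(tau_pos, tau_neg)
best = max(tau_pos, key=lambda e: desirability(e, tau_pos, tau_neg))
print(best)   # ('A', 'B') -- the blocked edge loses despite equal positive trail
```

Even with a small negative weight (1-ALPHA = 0.05), a strongly marked forbidden edge is avoided; as the negative trail evaporates, the edge becomes available again, which is what keeps the blockade temporary.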

von Neumann

“There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself“. ~ John von Neumann, in his 1949 University of Illinois lectures on the Theory and Organization of Complicated Automata [J. von Neumann, Theory of self-reproducing automata, 1949 Univ. of Illinois Lectures on the Theory and Organization of Complicated Automata, ed. A.W. Burks (University of Illinois Press, Urbana, IL, 1966).].

Photo – Signal traces, September 2013, Vitorino Ramos.

[…] While pheromone reinforcement plays the role of the system’s memory, evaporation allows the system to adapt and dynamically decide, without any type of centralized or hierarchical control […], below.

“[…] whereas signals tend to be conspicuous, since natural selection has shaped signals to be strong and effective displays, information transfer via cues is often more subtle and based on incidental stimuli in an organism’s social environment […]”, Seeley, T.D., “The Honey Bee Colony as a Super-Organism”, American Scientist, 77, pp.546-553, 1989.

[…] If an ant colony, on its cyclic way from the nest to a food source (and back again), has only two possible branches around an obstacle, one longer and the other shorter (the bridge experiment [7,52]), pheromone will accumulate – as time passes – on the shorter path, simply because any ant that sets out on that path will return sooner, passing the same points more frequently and, by that way, reinforcing the signal of that precise branch. Even if, as we know, the pheromone evaporation rate is the same in both branches, the longer branch’s pheromone will vanish faster, since there is not enough critical mass of individuals to sustain it. On the other hand – in what appears to be a vastly pedagogic trick of Mother Nature – evaporation plays a critical role in the society. Without it, the final global decision, or phase transition, would never happen. Moreover, without it, the whole colony could never adapt if the environment suddenly changed (e.g., the appearance of a third, even shorter branch). While pheromone reinforcement plays the role of the system’s memory, evaporation allows the system to adapt and dynamically decide, without any type of centralized or hierarchical control. […], in “Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes“, V. Ramos et al., available as a pre-print on arXiv, 2005.
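The branch-selection argument above can be sketched deterministically: ants pass a branch at a rate inversely proportional to its round-trip length, while both branches evaporate at the same rate. All constants in this toy model are assumptions for illustration.

```python
SHORT, LONG = 1.0, 2.0   # branch lengths, in round-trip time units (assumed)
RHO = 0.1                # evaporation rate per time step, equal on both branches
DEPOSIT = 1.0            # pheromone laid per pass over a branch

tau = {"short": 0.0, "long": 0.0}
for _ in range(100):
    for branch, length in (("short", SHORT), ("long", LONG)):
        tau[branch] *= (1 - RHO)         # same evaporation everywhere
        tau[branch] += DEPOSIT / length  # passes per unit time ~ 1 / length

# The shorter branch keeps the critical mass; the longer one fades.
print(tau["short"] > tau["long"])   # True
```

With equal evaporation, the short branch settles at exactly twice the long branch's pheromone level here, purely because ants complete its round trip twice as often; remove the evaporation term and both levels grow without bound, so no decision ever emerges.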

[…] There is some degree of communication among the ants, just enough to keep them from wandering off completely at random. By this minimal communication they can remind each other that they are not alone but are cooperating with team-mates. It takes a large number of ants, all reinforcing each other this way, to sustain any activity – such as trail building – for any length of time. Now my very hazy understanding of the operation of the brain leads me to believe that something similar pertains to the firing of neurons… […] in, p. 316, Hofstadter, D.R., “Gödel, Escher, Bach: An Eternal Golden Braid“, New York: Basic Books, 1979).

[…] Since in Self-Organized (SO) systems their organization arises entirely from multiple interactions, it is of critical importance to question how organisms acquire and act upon information [9]. Basically through two forms: a) information gathered from one’s neighbours, and b) information gathered from work in progress, that is, stigmergy. In the case of animal groups, these internal interactions typically involve information transfers between individuals. Biologists have recently recognized that information can flow within groups via two distinct pathways – signals and cues. Signals are stimuli shaped by natural selection specifically to convey information, whereas cues are stimuli that convey information only incidentally [9]. The distinction between signals and cues is illustrated by the difference between ant and deer trails. The chemical trail deposited by ants as they return from a desirable food source is a signal. Over evolutionary time such trails have been moulded by natural selection for the purpose of sharing with nest mates information about the location of rich food sources. In contrast, the rutted trails made by deer walking through the woods are a cue, not shaped by natural selection for communication among deer but a simple by-product of animals walking along the same path. SO systems are based on both, but whereas signals tend to be conspicuous, since natural selection has shaped signals to be strong and effective displays, information transfer via cues is often more subtle and based on incidental stimuli in an organism’s social environment [45] […], in “Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes“, V. Ramos et al., available as a pre-print on arXiv, 2005.

Figure – Hybrid Artificial Intelligent Systems, new LNAI (Lecture Notes in Artificial Intelligence) series, volume 8073, Springer-Verlag book [original photo by my colleague David M.S. Rodrigues].

New work, new book. Last week one of our latest works came out, published by Springer. Edited by Jeng-Shyang Pan, Marios M. Polycarpou, Emilio Corchado et al., “Hybrid Artificial Intelligent Systems” comprises a full set of new papers in this hybrid area of Intelligent Computing (check the full article list at Springer). Our new paper “Second Order Swarm Intelligence” (pp. 411-420, Springer books link) was published in the Bio-inspired Models and Evolutionary Computation section.

Four different snapshots (click to enlarge) from one of my latest books, recently published in Japan: Ajith Abraham, Crina Grosan, Vitorino Ramos (Eds.), “Swarm Intelligence in Data Mining” (群知能と  データマイニング), Tokyo Denki University press [TDU], Tokyo, Japan, July 2012.

Fig.1 – (click to enlarge) The optimal shortest path among N=1265 points depicting a Portuguese Navalheira crab, as a result of one of our latest Swarm-Intelligence based algorithms. The problem of finding the shortest path among N different points in space is NP-hard, known as the Travelling Salesman Problem (TSP), being one of the major and hardest benchmarks in Combinatorial Optimization (link) and Artificial Intelligence. (V. Ramos, D. Rodrigues, 2012)

This summer my kids grabbed a tiny Portuguese Navalheira crab on the shore. After a small photo session and some baby-sitting with a lettuce leaf, it was time to release it again into the ocean. Not only did he survive my kids, he is now entitled to a new World Wide Web on-line life. After the Shortest-path Sardine (link) with 1084 points, here is the Crab with 1265 points. The algorithm ran for as few as 110 iterations.

Fig. 2 – (click to enlarge) Our 1265 initial points depicting a TSP Portuguese Navalheira crab. Could you already envision a minimal tour between all these points?

As usual in Travelling Salesman Problems (TSP), we start with a set of points, in our case 1265 points or cities (fig. 2). Given a list of cities and their pairwise distances, the task is to find the shortest possible tour that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods.
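For a concrete sense of the formulation, here is a tiny nearest-neighbour baseline in Python: a greedy heuristic on toy coordinates, not the swarm algorithm used for the crab.

```python
from math import dist

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]   # a toy unit square

def nearest_neighbour_tour(cities):
    """Greedily visit the closest unvisited city, starting from city 0."""
    tour, remaining = [0], set(range(1, len(cities)))
    while remaining:
        last = tour[-1]
        nxt = min(remaining, key=lambda j: dist(cities[last], cities[j]))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def tour_length(tour, cities):
    """Length of the closed tour, returning to the starting city."""
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = nearest_neighbour_tour(cities)
print(tour_length(tour, cities))   # 4.0 -- the perimeter of the square
```

On a square the greedy tour happens to be optimal; on instances like kroA100.tsp it generally is not, which is precisely where pheromone-based search earns its keep.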

Fig. 3 – (click to enlarge) Again the shortest path Navalheira crab, where the optimal contour path (in black: first fig. above) with 1265 points (or cities) was filled in dark orange.

TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept of a city represents, for example, customers, soldering points, or DNA fragments, and the concept of distance represents travelling times or costs, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder.

What follows (fig. 4) is the original crab photo after image segmentation and just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem. The algorithm was then fed the extracted x,y coordinates of these data points (fig. 2) in order for it to discover the minimal path, in just 110 iterations. For extra details, pay a visit to the Shortest-path Sardine (link) done earlier.
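The extraction step can be sketched as thresholding followed by Gaussian jitter. The toy "image", threshold, and noise level below are assumptions, standing in for the real segmented photo and its parameters.

```python
import random

random.seed(42)   # reproducible jitter

# Toy 4x4 grayscale "image"; the real input was the segmented crab photo.
image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
THRESHOLD = 5     # segmentation cut-off (assumed)
SIGMA = 0.05      # std-dev of the Gaussian noise (assumed)

# Keep foreground pixels and jitter them into (x, y) data points for the TSP.
points = [(x + random.gauss(0, SIGMA), y + random.gauss(0, SIGMA))
          for y, row in enumerate(image)
          for x, v in enumerate(row)
          if v > THRESHOLD]

print(len(points))   # 4 -- one point per foreground pixel
```

The jitter breaks pixel-grid regularity so that no two cities coincide; on the real photo this pipeline yielded the 1265 scattered points of fig. 2.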

Fig. 4 – (click to enlarge) The original crab photo after some image processing as well as segmentation and just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem.

Figure (click to enlarge) – Cover from one of my books published last month (10 July 2012), “Swarm Intelligence in Data Mining”, recently translated and edited in Japan (by Tokyo Denki University press [TDU]). Cover image from Amazon.co.jp (url). The title was translated into 群知能と データマイニング. Funny also to see my own name translated into Japanese for the first time – I wonder if it’s Kanji. A brief synopsis follows:

(…) Swarm Intelligence (SI) is an innovative distributed intelligent paradigm for solving optimization problems, which originally took its inspiration from biological examples of swarming, flocking and herding phenomena in vertebrates. Particle Swarm Optimization (PSO) incorporates swarming behaviours observed in flocks of birds, schools of fish, or swarms of bees, and even human social behaviour, from which the idea emerged. Ant Colony Optimization (ACO) deals with artificial systems inspired by the foraging behaviour of real ants, which are used to solve discrete optimization problems. Historically, the notion of finding useful patterns in data has been given a variety of names, including data mining, knowledge discovery, information extraction, etc. Data Mining is an analytic process designed to explore large amounts of data in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. In order to achieve this, data mining uses computational techniques from statistics, machine learning and pattern recognition. Data mining and swarm intelligence may seem not to have many properties in common. However, recent studies suggest that they can be used together for several real-world data mining problems, especially when other methods would be too expensive or difficult to implement. This book deals with the application of swarm intelligence methodologies in data mining. Addressing the various issues of swarm intelligence and data mining using different intelligent approaches is the novelty of this edited volume. This volume comprises 11 chapters, including an introductory chapter giving the fundamental definitions and some important research challenges. Chapters were selected on the basis of fundamental ideas/concepts rather than the thoroughness of techniques deployed. (…) (more)

Video – TED lecture: Empathy, cooperation, fairness and reciprocity – caring about the well-being of others seems like a very human trait. But Frans de Waal shares some surprising videos of behavioural tests on primates and other mammals that show how many of these moral traits we all share. (TED, Nov. 2011, link).

Evolutionary explanations are built around the principle that all that natural selection can work with are the effects of behaviour – not the motivation behind it. This means there is only one logical starting point for evolutionary accounts, as explained by Trivers (2002, p. 6): “You begin with the effect of behaviour on actors and recipients; you deal with the problem of internal motivation, which is a secondary problem, afterwards. . . . [I]f you start with motivation, you have given up the evolutionary analysis at the outset.” ~ Frans B.M. de Waal, 2008.

Do animals have morals? And above all, did morality evolve? The question is pertinent in a broad range of quite different areas, in Computer Science and Norm Generation (e.g. link for an MSc thesis) as well as in bio-inspired Computation and Artificial Life, but here fresh answers come directly from Biology. Besides the striking video lecture above, what follows are two excerpts (abstract and conclusions) from a 2008 paper by Frans B.M. de Waal (Living Links Center lab., Emory University, link): de Waal, F.B.M. (2008). Putting the altruism back in altruism: The evolution of empathy. Ann. Rev. Psychol. 59: 279-300 (full PDF link):

(…) Abstract: Evolutionary theory postulates that altruistic behaviour evolved for the return-benefits it bears the performer. For return-benefits to play a motivational role, however, they need to be experienced by the organism. Motivational analyses should restrict themselves, therefore, to the altruistic impulse and its knowable consequences. Empathy is an ideal candidate mechanism to underlie so-called directed altruism, i.e., altruism in response to another’s pain, need, or distress. Evidence is accumulating that this mechanism is phylogenetically ancient, probably as old as mammals and birds. Perception of the emotional state of another automatically activates shared representations causing a matching emotional state in the observer. With increasing cognition, state-matching evolved into more complex forms, including concern for the other and perspective-taking. Empathy-induced altruism derives its strength from the emotional stake it offers the self in the other’s welfare. The dynamics of the empathy mechanism agree with predictions from kin selection and reciprocal altruism theory. (…)

(…) Conclusion: More than three decades ago, biologists deliberately removed the altruism from altruism. There is now increasing evidence that the brain is hardwired for social connection, and that the same empathy mechanism proposed to underlie human altruism (Batson 1991) may underlie the directed altruism of other animals. Empathy could well provide the main motivation making individuals who have exchanged benefits in the past to continue doing so in the future. Instead of assuming learned expectations or calculations about future benefits, this approach emphasizes a spontaneous altruistic impulse and a mediating role of the emotions. It is summarized in the five conclusions below: 1. An evolutionarily parsimonious account (cf. de Waal 1999) of directed altruism assumes similar motivational processes in humans and other animals. 2. Empathy, broadly defined, is a phylogenetically ancient capacity. 3. Without the emotional engagement brought about by empathy, it is unclear what could motivate the extremely costly helping behavior occasionally observed in social animals. 4. Consistent with kin selection and reciprocal altruism theory, empathy favours familiar individuals and previous cooperators, and is biased against previous defectors. 5. Combined with perspective-taking abilities, empathy’s motivational autonomy opens the door to intentionally altruistic altruism in a few large-brained species. (…) in, de Waal, F.B.M. (2008). Putting the altruism back in altruism: The evolution of empathy. Ann. Rev. Psychol. 59: 279-300 (full PDF link).

Frans de Waal’s research does not end here, of course. He is a ubiquitous influence and writer in many related areas, such as Cognition, Communication, Crowding/Conflict Resolution, Empathy and Altruism, Social Learning and Culture, Sharing and Cooperation and, last but not least, Behavioural Economics. All of his papers are freely available on-line, on a web page I vividly recommend visiting at length.

Complex adaptive systems (CAS), including ecosystems, governments, biological cells, and markets, are characterized by intricate hierarchical arrangements of boundaries and signals. In ecosystems, for example, niches act as semi-permeable boundaries, and smells and visual patterns serve as signals; governments have departmental hierarchies with memoranda acting as signals; and so it is with other CAS. Despite a wealth of data and descriptions concerning different CAS, there remain many unanswered questions about “steering” these systems. In Signals and Boundaries, John Holland (Wikipedia entry) argues that understanding the origin of the intricate signal/border hierarchies of these systems is the key to answering such questions. He develops an overarching framework for comparing and steering CAS through the mechanisms that generate their signal/boundary hierarchies. Holland lays out a path for developing the framework that emphasizes agents, niches, theory, and mathematical models. He discusses, among other topics, theory construction; signal-processing agents; networks as representations of signal/boundary interaction; adaptation; recombination and reproduction; the use of tagged urn models (adapted from elementary probability theory) to represent boundary hierarchies; finitely generated systems as a way to tie the models examined into a single framework; the framework itself, illustrated by a simple finitely generated version of the development of a multi-celled organism; and Markov processes.

in, Introduction to John H. Holland, “Signals and Boundaries – Building blocks for Complex Adaptive Systems“, Cambridge, Mass. : ©MIT Press, 2012.

Video – Water has Memory (from Oasis HD, Canada; link): just a liquid or much more? Many researchers are convinced that water is capable of “memory” by storing information and retrieving it. The possible applications are innumerable: limitless retention and storage capacity and the key to discovering the origins of life on our planet. Research into water is just beginning.

Water capable of processing information, as well as serving as a huge possible “container” for data, would be something remarkable. This theory was first proposed by the late French immunologist Jacques Benveniste, in a controversial article published in 1988 in Nature, as a way of explaining how homeopathy works (link). Benveniste’s theory has continued to be championed by some and disputed by others. The video clip above, from the Oasis HD Channel, shows some fascinating recent experiments with water “memory” from the Aerospace Institute of the University of Stuttgart in Germany. The results with the different types of flowers immersed in water are particularly evocative.

This line of research also reminds me of an old and quite interesting paper by a colleague, Chrisantha Fernando. Together with Sampsa Sojakka, he showed that waves produced on the surface of water can be used as the medium for a Wolfgang Maass “Liquid State Machine” (link) that pre-processes inputs, allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Amazingly, water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. What follows is the abstract of their paper, entitled “Pattern Recognition in a Bucket“, as well as a PDF link to it:

Figure – Typical wave patterns for the XOR task. Top-Left: [0 1] (right motor on), Top-Right: [1 0] (left motor on), Bottom-Left: [1 1] (both motors on), Bottom-Right: [0 0] (still water). Sobel filtered and thresholded images on right. (from Fig. 3 in Chrisantha Fernando and Sampsa Sojakka, “Pattern Recognition in a Bucket“, ECAL proc., European Conference on Artificial Life, 2003.)

[…] Abstract. This paper demonstrates that the waves produced on the surface of water can be used as the medium for a “Liquid State Machine” that pre-processes inputs so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Interference between waves allows non-linear parallel computation upon simultaneous sensory inputs. Temporal patterns of stimulation are converted to spatial patterns of water waves upon which a linear discrimination can be made. Whereas Wolfgang Maass’ Liquid State Machine requires fine tuning of the spiking neural network parameters, water has inherent self-organising properties such as strong local interactions, time-dependent spread of activation to distant areas, inherent stability to a wide variety of inputs, and high complexity. Water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. An analogy is made between water molecules and neurons in a recurrent neural network. […] in Chrisantha Fernando and Sampsa Sojakka, “Pattern Recognition in a Bucket“, ECAL proc., European Conference on Artificial Life, 2003. [PDF link]
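The trick in the abstract above — a fixed, untrained nonlinear “liquid” whose state makes XOR linearly separable for a simple perceptron readout — can be sketched in a few lines. Here random tanh units stand in, very loosely, for the interfering water waves; nothing about the liquid is trained, only the linear readout:

```python
import math
import random

random.seed(42)

# XOR is not linearly separable in its raw 2-D input space.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

# Fixed random "liquid": an untrained nonlinear projection into 20 dimensions.
N = 20
W = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(N)]
b = [random.uniform(-1, 1) for _ in range(N)]

def liquid(x):
    # The high-dimensional "wave pattern" produced by an input.
    return [math.tanh(W[j][0] * x[0] + W[j][1] * x[1] + b[j]) for j in range(N)]

# Simple perceptron readout, trained only on the liquid's state.
w = [0.0] * N
bias = 0.0
for _ in range(200):
    for x, t in zip(X, y):
        s = liquid(x)
        out = 1 if sum(wi * si for wi, si in zip(w, s)) + bias > 0 else 0
        err = t - out
        w = [wi + 0.1 * err * si for wi, si in zip(w, s)]
        bias += 0.1 * err

preds = [1 if sum(wi * si for wi, si in zip(w, liquid(x))) + bias > 0 else 0
         for x in X]
```

Four points projected into a generic 20-dimensional nonlinear feature space are almost surely linearly separable, so the perceptron converges — exactly the division of labour the paper exploits, with the bucket of water doing the projection for free.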

“Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” ~ The Red Queen, in “Through the Looking-Glass, and What Alice Found There“, Charles Lutwidge Dodgson, 1871.

Your move. Alice suddenly found herself in a strange new world, and quickly needed to adapt. As C.L. Dodgson (better known as Lewis Carroll) brilliantly puts it, all the running you can do does not suffice at all. This is a world (Wonderland) with different “physical” laws or “societal norms”. Surprisingly, those patterns also appear quite familiar to us, here, on planet Earth. As an example, the quote above is mainly the paradigm for biological co-evolution, in the form of the Red Queen effect.

In Wonderland (the first book), Alice follows the White Rabbit, which ends up leading her into this strange habitat, where apparently “normal” “physical” laws do not apply. In this second book, however, Alice needs to overcome a series of great obstacles – structured as phases in a game of chess – in order to become a queen. As she moves on, several other enigmatic characters appear. Punctuated as well as surrounded by circular arguments and logical paradoxes, Alice must keep going in order to find the “other side of the mirror“.

There are other amusing parallels, too. The story goes that Lewis Carroll gave a copy of “Alice in Wonderland” to Queen Victoria, who then asked him to send her his next book, as she had fancied the first one. The joke is that the book was (!) … “An Elementary Treatise on Determinants, With Their Application to Simultaneous Linear Equations and Algebraic Equations (link)”. Lewis Carroll then went on to write a follow-up to Alice in Wonderland entitled “Through the Looking-Glass, and What Alice Found There”, which features a chess board in its first pages, and where chess was used to give her, Queen Victoria, a glimpse of what Alice explored in this new world.

In fact, the diagram on the first pages contains not only the entire chapter structure of the novel but also how Alice moves on: basically, each move takes the reader to a new chapter (see below) representing it. The entire book can be found here in PDF format. Besides the beauty and philosophical value of Dodgson‘s novel in itself, and its repercussions on present-day co-evolution research as a metaphor, this is most probably the first “chess-literature” diagram ever composed. Now, of course, the pieces are not white and black, but instead white and red (note that the pieces in c1 – queen – and c6 – king – are white). Lewis Carroll’s novel then opens like this: White pawn (Alice) to play, and win in eleven moves.

However, in order to enter this world you must follow its “rules”. “Chess” here is not normal, just as Wonderland was not normal to Alice’s eyes. Remember: if you do all the running you can do, you will find yourself in the same place. Better if you can run twice as fast! Lewis Carroll’s first words in his second book (at the preface / PDF link above) advise us:

(…) As the chess-problem, given on a previous page, has puzzled some of my readers, it may be well to explain that it is correctly worked out, so far as the moves are concerned. The alternation of Red and White is perhaps not so strictly observed as it might be, and the ‘castling’ of the three Queens is merely a way of saying that they entered the palace; but the ‘check’ of the White King at move 6, the capture of the Red Knight at move 7, and the final ‘check-mate’ of the Red King, will be found, by any one who will take the trouble to set the pieces and play the moves as directed, to be strictly in accordance with the laws of the game. (…) Lewis Carroll, Christmas, 1896.

Now, the solution could be delivered in various languages, but here is the one I prefer. It was encoded in classic BASIC, running on a ZX Spectrum emulator. Here is an excerpt:

1750 LET t$="11. Alice takes Red Queen & wins (checkmate)": GO SUB 7000 (…)
9000 REM
9001 REM ** ZX SPECTRUM MANUAL Page 96 Chapter 14. **
9002 REM
9004 RESTORE 9000 (…)
9006 LET b=BIN 01111100: LET c=BIN 00111000: LET d=BIN 00010000
9010 FOR n=1 TO 6: READ p$: REM 6 pieces
9020 FOR f=0 TO 7: REM read piece into 8 bytes
9030 READ a: POKE USR p$+f,a
9040 NEXT f
9100 REM bishop
9110 DATA "b",0,d,BIN 00101000,BIN 01000100
9120 DATA BIN 01101100,c,b,0
9130 REM king
9140 DATA "k",0,d,c,d
9150 DATA c,BIN 01000100,c,0
9160 REM rook
9170 DATA "r",0,BIN 01010100,b,c
9180 DATA c,b,b,0
9190 REM queen
9200 DATA "q",0,BIN 01010100,BIN 00101000,d
9210 DATA BIN 01101100,b,b,0
9220 REM pawn
9230 DATA "p",0,0,d,c
9240 DATA c,d,b,0
9250 REM knight
9260 DATA "n",0,d,c,BIN 01111000
9270 DATA BIN 00011000,c,b,0

(…) full code on [link]

This is BASIC-ally Alice’s story …

“I would like to thank flocks, herds, and schools for existing: nature is the ultimate source of inspiration for computer graphics and animation.” in Craig Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model“, (paper link) published in Computer Graphics, 21(4), July 1987, pp. 25-34. (ACM SIGGRAPH ’87 Conference Proceedings, Anaheim, California, July 1987.)

[…] In conclusion, much elegant work has been done starting from activated mono-nucleotides. However, the prebiotic synthesis of a specific macromolecular sequence does not seem to be at hand, giving us the same problem we have with polypeptide sequences. Since there is no ascertained prebiotic pathway to their synthesis, it may be useful to try to conceive some working hypothesis. In order to do that, I would first like to consider a preliminary question about the proteins we have on our Earth: “Why these proteins … and not other ones?”. Discussing this question can in fact give us some clue as to how orderly sequences might have originated. […] A grain of sand in the Sahara – This is indeed a central question in our world of proteins. How have they been selected out? There is a well-known arithmetic at the basis of this question, (see for example De Duve, 2002) which says that for a polypeptide chain with 100 residues, 20^100 different chains are in principle possible: a number so large that it does not convey any physical meaning. In order to grasp it somewhat, consider that the proteins existing on our planet are of the order of a few thousand billions, let us say around 10^13 (and with all isomers and mutations we may arrive at a few orders of magnitude more). This sounds like a large number. However, the ratio between the possible (say 20^100) and the actual chains (say 10^15) corresponds approximately to the ratio between the radius of the universe and the radius of a hydrogen atom! Or, to use another analogy, nearer to our experience, a ratio many orders of magnitude greater than the ratio between all the grains of sand in the vast Sahara and a single grain. The space outside “our atom”, or our grain of sand, is the space of the “never-born proteins”, the proteins that are not with us – either because they didn’t have the chance to be formed, or because they “came” and were then obliterated. 
This arithmetic, although trivial, bears an important message: in order to reproduce our proteins we would have to hit the target of that particular grain of sand in the whole Sahara. Christian De Duve, in order to avoid this “sequence paradox” (De Duve, 2002), assumes that all started with short polypeptides – and this is in fact reasonable. However, the theoretically possible total number of long chains does not change if you start with short peptides instead of amino acids. The only way to limit the final number of possible chains would be to assume, for example, that peptide synthesis started only under a particular set of conditions of composition and concentration, thus bringing contingency into the picture. As a corollary, then, this set of proteins born as a product of contingency would have been the one that happened to start life. Probably there is no way of eliminating contingency from the aetiology of our set of proteins. […]
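The back-of-the-envelope arithmetic in the excerpt above is easy to verify in a couple of lines (the estimate of actual chains, 10^15, follows the excerpt's "few orders of magnitude more" reading of the 10^13 figure):

```python
from math import log10

possible = 20 ** 100   # polypeptide chains of length 100 over the 20 amino acids
actual = 10 ** 15      # generous upper estimate of chains that actually exist
ratio = possible // actual

print(f"possible chains  ~ 10^{log10(possible):.0f}")  # ~ 10^130
print(f"never-born ratio ~ 10^{log10(ratio):.0f}")     # ~ 10^115
```

Even the drastic energetic pruning discussed below — discarding 99.9999% of chains — only shaves six orders of magnitude off an exponent of over a hundred, which is Luisi's "not a big deal" point.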

Figure – The ratio between the theoretical number of possible proteins and their actual number is many orders of magnitude greater than the ratio between all sand of the vast Sahara and a single grain of sand (caption on page 69).

[…] The other objection to the numerical meaning suggested by Figure (above) is that the maximum number of proteins is much smaller because a great number of chain configurations are prohibited for energetic reasons. This is reasonable. Let us then assume that 99.9999% of theoretically possible protein chains cannot exist because of energy reasons. This would leave only one protein out of one million, reducing the number of never-born proteins from, say, 10^60 to 10^54. Not a big deal. Of course one could also assume that the total number of energetically allowed proteins is extremely small, no larger than, say, 10^10. This cannot be excluded a priori, but is tantamount to saying that there is something very special about “our” proteins, namely that they are energetically special. Whether or not this is so can be checked experimentally as will be seen later in a research project aimed at this target. The assumption that “our” proteins have something special from the energetic point of view, would correspond to a strict deterministic view that claims that the pathway leading to our proteins was determined, that there was no other possible route. Someone adhering strictly to a biochemical anthropic principle might even say that these proteins are the way they are in order to allow life and the development of mankind on Earth. The contingency view would recite instead the following: if our proteins or nucleic acids have no special properties from the point of view of thermodynamics, then run the tape again and a different “grain of sand” might be produced – one that perhaps would not have supported life. Some may say at this point that proteins derive in any case from nucleic-acid templates – perhaps through a primitive genetic code. However, this is really no argument – it merely shifts the problem of the etiology of peptide chains to etiology of oligonucleotide chains, all arithmetic problems remaining more or less the same. […] pp. 68-70, in Pier Luigi Luisi, “The Emergence of Life: From Chemical Origins to Synthetic Biology“, Cambridge University Press, US, 2006.

ECCS’11 – “Spatio-Temporal Dynamics on Co-Evolved Stigmergy”, Vitorino Ramos, David M.S. Rodrigues, Jorge Louçã.

Ever tried to solve a problem whose problem statement is itself constantly changing? Have a look at our approach:

Vitorino Ramos, David M.S. Rodrigues, Jorge Louçã, “Spatio-Temporal Dynamics on Co-Evolved Stigmergy“, in European Conference on Complex Systems, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

Abstract: Research over hard NP-complete Combinatorial Optimization Problems (COP’s) has focused in recent years on several robust bio-inspired meta-heuristics, like those involving Evolutionary Computation (EC) algorithmic paradigms. One particularly successful and well-known meta-heuristic approach is based on Swarm Intelligence (SI), i.e., the self-organized stigmergic-based property of a complex system whereby the collective behaviors of (unsophisticated) entities interacting locally with their environment cause coherent functional global patterns to emerge. This line of research, known as Ant Colony Optimization (ACO), uses a set of stochastic cooperating ant-like agents to find good solutions, using self-organized stigmergy as an indirect form of communication mediated by artificial pheromone, where agents deposit pheromone signs on the edges of the problem-related graph complex network, encompassing a family of successful algorithmic variations such as Ant Systems (AS), Ant Colony Systems (ACS), Max-Min Ant Systems (MaxMin AS) and Ant-Q.

Albeit extremely successful, these algorithms mostly rely on positive feedback, causing excessive algorithmic exploitation of the entire combinatorial search space. This is particularly evident on well-known benchmarks such as the symmetrical Traveling Salesman Problem (TSP). These systems being comprised of a large number of frequently similar components or events, the principal challenge is to understand how the components interact to produce a complex pattern of feasible solutions (in our case study, an optimal robust solution for hard NP-complete dynamic TSP-like combinatorial problems). A suitable approach is to first understand the role of two basic modes of interaction among the components of Self-Organizing (SO) Swarm-Intelligent-like systems: positive and negative feedback. Positive feedback promotes a snowballing auto-catalytic effect (e.g. trail pheromone upgrading over the network; exploitation of the search space), taking an initial change in a system and reinforcing that change in the same direction as the initial deviation (self-enhancement and amplification), allowing the entire colony to exploit some past and present solutions (environmental dynamic memory); negative feedback, such as pheromone evaporation, ensures that the overall learning system does not stabilize or freeze itself in a particular configuration (innovation; search space exploration). Although this kind of (global) delayed negative feedback (evaporation) is important, for the many reasons given above, there are strong indications that other negative feedbacks are present in nature which could also play a role in increased convergence, namely implicit-like negative feedbacks. As in the case of positive feedbacks, there is no reason not to explore increasingly distributed and adaptive algorithmic variations where negative feedback is also imposed implicitly (not only explicitly) over each network edge, while the entire colony seeks better answers in due time.

In order to overcome this hard search space exploitation-exploration compromise, our present algorithmic approach follows the route of very recent biological findings showing that forager ants lay attractive trail pheromones to guide nest mates to food, but where the effectiveness of foraging networks is improved if pheromones can also be used to repel foragers from unrewarding routes. Increasing empirical evidence for such a negative trail pheromone exists, deployed by Pharaoh’s ants (Monomorium pharaonis) as a ‘no entry‘ signal to mark unrewarding foraging paths. The new algorithm comprises a second-order approach to Swarm Intelligence, as pheromone-based ‘no entry’ signal cues were introduced, co-evolving with the standard pheromone distributions (collective cognitive maps) in the aforementioned known algorithms.

To exhaustively test its adaptive response and robustness, we turned to different dynamic optimization problems. Medium-sized and large-sized dynamic TSP problems were created. Settings and parameters such as environmental update frequencies, landscape-changing or network topological speed severity, and type of dynamics were tested. Results prove that the present co-evolved two-type pheromone swarm intelligence algorithm is able to quickly track increasingly swift changes on the dynamic TSP complex network, compared to standard algorithms.

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Dynamical Symmetrical Traveling Salesman Problems (TSP).
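One plausible way to picture the ‘no entry’ mechanism from the abstract above — and this is a hedged sketch of the idea, not the authors' actual update rule — is to let a second, repellent pheromone divide an edge's weight in the usual ACO roulette-wheel transition rule, while the attractive pheromone and the distance heuristic multiply it as usual:

```python
import random

def choose_next(current, unvisited, attr, repel, dist,
                alpha=1.0, beta=2.0, gamma=1.0):
    # Attractive pheromone and the 1/distance heuristic raise an edge's weight
    # (standard ACO); the repellent 'no entry' pheromone divides it.
    weights = []
    for j in unvisited:
        w = (attr[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta) \
            / ((1.0 + repel[current][j]) ** gamma)
        weights.append(w)
    # Roulette-wheel selection proportional to the weights.
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]

# Tiny 3-city example: edge 0->1 is heavily marked as 'no entry'.
dist = [[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
attr = [[1.0] * 3 for _ in range(3)]
repel = [[0.0] * 3 for _ in range(3)]
repel[0][1] = 50.0

random.seed(0)
picks = [choose_next(0, [1, 2], attr, repel, dist) for _ in range(200)]
```

With equal distances and attractive trails, the marked edge is chosen only rarely — yet, because the repellent term is probabilistic rather than a hard constraint, the path is discouraged, not forbidden, and evaporation can reopen it.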


Fig. – Recovery times over several dynamical stress tests on the fl1577 TSP problem (1577-node graph) – 460 iterations max – swift changes every 150 iterations (20% = 314 nodes, 40% = 630 nodes, 60% = 946 nodes, 80% = 1260 nodes, 100% = 1576 nodes). [click to enlarge]

ECCS’11 – “From Standard to Second Order Swarm Intelligence Phase-Space Maps”, David Rodrigues, Jorge Louçã, Vitorino Ramos.

David M.S. Rodrigues, Jorge Louçã, Vitorino Ramos, “From Standard to Second Order Swarm Intelligence Phase-space maps“, in European Conference on Complex Systems, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

Abstract: Standard stigmergic approaches to Swarm Intelligence encompass the use of a set of stochastic cooperating ant-like agents to find optimal solutions, using self-organized stigmergy as an indirect form of communication mediated by a singular artificial pheromone. Agents deposit pheromone signs on the edges of the problem-related graph to give rise to a family of successful algorithmic approaches entitled Ant Systems (AS), Ant Colony Systems (ACS), among others. These mainly rely on positive feedback to search for an optimal solution in a large combinatorial space. The present work shows how, using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedback achieves better results than single-positive-feedback systems. This follows the route of very recent biological findings showing that forager ants, while laying attractive trail pheromones to guide nest mates to food, also gain foraging effectiveness through pheromones that repel foragers from unrewarding routes. The algorithm presented here takes inspiration precisely from this biological observation.

The new algorithm was exhaustively tested on a series of well-known benchmarks over hard NP-complete Combinatorial Optimization Problems (COP’s), running on symmetrical Traveling Salesman Problems (TSP). Different network topologies and stress tests were conducted over small TSP’s (eil51.tsp; eil78.tsp; kroA100.tsp), medium-sized ones (d198.tsp; lin318.tsp; pcb442.tsp; att532.tsp; rat783.tsp) as well as large-sized ones (fl1577.tsp; d2103.tsp) [numbers here referring to the number of nodes in the network]. We show that the new co-evolved stigmergic algorithm compared favorably against the benchmarks. The algorithm was able to equal or significantly improve every instance of those standard algorithms, not only in the realm of the Swarm Intelligence AS/ACS approaches, but also in other computational paradigms such as Genetic Algorithms (GA), Evolutionary Programming (EP), SOM (Self-Organizing Maps) and SA (Simulated Annealing). In order to understand in depth how a second co-evolved pheromone was able to drive the collective system to such results, a refined phase-space map was produced, mapping the pheromone ratios between a pure Ant Colony System (where no negative feedback besides pheromone evaporation is present) and the present second-order approach. The evaporation rates of the different pheromones were also studied and their influence on the outcomes of the algorithm is shown. A final discussion on the phase map is included. This work has implications for the way large combinatorial problems are addressed, as the double-feedback mechanism shows improvements over single-positive-feedback mechanisms in terms of convergence speed and final results.

Keywords: Stigmergy, Co-Evolution, Self-Organization, Swarm Intelligence, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Traveling Salesman Problems (TSP), phase-space.
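A tiny numeric sketch of the evaporation interplay studied above (the rates here are hypothetical, chosen only for illustration): if the repellent pheromone evaporates faster than the attractive one, a ‘no entry’ blockade reopens after a bounded number of steps rather than being permanent.

```python
import math

# Hypothetical evaporation rates: the repellent pheromone fades faster.
rho_attract, rho_repel = 0.02, 0.10

attract, repel = 1.0, 1.0
for _ in range(50):
    attract *= 1.0 - rho_attract   # slow decay of the attractive trail
    repel *= 1.0 - rho_repel       # fast decay of the 'no entry' mark

# Steps until the repellent signal drops below a reopening threshold:
# n = ceil(log(threshold) / log(1 - rho_repel)).
threshold = 0.05
lifetime = math.ceil(math.log(threshold) / math.log(1.0 - rho_repel))  # → 29
```

After 50 steps the attractive trail retains over a third of its strength while the repellent mark is essentially gone — the "temporary blockade" behaviour controlled by the ratio of the two evaporation rates.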

Fig. – Comparing convergence results between Standard algorithms vs. Second Order Swarm Intelligence, over TSP fl1577 (click to enlarge).

Picture – The European Conference on Complex Systems (ECCS’11 – link) at one of the main Austrian newspapers Der Standard: “Die ganze Welt als Computersimulation” (link), Klaus Taschwer, Der Standard, 14 September [click to enlarge – photo taken at the conference on Sept. 15, Vienna 2011].

Take Darwin, for example: would Caltech have hired Darwin? Probably not. He had only vague ideas about some of the mechanisms underlying biological Evolution. He had no way of knowing about genetics, and he lived before the discovery of mutations. Nevertheless, he did work out, from the top down, the notion of natural selection and the magnificent idea of the relationship of all living things.” Murray Gell-Mann in “Plectics“, excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995).

To be honest, I didn’t enjoy this title, but all of us have had our fair share of journalists by now. After all, 99% of us don’t do computer simulation. We are all after the main principles, and their direct applications.

During 5 days (12-16 Sept.), with around 700 attendees, the Vienna 2011 conference revolved around main themes such as Complexity & Networks (XNet), Current Trends in Game Theory, Complexity in Energy Infrastructures, Emergent Properties in Natural and Artificial Complex Systems (EPNACS), Complexity and the Future of Transportation Systems, Econophysics, Cultural and Opinion Dynamics, Dynamics on and of Complex Networks, Frontiers in the Theory of Evolution, and – among many others – Dynamics of Human Interactions.

Those who know me will definitely understand that I was mainly attending the sessions underlined above, the last one (Frontiers in Evolution) being one of my favorites among all these ECCS years. All in all, the conference had high-quality works (daily, there were about 3-4 works I definitely think should be followed in the future), and those deserved more attention (my main criticism of the conference organization goes here). Naturally, the newspaper article also reflects on FuturICT, historically one of the major European scientific projects ever undertaken (along, probably, with the Geneva LHC), whose teams spread across Europe, including Portugal with a representative team of 7 members present at the conference, led by Jorge Louçã, the editor and organizer of the previous ECCS’10 last year in Lisbon.

Video – “… they forgot to say: in principle!“. Ricard Solé addressing the topic of a Morphospace for Biological Computation at ECCS’11 (European Conference on Complex Systems), while keeping his good humor on.

Let me draw your attention anyway to 4 outstanding lectures. Peter Schuster (link), on the first day, dissected the source of complexity in Evolution, battling between – as he puts it – two paradoxes: (1) evolution is an enormously complex process, and (2) biological evolution on Earth proceeds from lower towards higher complexity. Earlier that morning – opening the conference – Murray Gell-Mann (link), who co-founded the Santa Fe Institute in 1984, gave a wonderful lecture on Generalized Entropies. Despite his age, the winner of the 1969 Nobel Prize in Physics for his work on the theory of elementary particles gladly turned his interest in the 1990s to the theory of Complex Adaptive Systems (CAS). Next, Albert-László Barabási (link) tamed complexity in Controlling Networks. Finally, on the last day, closing the conference in pure gold, Ricard Solé (link) addressed the topic of a Morphospace for Biological Computation, an amazing lecture with a powerful topic for which – nevertheless – I felt he had too little time (20 minutes) for such a rich endeavor. He by no means, however, lost his good humor during the talk (check my video above). Next year the conference will be held in Brussels, and judging by the poster design alone, it promises. Go ants, go … !

Picture – The European Conference on Complex Systems (ECCS’12 – link) poster design for next year in Brussels.

[...] People should learn how to play Lego with their minds. Concepts are building bricks [...] V. Ramos, 2002.
