
Figure – Complete neural circuit controller of the virtual ant, with pheromone sensing (page 3, fig. 2), from Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos. [URL: http://arxiv.org/abs/1507.08467 ]

Intelligence and decision in foraging ants: individual or collective? Internal or external? What is the right balance between the two? Can one have internal intelligence without external intelligence? Can one take examples from nature to build in silico artificial lives that present us with interesting patterns? In this paper, to be presented in early September in Exeter, UK, at UKCI 2015, we explore a model of foraging ants (available on arXiv [PDF] and ResearchGate).

Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos; “A Model for Foraging Ants, Controlled by Spiking Neural Networks and Double Pheromones”, UKCI 2015 Computational Intelligence – University of Exeter, UK, September 2015.

Abstract: A model of an Ant System where ants are controlled by a spiking neural circuit and a second order pheromone mechanism in a foraging task is presented. A neural circuit is trained for individual ants and subsequently the ants are exposed to a virtual environment where a swarm of ants performs a resource foraging task. The model comprises an associative and unsupervised learning strategy for the neural circuit of the ant. The neural circuit adapts to the environment by means of classical conditioning. The initially unknown environment includes different types of stimuli representing food (rewarding) and obstacles (harmful) which, when they come in direct contact with the ant, elicit a reflex response in the motor neural system of the ant: moving towards or away from the source of the stimulus. The spiking neural circuit of the ant is trained to identify food and obstacles and to move towards the former and avoid the latter. The ants are released on a landscape with multiple food sources where one ant alone would have difficulty harvesting the landscape to maximum efficiency. In this case the introduction of a double pheromone mechanism (positive and negative reinforcement feedback) yields better results than traditional ant colony optimization strategies. Traditional ant systems include mainly a positive reinforcement pheromone. This approach uses a second pheromone that acts as a marker for forbidden paths (negative feedback). This blockade is not permanent and is controlled by the evaporation rate of the pheromones. The combined action of both pheromones acts as a collective stigmergic memory of the swarm, which reduces the search space of the problem. This paper explores how the adaptation and learning abilities observed in biologically inspired cognitive architectures are synergistically enhanced by swarm optimization strategies. The model portrays two forms of artificial intelligent behaviour: at the individual level the spiking neural network is the main controller, and at the collective level the pheromone distribution is a map towards the solution that emerges from the colony. The presented model is an important pedagogical tool, as it is also an easy-to-use library that allows access to the spiking neural network paradigm from inside NetLogo, a language used mostly in agent-based modelling and experimentation with complex systems.
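
To make the classical-conditioning idea more concrete, here is a minimal, hypothetical Python sketch (not the paper's NetLogo spiking library): a leaky integrate-and-fire motor neuron receives a hard-wired reflex input (direct contact with food or an obstacle) plus a plastic synapse from a stimulus-sensing neuron, and a Hebbian-style rule strengthens that synapse whenever sensor and motor activity coincide, so the virtual ant ends up reacting to the stimulus alone. All names and constants are illustrative assumptions.

```python
class LIFNeuron:
    """Minimal leaky integrate-and-fire unit (illustrative constants, not the paper's)."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, input_current):
        self.v = self.v * self.leak + input_current
        if self.v >= self.threshold:
            self.v = 0.0   # reset after firing
            return 1       # spike
        return 0

# One plastic synapse (sensor -> motor), trained by the coincidence of sensor
# activity with the hard-wired reflex input (classical conditioning).
w_sensor = 0.0        # naive ant: the sensor alone cannot trigger the motor neuron
W_REFLEX = 1.2        # the unconditioned stimulus (direct contact) always triggers the reflex
LEARNING_RATE = 0.1
motor = LIFNeuron()

for trial in range(50):
    sensor_spike = 1                                   # conditioned stimulus is present
    reflex_input = W_REFLEX if trial < 20 else 0.0     # direct contact only in early trials
    motor_spike = motor.step(sensor_spike * w_sensor + reflex_input)
    if sensor_spike and motor_spike:                   # Hebbian-style potentiation
        w_sensor = min(1.5, w_sensor + LEARNING_RATE)

print(f"learned sensor->motor weight: {w_sensor:.2f}")
print("motor now fires on the sensor alone:", motor.step(w_sensor) == 1)
```

In the paper the same principle runs with genuine spiking dynamics and both appetitive (food) and aversive (obstacle) pathways inside NetLogo; the sketch above only illustrates the appetitive half.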


Figure – Two simplices a and b connected by the 2-dimensional face, the triangle {1;2;3} (David M.S. Rodrigues, “Reading the News Through its Structure: New Hybrid Connectivity Based Approaches”). In the analysis of the time-line of The Guardian newspaper (link) the system used feature vectors based on the frequency of words and then computed similarity between documents based on those feature vectors. This is a purely statistical approach that requires great computational power and that becomes difficult for problems with large feature vectors and many documents. Feature vectors with 100,000 or more items are common, and computing similarities between these documents becomes cumbersome. Instead of computing distance (or similarity) matrices between documents from feature vectors, the present approach explores the possibility of inferring the distance between documents from the Q-analysis description. Q-analysis provides a very natural notion of connectivity between the simplices of the structure; in the relation studied, documents are connected to each other through shared sets of tags entered by the journalists. Also in this framework, eccentricity is defined as a measure of the relatedness of one simplex in relation to another [7].

David M.S. Rodrigues and Vitorino Ramos, “Traversing News with Ant Colony Optimisation and Negative Pheromones” [PDF], accepted as preprint for oral presentation at the European Conference on Complex Systems, ECCS’14, Lucca, Italy, Sept. 22-26, 2014.

Abstract: The past decade has seen the rapid development of the online newsroom. News published online has become the main outlet of news, surpassing traditional printed newspapers. This poses challenges to both the production and the consumption of that news. With so many sources of information available, it is important to find ways to cluster and organise the documents if one wants to understand this new system. Traditional approaches to the problem of clustering documents usually embed the documents in a suitable similarity space. Previous studies have reported on the impact of the similarity measures used for clustering of textual corpora [1]. These similarity measures are usually calculated for bag-of-words representations of the documents. This makes the final document-word matrix high dimensional. Feature vectors with more than 10,000 dimensions are common and algorithms have severe problems with the high dimensionality of the data. A novel bio-inspired approach to the problem of traversing the news is presented. It finds Hamiltonian cycles over documents published by the newspaper The Guardian. A Second Order Swarm Intelligence algorithm based on Ant Colony Optimisation was developed [2, 3] that uses a negative pheromone to mark unrewarding paths with a “no-entry” signal. This approach follows recent findings of negative pheromone usage in real ants [4].

In this case study the corpus of data is represented as a bipartite relation between documents and keywords entered by the journalists to characterise the news. A new similarity measure between documents is presented, based on the Q-analysis description [5, 6, 7] of the simplicial complex formed between documents and keywords. The eccentricity between documents (two simplices) is then used as a novel measure of similarity between documents. The results prove that the Second Order Swarm Intelligence algorithm performs better in benchmark problems of the travelling salesman problem, with faster convergence and optimal results. The addition of the negative pheromone as a no-entry signal improves the quality of the results. The application of the algorithm to the corpus of news of The Guardian creates a coherent navigation system among the news. This allows users to navigate the news published during a certain period of time in a semantic sequence instead of a time sequence. This work has broader application, as it can be applied to many cases where the data is mapped to bipartite relations (e.g. protein expressions in cells, sentiment analysis, brand awareness in social media, routing problems), as it highlights the connectivity of the underlying complex system.
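
For readers unfamiliar with Q-analysis, the sketch below illustrates how such an eccentricity can be computed directly from the document-keyword relation, under the classical Atkin-style definition ecc = (q_hat - q_check) / (q_check + 1), where q_hat is the dimension of a document's simplex and q_check the highest dimension at which it shares a face with another document. The toy documents, keywords and the exact formula are illustrative assumptions, not the paper's data or code.

```python
# Toy bipartite relation: each document (a simplex) is the set of journalist-entered
# keywords (vertices) attached to it. Purely illustrative data.
docs = {
    "doc_a": {"ants", "pheromone", "optimisation", "swarm"},
    "doc_b": {"ants", "pheromone", "robots"},
    "doc_c": {"elections", "politics"},
}

def shared_face_dim(d1, d2):
    """Dimension of the shared face: (number of common keywords) - 1; -1 if disjoint."""
    return len(docs[d1] & docs[d2]) - 1

def eccentricity(d):
    """Atkin-style eccentricity (q_hat - q_check) / (q_check + 1): low values mean the
    document is well integrated in the simplicial complex, high values mean it stands apart."""
    q_hat = len(docs[d]) - 1
    q_check = max(shared_face_dim(d, other) for other in docs if other != d)
    if q_check < 0:                 # completely disconnected simplex
        return float("inf")
    return (q_hat - q_check) / (q_check + 1)

for d in docs:
    print(d, "eccentricity =", eccentricity(d))
# doc_a and doc_b share two keywords and get low eccentricity; doc_c shares none
# and is reported as disconnected (infinite eccentricity).
```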

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Hamiltonian cycles, Text Mining, Textual Corpora, Information Retrieval, Knowledge Discovery, Sentiment Analysis, Q-Analysis, Data Mining, Journalism, The Guardian.

References:

[1] Alexander Strehl, Joydeep Ghosh, and Raymond Mooney. Impact of similarity measures on web-page clustering. In Workshop on Artificial Intelligence for Web Search (AAAI 2000), pages 58-64, 2000.
[2] David M. S. Rodrigues, Jorge Louçã, and Vitorino Ramos. From standard to second-order Swarm Intelligence phase-space maps. In Stefan Thurner, editor, 8th European Conference on Complex Systems, Vienna, Austria, 9 2011.
[3] Vitorino Ramos, David M. S. Rodrigues, and Jorge Louçã. Second order Swarm Intelligence. In Jeng-Shyang Pan, Marios M. Polycarpou, Michał Wozniak, André C.P.L.F. Carvalho, Hector Quintian, and Emilio Corchado, editors, HAIS’13. 8th International Conference on Hybrid Artificial Intelligence Systems, volume 8073 of Lecture Notes in Computer Science, pages 411-420. Springer Berlin Heidelberg, Salamanca, Spain, 9 2013.
[4] Elva J.H. Robinson, Duncan Jackson, Mike Holcombe, and Francis L.W. Ratnieks. No entry signal in ant foraging (Hymenoptera: Formicidae): new insights from an agent-based model. Myrmecological News, 10(120), 2007.
[5] Ronald Harry Atkin. Mathematical Structure in Human Affairs. Heinemann Educational Publishers, 48 Charles Street, London, 1 edition, 1974.
[6] J. H. Johnson. A survey of Q-analysis, part 1: The past and present. In Proceedings of the Seminar on Q-analysis and the Social Sciences, University of Leeds, 9 1983.
[7] David M. S. Rodrigues. Identifying news clusters using Q-analysis and modularity. In Albert Diaz-Guilera, Alex Arenas, and Alvaro Corral, editors, Proceedings of the European Conference on Complex Systems 2013, Barcelona, 9 2013.

In order to solve hard combinatorial optimization problems (e.g. optimally scheduling students and teachers along a week plan over several different classes and classrooms), one way is to computationally mimic how ants forage the vicinity of their habitats searching for food. Among a myriad of possible routes, ants collectively make the optimal solution (minimizing the travel distance) emerge by using stigmergic signal traces, or pheromones, which also dynamically change under evaporation.

Current algorithms, however, make use only of a positive-feedback type of pheromone along their search; that is, if they collectively visit a good low-distance route (a minimal pseudo-solution to the problem) they tend to reinforce that signal for their colleagues. Nothing wrong with that, on the contrary, but no one knows whether an even shorter alternative route lies just around the corner. In this global search endeavour, like a snowballing effect, positive feedback tends to give credit to the exploitation of solutions but not to the – also useful – exploration side. The upcoming potential solutions can thus crystallize and freeze, while a small change on some parts of the whole route could, on the other hand, successfully improve the global result.
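
To make the idea tangible, here is a rough, hypothetical Python sketch of how a second, repulsive pheromone could be folded into a standard ACS-style transition rule. It is only an illustration of the two-signal principle, with invented parameter names and weightings, not the published algorithm.

```python
import random

# Illustrative constants (assumptions, not tuned values from the paper).
ALPHA, BETA, GAMMA = 1.0, 2.0, 1.0     # weights: positive pheromone, distance heuristic, negative pheromone
RHO_POS, RHO_NEG = 0.10, 0.05          # evaporation rates of the two pheromone fields

def choose_next_city(current, unvisited, dist, tau_pos, tau_neg):
    """Roulette-wheel choice: the positive pheromone attracts ants to an edge,
    while the negative ('no entry') pheromone discounts its desirability."""
    def desirability(j):
        attract = (tau_pos[current][j] ** ALPHA) * ((1.0 / dist[current][j]) ** BETA)
        repel = (1.0 + tau_neg[current][j]) ** GAMMA
        return attract / repel
    weights = [desirability(j) for j in unvisited]
    return random.choices(unvisited, weights=weights, k=1)[0]

def evaporate(tau_pos, tau_neg):
    """Both fields decay over time, so a 'no entry' blockade is never permanent."""
    for i in tau_pos:
        for j in tau_pos[i]:
            tau_pos[i][j] *= (1.0 - RHO_POS)
            tau_neg[i][j] *= (1.0 - RHO_NEG)
```

After each tour one would then deposit positive pheromone along the best tours found and negative pheromone along clearly unrewarding edges; the 0.95/0.05 split mentioned in the figure below is precisely that kind of balance between the two feedbacks.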

Figure – Influence of negative pheromone on the kroA100.tsp problem (fig. 1, page 6); values on the lines represent 1-ALPHA. A typical standard ACS (Ant Colony System) is represented here by the line with value 0.0, while better results could be found by our approach, when using positive feedback (0.95) along with negative feedback (0.05). Not only do we obtain better results, we also find them earlier.

There is, however, an advantage when a second type of pheromone (a negative-feedback one) co-evolves with the first type, and we decided to research its impact. What we found out is that by using a second type of global feedback we can indeed speed up the search while achieving better results. In a way, it's like using two different types of evaporative traffic lights, in green and red, co-evolving together. And as a conclusion, we should indeed use a negative no-entry signal pheromone. In small amounts (0.05), but use it. Not only does this prevent the whole system from freezing on some solutions too soon, it also yields a better compromise over the search space of potential routes. The pre-print article is available here at arXiv. The abstract and keywords follow:

Vitorino Ramos, David M. S. Rodrigues, Jorge Louçã, “Second Order Swarm Intelligence” [PDF], in Hybrid Artificial Intelligent Systems, Lecture Notes in Computer Science, Springer-Verlag, Volume 8073, pp. 411-420, 2013.

Abstract: An artificial Ant Colony System (ACS) algorithm to solve general-purpose combinatorial Optimization Problems (COP), which extends previous AC models [21] by the inclusion of a negative pheromone, is here described. Several Travelling Salesman Problems (TSP) were used as benchmarks. We show that by using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedbacks achieves better results than single positive feedback systems. The algorithm was tested against known NP-complete combinatorial Optimization Problems, running on symmetrical TSPs. We show that the new algorithm compares favourably against these benchmarks, according to recent biological findings by Robinson [26,27] and Grüter [28], where “no entry” signals and negative feedback allow a colony to quickly reallocate the majority of its foragers to superior food patches. This is the first time an extended ACS algorithm is implemented with these successful characteristics.

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Travelling Salesman Problems (TSP).

Photo – Signal traces, September 2013, Vitorino Ramos.

[…] While pheromone reinforcement plays a role as the system’s memory, evaporation allows the system to adapt and dynamically decide, without any type of centralized or hierarchical control […], below.

“[…] whereas signals tends to be conspicuous, since natural selection has shaped signals to be strong and effective displays, information transfer via cues is often more subtle and based on incidental stimuli in an organism’s social environment […]”, Seeley, T.D., “The Honey Bee Colony as a Super-Organism”, American Scientist, 77, pp.546-553, 1989.

[…] If an ant colony, on its cyclic way from the nest to a food source (and back again), has only two possible branches around an obstacle, one bigger and the other smaller (the bridge experiment [7,52]), pheromone will accumulate – as time passes – on the shorter path, simply because any ant that sets out on that path will return sooner, passing the same points more frequently, and via that way reinforcing the signal of that precise branch. Even if, as we know, the pheromone evaporation rate is the same on both branches, the longer branch will see its pheromone vanish faster, since there is not enough critical mass of individuals to keep it. On the other hand – in what appears to be a vastly pedagogic trick of Mother Nature – evaporation plays a critical role in the society. Without it, the final global decision, or phase transition, will never happen. Moreover, without it, the whole colony can never adapt if the environment suddenly changes (e.g., the appearance of a third even shorter branch). While pheromone reinforcement plays a role as the system’s memory, evaporation allows the system to adapt and dynamically decide, without any type of centralized or hierarchical control. […], in “Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes“, V. Ramos et al., available as pre-print on arXiv, 2005.

[…] There is some degree of communication among the ants, just enough to keep them from wandering off completely at random. By this minimal communication they can remind each other that they are not alone but are cooperating with team-mates. It takes a large number of ants, all reinforcing each other this way, to sustain any activity – such as trail building – for any length of time. Now my very hazy understanding of the operation of the brain leads me to believe that something similar pertains to the firing of neurons… […] in, p. 316, Hofstadter, D.R., “Gödel, Escher, Bach: An Eternal Golden Braid“, New York: Basic Books, 1979.

[…] Since in Self-Organized (SO) systems their organization arises entirely from multiple interactions, it is of critical importance to question how organisms acquire and act upon information [9]. Basically this happens through two forms: a) information gathered from one’s neighbours, and b) information gathered from work in progress, that is, stigmergy. In the case of animal groups, these internal interactions typically involve information transfers between individuals. Biologists have recently recognized that information can flow within groups via two distinct pathways – signals and cues. Signals are stimuli shaped by natural selection specifically to convey information, whereas cues are stimuli that convey information only incidentally [9]. The distinction between signals and cues is illustrated by the difference between ant and deer trails. The chemical trail deposited by ants as they return from a desirable food source is a signal. Over evolutionary time such trails have been moulded by natural selection for the purpose of sharing with nest mates information about the location of rich food sources. In contrast, the rutted trails made by deer walking through the woods are cues, not shaped by natural selection for communication among deer but a simple by-product of animals walking along the same path. SO systems are based on both, but whereas signals tend to be conspicuous, since natural selection has shaped signals to be strong and effective displays, information transfer via cues is often more subtle and based on incidental stimuli in an organism’s social environment [45] […], in “Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes“, V. Ramos et al., available as pre-print on arXiv, 2005.

Figure – Hybrid Artificial Intelligent Systems, new LNAI (Lecture Notes in Artificial Intelligence) series, volume 8073, Springer-Verlag book (including “Second Order Swarm Intelligence”, pp. 411-420) [original photo by my colleague David M.S. Rodrigues].

New work, new book. Last week one of our latest works came out, published by Springer. Edited by Jeng-Shyang Pan, Marios M. Polycarpou, Emilio Corchado et al., “Hybrid Artificial Intelligent Systems” comprises a full set of new papers in this hybrid area of Intelligent Computing (check the full article list at Springer). Our new paper “Second Order Swarm Intelligence” (pp. 411-420, Springer books link) was published in the Bio-inspired Models and Evolutionary Computation section.

The Hacker and the Ants is a work of science fiction by Rudy Rucker, published in 1994 by Avon Books. It was written while Rucker was working as a programmer at Autodesk, Inc., of Sausalito, California, from 1988 to 1992. The main character is a transrealist interpretation of Rucker’s life in the 1970s (Rucker taught mathematics at the State University College at Geneseo, New York, from 1972 to 1978 – from Wikipedia). The plot follows:

(…) Jerzy Rugby is trying to create truly intelligent robots. While his actual life crumbles, Rugby toils in his virtual office, testing the robots online. Then, something goes wrong and zillions of computer virus ants invade the net. Rugby is the man wanted for the crime. He’s been set up to take a fall for a giant cyberconspiracy and he needs to figure out who — or what — is sabotaging the system in order to clear his name. Plunging deep into the virtual worlds of Antland of Fnoor to find some answers, Rugby confronts both electronic and all-too-real perils, facing death itself in a battle for his freedom. (…)

Nocturnal swarm

Nocturnal moth trails – Fluttering wings leave lacy trails as moths beat their way to a floodlight on a rural Ontario lawn. The midsummer night’s exposure, held for 20 seconds, captured some of the hundreds of insects engaged in a nocturnal swarm. [Photo: Steve Irvine, National Geographic, 2013, link]

Photo – Octavio Aburto, “David and Goliath”, Cabo Pulmo National Park (National Geographic photo contest 2012).

For several years, Octavio Aburto thought of one photo. Now, he finally got it. The recently published photograph by Aburto, titled “David and Goliath” (it is in fact David Castro, one of his research colleagues, at the center of this stunning image), has been widely shared over the last few weeks. It was taken at Cabo Pulmo National Park (Mexico) and submitted to the National Geographic photo contest 2012. Here, he captures the sheer size of fish aggregations in perspective with a single human surrounded by abundant marine life. In a recent interview, he explains:

[…] … this “David and Goliath” image is speaking to the courtship behavior of one particular species of Jack fish. […] Many people say that a single image is worth a thousand words, but a single image can also represent thousands of data points and countless statistical analyses. One image, or a small series of images can tell a complicated story in a very simple way. […] The picture you see was taken November 1st, 2012. But this picture has been in my mind for three years — I have been trying to capture this image ever since I saw the behavior of these fish and witnessed the incredible tornado that they form during courtship. So, I guess you could say this image took almost three years. […], in mission-blue.org , Dec. 2012.

Video – Behind the scenes of David and Goliath image. This photo was taken at Cabo Pulmo National Park and submitted to the National Geographic photo contest 2012. You can see more of his images from this place and about Mexican seas on Octavio‘s web link.

Four different snapshots (click to enlarge) from one of my latest books, recently published in Japan: Ajith Abraham, Crina Grosan, Vitorino Ramos (Eds.), “Swarm Intelligence in Data Mining” (群知能とデータマイニング), Tokyo Denki University Press [TDU], Tokyo, Japan, July 2012.

Fig. 1 – (click to enlarge) The optimal shortest path among N=1265 points depicting a Portuguese Navalheira crab, as a result of one of our latest Swarm-Intelligence based algorithms. The problem of finding the shortest path among N different points in space is NP-hard, known as the Travelling Salesman Problem (TSP), being one of the major and hardest benchmarks in Combinatorial Optimization (link) and Artificial Intelligence. (V. Ramos, D. Rodrigues, 2012)

This summer my kids grabbed a tiny Portuguese Navalheira crab on the shore. After a small photo-session and some baby-sitting with a lettuce leaf, it was time to release it again into the ocean. Not only did it survive my kids, it is now entitled to a new World Wide Web on-line life. After the Shortest path Sardine (link) with 1084 points, here is the Crab with 1265 points. The algorithm ran for as little as 110 iterations.

Fig. 2 – (click to enlarge) Our 1265 initial points depicting a TSP Portuguese Navalheira crab. Could you already envision a minimal tour between all these points?

As usual in Travelling Salesman Problems (TSP) we start with a set of points, in our case 1265 points or cities (fig. 2). Given a list of cities and their pairwise distances, the task is now to find the shortest possible tour that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods.
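
As a concrete, hypothetical illustration of the task itself (nothing to do with our own swarm algorithm), the exhaustive way to solve a tiny TSP instance is simply to enumerate every tour and keep the shortest, which is only feasible for a handful of cities:

```python
from itertools import permutations

def tour_length(tour, dist):
    """Total length of a closed tour that returns to its starting city."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(dist):
    """Try every ordering of the cities after city 0 and keep the cheapest closed tour."""
    n = len(dist)
    best = min(permutations(range(1, n)), key=lambda p: tour_length((0,) + p, dist))
    return (0,) + best, tour_length((0,) + best, dist)

# Toy instance: 4 cities on the corners of a unit square; the optimal tour is the perimeter.
d = [[0, 1, 2**0.5, 1],
     [1, 0, 1, 2**0.5],
     [2**0.5, 1, 0, 1],
     [1, 2**0.5, 1, 0]]
print(brute_force_tsp(d))   # ((0, 1, 2, 3), 4)
```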

Fig. 3 – (click to enlarge) Again the shortest path Navalheira crab, where the optimal contour path (in black: first fig. above) with 1265 points (or cities) was filled in dark orange.

TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder.

What follows (fig. 4) is the original crab photo after image segmentation and just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem. The algorithm was then fed the extracted x,y coordinates of these data points (fig. 2) in order for it to discover the minimal path, in just 110 iterations. For extra details, pay a visit to the Shortest path Sardine (link) done earlier.

Fig. 4 – (click to enlarge) The original crab photo after some image processing as well as segmentation and just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem.

Figure (click to enlarge) – Cover of one of my books published last month (10 July 2012), “Swarm Intelligence in Data Mining”, recently translated and edited in Japan (by Tokyo Denki University Press [TDU]). Cover image from Amazon.co.jp (url). The title was translated into 群知能とデータマイニング. Funny also to see my own name translated into Japanese for the first time – I wonder if it’s Kanji. A brief synopsis follows:

(…) Swarm Intelligence (SI) is an innovative distributed intelligent paradigm for solving optimization problems that originally took its inspiration from biological examples of swarming, flocking and herding phenomena in vertebrates. Particle Swarm Optimization (PSO) incorporates swarming behaviours observed in flocks of birds, schools of fish, or swarms of bees, and even human social behaviour, from which the idea emerged. Ant Colony Optimization (ACO) deals with artificial systems that are inspired by the foraging behaviour of real ants and are used to solve discrete optimization problems. Historically, the notion of finding useful patterns in data has been given a variety of names including data mining, knowledge discovery, information extraction, etc. Data Mining is an analytic process designed to explore large amounts of data in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. In order to achieve this, data mining uses computational techniques from statistics, machine learning and pattern recognition. Data mining and Swarm Intelligence may seem not to have many properties in common. However, recent studies suggest that they can be used together for several real-world data mining problems, especially when other methods would be too expensive or difficult to implement. This book deals with the application of swarm intelligence methodologies in data mining. Addressing the various issues of swarm intelligence and data mining using different intelligent approaches is the novelty of this edited volume. This volume comprises 11 chapters, including an introductory chapter giving the fundamental definitions and some important research challenges. Chapters were selected on the basis of fundamental ideas/concepts rather than the thoroughness of techniques deployed. (…) (more)

“I would like to thank flocks, herds, and schools for existing: nature is the ultimate source of inspiration for computer graphics and animation.” in Craig Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model“, (paper link) published in Computer Graphics, 21(4), July 1987, pp. 25-34. (ACM SIGGRAPH ’87 Conference Proceedings, Anaheim, California, July 1987.)

“There is an entire genealogy to be written from the point of view of the challenge posed by insect coordination, by “swarm intelligence.” Again and again, poetic, philosophical, and biological studies ask the same question: how does this “intelligent,” global organization emerge from a myriad of local, “dumb” interactions?” — Alex Galloway and Eugene Thacker, The Exploit.

[…] The interest in swarms was intimately connected to the research on emergence and “superorganisms” that arose during the early years of the twentieth century, especially in the 1920s. Even though the author of the notion of superorganisms was the now somewhat discredited writer Herbert Spencer,63 who introduced it in 1898, the idea was fed into contemporary discourse surrounding swarms and emergence through myrmecologist William Morton Wheeler. In 1911 Wheeler had published his classic article “The Ant Colony as an Organism” (in Journal of Morphology), and similar interests continued to be expressed in his subsequent writings. His ideas became well known in the 1990s in discussions concerning artificial life and holistic swarm-like organization. For writers such as Kevin Kelly, mentioned earlier in this chapter, Wheeler’s ideas regarding superorganisms stood as the inspiration for the hype surrounding emergent behavior.64 Yet the actual context of his paper was a lecture given at the Marine Biological Laboratory at Woods Hole in 1910.65 As Charlotte Sleigh points out, Wheeler saw himself as continuing the work of holistic philosophers, and later, in the 1910s and 1920s, found affinities with Bergson’s philosophy of temporality as well.66 In 1926, when emergence had already been discussed in terms of, for example, emergent evolution, evolutionary naturalism, creative synthesis, organicism, and emergent vitalism, Wheeler noted that this phenomenon seemed to challenge the basic dualisms of determinism versus freedom, mechanism versus vitalism, and the many versus the one.67 An animal phenomenon thus presented a crisis for the fundamental philosophical concepts that did not seem to apply to such a transversal mode of organization, or agencement to use the term that Wheeler coined. It was a challenge to philosophy and simultaneously to the physical, chemical, psychological, and social sciences, a phenomenon that seemed to cut through these seemingly disconnected spheres of reality.

In addition to Wheeler, one of the key writers on emergence – again also for Kelly in his Out of Control 68 – was C. Lloyd Morgan, whose Emergent Evolution (1927) proposed to see evolution in terms of emergent “relatedness”. Drawing on Bergson and Whitehead, Morgan rejected a mechanistic dissecting view that the interactions of entities “whether physical or mental” always resulted only in “mixings” that could be seen beforehand. Instead he proposed that the continuity of the mechanistic relations were supplemented with sudden changes at times. At times reminiscent of Lucretius’s view that there is a basic force, clinamen, that is the active differentiating principle of the world, Morgan focused on how qualitative changes in direction could affect the compositions and aggregates. He was interested in the question of the new and how novelty is possible. In his curious modernization of Spinoza, Morgan argued for the primacy of relations – or “relatedness,” to be accurate.69 Instead of speaking of agencies or activities, which implied a self-enclosed view of interactions, in Emergent Evolution Morgan propagated in a way an ethological view of the world. Entities and organisms are characterized by relatedness, the tendency to relate to their environment and, for example, other organisms. So actually, what emerge are relations:

If it be asked: What is it that you claim to be emergent? the brief reply is: Some new kind of relation. Revert to the atom, the molecule, the thing (e.g. a crystal), the organism, the person. At each ascending step there is a new entity in virtue of some new kind of relation, or set of relations, within it, or, as I phrase it, intrinsic to it. Each exhibits also new ways of acting on, and reacting to, other entities. There are new kinds of extrinsic relatedness“.70

The evolutionary levels of mind, life, and matter are in this scheme intimately related, with the lower levels continuously affording the emergence of so-called higher functions, like those of humans. Different levels of relatedness might not have any understanding of the relations that define other levels of existence, but still these other levels with their relations affect the other levels. Morgan tried, nonetheless, to steer clear of the idealistic notions of humanism that promoted the human mind as representing a superior stage in emergence. His stance was much closer to a certain monism in which mind and matter are continuously in some kind of intimate correspondence whereby even the simplest expressions of life participate in a wider field of relatedness. In Emergent Evolution Morgan described relations as completely concrete. He emphasized that the issue is not only about relations in terms but as much about terms in relation, with concrete situations, or events, stemming from their relations.71 In a way, other views on emergence put similar emphasis on the priority of relations, expressing a kind of radical empiricism in the vein of William James. Drawing on E. G. Spaulding’s 1918 study The New Rationalism, Wheeler noted the unpredictable potentials in connectionism: a connected whole is more than (or at least not reducible to) its constituent parts, implying the impossibility to find causal determination of aggregates. Whereas existing sciences might be able to recognize and track down certain relationships that they have normalized or standardized, the relations might still produce properties that are beyond those of the initial conditions – and thus also demand a vector of analysis that parts from existing theories – dealing with properties that open up only in relation to themselves (as a “law unto themselves”). 72 Instead, a more complicated mode of development was at hand, in which aggregates, or agencements, simultaneously involved various levels of reality. This also implied that aggregates, emergent orders, have no one direction but are constituted of relations that extend in various directions:

We must also remember that most authors artificially isolate the emergent whole and fail to emphasize the fact that its parts have important relations not only with one another but also with the environment and that these external relations may contribute effectively towards producing both the whole and its novelty“.73 […]

in (passage from), Jussi Parikka, “Insect Media: An Archaeology of Animals and Technology“, Chapter II – Genesis of Form: Insect Architecture and Swarms, (section) Emergence and Relatedness: A Radical Empiricism – take one, pp. 51-53, University of Minnesota Press, Minneapolis, 2011.

Figure – ECCS’11: “Spatio-Temporal Dynamics on Co-Evolved Stigmergy”, Vitorino Ramos, David M.S. Rodrigues, Jorge Louçã.

Ever tried to solve a problem where the problem statement itself is changing constantly? Have a look at our approach:

Vitorino Ramos, David M.S. Rodrigues, Jorge Louçã, “Spatio-Temporal Dynamics on Co-Evolved Stigmergy“, in European Conference on Complex Systems, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

Abstract: Research on hard NP-complete Combinatorial Optimization Problems (COP’s) has focused in recent years on several robust bio-inspired meta-heuristics, like those involving Evolutionary Computation (EC) algorithmic paradigms. One particularly successful and well-known meta-heuristic approach is based on Swarm Intelligence (SI), i.e., the self-organized stigmergic-based property of a complex system whereby the collective behaviors of (unsophisticated) entities interacting locally with their environment cause coherent functional global patterns to emerge. This line of research, recognized as Ant Colony Optimization (ACO), uses a set of stochastic cooperating ant-like agents to find good solutions, using self-organized stigmergy as an indirect form of communication mediated by artificial pheromone, where agents deposit pheromone signs on the edges of the problem-related graph complex network, encompassing a family of successful algorithmic variations such as: Ant Systems (AS), Ant Colony Systems (ACS), Max-Min Ant Systems (MaxMin AS) and Ant-Q.

Albeit extremely successful, these algorithms mostly rely on positive feedback, causing excessive algorithmic exploitation over the entire combinatorial search space. This is particularly evident over well-known benchmarks such as the symmetrical Traveling Salesman Problem (TSP). Since these systems are comprised of a large number of frequently similar components or events, the principal challenge is to understand how the components interact to produce a complex pattern, a feasible solution (in our case study, an optimal robust solution for hard NP-complete dynamic TSP-like combinatorial problems). A suitable approach is to first understand the role of two basic modes of interaction among the components of Self-Organizing (SO) Swarm-Intelligent-like systems: positive and negative feedback. While positive feedback promotes a snowballing auto-catalytic effect (e.g. trail pheromone upgrading over the network; exploitation of the search space), taking an initial change in a system and reinforcing that change in the same direction as the initial deviation (self-enhancement and amplification), allowing the entire colony to exploit some past and present solutions (environmental dynamic memory), negative feedback such as pheromone evaporation ensures that the overall learning system does not stabilize or freeze itself on a particular configuration (innovation; search space exploration). Although this kind of (global) delayed negative feedback is important (evaporation), for the many reasons given above, there are however strong assumptions that other negative feedbacks are present in nature, which could also play a role in increased convergence, namely implicit-like negative feedbacks. As in the case of positive feedbacks, there is no reason not to explore increasingly distributed and adaptive algorithmic variations where negative feedback is also imposed implicitly (not only explicitly) over each network edge, while the entire colony seeks better answers in due time.

In order to overcome this hard search space exploitation-exploration compromise, our present algorithmic approach follows the route of very recent biological findings showing that forager ants lay attractive trail pheromones to guide nest mates to food, but where the effectiveness of foraging networks is improved if pheromones can also be used to repel foragers from unrewarding routes. Increasing empirical evidence for such a negative trail pheromone exists, deployed by Pharaoh’s ants (Monomorium pharaonis) as a ‘no entry‘ signal to mark unrewarding foraging paths. The new algorithm comprises a second-order approach to Swarm Intelligence, as pheromone-based ‘no entry’ signal cues were introduced, co-evolving with the standard pheromone distributions (collective cognitive maps) in the aforementioned known algorithms.

To exhaustively test its adaptive response and robustness, we have resorted to different dynamic optimization problems. Medium-size and large-size dynamic TSP problems were created. Settings and parameters such as environmental update frequencies, landscape-changing or network topological speed severity, and type of dynamics were tested. Results prove that the present co-evolved two-type pheromone swarm intelligence algorithm is able to quickly track increasingly swift changes on the dynamic TSP complex network, compared to standard algorithms.
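
Purely as an illustration of what such a stress test can look like (a hypothetical sketch, not the generator used in the paper), one can relocate a chosen fraction of the cities every fixed number of iterations and let the colony re-adapt; the 20% / 150-iteration values below mirror the settings shown in the figure that follows the keywords.

```python
import random

def perturb_instance(cities, fraction, rng=random):
    """Relocate a random fraction of the cities, simulating a swift change of the landscape."""
    moved = rng.sample(range(len(cities)), int(fraction * len(cities)))
    for i in moved:
        cities[i] = (rng.random(), rng.random())
    return moved

# Hypothetical dynamic stress test: change 20% of a 1577-city instance every 150 iterations.
cities = [(random.random(), random.random()) for _ in range(1577)]
for iteration in range(1, 461):
    # ... one colony construction/update step over the current instance would go here ...
    if iteration % 150 == 0:
        changed = perturb_instance(cities, 0.20)
        print(f"iteration {iteration}: relocated {len(changed)} cities")
```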

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Dynamical Symmetrical Traveling Salesman Problems (TSP).


Fig. – Recovery times over several dynamical stress tests at the fl1577 TSP problem (1577 node graph) – 460 iter max – Swift changes at every 150 iterations (20% = 314 nodes, 40% = 630 nodes, 60% = 946 nodes, 80% = 1260 nodes, 100% = 1576 nodes). [click to enlarge]

Figure – ECCS’11: “From Standard to Second Order Swarm Intelligence Phase-Space Maps”, David M.S. Rodrigues, Jorge Louçã, Vitorino Ramos.

David M.S. Rodrigues, Jorge Louçã, Vitorino Ramos, “From Standard to Second Order Swarm Intelligence Phase-space maps“, in European Conference on Complex Systems, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

Abstract: Standard stigmergic approaches to Swarm Intelligence encompass the use of a set of stochastic cooperating ant-like agents to find optimal solutions, using self-organized stigmergy as an indirect form of communication mediated by a single artificial pheromone. Agents deposit pheromone signs on the edges of the problem-related graph to give rise to a family of successful algorithmic approaches entitled Ant Systems (AS), Ant Colony Systems (ACS), among others. These mainly rely on positive feedback to search for an optimal solution in a large combinatorial space. The present work shows how, using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedback achieves better results than single positive feedback systems. This follows the route of very recent biological findings showing that forager ants, while laying attractive trail pheromones to guide nest mates to food, also gained foraging effectiveness by the use of pheromones that repelled foragers from unrewarding routes. The algorithm presented here takes inspiration precisely from this biological observation.

The new algorithm was exhaustively tested on a series of well-known benchmarks over hard NP-complete Combinatorial Optimization Problems (COP’s), running on symmetrical Traveling Salesman Problems (TSP). Different network topologies and stress tests were conducted over small-size TSP’s (eil51.tsp; eil78.tsp; kroA100.tsp), medium-size ones (d198.tsp; lin318.tsp; pcb442.tsp; att532.tsp; rat783.tsp) as well as large-sized ones (fl1577.tsp; d2103.tsp) [numbers here referring to the number of nodes in the network]. We show that the new co-evolved stigmergic algorithm compared favorably against the benchmarks. The algorithm was able to equal or substantially improve on every instance of those standard algorithms, not only in the realm of the Swarm Intelligence AS and ACS approaches, but also against other computational paradigms like Genetic Algorithms (GA), Evolutionary Programming (EP), as well as SOM (Self-Organizing Maps) and SA (Simulated Annealing). In order to understand in depth how a second co-evolved pheromone was able to drive the collective system to such results, a refined phase-space map was produced, mapping the pheromone ratios between a pure Ant Colony System (where no negative feedback besides pheromone evaporation is present) and the present second-order approach. The evaporation rates of the different pheromones were also studied and their influence on the outcomes of the algorithm is shown. A final discussion on the phase map is included. This work has implications for the way large combinatorial problems are addressed, as the double feedback mechanism shows improvements over single positive feedback mechanisms in terms of convergence speed and of final results.

Keywords: Stigmergy, Co-Evolution, Self-Organization, Swarm Intelligence, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Traveling Salesman Problems (TSP), phase-space.

Fig. – Comparing convergence results between Standard algorithms vs. Second Order Swarm Intelligence, over TSP fl1577 (click to enlarge).

Picture – The European Conference on Complex Systems (ECCS’11 – link) in one of the main Austrian newspapers, Der Standard: “Die ganze Welt als Computersimulation” (“The whole world as a computer simulation”) (link), Klaus Taschwer, Der Standard, 14 September [click to enlarge – photo taken at the conference on Sept. 15, Vienna 2011].

“Take Darwin, for example: would Caltech have hired Darwin? Probably not. He had only vague ideas about some of the mechanisms underlying biological Evolution. He had no way of knowing about genetics, and he lived before the discovery of mutations. Nevertheless, he did work out, from the top down, the notion of natural selection and the magnificent idea of the relationship of all living things.” Murray Gell-Mann in “Plectics“, excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995).

To be honest, I didn’t enjoy this title, but all of us have had our fair share with journalists by now. After all, 99% of us don’t do computer simulation. We are all after the main principles, and their direct applications.

During 5 days (12-16 Sept.), with around 700 attendees, the Vienna 2011 conference revolved around important themes such as Complexity & Networks (XNet), Current Trends in Game Theory, Complexity in Energy Infrastructures, Emergent Properties in Natural and Artificial Complex Systems (EPNACS), Complexity and the Future of Transportation Systems, Econophysics, Cultural and Opinion Dynamics, Dynamics on and of Complex Networks, Frontiers in the Theory of Evolution, and – among many others – Dynamics of Human Interactions.

Those who know me will definitely understand that I was mainly attending the sessions underlined above, the last one (Frontiers in Evolution) being one of my favourites among all these ECCS years. All in all, the conference had high-quality works (daily, we had about 3-4 works I definitely think should be followed in the future) and those deserved more attention (my main criticism of the conference organization goes here). Naturally, the newspaper article also reflects on FuturICT, historically one of the major European scientific projects ever undertaken (along, probably, with the Geneva LHC), whose teams spread across Europe, including Portugal with a representative team of 7 members present at the conference, led by Jorge Louçã, the editor and organizer of the previous ECCS’10 held last year in Lisbon.

Video – “… they forgot to say: in principle!“. Ricard Solé addressing the topic of a Morphospace for Biological Computation at ECCS’11 (European Conference on Complex Systems), while keeping his good humour on.

Let me draw your attention anyway to 4 outstanding lectures. Peter Schuster (link), on the first day, dissected the source of Complexity in Evolution, battling between – as he puts it – two paradoxes: (1) Evolution is an enormously complex process, and (2) biological evolution on Earth proceeds from lower towards higher complexity. Earlier that morning, opening the conference, Murray Gell-Mann (link), who co-founded the Santa Fe Institute in 1984, gave a wonderful lecture on Generalized Entropies. Despite his age, the 1969 Nobel laureate in physics (for his work on the theory of elementary particles) gladly turned his interest in the 1990s to the theory of Complex Adaptive Systems (CAS). Next, Albert-László Barabási (link) tamed Complexity in Controlling Networks. Finally, on the last day, closing the conference in pure gold, Ricard Solé (link) addressed the topic of a Morphospace for Biological Computation, an amazing lecture with a powerful topic for which – nevertheless – I felt he had too little time (20 minutes) for such a rich endeavour. He did not, however, lose his good humour during the talk (check my video above). Next year the conference will be held in Brussels, and just judging by the poster design, it promises. Go ants, go … !

Picture – The European Conference on Complex Systems (ECCS’12 – link) poster design for next year in Brussels.

Fig. 1 – (click to enlarge) The optimal shortest path among N=1084 points depicting a Portuguese sardine, as a result of one of our latest Swarm-Intelligence based algorithms. The problem of finding the shortest path among N different points in space is NP-hard, known as the Travelling Salesman Problem (TSP), being one of the major and hardest benchmarks in Combinatorial Optimization (link) and Artificial Intelligence. (D. Rodrigues, V. Ramos, 2011)

Almost summer time in Portugal, great weather as usual, and the perfect moment to eat sardines along with friends in open-air esplanades; in fact, a lot of grilled sardines. We usually eat grilled sardines with a tomato-onion salad along with barbecued cherry peppers in salt and olive oil. That’s tasty, believe me. Not tasty enough, however, for me and one of my colleagues, David Rodrigues (blog link/twitter link). We decided to take this experience a little further, creating the first shortest-path sardine.

Fig. 2 – (click to enlarge) Our 1084 initial points depicting a TSP Portuguese sardine. Could you already envision a minimal tour between all these points?

As usual in Travelling Salesman Problems (TSP) we start with a set of points, in our case 1084 points or cities (fig. 2). Given a list of cities and their pairwise distances, the task is now to find the shortest possible tour that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder. (link)

Fig. 3 – (click to enlarge) A well done and quite grilled shortest path sardine, where the optimal contour path (in blue: first fig. above) with 1084 points was filled in black colour. Nice T-shirt!

Even for toy problems like the present 1084-point TSP sardine, the number of possible paths is incredibly huge, and only one of those possible paths is the optimal (minimal) one. Consider for example a TSP with N=4 cities, A, B, C, and D. Starting in city A, the number of possible paths is 6: that is 1) A to B, B to C, C to D, and D to A, 2) A-B, B-D, D-C, C-A, 3) A-C, C-B, B-D and D-A, 4) A-C, C-D, D-B, and B-A, 5) A-D, D-C, C-B, and B-A, and finally 6) A-D, D-B, B-C, and C-A. I.e. there are (N-1)! [i.e., N-1 factorial] possible paths. For N=3 cities, 2×1=2 possible paths; for N=4 cities, 3×2×1=6 possible paths; for N=5 cities, 4×3×2×1=24 possible paths; … for N=20 cities, 121,645,100,408,832,000 possible paths, and so on.
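
These counts are easy to reproduce with a throwaway check (relying on Python's arbitrary-precision integers):

```python
import math

# Number of possible closed tours from a fixed starting city: (N - 1)!
for n in (3, 4, 5, 20):
    print(f"N={n} cities -> {math.factorial(n - 1):,} possible paths")

# The 1084-point sardine discussed below: 1083! has well over 2800 decimal digits.
print("digits in 1083! :", len(str(math.factorial(1083))))
```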

The most direct solution would be to try all permutations (ordered combinations) and see which one is cheapest (using computational brute-force search). The running time for this approach, however, lies within a polynomial factor of O(n!), the factorial of the number of cities, so this solution becomes impractical even for only 20 cities. One of the earliest applications of dynamic programming is the Held–Karp algorithm, which solves the problem in time O(n²2ⁿ).
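
For the curious, the Held–Karp recurrence is short enough to sketch: for every subset S of cities containing the start and every possible last city j in S, keep the cheapest cost of a path that starts at city 0, visits exactly the cities in S and ends at j. The Python version below is a minimal illustrative implementation (exact, but only practical up to roughly 20 cities), not the code behind our sardine:

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP by dynamic programming in O(n^2 * 2^n) time and O(n * 2^n) memory.
    dist is an n x n matrix of pairwise distances; the tour starts and ends at city 0."""
    n = len(dist)
    # best[(S, j)]: cheapest path starting at 0, visiting exactly the bitmask S, ending at j.
    best = {(1, 0): 0.0}
    for size in range(2, n + 1):
        for subset in combinations(range(1, n), size - 1):
            S = 1
            for j in subset:
                S |= 1 << j
            for j in subset:
                prev = S & ~(1 << j)
                best[(S, j)] = min(
                    best[(prev, k)] + dist[k][j]
                    for k in range(n)
                    if (prev >> k) & 1 and (prev, k) in best
                )
            # at this point every way of ending the partial path over S has been scored
    full = (1 << n) - 1
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

# Same unit-square instance as in the brute-force sketch above: optimal tour length 4.
square = [[0, 1, 2**0.5, 1],
          [1, 0, 1, 2**0.5],
          [2**0.5, 1, 0, 1],
          [1, 2**0.5, 1, 0]]
print(held_karp(square))   # 4.0
```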

In our present case (N=1084) we had to deal with 1083 factorial possible paths, leading to the astronomical number of about 1.19×10^2818 possible solutions. That’s roughly 1 followed by 2818 zeroes! – better now to check this Wikipedia entry on very large numbers. Our new Swarm-Intelligence based algorithm, running on an ordinary PC, was nevertheless able to produce a minimal solution (fig. 1) within just a few minutes. We will soon post more about our novel self-organized stigmergic-based algorithmic approach, but meanwhile, if you enjoyed these drawings, do not hesitate to ask us for a grilled cherry pepper as well. We will be pleased to deliver you one by email.
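(Since our own algorithm will only be described in a later post, the sketch below is emphatically not our method. It is a generic, single-pheromone ant-colony-optimization heuristic for the TSP: probabilistic tour construction biased by pheromone and distance, evaporation, and reinforcement of the best tour found so far, included only to give a flavour of stigmergic search. The function name, parameters and values are illustrative assumptions.)

```python
import math
import random

def aco_tsp(points, n_ants=20, n_iters=200, alpha=1.0, beta=3.0, rho=0.1, seed=0):
    """Generic single-pheromone ant-colony heuristic for the TSP (illustrative only)."""
    rng = random.Random(seed)
    n = len(points)
    # Pairwise Euclidean distances (tiny epsilon avoids division by zero).
    d = [[math.hypot(points[i][0] - points[j][0], points[i][1] - points[j][1]) or 1e-12
          for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]            # pheromone on every edge
    best_tour, best_len = None, float("inf")

    def tour_len(t):
        return sum(d[t[i]][t[(i + 1) % n]] for i in range(n))

    for _ in range(n_iters):
        for _ant in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                candidates = list(unvisited)
                # Next city chosen with probability ~ pheromone^alpha * (1/distance)^beta.
                weights = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta for j in candidates]
                j = rng.choices(candidates, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            L = tour_len(tour)
            if L < best_len:
                best_tour, best_len = tour, L
        # Evaporation (forgetting) ...
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        # ... followed by positive reinforcement along the best tour found so far.
        for i in range(n):
            a, b = best_tour[i], best_tour[(i + 1) % n]
            tau[a][b] += 1.0 / best_len
            tau[b][a] += 1.0 / best_len
    return best_tour, best_len

# Usage on a small random instance (the real sardine has 1084 points):
pts = [(random.random() * 100, random.random() * 100) for _ in range(30)]
tour, length = aco_tsp(pts, n_iters=50)
print(round(length, 1))
```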

p.s. – This is a joint twin post with David Rodrigues.

Fig. 4 – (click to enlarge) Zoom on the tail end of the sardine: the optimal contour path (in blue; first fig. above) filled in black, from a total set of 1084 initial points.

“With an eye for detail and an easy style, Peter Miller explains why swarm intelligence has scientists buzzing.” — Steven Strogatz, author of Sync, and Professor of Mathematics, Cornell University.

From the introduction of Peter Miller’s “Smart Swarm – How Understanding Flocks, Schools and Colonies Can Make Us Better at Communicating, Decision Making and Getting Things Done“. (…) The modern world may be obsessed with speed and productivity, but twenty-first-century humans actually have much to learn from the ancient instincts of swarms. A fascinating new take on the concept of collective intelligence and its colourful manifestations in some of our most complex problems, Smart Swarm introduces a compelling new understanding of the real experts on solving our own complex problems relating to such topics as business, politics, and technology. Based on extensive globe-trotting research, this lively tour from National Geographic reporter Peter Miller introduces thriving throngs of ant colonies, which have inspired computer programs for streamlining factory processes, telephone networks, and truck routes; termites, used in recent studies for climate-control solutions; schools of fish, on which the U.S. military modelled a team of robots; and many other examples of the wisdom to be gleaned about the behaviour of crowds, among critters and corporations alike. In the tradition of James Surowiecki‘s The Wisdom of Crowds and the innovative works of Malcolm Gladwell, Smart Swarm is an entertaining yet enlightening look at small-scale phenomena with big implications for us all. (…)

(…) What do ants, bees, and birds know that we don’t? How can that give us an advantage? Consider:
• Southwest Airlines used virtual ants to determine the best way to board a plane.
• The CIA was inspired by swarm behavior to invent a more effective spy network.
• Filmmakers studied flocks of birds as models for armies of Orcs in Lord of the Rings battle scenes.
• Defense agencies sponsored teams of robots that can sense radioactivity, heat, or a chemical device as easily as a school of fish can locate food.
Find out how “smart swarms” can teach us how to make better choices, create stronger networks, and organize our businesses more effectively than we ever thought possible. (…)

[…] Dumb parts, properly connected into a swarm, yield smart results. […] ~ Kevin Kelly. / […] “Now make a four!” the voice booms. Within moments a “4” emerges. “Three.” And in a blink a “3” appears. Then in rapid succession, “Two… One…Zero.” The emergent thing is on a roll. […], Kevin Kelly, Out of Control, 1994.

Video – ‘Swarm Showreel’ by SwarmWorks Ltd., December 2009 (events to swarm about – from entertainment to business).

[…] In a darkened Las Vegas conference room, a cheering audience waves cardboard wands in the air. Each wand is red on one side, green on the other. Far in back of the huge auditorium, a camera scans the frantic attendees. The video camera links the color spots of the wands to a nest of computers set up by graphics wizard Loren Carpenter. Carpenter’s custom software locates each red and each green wand in the auditorium. Tonight there are just shy of 5,000 wandwavers. The computer displays the precise location of each wand (and its color) onto an immense, detailed video map of the auditorium hung on the front stage, which all can see. More importantly, the computer counts the total red or green wands and uses that value to control software. As the audience wave the wands, the display screen shows a sea of lights dancing crazily in the dark, like a candlelight parade gone punk. The viewers see themselves on the map; they are either a red or green pixel. By flipping their own wands, they can change the color of their projected pixels instantly.
Loren Carpenter boots up the ancient video game of Pong onto the immense screen. Pong was the first commercial video game to reach pop consciousness. It’s a minimalist arrangement: a white dot bounces inside a square; two movable rectangles on each side act as virtual paddles. In short, electronic ping-pong. In this version, displaying the red side of your wand moves the paddle up. Green moves it down. More precisely, the Pong paddle moves as the average number of red wands in the auditorium increases or decreases. Your wand is just one vote.
Carpenter doesn’t need to explain very much. Every attendee at this 1991 conference of computer graphic experts was probably once hooked on Pong. His amplified voice booms in the hall, “Okay guys. Folks on the left side of the auditorium control the left paddle. Folks on the right side control the right paddle. If you think you are on the left, then you really are. Okay? Go!”
The audience roars in delight. Without a moment’s hesitation, 5,000 people are playing a reasonably good game of Pong. Each move of the paddle is the average of several thousand players’ intentions. The sensation is unnerving. The paddle usually does what you intend, but not always. When it doesn’t, you find yourself spending as much attention trying to anticipate the paddle as the incoming ball. One is definitely aware of another intelligence online: it’s this hollering mob.
The group mind plays Pong so well that Carpenter decides to up the ante. Without warning the ball bounces faster. The participants squeal in unison. In a second or two, the mob has adjusted to the quicker pace and is playing better than before. Carpenter speeds up the game further; the mob learns instantly.
“Let’s try something else,” Carpenter suggests. A map of seats in the auditorium appears on the screen. He draws a wide circle in white around the center. “Can you make a green ‘5’ in the circle?” he asks the audience. The audience stares at the rows of red pixels. The game is similar to that of holding a placard up in a stadium to make a picture, but now there are no preset orders, just a virtual mirror. Almost immediately wiggles of green pixels appear and grow haphazardly, as those who think their seat is in the path of the “5” flip their wands to green. A vague figure is materializing. The audience collectively begins to discern a “5” in the noise. Once discerned, the “5” quickly precipitates out into stark clarity. The wand-wavers on the fuzzy edge of the figure decide what side they “should” be on, and the emerging “5” sharpens up. The number assembles itself.
“Now make a four!” the voice booms. Within moments a “4” emerges. “Three.” And in a blink a “3” appears. Then in rapid succession, “Two… One…Zero.” The emergent thing is on a roll.
Loren Carpenter launches an airplane flight simulator on the screen. His instructions are terse: “You guys on the left are controlling roll; you on the right, pitch. If you point the plane at anything interesting, I’ll fire a rocket at it.” The plane is airborne. The pilot is…5,000 novices. For once the auditorium is completely silent. Everyone studies the navigation instruments as the scene outside the windshield sinks in. The plane is headed for a landing in a pink valley among pink hills. The runway looks very tiny. There is something both delicious and ludicrous about the notion of having the passengers of a plane collectively fly it. The brute democratic sense of it all is very appealing. As a passenger you get to vote for everything; not only where the group is headed, but when to trim the flaps.
But group mind seems to be a liability in the decisive moments of touchdown, where there is no room for averages. As the 5,000 conference participants begin to take down their plane for landing, the hush in the hall is ended by abrupt shouts and urgent commands. The auditorium becomes a gigantic cockpit in crisis. “Green, green, green!” one faction shouts. “More red!” a moment later from the crowd. “Red, red! REEEEED !” The plane is pitching to the left in a sickening way. It is obvious that it will miss the landing strip and arrive wing first. Unlike Pong, the flight simulator entails long delays in feedback from lever to effect, from the moment you tap the aileron to the moment it banks. The latent signals confuse the group mind. It is caught in oscillations of overcompensation. The plane is lurching wildly. Yet the mob somehow aborts the landing and pulls the plane up sensibly. They turn the plane around to try again.
How did they turn around? Nobody decided whether to turn left or right, or even to turn at all. Nobody was in charge. But as if of one mind, the plane banks and turns wide. It tries landing again. Again it approaches cockeyed. The mob decides in unison, without lateral communication, like a flock of birds taking off, to pull up once more. On the way up the plane rolls a bit. And then rolls a bit more. At some magical moment, the same strong thought simultaneously infects five thousand minds: “I wonder if we can do a 360?”
Without speaking a word, the collective keeps tilting the plane. There’s no undoing it. As the horizon spins dizzily, 5,000 amateur pilots roll a jet on their first solo flight. It was actually quite graceful. They give themselves a standing ovation. The conferees did what birds do: they flocked. But they flocked self-consciously. They responded to an overview of themselves as they co-formed a “5” or steered the jet. A bird on the fly, however, has no overarching concept of the shape of its flock. “Flockness” emerges from creatures completely oblivious of their collective shape, size, or alignment. A flocking bird is blind to the grace and cohesiveness of a flock in flight.
At dawn, on a weedy Michigan lake, ten thousand mallards fidget. In the soft pink glow of morning, the ducks jabber, shake out their wings, and dunk for breakfast. Ducks are spread everywhere. Suddenly, cued by some imperceptible signal, a thousand birds rise as one thing. They lift themselves into the air in a great thunder. As they take off they pull up a thousand more birds from the surface of the lake with them, as if they were all but part of a reclining giant now rising. The monstrous beast hovers in the air, swerves to the east sun, and then, in a blink, reverses direction, turning itself inside out. A second later, the entire swarm veers west and away, as if steered by a single mind. In the 17th century, an anonymous poet wrote: “…and the thousands of fishes moved as a huge beast, piercing the water. They appeared united, inexorably bound to a common fate. How comes this unity?”
A flock is not a big bird. Writes the science reporter James Gleick, “Nothing in the motion of an individual bird or fish, no matter how fluid, can prepare us for the sight of a skyful of starlings pivoting over a cornfield, or a million minnows snapping into a tight, polarized array….High-speed film [of flocks turning to avoid predators] reveals that the turning motion travels through the flock as a wave, passing from bird to bird in the space of about one-seventieth of a second. That is far less than the bird’s reaction time.” The flock is more than the sum of the birds.
In the film Batman Returns a horde of large black bats swarmed through flooded tunnels into downtown Gotham. The bats were computer generated. A single bat was created and given leeway to automatically flap its wings. The one bat was copied by the dozens until the animators had a mob. Then each bat was instructed to move about on its own on the screen following only a few simple rules encoded into an algorithm: don’t bump into another bat, keep up with your neighbors, and don’t stray too far away. When the algorithmic bats were run, they flocked like real bats.
The flocking rules were discovered by Craig Reynolds, a computer scientist working at Symbolics, a graphics hardware manufacturer. By tuning the various forces in his simple equation (a little more cohesion, a little less lag time), Reynolds could shape the flock to behave like living bats, sparrows, or fish. Even the marching mob of penguins in Batman Returns were flocked by Reynolds’s algorithms. Like the bats, the computer-modeled 3-D penguins were cloned en masse and then set loose into the scene aimed in a certain direction. Their crowdlike jostling as they marched down the snowy street simply emerged, out of anyone’s control. So realistic is the flocking of Reynolds’s simple algorithms that biologists have gone back to their hi-speed films and concluded that the flocking behavior of real birds and fish must emerge from a similar set of simple rules. A flock was once thought to be a decisive sign of life, some noble formation only life could achieve. Via Reynolds’s algorithm it is now seen as an adaptive trick suitable for any distributed vivisystem, organic or made. […] in Kevin Kelly, “Out of Control – the New Biology of Machines, Social Systems and the Economic World“, pp. 11-13, 1994 (full pdf book)
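(The three rules Kelly lists for the bats, namely not bumping into one another, keeping up with neighbours, and not straying too far, correspond to Reynolds’ separation, alignment and cohesion. Below is a minimal 2-D Python sketch of those rules; the weights, radii and step sizes are arbitrary illustrative choices, not Reynolds’ original parameters.)

```python
import random

class Boid:
    """A point-mass agent with a position and a velocity in 2-D."""
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

def step(boids, radius=50.0, sep_dist=8.0,
         w_sep=0.05, w_ali=0.05, w_coh=0.01, dt=1.0):
    """One synchronous update applying separation, alignment and cohesion."""
    updates = []
    for b in boids:
        near = [o for o in boids if o is not b
                and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < radius ** 2]
        dvx = dvy = 0.0
        if near:
            # 1. Separation: steer away from neighbours that are too close.
            close = [o for o in near
                     if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < sep_dist ** 2]
            dvx += w_sep * sum(b.x - o.x for o in close)
            dvy += w_sep * sum(b.y - o.y for o in close)
            # 2. Alignment: keep up with the neighbours' average velocity.
            dvx += w_ali * (sum(o.vx for o in near) / len(near) - b.vx)
            dvy += w_ali * (sum(o.vy for o in near) / len(near) - b.vy)
            # 3. Cohesion: don't stray too far from the local centre of mass.
            dvx += w_coh * (sum(o.x for o in near) / len(near) - b.x)
            dvy += w_coh * (sum(o.y for o in near) / len(near) - b.y)
        updates.append((dvx, dvy))
    for b, (dvx, dvy) in zip(boids, updates):
        b.vx += dvx
        b.vy += dvy
        b.x += b.vx * dt
        b.y += b.vy * dt

# 50 boids with random positions and headings; run a few steps.
flock = [Boid(random.uniform(0, 100), random.uniform(0, 100),
              random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
for _ in range(100):
    step(flock)
```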

The dynamics of ant swarms share an uncanny similarity with the movement of various fluids (video above). Micah Streiff and his team from the Georgia Institute of Technology in Atlanta captured writhing groups of ants behaving just like liquids. You can watch them diffuse outwards from a pool, tackle a jagged surface like a viscous fluid, or flow from a funnel (from NewScientist | 2010 best videos).

[…] Fire ants use their claws to grip diverse surfaces, including each other. As a result of their mutual adhesion and large numbers, ant colonies flow like inanimate fluids. In this sequence of films, we demonstrate how ants behave similarly to the spreading of drops, the capillary rise of menisci, and gravity-driven flow down a wall. By emulating the flow of fluids, ant colonies can remain united under stressful conditions. […], in Micah Streiff, Nathan Mlot, Sho Shinotsuka, Alex Alexeev, David Hu, “Ants as Fluids: Physics-Inspired Biology,” ArXiv, 15 Oct 2010. http://arxiv.org/abs/1010.3256 .

[…] People should learn how to play Lego with their minds. Concepts are building bricks. […] ~ V. Ramos, 2002.
