
Figure – Vitorino Ramos, Google Scholar citation record (January 2016).

2016 – As of now, a total of 1567 citations over 74 works (including 3 books) on GOOGLE SCHOLAR (https://scholar.google.com/citations?user=gSyQ-g8AAAAJ&hl=en) [with a Hirsch h-index of 19, and an average of 160.2 citations for each work in my top five], plus 900 citations over 57 works on the new RESEARCH GATE site (https://www.researchgate.net/profile/Vitorino_Ramos).

Refs.: Science, Artificial Intelligence, Swarm Intelligence, Data-Mining, Big-Data, Evolutionary Computation, Complex Systems, Image Analysis, Pattern Recognition, Data Analysis.

Figure – Complete neural circuit controller of the virtual ant, including the pheromone pathway (page 3, fig. 2), from Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos. [URL: http://arxiv.org/abs/1507.08467 ]

Intelligence and decision in foraging ants: individual or collective? Internal or external? What is the right balance between the two? Can one have internal intelligence without external intelligence? Can one take examples from nature to build in silico artificial lives that present us with interesting patterns? We explore a model of foraging ants in this paper, to be presented in early September in Exeter, UK, at UKCI 2015 (available on arXiv [PDF] and ResearchGate).

Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos; “A Model for Foraging Ants, Controlled by Spiking Neural Networks and Double Pheromones“, UKCI 2015 Computational Intelligence – University of Exeter, UK, September 2015.

Abstract: A model of an Ant System where ants are controlled by a spiking neural circuit and a second order pheromone mechanism in a foraging task is presented. A neural circuit is trained for individual ants and subsequently the ants are exposed to a virtual environment where a swarm of ants performs a resource foraging task. The model comprises an associative and unsupervised learning strategy for the neural circuit of the ant. The neural circuit adapts to the environment by means of classical conditioning. The initially unknown environment includes different types of stimuli representing food (rewarding) and obstacles (harmful) which, when they come in direct contact with the ant, elicit a reflex response in the motor neural system of the ant: moving towards or away from the source of the stimulus. The spiking neural circuit of the ant is trained to identify food and obstacles and move towards the former and avoid the latter. The ants are released on a landscape with multiple food sources where one ant alone would have difficulty harvesting the landscape to maximum efficiency. In this case the introduction of a double pheromone mechanism (positive and negative reinforcement feedback) yields better results than traditional ant colony optimization strategies. Traditional ant systems include mainly a positive reinforcement pheromone. This approach uses a second pheromone that acts as a marker for forbidden paths (negative feedback). This blockade is not permanent and is controlled by the evaporation rate of the pheromones. The combined action of both pheromones acts as a collective stigmergic memory of the swarm, which reduces the search space of the problem. This paper explores how the adaptation and learning abilities observed in biologically inspired cognitive architectures are synergistically enhanced by swarm optimization strategies. The model portrays two forms of artificial intelligent behaviour: at the individual level the spiking neural network is the main controller, and at the collective level the pheromone distribution is a map towards the solution that emerges from the colony. The presented model is an important pedagogical tool, as it is also an easy-to-use library that allows access to the spiking neural network paradigm from inside NetLogo, a language used mostly in agent-based modelling and experimentation with complex systems.
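The controller in the paper is built with the authors' spiking neural network library for NetLogo; purely as a hedged illustration of the classical-conditioning idea (and not of their actual circuit), here is a minimal Python sketch in which a leaky integrate-and-fire "approach food" neuron learns, through a simple associative rule, to fire for a conditioned stimulus (food odour) once it has been repeatedly paired with the unconditioned reflex stimulus (direct food contact). All weights, thresholds and the learning rule are illustrative assumptions, not values from the paper.

```python
class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron acting as the ant's 'approach food' motor unit."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0                 # membrane potential
        self.threshold = threshold
        self.leak = leak             # decay factor per time step

    def step(self, current):
        self.v = self.v * self.leak + current
        if self.v >= self.threshold:
            self.v = 0.0             # reset after a spike
            return 1
        return 0

w_us = 1.2        # fixed reflex weight: food contact always triggers the approach response
w_cs = 0.1        # plastic weight from the odour input, strengthened by learning
rate = 0.2        # illustrative learning rate
neuron = LIFNeuron()

for trial in range(30):
    cs = 1                              # odour is present on every trial
    us = 1 if trial < 20 else 0         # food contact only during the 20 training trials
    spike = neuron.step(w_us * us + w_cs * cs)
    if spike and cs:                    # associative strengthening on coincidence
        w_cs = min(w_cs + rate * (w_us - w_cs), w_us)
    print(f"trial {trial:2d}  us={us}  spike={spike}  w_cs={w_cs:.2f}")

# After the paired trials, the odour alone (us=0) is enough to drive the neuron over
# threshold, i.e. the virtual ant now moves towards the smell of food before touching it.
```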


Figure – Two simplices a and b connected by the 2-dimensional face, the triangle {1;2;3} (from "Reading the News Through its Structure: New Hybrid Connectivity Based Approaches", David M.S. Rodrigues). In the analysis of the time-line of The Guardian newspaper (link), the system used feature vectors based on word frequencies and then computed similarity between documents from those feature vectors. This is a purely statistical approach that requires great computational power and that is difficult for problems with large feature vectors and many documents. Feature vectors with 100,000 or more items are common, and computing similarities between such documents becomes cumbersome. Instead of computing distance (or similarity) matrices between documents from feature vectors, the present approach explores the possibility of inferring the distance between documents from the Q-analysis description. Q-analysis provides a very natural notion of connectivity between the simplices of the structure, and in the relation studied, documents are connected to each other through shared sets of tags entered by the journalists. Also in this framework, eccentricity is defined as a measure of the relatedness of one simplex in relation to another [7].
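As a rough, hedged illustration of that connectivity idea (with made-up tags, and using a classical Atkin-style eccentricity formula as a stand-in for the precise measure defined in [5, 6, 7]), two news items seen as simplices of keywords can be compared directly from their shared tags:

```python
# Two news items represented as simplices over the keywords (tags) the journalists attached.
doc_a = {"politics", "europe", "economy", "austerity"}
doc_b = {"politics", "europe", "sports"}

def q_nearness(a, b):
    """Dimension of the shared face: two documents are q-near if they share q+1 keywords."""
    return len(a & b) - 1

def eccentricity(a, b):
    """Atkin-style eccentricity of simplex a relative to b: how much of a sticks out
    beyond the face it shares with b (illustrative stand-in for the paper's measure)."""
    q_top = len(a) - 1            # dimension of a itself
    q_bottom = q_nearness(a, b)   # dimension of the face shared with b
    if q_bottom < 0:
        return float("inf")       # no shared keywords: the documents are disconnected
    return (q_top - q_bottom) / (q_bottom + 1)

print(q_nearness(doc_a, doc_b))      # 1  -> the two items share 2 tags, hence are 1-near
print(eccentricity(doc_a, doc_b))    # 1.0
print(eccentricity(doc_b, doc_a))    # 0.5 -> doc_b is less eccentric with respect to doc_a
```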

David M.S. Rodrigues and Vitorino Ramos, "Traversing News with Ant Colony Optimisation and Negative Pheromones" [PDF], accepted as a preprint for oral presentation at the European Conference on Complex Systems, ECCS'14, Lucca, Italy, Sept. 22-26, 2014.

Abstract: The past decade has seen the rapid development of the online newsroom. News published online is now the main news outlet, surpassing traditional printed newspapers. This poses challenges to both the production and the consumption of news. With so many sources of information available, it is important to find ways to cluster and organise the documents if one wants to understand this new system. Traditional approaches to the problem of clustering documents usually embed the documents in a suitable similarity space. Previous studies have reported on the impact of the similarity measures used for clustering of textual corpora [1]. These similarity measures are usually calculated for bag-of-words representations of the documents. This makes the final document-word matrix high dimensional. Feature vectors with more than 10,000 dimensions are common, and algorithms have severe problems with the high dimensionality of the data. A novel bio-inspired approach to the problem of traversing the news is presented. It finds Hamiltonian cycles over documents published by the newspaper The Guardian. A Second Order Swarm Intelligence algorithm based on Ant Colony Optimisation was developed [2, 3] that uses a negative pheromone to mark unrewarding paths with a "no-entry" signal. This approach follows recent findings of negative pheromone usage in real ants [4].

In this case study the corpus of data is represented as a bipartite relation between documents and keywords entered by the journalists to characterise the news. A new similarity measure between documents is presented, based on the Q-analysis description [5, 6, 7] of the simplicial complex formed between documents and keywords. The eccentricity between documents (two simplices) is then used as a novel measure of similarity between documents. The results prove that the Second Order Swarm Intelligence algorithm performs better in benchmark instances of the travelling salesman problem, with faster convergence and optimal results. The addition of the negative pheromone as a no-entry signal improves the quality of the results. The application of the algorithm to the corpus of news of The Guardian creates a coherent navigation system among the news. This allows users to navigate the news published during a certain period of time in a semantic sequence instead of a time sequence. This work has broader application, as it can be applied to many cases where the data is mapped to bipartite relations (e.g. protein expressions in cells, sentiment analysis, brand awareness in social media, routing problems), as it highlights the connectivity of the underlying complex system.

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Hamiltonian cycles, Text Mining, Textual Corpora, Information Retrieval, Knowledge Discovery, Sentiment Analysis, Q-Analysis, Data Mining, Journalism, The Guardian.

References:

[1] Alexander Strehl, Joydeep Ghosh, and Raymond Mooney. Impact of similarity measures on web-page clustering. In Workshop on Artificial Intelligence for Web Search (AAAI 2000), pages 58-64, 2000.
[2] David M. S. Rodrigues, Jorge Louçã, and Vitorino Ramos. From standard to second-order Swarm Intelligence phase-space maps. In Stefan Thurner, editor, 8th European Conference on Complex Systems, Vienna, Austria, September 2011.
[3] Vitorino Ramos, David M. S. Rodrigues, and Jorge Louçã. Second order Swarm Intelligence. In Jeng-Shyang Pan, Marios M. Polycarpou, Michał Woźniak, André C.P.L.F. Carvalho, Hector Quintian, and Emilio Corchado, editors, HAIS'13, 8th International Conference on Hybrid Artificial Intelligence Systems, volume 8073 of Lecture Notes in Computer Science, pages 411-420. Springer Berlin Heidelberg, Salamanca, Spain, September 2013.
[4] Elva J.H. Robinson, Duncan Jackson, Mike Holcombe, and Francis L.W. Ratnieks. No entry signal in ant foraging (Hymenoptera: Formicidae): new insights from an agent-based model. Myrmecological News, 10(120), 2007.
[5] Ronald Harry Atkin. Mathematical Structure in Human Affairs. Heinemann Educational Publishers, 48 Charles Street, London, 1st edition, 1974.
[6] J. H. Johnson. A survey of Q-analysis, part 1: The past and present. In Proceedings of the Seminar on Q-analysis and the Social Sciences, University of Leeds, September 1983.
[7] David M. S. Rodrigues. Identifying news clusters using Q-analysis and modularity. In Albert Diaz-Guilera, Alex Arenas, and Alvaro Corral, editors, Proceedings of the European Conference on Complex Systems 2013, Barcelona, September 2013.

In order to solve hard combinatorial optimization problems (e.g. optimally scheduling students and teachers over a week plan across several different classes and classrooms), one way is to computationally mimic how ants forage the vicinity of their habitats searching for food. Among a myriad of endless possibilities for finding the optimal route (minimizing the travel distance), ants collectively make the solution emerge by using stigmergic signal traces, or pheromones, which also change dynamically under evaporation.

Current algorithms, however, make use only of a positive-feedback type of pheromone during their search: if the ants collectively visit a good low-distance route (a minimal pseudo-solution to the problem), they tend to reinforce that signal for their colleagues. Nothing wrong with that, on the contrary, but no one knows whether a lower-distance alternative route lies just around the corner. In this global search endeavour, like a snowballing effect, positive feedback tends to reward the exploitation of solutions but not the equally useful exploration side. The emerging candidate solutions can thus crystallize and freeze, while a small change in some parts of the whole route could, on the other hand, successfully improve the global result.
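Before the figure and the abstract below, here is a hedged Python sketch (not the exact rule from our paper, whose details are in the PDF linked below) of how a second, repulsive pheromone could enter an ACS-style edge choice: a mixing weight alpha plays the role of the 0.95/0.05 balance shown in the figure, and both pheromone types evaporate so that the "no-entry" blockade is never permanent.

```python
import random

def edge_weight(pos_pher, neg_pher, distance, alpha=0.95, beta=2.0):
    # alpha mixes attractive and repulsive pheromone; beta weighs the 1/distance heuristic.
    pheromone = alpha * pos_pher - (1.0 - alpha) * neg_pher
    return max(pheromone, 1e-12) * (1.0 / distance) ** beta

def choose_next_city(current, unvisited, pos, neg, dist):
    # Probabilistic choice proportional to the combined pheromone/heuristic weight.
    weights = [edge_weight(pos[current][j], neg[current][j], dist[current][j])
               for j in unvisited]
    return random.choices(unvisited, weights=weights, k=1)[0]

def evaporate(pos, neg, rho_pos=0.1, rho_neg=0.3):
    # Both pheromones evaporate; the negative "no-entry" mark fades and is not permanent.
    for row_p, row_n in zip(pos, neg):
        for j in range(len(row_p)):
            row_p[j] *= (1.0 - rho_pos)
            row_n[j] *= (1.0 - rho_neg)
```

After each tour, positive pheromone would be deposited along good routes and negative pheromone along unrewarding ones; the exact deposit rules we actually used are the ones described in the pre-print.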

Figure – Influence of negative pheromone on the kroA100.tsp problem (fig. 1, page 6); values on the lines represent 1-ALPHA. A typical standard ACS (Ant Colony System) is represented here by the line with value 0.0, while better results are found by our approach when using positive feedback (0.95) along with negative feedback (0.05). Not only do we obtain better results, we also find them earlier.

There is, however, an advantage when a second type of pheromone (a negative-feedback one) co-evolves with the first type, and we decided to research its impact. What we found out is that by using a second type of global feedback we can indeed obtain a faster search while achieving better results. In a way, it's like using two different types of evaporative traffic lights, green and red, co-evolving together. And as a conclusion, we should indeed use a negative no-entry signal pheromone. In small amounts (0.05), but use it. Not only does this prevent the whole system from freezing on some solutions too soon, it also promotes a better compromise over the search space of potential routes. The pre-print article is available here at arXiv. The abstract and keywords follow:

Vitorino Ramos, David M. S. Rodrigues, Jorge Louçã, “Second Order Swarm Intelligence” [PDF], in Hybrid Artificial Intelligent Systems, Lecture Notes in Computer Science, Springer-Verlag, Volume 8073, pp. 411-420, 2013.

Abstract: An artificial Ant Colony System (ACS) algorithm to solve general-purpose combinatorial Optimization Problems (COP) that extends previous AC models [21] by the inclusion of a negative pheromone is here described. Several Travelling Salesman Problems (TSP) were used as benchmarks. We show that by using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedbacks achieves better results than single positive feedback systems. The algorithm was tested against known NP-complete combinatorial Optimization Problems, running on symmetrical TSPs. We show that the new algorithm compares favourably against these benchmarks, in accordance with recent biological findings by Robinson [26,27] and Grüter [28], where "no entry" signals and negative feedback allow a colony to quickly reallocate the majority of its foragers to superior food patches. This is the first time an extended ACS algorithm is implemented with these successful characteristics.

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Travelling Salesman Problems (TSP).

Picture – John von Neumann.

"There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself". ~ John von Neumann, in his 1949 University of Illinois lectures on the Theory and Organization of Complicated Automata [J. von Neumann, Theory of self-reproducing automata, 1949 Univ. of Illinois Lectures on the Theory and Organization of Complicated Automata, ed. A.W. Burks (University of Illinois Press, Urbana, IL, 1966).].

Figure – "Hybrid Artificial Intelligent Systems", the new LNAI (Lecture Notes in Artificial Intelligence) series volume 8073, Springer-Verlag book for HAIS 2013, including our paper "Second Order Swarm Intelligence" (pp. 411-420) [original photo by my colleague David M.S. Rodrigues].

New work, new book. Last week one of our latest works came out, published by Springer. Edited by Jeng-Shyang Pan, Marios M. Polycarpou, Emilio Corchado et al., "Hybrid Artificial Intelligent Systems" comprises a full set of new papers on this hybrid area of Intelligent Computing (check the full article list at Springer). Our new paper "Second Order Swarm Intelligence" (pp. 411-420, Springer books link) was published in the Bio-inspired Models and Evolutionary Computation section.

Fig. 1 – (click to enlarge) The optimal shortest path among N=1265 points depicting a Portuguese Navalheira crab, as a result of one of our latest Swarm-Intelligence based algorithms. The problem of finding the shortest path among N different points in space is NP-hard, known as the Travelling Salesman Problem (TSP), being one of the major and hardest benchmarks in Combinatorial Optimization (link) and Artificial Intelligence. (V. Ramos, D. Rodrigues, 2012)

This summer my kids grabbed a tiny Portuguese Navalheira crab on the shore. After a small photo session and some baby-sitting with a lettuce leaf, it was time to release it again into the ocean. It not only survived my kids, it is now entitled to a new World Wide Web on-line life. After the shortest-path Sardine (link) with 1084 points, here is the Crab with 1265 points. The algorithm ran for as little as 110 iterations.

Fig. 2 – (click to enlarge) Our 1265 initial points depicting a TSP Portuguese Navalheira crab. Could you already envision a minimal tour between all these points?

As usual in Travelling Salesman problems (TSP), we start with a set of points, in our case 1265 points or cities (fig. 2). Given a list of cities and their pairwise distances, the task is to find the shortest possible tour that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods.

Fig. 3 – (click to enlarge) Again the shortest path Navalheira crab, where the optimal contour path (in black: first fig. above) with 1265 points (or cities) was filled in dark orange.

TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder.

What follows (fig. 4) is the original crab photo after image segmentation, just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem. The algorithm was then fed the extracted x,y coordinates of these data points (fig. 2) in order for it to discover the minimal path, in just 110 iterations. For extra details, pay a visit to the shortest-path Sardine (link) done earlier.

Fig. 4 – (click to enlarge) The original crab photo after some image processing as well as segmentation and just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem.

Video – Water has Memory (from Oasis HD, Canada; link): just a liquid or much more? Many researchers are convinced that water is capable of “memory” by storing information and retrieving it. The possible applications are innumerable: limitless retention and storage capacity and the key to discovering the origins of life on our planet. Research into water is just beginning.

Water capable of processing information as well as a huge possible “container” for data media, that is something remarkable. This theory was first proposed by the late French immunologist Jacques Benveniste, in a controversial article published in 1988 in Nature, as a way of explaining how homeopathy works (link). Benveniste’s theory has continued to be championed by some and disputed by others. The video clip above, from the Oasis HD Channel, shows some fascinating recent experiments with water “memory” from the Aerospace Institute of the University of Stuttgart in Germany. The results with the different types of flowers immersed in water are particularly evocative.

This line of research also reminds me of an old and quite interesting paper by a colleague, Chrisantha Fernando. Together with Sampsa Sojakka, both proved that waves produced on the surface of water can be used as the medium for a Wolfgang Maass "Liquid State Machine" (link) that pre-processes inputs, so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Amazingly, water achieves this "for free", and does so without the time-consuming computation required by realistic neural models. What follows is the abstract of their paper entitled "Pattern Recognition in a Bucket", as well as a PDF link to it:

Figure – Typical wave patterns for the XOR task. Top-Left: [0 1] (right motor on), Top-Right: [1 0] (left motor on), Bottom-Left: [1 1] (both motors on), Bottom-Right: [0 0] (still water). Sobel-filtered and thresholded images on the right. (From Fig. 3 in Chrisantha Fernando and Sampsa Sojakka, "Pattern Recognition in a Bucket", ECAL proc., European Conference on Artificial Life, 2003.)

[…] Abstract. This paper demonstrates that the waves produced on the surface of water can be used as the medium for a “Liquid State Machine” that pre-processes inputs so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Interference between waves allows non-linear parallel computation upon simultaneous sensory inputs. Temporal patterns of stimulation are converted to spatial patterns of water waves upon which a linear discrimination can be made. Whereas Wolfgang Maass’ Liquid State Machine requires fine tuning of the spiking neural network parameters, water has inherent self-organising properties such as strong local interactions, time-dependent spread of activation to distant areas, inherent stability to a wide variety of inputs, and high complexity. Water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. An analogy is made between water molecules and neurons in a recurrent neural network. […] in Chrisantha Fernando and Sampsa Sojakka, “Pattern Recognition in a Bucket“, ECAL proc., European Conference on Artificial Life, 2003. [PDF link]
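The same readout idea can be mimicked without a bucket: below is a hedged, minimal Python sketch in which a small fixed random nonlinear "reservoir" plays the role of the water surface and only a simple perceptron readout is trained to solve XOR. It illustrates the Liquid State Machine principle described in the abstract; it is not a reproduction of Fernando and Sojakka's actual setup.

```python
import math
import random

random.seed(1)

# Fixed random "reservoir": a stand-in for the water surface, whose ripples nonlinearly
# mix the two motor inputs. Nothing inside the reservoir is ever trained.
N_RES = 20
w_in = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(N_RES)]
bias = [random.uniform(-1, 1) for _ in range(N_RES)]

def reservoir(x):
    return [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in zip(w_in, bias)]

# Only this simple perceptron readout is trained, on the XOR of the two inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
w_out, b_out = [0.0] * N_RES, 0.0
for _ in range(1000):
    for x, target in data:
        h = reservoir(x)
        y = 1 if sum(wo * hi for wo, hi in zip(w_out, h)) + b_out > 0 else 0
        err = target - y
        w_out = [wo + 0.1 * err * hi for wo, hi in zip(w_out, h)]
        b_out += 0.1 * err

for x, target in data:
    y = 1 if sum(wo * hi for wo, hi in zip(w_out, reservoir(x))) + b_out > 0 else 0
    print(x, target, y)   # in the randomly expanded space, XOR becomes linearly separable
```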

“I would like to thank flocks, herds, and schools for existing: nature is the ultimate source of inspiration for computer graphics and animation.” in Craig Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model“, (paper link) published in Computer Graphics, 21(4), July 1987, pp. 25-34. (ACM SIGGRAPH ’87 Conference Proceedings, Anaheim, California, July 1987.)

Figure – "Spatio-Temporal Dynamics on Co-Evolved Stigmergy", Vitorino Ramos, David M.S. Rodrigues, Jorge Louçã, ECCS'11.

Ever tried to solve a problem whose own problem statement is constantly changing? Have a look at our approach:

Vitorino Ramos, David M.S. Rodrigues, Jorge Louçã, "Spatio-Temporal Dynamics on Co-Evolved Stigmergy", in European Conference on Complex Systems, ECCS'11, Vienna, Austria, Sept. 12-16, 2011.

Abstract: Research on hard NP-complete Combinatorial Optimization Problems (COPs) has focused in recent years on several robust bio-inspired meta-heuristics, like those involving Evolutionary Computation (EC) algorithmic paradigms. One particularly successful and well-known meta-heuristic approach is based on Swarm Intelligence (SI), i.e., the self-organized stigmergic-based property of a complex system whereby the collective behaviours of (unsophisticated) entities interacting locally with their environment cause coherent functional global patterns to emerge. This line of research, recognized as Ant Colony Optimization (ACO), uses a set of stochastic cooperating ant-like agents to find good solutions, using self-organized stigmergy as an indirect form of communication mediated by artificial pheromone, where agents deposit pheromone-signs on the edges of the problem-related graph complex network, encompassing a family of successful algorithmic variations such as: Ant Systems (AS), Ant Colony Systems (ACS), Max-Min Ant Systems (MaxMin AS) and Ant-Q.

Albeit extremely successful, these algorithms mostly rely on positive feedback, causing excessive algorithmic exploitation of the entire combinatorial search space. This is particularly evident over well-known benchmarks such as the symmetrical Traveling Salesman Problem (TSP). Since these systems comprise a large number of frequently similar components or events, the principal challenge is to understand how the components interact to produce a complex pattern, a feasible solution (in our case study, an optimal robust solution for hard NP-complete dynamic TSP-like combinatorial problems). A suitable approach is to first understand the role of two basic modes of interaction among the components of Self-Organizing (SO) Swarm-Intelligent-like systems: positive and negative feedback. While positive feedback promotes a snowballing auto-catalytic effect (e.g. trail pheromone upgrading over the network; exploitation of the search space), taking an initial change in a system and reinforcing that change in the same direction as the initial deviation (self-enhancement and amplification), allowing the entire colony to exploit some past and present solutions (environmental dynamic memory), negative feedback such as pheromone evaporation ensures that the overall learning system does not stabilize or freeze itself on a particular configuration (innovation; search space exploration). Although this kind of (global) delayed negative feedback is important (evaporation), for the many reasons given above, there are however strong indications that other negative feedbacks are present in nature, which could also play a role in increased convergence, namely implicit-like negative feedbacks. As in the case of positive feedbacks, there is no reason not to explore increasingly distributed and adaptive algorithmic variations where negative feedback is also imposed implicitly (not only explicitly) over each network edge, while the entire colony seeks better answers in due time.

In order to overcome this hard exploitation-exploration compromise over the search space, our present algorithmic approach follows the route of very recent biological findings showing that forager ants lay attractive trail pheromones to guide nest mates to food, but where the effectiveness of foraging networks was improved if pheromones could also be used to repel foragers from unrewarding routes. Increasing empirical evidence for such a negative trail pheromone exists, deployed by Pharaoh's ants (Monomorium pharaonis) as a 'no entry' signal to mark unrewarding foraging paths. The new algorithm comprises a second-order approach to Swarm Intelligence, as pheromone-based no-entry signal cues were introduced, co-evolving with the standard pheromone distributions (collective cognitive maps) in the aforementioned known algorithms.

To exhaustively test its adaptive response and robustness, we have recurred to different dynamic optimization problems. Medium-sized and large-sized dynamic TSP problems were created. Settings and parameters such as environmental update frequencies, severity of landscape or network topological changes, and type of dynamics were tested. Results prove that the present co-evolved two-pheromone swarm intelligence algorithm is able to quickly track increasingly swift changes on the dynamic TSP complex network, compared to standard algorithms.

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Dynamical Symmetrical Traveling Salesman Problems (TSP).


Fig. – Recovery times over several dynamical stress tests at the fl1577 TSP problem (1577 node graph) – 460 iter max – Swift changes at every 150 iterations (20% = 314 nodes, 40% = 630 nodes, 60% = 946 nodes, 80% = 1260 nodes, 100% = 1576 nodes). [click to enlarge]
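For the curious, this is the kind of stress schedule meant above, as a hedged Python sketch (illustrative values only, loosely mimicking the fl1577 setting of the figure, not our actual test harness): every 150 iterations a chosen fraction of the cities is displaced, so the problem the colony is solving keeps changing under its feet.

```python
import random

def perturb(cities, fraction, rng):
    """Move a random fraction of the cities to new coordinates (the 'swift change')."""
    moved = rng.sample(range(len(cities)), int(fraction * len(cities)))
    for i in moved:
        cities[i] = (rng.uniform(0, 1000), rng.uniform(0, 1000))
    return moved

rng = random.Random(42)
cities = [(rng.uniform(0, 1000), rng.uniform(0, 1000)) for _ in range(1577)]

for iteration in range(1, 461):
    # ... one iteration of the double-pheromone ant colony would run here ...
    if iteration % 150 == 0:
        changed = perturb(cities, fraction=0.20, rng=rng)  # roughly 20% of the 1577 nodes
        print(f"iteration {iteration}: {len(changed)} cities moved, colony must re-adapt")
```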

Figure – "From Standard to Second Order Swarm Intelligence Phase-Space Maps", David Rodrigues, Jorge Louçã, Vitorino Ramos, ECCS'11.

David M.S. Rodrigues, Jorge Louçã, Vitorino Ramos, “From Standard to Second Order Swarm Intelligence Phase-space maps“, in European Conference on Complex Systems, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

Abstract: Standard stigmergic approaches to Swarm Intelligence encompass the use of a set of stochastic cooperating ant-like agents to find optimal solutions, using self-organized stigmergy as an indirect form of communication mediated by a singular artificial pheromone. Agents deposit pheromone-signs on the edges of the problem-related graph to give rise to a family of successful algorithmic approaches entitled Ant Systems (AS), Ant Colony Systems (ACS), among others. These rely mainly on positive feedback to search for an optimal solution in a large combinatorial space. The present work shows how, using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedback achieves better results than single positive feedback systems. This follows the route of very recent biological findings showing that forager ants, while laying attractive trail pheromones to guide nest mates to food, also gained foraging effectiveness by the use of pheromones that repelled foragers from unrewarding routes. The algorithm presented here takes inspiration precisely from this biological observation.

The new algorithm was exhaustively tested on a series of well-known benchmarks over hard NP-complete Combinatorial Optimization Problems (COPs), running on symmetrical Traveling Salesman Problems (TSP). Different network topologies and stress tests were conducted over small-size TSPs (eil51.tsp; eil78.tsp; kroA100.tsp), medium-size ones (d198.tsp; lin318.tsp; pcb442.tsp; att532.tsp; rat783.tsp) as well as large-sized ones (fl1577.tsp; d2103.tsp) [numbers here referring to the number of nodes in the network]. We show that the new co-evolved stigmergic algorithm compared favourably against the benchmarks. The algorithm was able to equal or greatly improve upon every instance of those standard algorithms, not only within the realm of the Swarm-Intelligent AS and ACS approaches, but also against other computational paradigms like Genetic Algorithms (GA), Evolutionary Programming (EP), as well as SOM (Self-Organizing Maps) and SA (Simulated Annealing). In order to understand in depth how a second co-evolved pheromone was able to drive the collective system to such results, a refined phase-space map was produced, mapping the pheromone ratios between a pure Ant Colony System (where no negative feedback besides pheromone evaporation is present) and the present second-order approach. The evaporation rate between the different pheromones was also studied and its influence on the outcomes of the algorithm is shown. A final discussion on the phase map is included. This work has implications for the way large combinatorial problems are addressed, as the double feedback mechanism shows improvements over single positive-feedback mechanisms in terms of convergence speed and of final results.

Keywords: Stigmergy, Co-Evolution, Self-Organization, Swarm Intelligence, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Traveling Salesman Problems (TSP), phase-space.

Fig. – Comparing convergence results between Standard algorithms vs. Second Order Swarm Intelligence, over TSP fl1577 (click to enlarge).

Picture – The European Conference on Complex Systems (ECCS'11 – link) at one of the main Austrian newspapers, Der Standard: "Die ganze Welt als Computersimulation" ("The whole world as a computer simulation") (link), Klaus Taschwer, Der Standard, 14 September [click to enlarge – photo taken at the conference on Sept. 15, Vienna 2011].

"Take Darwin, for example: would Caltech have hired Darwin? Probably not. He had only vague ideas about some of the mechanisms underlying biological Evolution. He had no way of knowing about genetics, and he lived before the discovery of mutations. Nevertheless, he did work out, from the top down, the notion of natural selection and the magnificent idea of the relationship of all living things." Murray Gell-Mann in "Plectics", excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995).

To be honest, I didn't enjoy this title, but all of us have had our fair share of dealings with journalists by now. After all, 99% of us don't do computer simulation. We are all after the main principles, and their direct applications.

During 5 days (12-16 Sept.), with around 700 attendees, the Vienna 2011 conference revolved around main themes such as Complexity & Networks (XNet), Current Trends in Game Theory, Complexity in Energy Infrastructures, Emergent Properties in Natural and Artificial Complex Systems (EPNACS), Complexity and the Future of Transportation Systems, Econophysics, Cultural and Opinion Dynamics, Dynamics on and of Complex Networks, Frontiers in the Theory of Evolution, and, among many others, Dynamics of Human Interactions.

For those who know me (they will definitely understand), I was mainly attending the sessions underlined above, the last one (Frontiers in Evolution) being one of my favourites among all these ECCS years. All in all, the conference had high-quality works (daily, there were about 3-4 works I definitely think should be followed in the future), and more attention should have been devoted to those (my main criticism of the conference organization goes here). Naturally, the newspaper article also reflects on FuturICT, historically one of the major scientific European projects ever undertaken (along, probably, with the Geneva LHC), whose teams spread across Europe, including Portugal with a representative team of 7 members present at the conference, led by Jorge Louçã, the editor and organizer of the previous ECCS'10, last year in Lisbon.

Video – "… they forgot to say: in principle!". Ricard Solé addressing the topic of a Morphospace for Biological Computation at ECCS'11 (European Conference on Complex Systems), while keeping his good humour on.

Let me draw your attention anyway to 4 outstanding lectures. Peter Schuster (link), on the first day, dissected the source of Complexity in Evolution, battling between, as he puts it, two paradoxes: (1) evolution is an enormously complex process, and (2) biological evolution on Earth proceeds from lower towards higher complexity. Earlier that morning, opening the conference, Murray Gell-Mann (link), who co-founded the Santa Fe Institute in 1984, gave a wonderful lecture on Generalized Entropies. Despite his age, the 1969 Nobel laureate in Physics (for his work on the theory of elementary particles) gladly turned his interest in the 1990s to the theory of Complex Adaptive Systems (CAS). Next, Albert-László Barabási (link) tamed Complexity on Controlling Networks. Finally, on the last day, closing the conference in pure gold, Ricard Solé (link) addressed the topic of a Morphospace for Biological Computation, an amazing lecture with a powerful topic for which, nevertheless, I felt he had little time (20 minutes) for such a rich endeavour. However, he by no means lost his good humour during the talk (check my video above). Next year the conference will be held in Brussels, and judging by the poster design alone, it promises. Go ants, go…!

Picture – The European Conference on Complex Systems (ECCS’12 – link) poster design for next year in Brussels.

"It is very difficult to make good mistakes", Tim Harford, July 2011.

TED talk (July 2011) by Tim Harford, a writer on economics who studies Complex Systems, exposing a surprising link among successful systems: they were built through trial and error. In this sparkling talk from TEDGlobal 2011, he asks us to embrace our randomness and start making better mistakes [from TED]. Instead of the God complex, he proposes trial and error or, to be more precise, Genetic Algorithms and Evolutionary Computation (one of the examples in his talk is indeed the evolutionary optimal design of an airplane nozzle).

Now, we may ask: is it clear to you from the talk whether the nozzle was computationally designed using evolutionary search, as suggested by the imagery, or was the imagery designed to describe a process carried out in the laboratory? … as a colleague asked me the other day over Google Plus. A great question, since I believe it will not be clear to everyone watching that lecture.

Though, it was clear to me from the beginning, for one simple reason. That is a well-known work in the Evolutionary Computation area, done by one of its pioneers, Professor Hans-Paul Schwefel from Germany, in 1974 I believe. Unfortunately, at least to me I must say, Tim Harford did not mention the author, nor does he mention anywhere in his talk the entire Evolutionary Computation or Genetic Algorithms area, even if he makes a clear bridge between these concepts and the search for innovation. The optimal nozzle design was in fact produced for the first time in Schwefel's PhD thesis ("Adaptive Mechanismen in der biologischen Evolution und ihr Einfluß auf die Evolutionsgeschwindigkeit", i.e. adaptive mechanisms in biological evolution and their influence on the speed of evolution), and he arrived at these results by using a branch of Evolutionary Computation known as Evolution Strategies (ES) [here is a Wikipedia entry]. The objective was to achieve maximum thrust, and for that some parameters had to be adjusted, such as the point at which the small aperture should be placed between the two entrances. What follows is a rather old video from YouTube on the process:

The animation shows the evolution of a nozzle design from its initial configuration to the final one. After achieving such a design, it was a little difficult to understand why the surprising design was good, and a team of physicists and engineers gathered to investigate and devise some explanation for the final nozzle configuration. Schwefel (later on with his German group) also investigated the algorithmic features of Evolution Strategies, which made possible different generalizations such as creating a surplus of offspring, the use of non-elitist evolution strategies (the comma selection scheme), and the use of recombination beyond the well-known mutation operator to generate the offspring. Here are some related links and papers (link).
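For readers who want to feel how an Evolution Strategy behaves, here is a hedged, minimal (mu, lambda)-ES sketch in Python. A toy quadratic function stands in for the thrust measurement of the real nozzle experiment, and every parameter value is an illustrative assumption rather than one of Schwefel's settings; the point is only to show Gaussian mutation, intermediate recombination and comma selection at work.

```python
import random

random.seed(0)

def fitness(x):
    # Toy stand-in for "thrust" as a function of the design parameters x (to be maximized).
    return -sum((xi - 3.0) ** 2 for xi in x)

MU, LAMBDA, DIM, SIGMA = 5, 35, 10, 0.5

def recombine(parents):
    # Intermediate recombination: average the parents, parameter by parameter.
    return [sum(p[i] for p in parents) / len(parents) for i in range(DIM)]

population = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(MU)]
for generation in range(100):
    offspring = []
    for _ in range(LAMBDA):
        child = recombine(population)
        child = [xi + random.gauss(0, SIGMA) for xi in child]   # Gaussian mutation
        offspring.append(child)
    # Comma selection: parents are discarded; only the best MU offspring survive.
    population = sorted(offspring, key=fitness, reverse=True)[:MU]

print(round(fitness(population[0]), 3))
# Climbs towards 0 (the optimum); a small noise floor remains because SIGMA is kept fixed here.
```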

Despite these details… I did enjoy the talk a lot, as well as his quote above. There is still a subtle difference between "trial and error" and "evolutionary search", even if they are linked, but when Tim Harford makes a connection between innovation and Evolutionary Computation, it reminded me of the more recent (one decade now, perhaps) work of David Goldberg (IlliGAL – Illinois Genetic Algorithms Laboratory). Another founding father of the area, now dedicated to innovation, learning, etc., much along these precise lines, mostly in his books: (2002) The Design of Innovation: Lessons from and for Competent Genetic Algorithms, Kluwer Academic Publishers, and (2006) The Entrepreneurial Engineer, Wiley.

Finally, let me add that there are other beautiful examples of Evolutionary Design. The one I love most, however (for several reasons, namely the powerful abstract message it sends out into other conceptual fields), is this: a simple bridge. Enjoy, and for some seconds do think about your own area of work.

Fig. 1 – (click to enlarge) The optimal shortest path among N=1084 points depicting a Portuguese sardine, as a result of one of our latest Swarm-Intelligence based algorithms. The problem of finding the shortest path among N different points in space is NP-hard, known as the Travelling Salesman Problem (TSP), being one of the major and hardest benchmarks in Combinatorial Optimization (link) and Artificial Intelligence. (D. Rodrigues, V. Ramos, 2011)

Almost summer time in Portugal, great weather as usual, and the perfect moment to eat sardines along with friends in open-air esplanades; in fact, a lot of grilled sardines. We usually eat grilled sardines with a tomato-onion salad along with barbecued cherry peppers in salt and olive oil. That's tasty, believe me. But not tasty enough, however, for me and one of my colleagues, David Rodrigues (blog link/twitter link). We decided to take this experience a little further, creating the first shortest-path sardine.

Fig. 2 – (click to enlarge) Our 1084 initial points depicting a TSP Portuguese sardine. Could you already envision a minimal tour between all these points?

As usual in Travelling Salesman problems (TSP), we start with a set of points, in our case 1084 points or cities (fig. 2). Given a list of cities and their pairwise distances, the task is now to find the shortest possible tour that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept of a city represents, for example, customers, soldering points, or DNA fragments, and the concept of distance represents travelling times or cost, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder. (link)

Fig. 3 – (click to enlarge) A well done and quite grilled shortest path sardine, where the optimal contour path (in blue: first fig. above) with 1084 points was filled in black colour. Nice T-shirt!

Even for toy problems like the present 1084-point TSP sardine, the number of possible paths is incredibly huge. And only one of those possible paths is the optimal (minimal) one. Consider for example a TSP with N=4 cities, A, B, C, and D. Starting in city A, the number of possible paths is 6: that is, 1) A to B, B to C, C to D, and D to A; 2) A-B, B-D, D-C, C-A; 3) A-C, C-B, B-D, and D-A; 4) A-C, C-D, D-B, and B-A; 5) A-D, D-C, C-B, and B-A; and finally 6) A-D, D-B, B-C, and C-A. I.e., there are (N-1)! [i.e., N-1 factorial] possible paths. For N=3 cities, 2×1=2 possible paths; for N=4 cities, 3×2×1=6 possible paths; for N=5 cities, 4×3×2×1=24 possible paths; … for N=20 cities, 121,645,100,408,832,000 possible paths, and so on.

The most direct solution would be to try all permutations (ordered combinations) and see which one is cheapest (using computational brute-force search). The running time for this approach, however, lies within a polynomial factor of O(n!), the factorial of the number of cities, so this solution becomes impractical even for only 20 cities. One of the earliest applications of dynamic programming is the Held–Karp algorithm, which solves the problem in time O(n²·2ⁿ).
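To see the two complexity classes side by side, here is a small hedged Python sketch on a random toy instance (illustrative only): the brute-force loop enumerates all (N-1)! orderings, the Held-Karp recursion memoizes over subsets of visited cities, and both return the same optimal tour length. Already around N=20 the brute-force branch becomes hopeless, which is exactly why heuristic approaches such as ant colonies are used on instances like the sardine.

```python
import math
import random
from functools import lru_cache
from itertools import permutations

random.seed(7)
N = 9                                             # small enough for both methods
pts = [(random.random(), random.random()) for _ in range(N)]
d = [[math.dist(a, b) for b in pts] for a in pts]

def brute_force():
    # Fix city 0 as the start and try all (N-1)! orderings of the remaining cities.
    best = math.inf
    for perm in permutations(range(1, N)):
        tour = (0,) + perm
        length = sum(d[tour[i]][tour[(i + 1) % N]] for i in range(N))
        best = min(best, length)
    return best

def held_karp():
    # Dynamic programming over subsets: O(n^2 * 2^n) instead of O(n!).
    @lru_cache(maxsize=None)
    def cost(visited, last):
        if visited == (1 << N) - 1:
            return d[last][0]                     # all cities seen: close the tour
        return min(d[last][nxt] + cost(visited | (1 << nxt), nxt)
                   for nxt in range(N) if not visited & (1 << nxt))
    return cost(1, 0)                             # start at city 0, with only city 0 visited

print(round(brute_force(), 9), round(held_karp(), 9))   # identical optimal tour lengths
```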

In our present case (N=1084) we have to deal with 1083 factorial possible paths, leading to the astronomical number of 1.19×10^2818 possible solutions. That's roughly 1 followed by 2818 zeroes! – better to check this Wikipedia entry on very large numbers. Our new Swarm-Intelligence based algorithm, running on a normal PC, was however able to formulate a minimal solution (fig. 1) within just several minutes. We will soon post more about our novel self-organized stigmergic-based algorithmic approach, but meanwhile, if you enjoyed these drawings, do not hesitate to ask us for a grilled cherry pepper as well. We will be pleased to deliver you one by email.
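That figure is easy to double-check in a couple of lines (a hedged aside, using Python's log-gamma function rather than forming the factorial itself):

```python
import math

# log10(1083!) without ever forming the gigantic number: lgamma(n + 1) = ln(n!)
digits = math.lgamma(1084) / math.log(10)
print(digits)   # about 2818.08, i.e. roughly 1.2 x 10^2818 possible tours, as quoted above
```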

p.s. – This is a joint twin post with David Rodrigues.

Fig. 4 – (click to enlarge) Zoom at the end sardine tail optimal contour path (in blue: first fig. above) filled in black, from a total set of 1084 initial points.

Video – "Be the one" track on Moby's Destroyed album. Destroyed is the tenth studio album by the American electronic artist Moby. It was released in 2011 by Mute Records. Network video animation by Taras Gesh.

From time to time some colleagues, students and friends ask me what is a good introductory paper on Complex Networks. Indeed there are several, including some general books, but over the years I tend to point to "The Shortest Path to Complex Networks" by J.F.F. Mendes (throughout my life I have met him several times now) and S.N. Dorogovtsev, released in 2004. For those who are beginners in the field and want to play with, or indeed program, several concepts in the area, like clustering, small-worlds, edge condensation, cliques and communities, as well as preferential attachment (point 20), among others, this is a rather simple, quick (besides, it is only 25 pages) and straightforward paper, well organized around 51 different entries, topics and concepts. Not all of them are here, of course, like, among others, the node-age issue, which for several reasons tends to interest me. Point 1 starts with the birth of network science and it's as simple as: […] In 1735, in St Petersburg, Leonhard Euler solved the so-called Konigsberg bridge problem – walks on a simple small graph. This solution (actually, a proof) is usually considered as a starting point of the science of networks […]. The full paper is available through arXiv.org. Here is the link. Do enjoy. Like Moby says… be the one. The full reference list is also a must.
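For instance, preferential attachment (point 20 in their list) fits in a few lines. The sketch below is a hedged, bare-bones Python version of the growth rule (each new node attaches to m existing nodes with probability proportional to their degree), just enough to watch hubs emerge; it is not a faithful implementation of any particular model variant from the paper.

```python
import random
from collections import Counter

random.seed(0)

def preferential_attachment(n_nodes=1000, m=2):
    """Grow a network: every new node links to m old nodes, chosen proportionally to degree."""
    edges = [(0, 1)]
    degree_bag = [0, 1]          # node i appears in this list once per unit of its degree
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(degree_bag))   # degree-proportional choice
        for old in chosen:
            edges.append((new, old))
            degree_bag += [new, old]
    return edges

degrees = Counter()
for a, b in preferential_attachment():
    degrees[a] += 1
    degrees[b] += 1
print(max(degrees.values()), min(degrees.values()))   # a few big hubs vs. many degree-2 nodes
```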

The following letter, entitled "Darwin among the Machines", was sent in June 1863 by Samuel Butler, the novelist (signed here as Cellarius), to the editor of the Press, Christchurch, New Zealand (13 June 1863). The article was very probably the first to raise the possibility that machines were a kind of "mechanical life" undergoing constant evolution, something that is now happening in the realm of Evolutionary Computation and Evolution Strategies along with other types of Genetic Algorithms, and that eventually machines might supplant humans as the dominant species. What follows are some excerpts. The full letter can be found here. It is truly a historic, visionary document:

[…] “Our present business lies with considerations which may somewhat tend to humble our pride and to make us think seriously of the future prospects of the human race. If we revert to the earliest primordial types of mechanical life, to the lever, the wedge, the inclined plane, the screw and the pulley, or (for analogy would lead us one step further) to that one primordial type from which all the mechanical kingdom has been developed. (…) We refer to the question: What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race. Inferior in power, inferior in that moral quality of self-control, we shall look up to them as the acme of all that the best and wisest man can ever dare to aim at. (…) Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question. (…) For the present we shall leave this subject, which we present gratis to the members of the Philosophical Society. Should they consent to avail themselves of the vast field which we have pointed out, we shall endeavour to labour in it ourselves at some future and indefinite period.” […]

[…] Dumb parts, properly connected into a swarm, yield smart results. […] ~ Kevin Kelly. / […] “Now make a four!” the voice booms. Within moments a “4” emerges. “Three.” And in a blink a “3” appears. Then in rapid succession, “Two… One…Zero.” The emergent thing is on a roll. […], Kevin Kelly, Out of Control, 1994.

video -‘Swarm Showreel’ by SwarmWorks Ltd. December 2009 (EVENTS ZUM SCHWÄRMEN – Von Entertainment bis Business).

[…] In a darkened Las Vegas conference room, a cheering audience waves cardboard wands in the air. Each wand is red on one side, green on the other. Far in back of the huge auditorium, a camera scans the frantic attendees. The video camera links the color spots of the wands to a nest of computers set up by graphics wizard Loren Carpenter. Carpenter’s custom software locates each red and each green wand in the auditorium. Tonight there are just shy of 5,000 wandwavers. The computer displays the precise location of each wand (and its color) onto an immense, detailed video map of the auditorium hung on the front stage, which all can see. More importantly, the computer counts the total red or green wands and uses that value to control software. As the audience wave the wands, the display screen shows a sea of lights dancing crazily in the dark, like a candlelight parade gone punk. The viewers see themselves on the map; they are either a red or green pixel. By flipping their own wands, they can change the color of their projected pixels instantly.
Loren Carpenter boots up the ancient video game of Pong onto the immense screen. Pong was the first commercial video game to reach pop consciousness. It’s a minimalist arrangement: a white dot bounces inside a square; two movable rectangles on each side act as virtual paddles. In short, electronic ping-pong. In this version, displaying the red side of your wand moves the paddle up. Green moves it down. More precisely, the Pong paddle moves as the average number of red wands in the auditorium increases or decreases. Your wand is just one vote.
Carpenter doesn’t need to explain very much. Every attendee at this 1991 conference of computer graphic experts was probably once hooked on Pong. His amplified voice booms in the hall, “Okay guys. Folks on the left side of the auditorium control the left paddle. Folks on the right side control the right paddle. If you think you are on the left, then you really are. Okay? Go!”
The audience roars in delight. Without a moment’s hesitation, 5,000 people are playing a reasonably good game of Pong. Each move of the paddle is the average of several thousand players’ intentions. The sensation is unnerving. The paddle usually does what you intend, but not always. When it doesn’t, you find yourself spending as much attention trying to anticipate the paddle as the incoming ball. One is definitely aware of another intelligence online: it’s this hollering mob.
The group mind plays Pong so well that Carpenter decides to up the ante. Without warning the ball bounces faster. The participants squeal in unison. In a second or two, the mob has adjusted to the quicker pace and is playing better than before. Carpenter speeds up the game further; the mob learns instantly.
“Let’s try something else,” Carpenter suggests. A map of seats in the auditorium appears on the screen. He draws a wide circle in white around the center. “Can you make a green ‘5’ in the circle?” he asks the audience. The audience stares at the rows of red pixels. The game is similar to that of holding a placard up in a stadium to make a picture, but now there are no preset orders, just a virtual mirror. Almost immediately wiggles of green pixels appear and grow haphazardly, as those who think their seat is in the path of the “5” flip their wands to green. A vague figure is materializing. The audience collectively begins to discern a “5” in the noise. Once discerned, the “5” quickly precipitates out into stark clarity. The wand-wavers on the fuzzy edge of the figure decide what side they “should” be on, and the emerging “5” sharpens up. The number assembles itself.
“Now make a four!” the voice booms. Within moments a “4” emerges. “Three.” And in a blink a “3” appears. Then in rapid succession, “Two… One…Zero.” The emergent thing is on a roll.
Loren Carpenter launches an airplane flight simulator on the screen. His instructions are terse: “You guys on the left are controlling roll; you on the right, pitch. If you point the plane at anything interesting, I’ll fire a rocket at it.” The plane is airborne. The pilot is…5,000 novices. For once the auditorium is completely silent. Everyone studies the navigation instruments as the scene outside the windshield sinks in. The plane is headed for a landing in a pink valley among pink hills. The runway looks very tiny. There is something both delicious and ludicrous about the notion of having the passengers of a plane collectively fly it. The brute democratic sense of it all is very appealing. As a passenger you get to vote for everything; not only where the group is headed, but when to trim the flaps.
But group mind seems to be a liability in the decisive moments of touchdown, where there is no room for averages. As the 5,000 conference participants begin to take down their plane for landing, the hush in the hall is ended by abrupt shouts and urgent commands. The auditorium becomes a gigantic cockpit in crisis. “Green, green, green!” one faction shouts. “More red!” a moment later from the crowd. “Red, red! REEEEED !” The plane is pitching to the left in a sickening way. It is obvious that it will miss the landing strip and arrive wing first. Unlike Pong, the flight simulator entails long delays in feedback from lever to effect, from the moment you tap the aileron to the moment it banks. The latent signals confuse the group mind. It is caught in oscillations of overcompensation. The plane is lurching wildly. Yet the mob somehow aborts the landing and pulls the plane up sensibly. They turn the plane around to try again.
How did they turn around? Nobody decided whether to turn left or right, or even to turn at all. Nobody was in charge. But as if of one mind, the plane banks and turns wide. It tries landing again. Again it approaches cockeyed. The mob decides in unison, without lateral communication, like a flock of birds taking off, to pull up once more. On the way up the plane rolls a bit. And then rolls a bit more. At some magical moment, the same strong thought simultaneously infects five thousand minds: “I wonder if we can do a 360?”
Without speaking a word, the collective keeps tilting the plane. There’s no undoing it. As the horizon spins dizzily, 5,000 amateur pilots roll a jet on their first solo flight. It was actually quite graceful. They give themselves a standing ovation. The conferees did what birds do: they flocked. But they flocked self-consciously. They responded to an overview of themselves as they co-formed a “5” or steered the jet. A bird on the fly, however, has no overarching concept of the shape of its flock. “Flockness” emerges from creatures completely oblivious of their collective shape, size, or alignment. A flocking bird is blind to the grace and cohesiveness of a flock in flight.
At dawn, on a weedy Michigan lake, ten thousand mallards fidget. In the soft pink glow of morning, the ducks jabber, shake out their wings, and dunk for breakfast. Ducks are spread everywhere. Suddenly, cued by some imperceptible signal, a thousand birds rise as one thing. They lift themselves into the air in a great thunder. As they take off they pull up a thousand more birds from the surface of the lake with them, as if they were all but part of a reclining giant now rising. The monstrous beast hovers in the air, swerves to the east sun, and then, in a blink, reverses direction, turning itself inside out. A second later, the entire swarm veers west and away, as if steered by a single mind. In the 17th century, an anonymous poet wrote: “…and the thousands of fishes moved as a huge beast, piercing the water. They appeared united, inexorably bound to a common fate. How comes this unity?”
A flock is not a big bird. Writes the science reporter James Gleick, “Nothing in the motion of an individual bird or fish, no matter how fluid, can prepare us for the sight of a skyful of starlings pivoting over a cornfield, or a million minnows snapping into a tight, polarized array….High-speed film [of flocks turning to avoid predators] reveals that the turning motion travels through the flock as a wave, passing from bird to bird in the space of about one-seventieth of a second. That is far less than the bird’s reaction time.” The flock is more than the sum of the birds.
In the film Batman Returns a horde of large black bats swarmed through flooded tunnels into downtown Gotham. The bats were computer generated. A single bat was created and given leeway to automatically flap its wings. The one bat was copied by the dozens until the animators had a mob. Then each bat was instructed to move about on its own on the screen following only a few simple rules encoded into an algorithm: don’t bump into another bat, keep up with your neighbors, and don’t stray too far away. When the algorithmic bats were run, they flocked like real bats.
The flocking rules were discovered by Craig Reynolds, a computer scientist working at Symbolics, a graphics hardware manufacturer. By tuning the various forces in his simple equation (a little more cohesion, a little less lag time), Reynolds could shape the flock to behave like living bats, sparrows, or fish. Even the marching mob of penguins in Batman Returns were flocked by Reynolds’s algorithms. Like the bats, the computer-modeled 3-D penguins were cloned en masse and then set loose into the scene aimed in a certain direction. Their crowdlike jostling as they marched down the snowy street simply emerged, out of anyone’s control. So realistic is the flocking of Reynolds’s simple algorithms that biologists have gone back to their hi-speed films and concluded that the flocking behavior of real birds and fish must emerge from a similar set of simple rules. A flock was once thought to be a decisive sign of life, some noble formation only life could achieve. Via Reynolds’s algorithm it is now seen as an adaptive trick suitable for any distributed vivisystem, organic or made. […] in Kevin Kelly, “Out of Control – the New Biology of Machines, Social Systems and the Economic World“, pp. 11-13, 1994 (full pdf book)
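
Stripped of the stagecraft, the mechanism Kelly describes for audience-Pong is a one-line control law: every wand is a binary vote, and the paddle follows the average of those votes. Below is a minimal sketch of that idea; the function name, the gain and the tie-at-0.5 convention are my own assumptions, not details of Carpenter’s actual software.

```python
# Minimal sketch of crowd-as-controller: each wand is a binary vote
# (red = up, green = down) and the paddle follows the average vote.
# All names and constants here are illustrative assumptions.

def paddle_velocity(wand_colors, gain=1.0):
    """Map a list of 'red'/'green' votes to a paddle velocity in [-gain, +gain]."""
    if not wand_colors:
        return 0.0
    red = sum(1 for c in wand_colors if c == "red")
    fraction_red = red / len(wand_colors)
    # 0.5 is a tie: the paddle holds still; all-red moves it up at full speed.
    return gain * (2.0 * fraction_red - 1.0)

# Example: 3,000 red wands out of 5,000 give a modest push upwards (~0.2).
print(paddle_velocity(["red"] * 3000 + ["green"] * 2000))
```

No single wand matters much, which is exactly why the paddle “usually does what you intend, but not always”.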
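
Reynolds’s three rules as quoted above (don’t collide, keep up with your neighbours, don’t stray too far) map almost directly onto code. The sketch below is a bare-bones 2-D boids step written under my own assumptions about radii, weights and a synchronous update; it illustrates the rule set rather than reproducing Reynolds’s original implementation.

```python
import random

# Minimal 2-D boids sketch of the three rules quoted above:
# separation ("don't bump into another bat"), alignment ("keep up with
# your neighbors") and cohesion ("don't stray too far away").
# Radii, weights and the synchronous update scheme are my own assumptions.

NEIGHBOUR_RADIUS = 5.0   # how far a boid can "see"
SEPARATION_RADIUS = 1.5  # personal space
W_SEP, W_ALI, W_COH = 1.5, 1.0, 0.8

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def step(positions, velocities, dt=0.1):
    new_velocities = []
    for i, p in enumerate(positions):
        nbrs = [j for j, q in enumerate(positions)
                if j != i and dist2(p, q) < NEIGHBOUR_RADIUS ** 2]
        vx, vy = velocities[i]
        if nbrs:
            n = len(nbrs)
            # Cohesion: steer towards the neighbours' centre of mass.
            cx = sum(positions[j][0] for j in nbrs) / n - p[0]
            cy = sum(positions[j][1] for j in nbrs) / n - p[1]
            # Alignment: match the neighbours' average heading.
            ax = sum(velocities[j][0] for j in nbrs) / n - vx
            ay = sum(velocities[j][1] for j in nbrs) / n - vy
            # Separation: back away from anyone inside the personal-space radius.
            close = [j for j in nbrs if dist2(p, positions[j]) < SEPARATION_RADIUS ** 2]
            sx = sum(p[0] - positions[j][0] for j in close)
            sy = sum(p[1] - positions[j][1] for j in close)
            vx += (W_SEP * sx + W_ALI * ax + W_COH * cx) * dt
            vy += (W_SEP * sy + W_ALI * ay + W_COH * cy) * dt
        new_velocities.append((vx, vy))
    new_positions = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
                     for p, v in zip(positions, new_velocities)]
    return new_positions, new_velocities

# A small flock: every boid follows only local rules.
pos = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(50)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
for _ in range(200):
    pos, vel = step(pos, vel)
```

With these (arbitrary) weights the initially random headings tend to pull into coherent local groups after a few hundred steps, which is the point of the excerpt: no boid has any notion of “flockness”, yet the flock appears.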

[Vimeo=13119980]

Video – 16×9 Frame blended animation Tagtool drawing session. Drawing by Frances Sander, post production by Dmitri Berzon. Music by Samka.

Figure – A typical Tagtool Mini Setup (Drawing by Fanijo).

…Or should I say, Gestaltic?

The Tagtool is a performative visual instrument used on stage and on the street. It serves as a VJ tool, a creative video game, or an intuitive way of creating animation. The system is operated collaboratively by an artist drawing the pictures and an animator adding movement to the artwork with a gamepad. The design achieves virtually unlimited artistic complexity with a simple set of controls, which can be mastered even by children. The project is coordinated by OMA International. Inspired by the open source movement, and by its relevance both to the group and to the digital arts at large, their aim is that all knowledge acquired within the Tagtool project should be shared (for more, check their project website, http://www.tagtool.org). All in all, a short documentary made by four Graz students. Everything that ends up adding non-linearly tends to be… well, you know…

[Vimeo=10649579]

Video – Dance performance by Elisabeth, Tagtool drawing and animation by Die.Puntigam, music by Jan, Seppy and Dima.

[…] Legend has it that, seconds before reaching a curve, Juan Manuel Fangio would cast a fleeting glance at the leaves of the trees. If they were moving, he lifted his foot off the accelerator; if, on the contrary, there was no wind, he floored it. […], in Ángel Luis Menéndez, “Los abuelos de Alonso”, Público.es (link)

[…] Unlike many of today’s technological systems, such as those produced by coding algorithms at so-called state-of-the-art software companies, ‘recipe algorithms’ organised through hierarchical commands that are external and largely foreign to their own character (and unsuited to the computational simulation and modelling of highly complex, non-linear phenomena, from a flock of birds in flight to the propagation of El Niño across the planet, among so many other examples needed for life in society), we are now truly moving towards building new artificial systems which, like natural ones, self-organise through their own internal processes, and these developments are simultaneously allowing us to learn more about the very nature of Nature […],

in V. Ramos, “Dois Caminhos divergiam na Floresta, e eu – eu tomei o menos viajado, e essa fez toda a diferença (*)”, a talk presented at “Horizontes da Física“, Univ. de Aveiro, Centro Cultural e de Congressos de Aveiro, March 2007. (*) A free translation of “Two roads diverged in a wood, and I – I took the one less travelled by, And that has made all the difference“, Robert Frost (1874-1963), Mountain Interval, 1920.

___________ § ___________

The other day, I decided to pick 24 books from my own library. These are books which anyone could read. Even those who are not working in Science could understand them, and that’s probably the second best feature they have in common. So, in order of appearance, here they are: Ernst Haeckel “Art Forms in Nature”, Dana Ballard “An Intro to Natural Computation”, Brian Goodwin “How the Leopard Changed its Spots”, Camazine et al. “Self-Organization in Biological Systems”, David Gale “Tracking the Automatic Ant”, Douglas Hofstadter “Gödel, Escher, Bach”, Fortner and Meyer “Number by Colors”, George Dyson “Darwin among the Machines”, Herbert Simon “The Sciences of the Artificial”, Ian Stewart “Nature’s Numbers”, John Barrow “The Constants of Nature”, John Holland “Emergence”, John Holland “Hidden Order”, Kevin Kelly “Out of Control”, Marvin Minsky “The Society of Mind”, Maturana and Varela “El Árbol del Conocimiento”, Peter Bentley “Digital Biology”, Peter Coveney and Roger Highfield “Frontiers of Complexity”, Richard Dawkins “Climbing Mount Improbable”, Steven Johnson “Emergence”, Steven Levy “Artificial Life”, Steven Strogatz “Sync”, Stuart Kauffman “At Home in the Universe”, and William Bartram “The Search for Nature’s Design”.

I also leave you with a recent short film piece (above) inspired by numbers, geometry and nature, by Cristóbal Vila (Eterea Studios, Zaragoza, Spain). The movie depicts, among other concepts, the Fibonacci series, the Golden Ratio, and Delaunay and Voronoi tessellations… (music by Wim Mertens, … of course). If you are really interested in Nature’s Nature and its ‘mysteries‘, forget Dan Brown’s horrible “Da Vinci Code”. This is it. These are some of the books that really matter:

[…] People should learn how to play Lego with their minds. Concepts are building bricks […], V. Ramos, 2002.
