
Figure – Vitorino Ramos, citations overview, January 2016.

2016 – As of now, a total of 1567 citations over 74 works (including 3 books) on GOOGLE SCHOLAR (https://scholar.google.com/citations?user=gSyQ-g8AAAAJ&hl=en) [with a Hirsch h-index of 19, and an average of 160.2 citations for each of my top five works], plus 900 citations over 57 works on the new RESEARCHGATE site (https://www.researchgate.net/profile/Vitorino_Ramos).

Refs.: Science, Artificial Intelligence, Swarm Intelligence, Data-Mining, Big-Data, Evolutionary Computation, Complex Systems, Image Analysis, Pattern Recognition, Data Analysis.

Figure – Complete neural circuit controller of the virtual ant, with the pheromone mechanism (page 3, fig. 2), from Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos [URL: http://arxiv.org/abs/1507.08467 ]

Intelligence and decision-making in foraging ants: individual or collective? Internal or external? What is the right balance between the two? Can one have internal intelligence without external intelligence? Can one take examples from nature to build in silico artificial lives that present us with interesting patterns? In this paper, to be presented in early September at UKCI 2015 in Exeter, UK, we explore a model of foraging ants (available on arXiv [PDF] and ResearchGate).

Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos; “A Model for Foraging Ants, Controlled by Spiking Neural Networks and Double Pheromones“, UKCI 2015 Computational Intelligence – University of Exeter, UK, September 2015.

Abstract: A model of an Ant System where ants are controlled by a spiking neural circuit and a second order pheromone mechanism in a foraging task is presented. A neural circuit is trained for individual ants and subsequently the ants are exposed to a virtual environment where a swarm of ants performs a resource foraging task. The model comprises an associative and unsupervised learning strategy for the neural circuit of the ant. The neural circuit adapts to the environment by means of classical conditioning. The initially unknown environment includes different types of stimuli representing food (rewarding) and obstacles (harmful) which, when they come in direct contact with the ant, elicit a reflex response in the motor neural system of the ant: moving towards or away from the source of the stimulus. The spiking neural circuit of the ant is trained to identify food and obstacles and move towards the former and avoid the latter. The ants are released on a landscape with multiple food sources where one ant alone would have difficulty harvesting the landscape to maximum efficiency. In this case the introduction of a double pheromone mechanism (positive and negative reinforcement feedback) yields better results than traditional ant colony optimization strategies. Traditional ant systems include mainly a positive reinforcement pheromone. This approach uses a second pheromone that acts as a marker for forbidden paths (negative feedback). This blockade is not permanent and is controlled by the evaporation rate of the pheromones. The combined action of both pheromones acts as a collective stigmergic memory of the swarm, which reduces the search space of the problem. This paper explores how the adaptation and learning abilities observed in biologically inspired cognitive architectures are synergistically enhanced by swarm optimization strategies. The model portrays two forms of artificial intelligent behaviour: at the individual level the spiking neural network is the main controller, and at the collective level the pheromone distribution is a map towards the solution that emerges from the colony. The presented model is an important pedagogical tool, as it is also an easy-to-use library that gives access to the spiking neural network paradigm from inside NetLogo, a language used mostly in agent-based modelling and experimentation with complex systems.
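
As a reading aid only (and not the authors' NetLogo library), the following toy Python sketch illustrates the classical-conditioning idea in the abstract: a leaky integrate-and-fire neuron whose weak "conditioned" synapse (say, a food odour) is strengthened whenever it fires together with the innate reward input, until the conditioned stimulus alone can drive the response. All names and constants here are illustrative assumptions.

```python
def simulate(steps=200, dt=1.0, tau=10.0, v_thresh=1.0, lr=0.2):
    """Toy leaky integrate-and-fire neuron with a Hebbian-style conditioning rule."""
    w_cs, w_us = 0.2, 1.2      # weak conditioned synapse, strong innate (reward) synapse
    v = 0.0                    # membrane potential
    conditioned_spikes = 0     # spikes driven by the conditioned stimulus alone
    for t in range(steps):
        cs = 1.0 if t % 20 < 3 else 0.0               # conditioned stimulus pulses
        us = 1.0 if (cs and t < 100) else 0.0         # reward paired with the CS only in the first half
        v += dt * (-v / tau + w_cs * cs + w_us * us)  # leaky integration of both inputs
        if v >= v_thresh:                             # spike and reset
            v = 0.0
            if cs:                                    # strengthen the synapse active at spike time
                w_cs += lr * (1.0 - w_cs)
            if t >= 100:
                conditioned_spikes += 1
    return w_cs, conditioned_spikes

print(simulate())  # the CS weight grows, and the CS alone elicits spikes in the second half
```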


Figure – Two simplices a and b connected by the 2-dimensional face, the triangle {1;2;3} (from David M.S. Rodrigues, "Reading the News Through its Structure: New Hybrid Connectivity Based Approaches"). In the analysis of the time-line of The Guardian newspaper (link), the system used feature vectors based on the frequency of words and then computed similarity between documents based on those feature vectors. This is a purely statistical approach that requires great computational power and becomes difficult for problems with large feature vectors and many documents. Feature vectors with 100,000 or more items are common, and computing similarities between such documents becomes cumbersome. Instead of computing distance (or similarity) matrices between documents from feature vectors, the present approach explores the possibility of inferring the distance between documents from the Q-analysis description. Q-analysis provides a very natural notion of connectivity between the simplices of the structure and, in the relation studied, documents are connected to each other through shared sets of tags entered by the journalists. Also in this framework, eccentricity is defined as a measure of the relatedness of one simplex in relation to another [7].

David M.S. Rodrigues and Vitorino Ramos, “Traversing News with Ant Colony Optimisation and Negative Pheromones” [PDF], accepted as preprint for oral presentation at the European Conference on Complex Systems, ECCS’14, in Lucca, Sept. 22-26, 2014, Italy.

Abstract: The past decade has seen the rapid development of the online newsroom. News published online has become the main outlet of news, surpassing traditional printed newspapers. This poses challenges to the production and to the consumption of that news. With so many sources of information available, it is important to find ways to cluster and organise the documents if one wants to understand this new system. Traditional approaches to the problem of clustering documents usually embed the documents in a suitable similarity space. Previous studies have reported on the impact of the similarity measures used for clustering of textual corpora [1]. These similarity measures are usually calculated for bag-of-words representations of the documents. This makes the final document-word matrix high dimensional. Feature vectors with more than 10,000 dimensions are common and algorithms have severe problems with the high dimensionality of the data. A novel bio-inspired approach to the problem of traversing the news is presented. It finds Hamiltonian cycles over documents published by the newspaper The Guardian. A Second Order Swarm Intelligence algorithm based on Ant Colony Optimisation was developed [2, 3] that uses a negative pheromone to mark unrewarding paths with a “no-entry” signal. This approach follows recent findings of negative pheromone usage in real ants [4].

In this case study the corpus of data is represented as a bipartite relation between documents and keywords entered by the journalists to characterise the news. A new similarity measure between documents is presented, based on the Q-analysis description [5, 6, 7] of the simplicial complex formed between documents and keywords. The eccentricity between documents (two simplices) is then used as a novel measure of similarity between documents. The results show that the Second Order Swarm Intelligence algorithm performs better in benchmark problems of the travelling salesman problem, with faster convergence and optimal results. The addition of the negative pheromone as a no-entry signal improves the quality of the results. The application of the algorithm to the corpus of news of The Guardian creates a coherent navigation system among the news. This allows users to navigate the news published during a certain period of time in a semantic sequence instead of a time sequence. This work has broader application, as it can be applied to many cases where the data is mapped to bipartite relations (e.g. protein expressions in cells, sentiment analysis, brand awareness in social media, routing problems), as it highlights the connectivity of the underlying complex system.
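
To make the Q-analysis vocabulary above concrete, here is a small hedged sketch of shared-keyword connectivity and an Atkin-style eccentricity between two document simplices. The formula ecc = (q_top - q_bottom) / (q_bottom + 1) follows Atkin [5]; treat the code as an illustration of the notion, not the paper's implementation.

```python
def q_nearness(doc_a: set, doc_b: set) -> int:
    """Dimension of the shared face: (number of shared keywords) - 1; -1 means no connection."""
    return len(doc_a & doc_b) - 1

def eccentricity(doc: set, others: list) -> float:
    """Atkin-style eccentricity of one document simplex relative to the rest of the complex."""
    q_top = len(doc) - 1                                              # dimension of the simplex itself
    q_bottom = max((q_nearness(doc, o) for o in others), default=-1)  # best connection to any other simplex
    if q_bottom < 0:
        return float("inf")                                           # totally disconnected document
    return (q_top - q_bottom) / (q_bottom + 1)

a = {"politics", "economy", "europe", "election"}
b = {"economy", "europe", "banking"}
print(q_nearness(a, b))        # 1: the two documents share a 1-dimensional face (an edge)
print(eccentricity(a, [b]))    # 1.0: how far 'a' extends beyond its best-connected neighbour
```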

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Hamiltonian cycles, Text Mining, Textual Corpora, Information Retrieval, Knowledge Discovery, Sentiment Analysis, Q-Analysis, Data Mining, Journalism, The Guardian.

References:

[1] Alexander Strehl, Joydeep Ghosh, and Raymond Mooney. Impact of similarity measures on web-page clustering. In Workshop on Artificial Intelligence for Web Search (AAAI 2000), pages 58-64, 2000.
[2] David M. S. Rodrigues, Jorge Louçã, and Vitorino Ramos. From standard to second-order Swarm Intelligence phase-space maps. In Stefan Thurner, editor, 8th European Conference on Complex Systems, Vienna, Austria, 9 2011.
[3] Vitorino Ramos, David M. S. Rodrigues, and Jorge Louçã. Second order Swarm Intelligence. In Jeng-Shyang Pan, Marios M. Polycarpou, Michał Woźniak, André C.P.L.F. Carvalho, Hector Quintian, and Emilio Corchado, editors, HAIS’13. 8th International Conference on Hybrid Artificial Intelligence Systems, volume 8073 of Lecture Notes in Computer Science, pages 411-420. Springer Berlin Heidelberg, Salamanca, Spain, 9 2013.
[4] Elva J.H. Robinson, Duncan Jackson, Mike Holcombe, and Francis L.W. Ratnieks. No entry signal in ant foraging (Hymenoptera: Formicidae): new insights from an agent-based model. Myrmecological News, 10(120), 2007.
[5] Ronald Harry Atkin. Mathematical Structure in Human Affairs. Heinemann Educational Publishers, 48 Charles Street, London, 1 edition, 1974.
[6] J. H. Johnson. A survey of Q-analysis, part 1: The past and present. In Proceedings of the Seminar on Q-analysis and the Social Sciences, University of Leeds, 9 1983.
[7] David M. S. Rodrigues. Identifying news clusters using Q-analysis and modularity. In Albert Diaz-Guilera, Alex Arenas, and Alvaro Corral, editors, Proceedings of the European Conference on Complex Systems 2013, Barcelona, 9 2013.

In order to solve hard combinatorial optimization problems (e.g. optimally scheduling students and teachers over a weekly plan across several different classes and classrooms), one approach is to computationally mimic how ants forage the vicinity of their habitats searching for food. Among a myriad of possibilities for finding the optimal route (minimizing the travel distance), ants collectively make the solution emerge by using stigmergic signal traces, or pheromones, which also change dynamically under evaporation.

Current algorithms, however, make use only of a positive-feedback type of pheromone along their search: if the ants collectively visit a good low-distance route (a minimal pseudo-solution to the problem), they tend to reinforce that signal for their colleagues. Nothing wrong with that, on the contrary, but no one knows whether a lower-distance alternative route lies just around the corner. In this global search endeavour, like a snowballing effect, positive feedback tends to give credit to the exploitation of solutions but not to the – also useful – exploration side. The emerging potential solutions can thus crystallize and freeze, while a small change to some parts of the whole route could, on the other hand, successfully improve the global result.
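
As a minimal illustration of the positive-feedback-only update described above (a generic textbook form with invented constants, not any specific published code): every trail evaporates a little, then each edge used by a tour is reinforced in proportion to that tour's quality.

```python
def update_positive(tau, tours, lengths, rho=0.1, q=1.0):
    """Evaporate all trails, then reinforce the edges of each tour by q / tour_length."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)                   # evaporation: the colony slowly forgets
    for tour, length in zip(tours, lengths):
        for a, b in zip(tour, tour[1:] + tour[:1]):    # consecutive cities, closing the cycle
            tau[a][b] += q / length                    # shorter tours deposit more pheromone
            tau[b][a] += q / length
    return tau
```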

Figure – Influence of negative pheromone on the kroA100.tsp problem (fig. 1, page 6); values on the lines represent 1-ALPHA. A typical standard ACS (Ant Colony System) is represented here by the line with value 0.0, while better results could be found by our approach when using positive feedback (0.95) along with negative feedback (0.05). Not only do we obtain better results, we also find them earlier.

There is, however, an advantage when a second type of pheromone (a negative-feedback one) co-evolves with the first type, and we decided to research its impact. What we found is that by using a second type of global feedback, we can indeed obtain a faster search while achieving better results. In a way, it is like using two different types of evaporative traffic lights, green and red, co-evolving together. The conclusion: we should indeed use a negative no-entry signal pheromone. In small amounts (0.05), but use it. Not only does this prevent the whole system from freezing on some solutions too soon, it also promotes a better compromise over the search space of potential routes. The pre-print article is available here at arXiv; the abstract and keywords follow, together with a toy sketch of how such a two-pheromone choice can be written down.

Vitorino Ramos, David M. S. Rodrigues, Jorge Louçã, “Second Order Swarm Intelligence” [PDF], in Hybrid Artificial Intelligent Systems, Lecture Notes in Computer Science, Springer-Verlag, Volume 8073, pp. 411-420, 2013.

Abstract: An artificial Ant Colony System (ACS) algorithm to solve general-purpose combinatorial Optimization Problems (COP) that extends previous AC models [21] by the inclusion of a negative pheromone is described here. Several Travelling Salesman Problems (TSP) were used as benchmarks. We show that by using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedbacks achieves better results than single positive feedback systems. The algorithm was tested against known NP-complete combinatorial Optimization Problems, running on symmetrical TSPs. We show that the new algorithm compares favourably against these benchmarks, in accordance with recent biological findings by Robinson [26,27] and Grüter [28], where “no entry” signals and negative feedback allow a colony to quickly reallocate the majority of its foragers to superior food patches. This is the first time an extended ACS algorithm has been implemented with these successful characteristics.

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Travelling Salesman Problems (TSP).
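
The following hedged sketch (my own illustration, not the paper's code) shows one way a second, repulsive pheromone can enter an ACS-style edge choice: ALPHA weighs the attractive trail and (1 - ALPHA) weighs the "no entry" trail, so setting ALPHA = 0.95 here mirrors the 0.05 negative-feedback balance discussed above. All parameters are assumptions for the example.

```python
import random

ALPHA, BETA, RHO = 0.95, 2.0, 0.1      # trail balance, heuristic weight, evaporation rate

def edge_score(tau_pos, tau_neg, dist):
    """Net stigmergic attractiveness of an edge: positive trail minus weighted no-entry trail."""
    trail = ALPHA * tau_pos - (1.0 - ALPHA) * tau_neg
    return max(trail, 1e-9) * (1.0 / dist) ** BETA

def choose_next(city, unvisited, tau_pos, tau_neg, dist):
    """Roulette-wheel selection of the next city, repelled by edges marked 'no entry'."""
    scores = [edge_score(tau_pos[city][j], tau_neg[city][j], dist[city][j]) for j in unvisited]
    r, acc = random.uniform(0, sum(scores)), 0.0
    for j, s in zip(unvisited, scores):
        acc += s
        if acc >= r:
            return j
    return unvisited[-1]

def evaporate(tau, rho=RHO):
    """Both pheromone fields evaporate; the negative blockade is therefore not permanent."""
    for row in tau:
        for j in range(len(row)):
            row[j] *= (1.0 - rho)
```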

Photo – John von Neumann.

“There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.” ~ John von Neumann, in his 1949 University of Illinois lectures on the Theory and Organization of Complicated Automata [J. von Neumann, Theory of self-reproducing automata, 1949 Univ. of Illinois Lectures on the Theory and Organization of Complicated Automata, ed. A.W. Burks (University of Illinois Press, Urbana, IL, 1966).].

Photo – Signal traces, September 2013, Vitorino Ramos.

[…] While pheromone reinforcement plays a role as the system’s memory, evaporation allows the system to adapt and dynamically decide, without any type of centralized or hierarchical control […] (from the passage quoted below).

“[…] whereas signals tends to be conspicuous, since natural selection has shaped signals to be strong and effective displays, information transfer via cues is often more subtle and based on incidental stimuli in an organism’s social environment […]”, Seeley, T.D., “The Honey Bee Colony as a Super-Organism”, American Scientist, 77, pp.546-553, 1989.

[…] If an ant colony, on its cyclic way from the nest to a food source (and back again), has only two possible branches around an obstacle, one bigger and the other smaller (the bridge experiment [7,52]), pheromone will accumulate, as time passes, on the shorter path, simply because any ant that sets out on that path will return sooner, passing the same points more frequently and, via that way, reinforcing the signal of that precise branch. Even if, as we know, the pheromone evaporation rate is the same on both branches, the longer branch will see its pheromone vanish faster, since there is not enough critical mass of individuals to keep it. On the other hand – in what appears to be a vastly pedagogic trick of Mother Nature – evaporation plays a critical role in the society. Without it, the final global decision, or the phase transition, will never happen. Moreover, without it, the whole colony can never adapt if the environment suddenly changes (e.g., the appearance of a third, even shorter branch). While pheromone reinforcement plays a role as the system’s memory, evaporation allows the system to adapt and dynamically decide, without any type of centralized or hierarchical control. […], in “Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes“, V. Ramos et al., available as pre-print on arXiv, 2005.
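
A toy re-creation of the binary-bridge argument in the passage above (my own sketch, with illustrative constants): ants choose a branch in proportion to its pheromone, the short branch is refreshed more often per unit time, and a common evaporation rate slowly erodes the rarely refreshed long branch.

```python
import random

def bridge(steps=2000, n_ants=50, rho=0.01, deposit=1.0, short_len=1.0, long_len=2.0):
    tau = {"short": 1.0, "long": 1.0}                      # equal pheromone at the start
    for _ in range(steps):
        for _ in range(n_ants):
            p_short = tau["short"] / (tau["short"] + tau["long"])
            branch = "short" if random.random() < p_short else "long"
            length = short_len if branch == "short" else long_len
            tau[branch] += deposit / length                # shorter branch: more round trips, more deposits
        for k in tau:
            tau[k] *= (1.0 - rho)                          # the same evaporation rate on both branches
    return tau

print(bridge())   # pheromone ends up overwhelmingly concentrated on the short branch
```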

[…] There is some degree of communication among the ants, just enough to keep them from wandering off completely at random. By this minimal communication they can remind each other that they are not alone but are cooperating with team-mates. It takes a large number of ants, all reinforcing each other this way, to sustain any activity – such as trail building – for any length of time. Now my very hazy understanding of the operation of brain leads me to believe that something similar pertains to the firing of neurons… […] in, p. 316, Hofstadter, D.R., “Gödel, Escher, Bach: An Eternal Golden Braid“, New York: Basic Books, 1979.

[…] Since in Self-Organized (SO) systems their organization arises entirely from multiple interactions, it is of critical importance to question how organisms acquire and act upon information [9]. Basically through two forms: a) information gathered from one’s neighbours, and b) information gathered from work in progress, that is, stigmergy. In the case of animal groups, these internal interactions typically involve information transfers between individuals. Biologists have recently recognized that information can flow within groups via two distinct pathways – signals and cues. Signals are stimuli shaped by natural selection specifically to convey information, whereas cues are stimuli that convey information only incidentally [9]. The distinction between signals and cues is illustrated by the difference between ant and deer trails. The chemical trail deposited by ants as they return from a desirable food source is a signal. Over evolutionary time such trails have been moulded by natural selection for the purpose of sharing with nest mates information about the location of rich food sources. In contrast, the rutted trails made by deer walking through the woods are a cue, not shaped by natural selection for communication among deer but a simple by-product of animals walking along the same path. SO systems are based on both but, whereas signals tend to be conspicuous, since natural selection has shaped signals to be strong and effective displays, information transfer via cues is often more subtle and based on incidental stimuli in an organism’s social environment [45] […], in “Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes“, V. Ramos et al., available as pre-print on arXiv, 2005.

Figure – “Hybrid Artificial Intelligent Systems” (HAIS 2013), the new LNAI (Lecture Notes in Artificial Intelligence) series volume 8073, Springer-Verlag book containing our “Second Order Swarm Intelligence” paper (pp. 411-420) [original photo by my colleague David M.S. Rodrigues].

New work, new book. Last week one of our latest works came out, published by Springer. Edited by Jeng-Shyang Pan, Marios M. Polycarpou, Emilio Corchado et al., “Hybrid Artificial Intelligent Systems” comprises a full set of new papers in this hybrid area of Intelligent Computing (check the full article list at Springer). Our new paper “Second Order Swarm Intelligence” (pp. 411-420, Springer books link) was published in the Bio-inspired Models and Evolutionary Computation section.

Photo – “David and Goliath”, Octavio Aburto, Cabo Pulmo National Park, National Geographic photo contest 2012.

For several years, Octavio Aburto thought of one photo. Now, he has finally got it. The recently published photograph by Aburto, titled “David and Goliath” (it is in fact David Castro, one of his research science colleagues, at the center of this stunning image), has been widely shared over the last few weeks. It was taken at Cabo Pulmo National Park (Mexico) and submitted to the National Geographic photo contest 2012. Here, he captures the sheer size of fish aggregations in perspective with a single human surrounded by abundant marine life. In a recent interview, he explains:

[…] … this “David and Goliath” image is speaking to the courtship behavior of one particular species of Jack fish. […] Many people say that a single image is worth a thousand words, but a single image can also represent thousands of data points and countless statistical analyses. One image, or a small series of images can tell a complicated story in a very simple way. […] The picture you see was taken November 1st, 2012. But this picture has been in my mind for three years — I have been trying to capture this image ever since I saw the behavior of these fish and witnessed the incredible tornado that they form during courtship. So, I guess you could say this image took almost three years. […], in mission-blue.org , Dec. 2012.

Video – Behind the scenes of David and Goliath image. This photo was taken at Cabo Pulmo National Park and submitted to the National Geographic photo contest 2012. You can see more of his images from this place and about Mexican seas on Octavio‘s web link.

Four different snapshots (click to enlarge) from one of my latest books, recently published in Japan: Ajith Abraham, Crina Grosan, Vitorino Ramos (Eds.), “Swarm Intelligence in Data Mining” (群知能と  データマイニング), Tokyo Denki University press [TDU], Tokyo, Japan, July 2012.

Fig.1 – (click to enlarge) The optimal shortest path among N=1265 points depicting a Portuguese Navalheira crab as a result of one of our latest Swarm-Intelligence based algorithms. The problem of finding the shortest path among N different points in space is NP-hard, known as the Travelling Salesman Problem (TSP), being one of the major and hardest benchmarks in Combinatorial Optimization (link) and Artificial Intelligence. (V. Ramos, D. Rodrigues, 2012)

This summer my kids grabbed a tiny Portuguese Navalheira crab on the shore. After a small photo session and some baby-sitting with a lettuce leaf, it was time to release it again into the ocean. Not only did he survive my kids, he is now entitled to a new World Wide Web on-line life. After the shortest-path Sardine (link) with 1084 points, here is the Crab with 1265 points. The algorithm ran for as few as 110 iterations.

Fig. 2 – (click to enlarge) Our 1265 initial points depicting a TSP Portuguese Navalheira crab. Could you already envision a minimal tour between all these points?

As usual in Travelling Salesman Problems (TSP), we start with a set of points, in our case 1265 points or cities (fig. 2). Given a list of cities and their pairwise distances, the task is now to find the shortest possible tour that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. A small sketch of the tour-length computation, together with a simple greedy baseline, is given below.
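
For readers who want to reproduce the setting, here is a minimal sketch of the TSP statement above: the tour length to be minimized and a greedy nearest-neighbour tour as a simple baseline. This is not the swarm algorithm used for the crab figures, only an illustration of the problem itself.

```python
import math
import random

def tour_length(points, tour):
    """Total length of a closed tour visiting every point exactly once."""
    return sum(math.dist(points[a], points[b]) for a, b in zip(tour, tour[1:] + tour[:1]))

def nearest_neighbour(points, start=0):
    """Greedy baseline: always move to the closest unvisited city."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

pts = [(random.random(), random.random()) for _ in range(100)]
print(f"greedy tour length over {len(pts)} points: {tour_length(pts, nearest_neighbour(pts)):.3f}")
```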

Fig. 3 – (click to enlarge) Again the shortest path Navalheira crab, where the optimal contour path (in black: first fig. above) with 1265 points (or cities) was filled in dark orange.

TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder.

What follows (fig. 4) is the original crab photo after image segmentation and just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem. The algorithm was then fed the extracted x,y coordinates of these data points (fig. 2) in order for it to discover the minimal path, in just 110 iterations. For extra details, pay a visit to the shortest-path Sardine (link) done earlier. A rough sketch of this preprocessing step is given after fig. 4 below.

Fig. 4 – (click to enlarge) The original crab photo after some image processing as well as segmentation and just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem.
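
A rough, assumption-laden sketch of the preprocessing just described: take a segmented binary mask of the crab photo, sample a subset of foreground pixels as TSP "cities", and jitter them with Gaussian noise. The toy disc mask below merely stands in for the real segmented image.

```python
import numpy as np

def mask_to_points(mask, n_points=1265, sigma=1.0, seed=0):
    """Sample n_points foreground pixels from a binary mask and add Gaussian jitter."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(mask)                                     # coordinates of segmented pixels
    idx = rng.choice(len(xs), size=min(n_points, len(xs)), replace=False)
    pts = np.stack([xs[idx], ys[idx]], axis=1).astype(float)
    return pts + rng.normal(0.0, sigma, pts.shape)                # Gaussian noise, as in the post

yy, xx = np.mgrid[0:200, 0:200]
mask = (xx - 100) ** 2 + (yy - 100) ** 2 < 80 ** 2                # toy silhouette
print(mask_to_points(mask).shape)                                 # (1265, 2): x,y cities for the TSP
```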

Figure – A classic example of emergence: the exact shape of a termite mound is not reducible to the actions of individual termites, even if there are already computer models that can achieve it (for more, check “Stigmergic construction” or the blog’s full Stigmergy tag).

“The world can no longer be understood like a chessboard… It’s a Jackson Pollock painting” ~ Carne Ross, 2012.

[…] As pointed by Langton, there is more to life than mechanics – there is also dynamics. Life depends critically on principles of dynamical self-organization that have remained largely untouched by traditional analytic methods. There is a simple explanation for this – these self-organized dynamics are fundamentally non-linear phenomena, and non-linear phenomena in general depend critically on the interactions between parts: they necessarily disappear when parts are treated in isolation from one another, which is the basis for any analytic method. Rather, non-linear phenomena are most appropriately treated by a synthetic approach, where synthesis means “the combining of separate elements or substances to form a coherent whole”. In non-linear systems, the parts must be treated in each other’s presence, rather than independently from one another, because they behave very differently in each other’s presence than we would expect from a study of the parts in isolation. […] in Vitorino Ramos, 2002, http://arxiv.org/abs/cs/0412077.

What follows are passages from an important article on the consequences for science of the recent discovery of the Higgs boson. Written by Ashutosh Jogalekar, “The Higgs boson and the future of science” (link), the article appeared in the Scientific American blog section (July 2012). It starts by discussing reductionism, or how the Higgs boson points us to the culmination of reductionist thinking:

[…] And I say this with a suspicion that the Higgs boson may be the most fitting tribute to the limitations of what has been the most potent philosophical instrument of scientific discovery – reductionism. […]

[…] Yet as we enter the second decade of the twenty-first century, it is clear that reductionism as a principal weapon in our arsenal of discovery tools is no longer sufficient. Consider some of the most important questions facing modern science, almost all of which deal with complex, multi factorial systems. How did life on earth begin? How does biological matter evolve consciousness? What are dark matter and dark energy? How do societies cooperate to solve their most pressing problems? What are the properties of the global climate system? It is interesting to note at least one common feature among many of these problems; they result from the build-up rather than the breakdown of their operational entities. Their signature is collective emergence, the creation of attributes which are greater than the sum of their constituent parts. Whatever consciousness is for instance, it is definitely a result of neurons acting together in ways that are not obvious from their individual structures. Similarly, the origin of life can be traced back to molecular entities undergoing self-assembly and then replication and metabolism, a process that supersedes the chemical behaviour of the isolated components. The puzzle of dark matter and dark energy also have as their salient feature the behaviour of matter at large length and time scales. Studying cooperation in societies essentially involves studying group dynamics and evolutionary conflict. The key processes that operate in the existence of all these problems seem to almost intuitively involve the opposite of reduction; they all result from the agglomeration of molecules, matter, cells, bodies and human beings across a hierarchy of unique levels. In addition, and this is key, they involve the manifestation of unique principles emerging at every level that cannot be merely reduced to those at the underlying level. […]

[…] While emergence had been implicitly appreciated by scientists for a long time, its modern salvo was undoubtedly a 1972 paper in Science by the Nobel Prize winning physicist Philip Anderson (link) titled “More is Different” (PDF), a title that has turned into a kind of clarion call for emergence enthusiasts. In his paper Anderson (who incidentally first came up with the so-called Higgs mechanism) argued that emergence was nothing exotic; for instance, a lump of salt has properties very different from those of its highly reactive components sodium and chlorine. A lump of gold evidences properties like color that don’t exist at the level of individual atoms. Anderson also appealed to the process of broken symmetry, invoked in all kinds of fundamental events – including the existence of the Higgs boson – as being instrumental for emergence. Since then, emergent phenomena have been invoked in hundreds of diverse cases, ranging from the construction of termite hills to the flight of birds. The development of chaos theory beginning in the 60s further illustrated how very simple systems could give rise to very complicated and counter-intuitive patterns and behaviour that are not obvious from the identities of the individual components. […]

[…] Many scientists and philosophers have contributed to considered critiques of reductionism and an appreciation of emergence since Anderson wrote his paper. (…) These thinkers make the point that not only does reductionism fail in practice (because of the sheer complexity of the systems it purports to explain), but it also fails in principle on a deeper level. […]

[…] An even more forceful proponent of this contingency-based critique of reductionism is the complexity theorist Stuart Kauffman who has laid out his thoughts in two books. Just like Anderson, Kauffman does not deny the great value of reductionism in illuminating our world, but he also points out the factors that greatly limit its application. One of his favourite examples is the role of contingency in evolution and the object of his attention is the mammalian heart. Kauffman makes the case that no amount of reductionist analysis could tell you that the main function of the heart is to pump blood. Even in the unlikely case that you could predict the structure of hearts and the bodies that house them starting from the Higgs boson, such a deductive process could never tell you that of all the possible functions of the heart, the most important one is to pump blood. This is because the blood-pumping action of the heart is as much a result of historical contingency and the countless chance events that led to the evolution of the biosphere as it is of its bottom-up construction from atoms, molecules, cells and tissues. […]

[…] Reductionism then falls woefully short when trying to explain two things; origins and purpose. And one can see that if it has problems even when dealing with left-handed amino acids and human hearts, it would be in much more dire straits when attempting to account for say kin selection or geopolitical conflict. The fact is that each of these phenomena are better explained by fundamental principles operating at their own levels. […]

[…] Every time the end of science has been announced, science itself proved that claims of its demise were vastly exaggerated. Firstly, reductionism will always be alive and kicking since the general approach of studying anything by breaking it down into its constituents will continue to be enormously fruitful. But more importantly, it’s not so much the end of reductionism as the beginning of a more general paradigm that combines reductionism with new ways of thinking. The limitations of reductionism should be seen as a cause not for despair but for celebration since it means that we are now entering new, uncharted territory. […]

Figure (click to enlarge) – Cover of one of my books published last month (10 July 2012), “Swarm Intelligence in Data Mining”, recently translated and edited in Japan (by Tokyo Denki University press [TDU]). Cover image from Amazon.co.jp (url). The title was translated into 群知能とデータマイニング. Funny, also, to see my own name for the first time translated into Japanese – I wonder if it’s Kanji. A brief synopsis follows:

(…) Swarm Intelligence (SI) is an innovative distributed intelligent paradigm for solving optimization problems that originally took its inspiration from biological examples of swarming, flocking and herding phenomena in vertebrates. Particle Swarm Optimization (PSO) incorporates swarming behaviours observed in flocks of birds, schools of fish, or swarms of bees, and even human social behaviour, from which the idea emerged. Ant Colony Optimization (ACO) deals with artificial systems that are inspired by the foraging behaviour of real ants, and is used to solve discrete optimization problems. Historically the notion of finding useful patterns in data has been given a variety of names including data mining, knowledge discovery, information extraction, etc. Data Mining is an analytic process designed to explore large amounts of data in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. In order to achieve this, data mining uses computational techniques from statistics, machine learning and pattern recognition. Data mining and Swarm intelligence may seem not to have many properties in common. However, recent studies suggest that they can be used together for several real-world data mining problems, especially when other methods would be too expensive or difficult to implement. This book deals with the application of swarm intelligence methodologies in data mining. Addressing the various issues of swarm intelligence and data mining using different intelligent approaches is the novelty of this edited volume. This volume comprises 11 chapters, including an introductory chapter giving the fundamental definitions and some important research challenges. Chapters were selected on the basis of fundamental ideas/concepts rather than the thoroughness of techniques deployed. (…) (more)
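
To make the PSO idea mentioned in the synopsis concrete, here is a generic, textbook-style particle swarm optimizer in a few lines (illustrative constants throughout; this is not code from the book).

```python
import random

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimize f over [-bound, bound]^dim with a plain global-best particle swarm."""
    pos = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])   # pull towards own best
                             + c2 * random.random() * (gbest[d] - pos[i][d]))     # pull towards swarm best
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

print(pso(lambda x: sum(v * v for v in x)))   # converges towards the minimum of the sphere function
```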

Complex adaptive systems (CAS), including ecosystems, governments, biological cells, and markets, are characterized by intricate hierarchical arrangements of boundaries and signals. In ecosystems, for example, niches act as semi-permeable boundaries, and smells and visual patterns serve as signals; governments have departmental hierarchies with memoranda acting as signals; and so it is with other CAS. Despite a wealth of data and descriptions concerning different CAS, there remain many unanswered questions about “steering” these systems. In Signals and Boundaries, John Holland (Wikipedia entry) argues that understanding the origin of the intricate signal/border hierarchies of these systems is the key to answering such questions. He develops an overarching framework for comparing and steering CAS through the mechanisms that generate their signal/boundary hierarchies. Holland lays out a path for developing the framework that emphasizes agents, niches, theory, and mathematical models. He discusses, among other topics, theory construction; signal-processing agents; networks as representations of signal/boundary interaction; adaptation; recombination and reproduction; the use of tagged urn models (adapted from elementary probability theory) to represent boundary hierarchies; finitely generated systems as a way to tie the models examined into a single framework; the framework itself, illustrated by a simple finitely generated version of the development of a multi-celled organism; and Markov processes.

in, Introduction to John H. Holland, “Signals and Boundaries – Building blocks for Complex Adaptive Systems“, Cambridge, Mass. : ©MIT Press, 2012.

 

“Coders are now habitat providers for the rest of the world.” ~ Vitorino Ramos, via Twitter, July 17, 2012 (link).

Video lecture – Casey Reas (reas.com) at Eyeo2012 (uploaded 2 days ago on Vimeo): From a visual and conceptual point of view, the tension between order and chaos is a fertile space to explore. For over one hundred years, visual artists have focused on both in isolation and in tandem. As artists started to use software in the 1960s, the nature of this exploration expanded. This presentation features a series of revealing examples, historical research into the topic as developed for Reas‘ upcoming co-authored book “10 PRINT CHR$(205.5+RND(1)); : GOTO 10″ (MIT Press, 2012, book link; cover above), and a selection of Casey‘s artwork that relies on the relationship between chance operations and strict rules.
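
For the curious, the C64 one-liner in the book title, 10 PRINT CHR$(205.5+RND(1)); : GOTO 10, simply prints one of two diagonal glyphs at random forever, weaving a maze-like texture out of a strict rule plus chance. A tiny Python rendering of the same idea follows (my own transliteration, bounded to a few rows):

```python
import random

for _ in range(20):                                         # 20 rows instead of an endless GOTO loop
    print("".join(random.choice("╱╲") for _ in range(64)))  # pick one of the two diagonals per cell
```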

 

The above artificial ecosystem investigates the characteristics of the simulated environment through the use of agents reactive to pheromone trails. Pheromones spread through the fluid and are transported by it. The configuration of the reefs will therefore develop in areas with less chance of pheromone stagnation (done in Processing, by Alessandro Zomparelli, 2012). A grid-based sketch of the underlying diffuse-and-evaporate field is given below.
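
A hedged, grid-based sketch of the diffuse-and-evaporate pheromone field such an agent ecosystem typically relies on (my own illustration in Python, not Zomparelli's Processing code): each step spreads a fraction of every cell's pheromone to its four neighbours and then decays the whole field.

```python
import numpy as np

def step(field, diffusion=0.2, evaporation=0.02):
    """One update of a toroidal pheromone grid: local diffusion followed by global decay."""
    neighbours = (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                  + np.roll(field, 1, 1) + np.roll(field, -1, 1))
    field = (1 - diffusion) * field + diffusion * neighbours / 4.0
    return (1 - evaporation) * field

field = np.zeros((64, 64))
field[32, 32] = 100.0                     # a single pheromone deposit
for _ in range(50):
    field = step(field)
print(field.max(), field.sum())           # the peak spreads out while the total slowly evaporates
```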

I would like to thank flocks, herds, and schools for existing: nature is the ultimate source of inspiration for computer graphics and animation.” in Craig Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model“, (paper link) published in Computer Graphics, 21(4), July 1987, pp. 25-34. (ACM SIGGRAPH ’87 Conference Proceedings, Anaheim, California, July 1987.)

[…] In conclusion, much elegant work has been done starting from activated mono-nucleotides. However, the prebiotic synthesis of a specific macromolecular sequence does not seem to be at hand, giving us the same problem we have with polypeptide sequences. Since there is no ascertained prebiotic pathway to their synthesis, it may be useful to try to conceive some working hypothesis. In order to do that, I would first like to consider a preliminary question about the proteins we have on our Earth: “Why these proteins … and not other ones?”. Discussing this question can in fact give us some clue as to how orderly sequences might have originated. […] A grain of sand in the Sahara – This is indeed a central question in our world of proteins. How have they been selected out? There is a well-known arithmetic at the basis of this question, (see for example De Duve, 2002) which says that for a polypeptide chain with 100 residues, 20^100 different chains are in principle possible: a number so large that it does not convey any physical meaning. In order to grasp it somewhat, consider that the proteins existing on our planet are of the order of a few thousand billions, let us say around 10^13 (and with all isomers and mutations we may arrive at a few orders of magnitude more). This sounds like a large number. However, the ratio between the possible (say 20^100) and the actual chains (say 10^15) corresponds approximately to the ratio between the radius of the universe and the radius of a hydrogen atom! Or, to use another analogy, nearer to our experience, a ratio many orders of magnitude greater than the ratio between all the grains of sand in the vast Sahara and a single grain. The space outside “our atom”, or our grain of sand, is the space of the “never-born proteins”, the proteins that are not with us – either because they didn’t have the chance to be formed, or because they “came” and were then obliterated. This arithmetic, although trivial, bears an important message: in order to reproduce our proteins we would have to hit the target of that particular grain of sand in the whole Sahara. Christian De Duve, in order to avoid this “sequence paradox” (De Duve, 2002), assumes that all started with short polypeptides – and this is in fact reasonable. However, the theoretically possible total number of long chains does not change if you start with short peptides instead of amino acids. The only way to limit the final number of possible chains would be to assume, for example, that peptide synthesis started only under a particular set of conditions of composition and concentration, thus bringing contingency into the picture. As a corollary, then, this set of proteins born as a product of contingency would have been the one that happened to start life. Probably there is no way of eliminating contingency from the aetiology of our set of proteins. […]

Figure – The ratio between the theoretical number of possible proteins and their actual number is many orders of magnitude greater than the ratio between all the grains of sand in the vast Sahara and a single grain (caption on page 69).

[…] The other objection to the numerical meaning suggested by Figure (above) is that the maximum number of proteins is much smaller because a great number of chain configurations are prohibited for energetic reasons. This is reasonable. Let us then assume that 99.9999% of theoretically possible protein chains cannot exist because of energy reasons. This would leave only one protein out of one million, reducing the number of never-born proteins from, say, 10^60 to 10^54. Not a big deal. Of course one could also assume that the total number of energetically allowed proteins is extremely small, no larger than, say, 10^10. This cannot be excluded a priori, but is tantamount to saying that there is something very special about “our” proteins, namely that they are energetically special. Whether or not this is so can be checked experimentally as will be seen later in a research project aimed at this target. The assumption that “our” proteins have something special from the energetic point of view, would correspond to a strict deterministic view that claims that the pathway leading to our proteins was determined, that there was no other possible route. Someone adhering strictly to a biochemical anthropic principle might even say that these proteins are the way they are in order to allow life and the development of mankind on Earth. The contingency view would recite instead the following: if our proteins or nucleic acids have no special properties from the point of view of thermodynamics, then run the tape again and a different “grain of sand” might be produced – one that perhaps would not have supported life. Some may say at this point that proteins derive in any case from nucleic-acid templates – perhaps through a primitive genetic code. However, this is really no argument – it merely shifts the problem of the etiology of peptide chains to etiology of oligonucleotide chains, all arithmetic problems remaining more or less the same. […] pp. 68-70, in Pier Luigi Luisi, “The Emergence of Life: From Chemical Origins to Synthetic Biology“, Cambridge University Press, US, 2006.
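
The combinatorial arithmetic in the passage is easy to check back-of-the-envelope; the numbers below only restate the figures quoted above.

```python
from math import log10

possible = 100 * log10(20)     # log10 of 20**100 possible 100-residue chains, about 130
actual = 15                    # log10 of ~10**15 actual chains (with isomers and mutations)
print(f"possible chains ~ 10^{possible:.0f}, never-born excess ~ 10^{possible - actual:.0f}")
# Discarding 99.9999% of chains for energetic reasons only divides these counts by 10**6,
# which is why 10**60 never-born proteins would still leave about 10**54, as the text notes.
```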

Figure – “Spatio-Temporal Dynamics on Co-Evolved Stigmergy”, Vitorino Ramos, David M.S. Rodrigues, Jorge Louçã, ECCS’11.

Ever tried to solve a problem whose own problem statement is changing constantly? Have a look at our approach:

Vitorino Ramos, David M.S. Rodrigues, Jorge Louçã, “Spatio-Temporal Dynamics on Co-Evolved Stigmergy“, in European Conference on Complex Systems, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

Abstract: Research on hard NP-complete Combinatorial Optimization Problems (COPs) has focused in recent years on several robust bio-inspired meta-heuristics, like those involving Evolutionary Computation (EC) algorithmic paradigms. One particularly successful and well-known meta-heuristic approach is based on Swarm Intelligence (SI), i.e., the self-organized stigmergic-based property of a complex system whereby the collective behaviours of (unsophisticated) entities interacting locally with their environment cause coherent functional global patterns to emerge. This line of research, recognized as Ant Colony Optimization (ACO), uses a set of stochastic cooperating ant-like agents to find good solutions, using self-organized stigmergy as an indirect form of communication mediated by artificial pheromone, whereby agents deposit pheromone signs on the edges of the problem-related graph complex network. It encompasses a family of successful algorithmic variations such as: Ant Systems (AS), Ant Colony Systems (ACS), Max-Min Ant Systems (MaxMin AS) and Ant-Q.

Albeit extremely successful, these algorithms mostly rely on positive feedback, causing excessive algorithmic exploitation of the entire combinatorial search space. This is particularly evident over well-known benchmarks such as the symmetrical Travelling Salesman Problem (TSP). Since these systems are comprised of a large number of frequently similar components or events, the principal challenge is to understand how the components interact to produce a complex-pattern feasible solution (in our case study, an optimal robust solution for hard NP-complete dynamic TSP-like combinatorial problems). A suitable approach is to first understand the role of two basic modes of interaction among the components of Self-Organizing (SO) Swarm-Intelligent-like systems: positive and negative feedback. While positive feedback promotes a snowballing auto-catalytic effect (e.g. trail pheromone upgrading over the network; exploitation of the search space), taking an initial change in a system and reinforcing that change in the same direction as the initial deviation (self-enhancement and amplification), allowing the entire colony to exploit some past and present solutions (environmental dynamic memory), negative feedback such as pheromone evaporation ensures that the overall learning system does not stabilize or freeze itself on a particular configuration (innovation; search-space exploration). Although this kind of (global) delayed negative feedback is important (evaporation), for the many reasons given above, there are however strong indications that other negative feedbacks are present in nature, which could also play a role over increased convergence, namely implicit-like negative feedbacks. As in the case of positive feedbacks, there is no reason not to explore increasingly distributed and adaptive algorithmic variations where negative feedback is also imposed implicitly (not only explicitly) over each network edge, while the entire colony seeks better answers in due time.

In order to overcome this hard exploitation-exploration compromise over the search space, our present algorithmic approach follows the route of very recent biological findings showing that forager ants lay attractive trail pheromones to guide nest mates to food, but where the effectiveness of foraging networks is improved if pheromones can also be used to repel foragers from unrewarding routes. Increasing empirical evidence for such a negative trail pheromone exists, deployed by Pharaoh’s ants (Monomorium pharaonis) as a ‘no entry‘ signal to mark unrewarding foraging paths. The new algorithm comprises a second-order approach to Swarm Intelligence, as pheromone-based no-entry signal cues were introduced, co-evolving with the standard pheromone distributions (collective cognitive maps) in the aforementioned known algorithms.

To exhaustively test its adaptive response and robustness, we resorted to different dynamic optimization problems. Medium-sized and large-sized dynamic TSP problems were created. Settings and parameters such as environmental update frequencies, landscape change or network topological speed severity, and type of dynamics were tested. Results show that the present co-evolved two-type pheromone swarm intelligence algorithm is able to quickly track increasingly swift changes on the dynamic TSP complex network, compared to standard algorithms. An illustrative sketch of such a dynamic instance generator is given below.
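
As an illustration of the kind of dynamic stress test described above (parameters are assumptions mirroring the "20%-100% of nodes every 150 iterations" setup, not the exact experimental harness), the sketch below relocates a fraction of the cities every fixed number of iterations, so the optimisation landscape keeps changing while the solver runs.

```python
import random

def make_dynamic_instance(n=1577, period=150, fraction=0.2, bound=1000.0, seed=42):
    """Return a callable that relocates `fraction` of the cities every `period` iterations."""
    rng = random.Random(seed)
    cities = [(rng.uniform(0, bound), rng.uniform(0, bound)) for _ in range(n)]
    def maybe_perturb(iteration):
        if iteration > 0 and iteration % period == 0:
            for idx in rng.sample(range(n), int(fraction * n)):
                cities[idx] = (rng.uniform(0, bound), rng.uniform(0, bound))
        return cities
    return maybe_perturb

perturb = make_dynamic_instance()
for it in range(461):            # roughly the 460-iteration budget shown in the figure below
    cities = perturb(it)         # a solver would re-optimise its tour against the new landscape here
```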

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Dynamical Symmetrical Traveling Salesman Problems (TSP).


Fig. – Recovery times over several dynamical stress tests at the fl1577 TSP problem (1577 node graph) – 460 iter max – Swift changes at every 150 iterations (20% = 314 nodes, 40% = 630 nodes, 60% = 946 nodes, 80% = 1260 nodes, 100% = 1576 nodes). [click to enlarge]

Figure – “From Standard to Second Order Swarm Intelligence Phase-Space Maps”, David M.S. Rodrigues, Jorge Louçã, Vitorino Ramos, ECCS’11.

David M.S. Rodrigues, Jorge Louçã, Vitorino Ramos, “From Standard to Second Order Swarm Intelligence Phase-space maps“, in European Conference on Complex Systems, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

Abstract: Standard stigmergic approaches to Swarm Intelligence encompass the use of a set of stochastic cooperating ant-like agents to find optimal solutions, using self-organized stigmergy as an indirect form of communication mediated by a single artificial pheromone. Agents deposit pheromone signs on the edges of the problem-related graph, giving rise to a family of successful algorithmic approaches entitled Ant Systems (AS), Ant Colony Systems (ACS), among others. These mainly rely on positive feedback to search for an optimal solution in a large combinatorial space. The present work shows how, using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedback achieves better results than single positive feedback systems. This follows the route of very recent biological findings showing that forager ants, while laying attractive trail pheromones to guide nest mates to food, also gain foraging effectiveness through pheromones that repel foragers from unrewarding routes. The algorithm presented here takes inspiration precisely from this biological observation.

The new algorithm was exhaustively tested on a series of well-known benchmarks over hard NP-complete Combinatorial Optimization Problems (COPs), running on symmetrical Travelling Salesman Problems (TSP). Different network topologies and stress tests were conducted over small TSPs (eil51.tsp; eil78.tsp; kroA100.tsp), medium-sized ones (d198.tsp; lin318.tsp; pcb442.tsp; att532.tsp; rat783.tsp), as well as large-sized ones (fl1577.tsp; d2103.tsp) [numbers here referring to the number of nodes in the network]. We show that the new co-evolved stigmergic algorithm compared favourably against the benchmarks. The algorithm was able to equal or substantially improve on every instance of those standard algorithms, not only in the realm of the Swarm Intelligence AS and ACS approaches, but also in other computational paradigms like Genetic Algorithms (GA), Evolutionary Programming (EP), as well as SOM (Self-Organizing Maps) and SA (Simulated Annealing). In order to understand in depth how a second co-evolved pheromone was able to steer the collective system towards such results, a refined phase-space map was produced, mapping the pheromone ratio between a pure Ant Colony System (where no negative feedback besides pheromone evaporation is present) and the present second-order approach. The evaporation rates of the different pheromones were also studied and their influence on the outcomes of the algorithm is shown. A final discussion on the phase map is included. This work has implications for the way large combinatorial problems are addressed, as the double-feedback mechanism shows improvements over single positive-feedback mechanisms in terms of convergence speed and final results.

Keywords: Stigmergy, Co-Evolution, Self-Organization, Swarm Intelligence, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Traveling Salesman Problems (TSP), phase-space.

Fig. – Comparing convergence results between Standard algorithms vs. Second Order Swarm Intelligence, over TSP fl1577 (click to enlarge).

Picture – The European Conference on Complex Systems (ECCS’11 – link) in one of the main Austrian newspapers, Der Standard: “Die ganze Welt als Computersimulation” (“The whole world as a computer simulation”) (link), Klaus Taschwer, Der Standard, 14 September [click to enlarge – photo taken at the conference on Sept. 15, Vienna 2011].

Take Darwin, for example: would Caltech have hired Darwin? Probably not. He had only vague ideas about some of the mechanisms underlying biological Evolution. He had no way of knowing about genetics, and he lived before the discovery of mutations. Nevertheless, he did work out, from the top down, the notion of natural selection and the magnificent idea of the relationship of all living things.” Murray Gell-Mann in “Plectics“, excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995).

To be honest, I didn’t enjoy this title, but all of us have had our fair share with journalists by now. After all, 99% of us don’t do computer simulation. We are all after the main principles, and their direct applications.

Over 5 days (12-16 Sept.), with around 700 attendees, the Vienna 2011 conference revolved around important themes such as Complexity & Networks (XNet), Current Trends in Game Theory, Complexity in Energy Infrastructures, Emergent Properties in Natural and Artificial Complex Systems (EPNACS), Complexity and the Future of Transportation Systems, Econophysics, Cultural and Opinion Dynamics, Dynamics on and of Complex Networks, Frontiers in the Theory of Evolution, and – among many others – Dynamics of Human Interactions.

Those who know me will definitely understand that I mainly attended the sessions underlined above, the last one (Frontiers in Evolution) being one of my favourites among all these ECCS years. All in all, the conference had high-quality works (daily, there were about 3-4 works I definitely think should be followed in the future), and those deserved more attention (my main criticism of the conference organization goes here). Naturally, the newspaper article also reflects on FuturICT, historically one of the major European scientific projects ever undertaken (along, probably, with the Geneva LHC), with teams spread across Europe, including Portugal with a representative team of 7 members present at the conference, led by Jorge Louçã, the editor and organizer of the previous ECCS’10 last year in Lisbon.

Video – “… they forgot to say: in principle!“. Ricard Solé addressing the topic of a Morphospace for Biological Computation at ECCS’11 (European Conference on Complex Systems), while keeping his good humour on.

Let me draw your attention anyway to 4 outstanding lectures. Peter Schuster (link), on the first day, dissected the source of Complexity in Evolution, battling between – as he puts it – two paradoxes: (1) Evolution is an enormously complex process, and (2) biological evolution on Earth proceeds from lower towards higher complexity. Earlier that morning, opening the conference, Murray Gell-Mann (link), who co-founded the Santa Fe Institute in 1984, gave a wonderful lecture on Generalized Entropies. Despite his age, the winner of the 1969 Nobel Prize in Physics for his work on the theory of elementary particles gladly turned his interest in the 1990s to the theory of Complex Adaptive Systems (CAS). Next, Albert-László Barabási (link) tamed Complexity on Controlling Networks. Finally, on the last day, closing the conference in pure gold, Ricard Solé (link) addressed the topic of a Morphospace for Biological Computation, an amazing lecture with a powerful topic for which – nevertheless – I felt he had too little time (20 minutes) for such a rich endeavour. He by no means, however, lost his good humour during the talk (check my video above). Next year the conference will be held in Brussels, and judging just by the poster design, it promises. Go ants, go…!

Picture – The European Conference on Complex Systems (ECCS’12 – link) poster design for next year in Brussels.

[...] People should learn how to play Lego with their minds. Concepts are building bricks [...] V. Ramos, 2002.
