You are currently browsing the tag archive for the ‘Collective Decision Support Systems’ tag.

Figure – Complete circuit diagram with pheromone: neural circuit controller of the virtual ant (page 3, fig. 2), by Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos. [URL: http://arxiv.org/abs/1507.08467 ]

Intelligence and decision in foraging ants. Individual or collective? Internal or external? What is the right balance between the two? Can one have internal intelligence without external intelligence? Can one take examples from nature to build in silico artificial lives that present us with interesting patterns? In this paper, we explore a model of foraging ants; it will be presented in early September at UKCI 2015 in Exeter, UK (available on arXiv [PDF] and ResearchGate).

Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos; "A Model for Foraging Ants, Controlled by Spiking Neural Networks and Double Pheromones", UKCI 2015 Computational Intelligence – University of Exeter, UK, September 2015.

Abstract: A model of an ant system in which ants are controlled by a spiking neural circuit and a second-order pheromone mechanism in a foraging task is presented. A neural circuit is trained for individual ants, and the ants are subsequently exposed to a virtual environment where a swarm of them performs a resource-foraging task. The model comprises an associative and unsupervised learning strategy for the ant's neural circuit, which adapts to the environment by means of classical conditioning. The initially unknown environment includes different types of stimuli representing food (rewarding) and obstacles (harmful) which, when they come in direct contact with the ant, elicit a reflex response in its motor neural system: moving towards or away from the source of the stimulus. The spiking neural circuit of the ant is trained to identify food and obstacles, moving towards the former and avoiding the latter. The ants are released on a landscape with multiple food sources, where one ant alone would have difficulty harvesting the landscape to maximum efficiency. In this case the introduction of a double pheromone mechanism (positive and negative reinforcement feedback) yields better results than traditional ant colony optimization strategies. Traditional ant systems mainly include a positive-reinforcement pheromone. This approach uses a second pheromone that acts as a marker for forbidden paths (negative feedback). The blockade is not permanent and is controlled by the evaporation rate of the pheromones. The combined action of both pheromones acts as a collective stigmergic memory of the swarm, which reduces the search space of the problem. This paper explores how the adaptation and learning abilities observed in biologically inspired cognitive architectures are synergistically enhanced by swarm optimization strategies.
The model portrays two forms of artificial intelligent behaviour: at the individual level, the spiking neural network is the main controller, and at the collective level, the pheromone distribution is a map towards the solution that emerges from the colony. The presented model is also an important pedagogical tool, as it comes as an easy-to-use library that gives access to the spiking-neural-network paradigm from inside NetLogo, a language used mostly in agent-based modelling and experimentation with complex systems.
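As a toy illustration of the classical conditioning used to train the ant's controller (the paper's actual circuit uses spiking neurons with richer dynamics; every name and parameter value below is invented for the example), a single threshold neuron with a Hebbian weight update suffices to show the idea: a conditioned stimulus (e.g. food scent), repeatedly paired with an unconditioned stimulus (direct contact), eventually triggers the motor reflex on its own.

```python
# Minimal classical-conditioning sketch: an integrate-and-fire-style
# motor neuron receives the conditioned stimulus (CS) through a plastic
# weight and the unconditioned stimulus (US) through a fixed,
# supra-threshold weight. Pairing CS with US strengthens the CS weight
# (Hebbian update: pre- and post-synaptic activity coincide).

THRESHOLD = 1.0    # firing threshold of the motor neuron
W_US = 1.2         # fixed weight: the US alone always triggers the reflex
W_MAX = 1.5        # saturation level for the plastic CS weight
LEARN_RATE = 0.15  # Hebbian learning rate

def motor_fires(cs, us, w_cs):
    """One step of a threshold motor neuron (no leak, for brevity)."""
    return cs * w_cs + us * W_US >= THRESHOLD

def trial(cs, us, w_cs):
    """Present stimuli once; strengthen w_cs when CS and firing coincide."""
    fired = motor_fires(cs, us, w_cs)
    if fired and cs:
        w_cs += LEARN_RATE * (W_MAX - w_cs)
    return fired, w_cs

w_cs = 0.2                              # CS alone starts sub-threshold
fired_before, _ = trial(1, 0, w_cs)     # before conditioning: no reflex
for _ in range(20):                     # conditioning: pair CS with US
    _, w_cs = trial(1, 1, w_cs)
fired_after, _ = trial(1, 0, w_cs)      # after conditioning: CS alone fires
print(fired_before, fired_after)        # False True
```

The same pairing logic, expressed with spike timing instead of a single threshold step, is what the STDP-style rules in the cited literature implement.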

References:

[1] C. G. Langton, "Studying artificial life with cellular automata," Physica D: Nonlinear Phenomena, vol. 22, no. 1–3, pp. 120–149, 1986, proceedings of the Fifth Annual International Conference. [Online]. Available: http://www.sciencedirect.com/science/article/pii/016727898690237X
[2] A. Abraham and V. Ramos, “Web usage mining using artificial ant colony clustering and linear genetic programming,” in Proceedings of the Congress on Evolutionary Computation. Australia: IEEE Press, 2003, pp. 1384–1391.
[3] V. Ramos, F. Muge, and P. Pina, “Self-organized data and image retrieval as a consequence of inter-dynamic synergistic relationships in artificial ant colonies,” Hybrid Intelligent Systems, vol. 87, 2002.
[4] V. Ramos and J. J. Merelo, "Self-organized stigmergic document maps: Environment as a mechanism for context learning," in Proceedings of the AEB, Mérida, Spain, February 2002.
[5] D. Sousa-Rodrigues and V. Ramos, “Traversing news with ant colony optimisation and negative pheromones,” in European Conference in Complex Systems, Lucca, Italy, Sep 2014.
[6] E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, 1st ed., ser. Santa Fe Institute Studies in the Sciences of Complexity. 198 Madison Avenue, New York: Oxford University Press, USA, Sep. 1999.
[7] M. Dorigo and L. M. Gambardella, "Ant colony system: A cooperative learning approach to the traveling salesman problem," Université Libre de Bruxelles, Tech. Rep. TR/IRIDIA/1996-5, 1996.
[8] M. Dorigo, G. Di Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artif. Life, vol. 5, no. 2, pp. 137–172, Apr. 1999. [Online]. Available: http://dx.doi.org/10.1162/106454699568728
[9] L. M. Gambardella and M. Dorigo, “Ant-q: A reinforcement learning approach to the travelling salesman problem,” in Proceedings of the ML-95, Twelfth Intern. Conf. on Machine Learning, M. Kaufman, Ed., 1995, pp. 252–260.
[10] A. Gupta, V. Nagarajan, and R. Ravi, “Approximation algorithms for optimal decision trees and adaptive tsp problems,” in Proceedings of the 37th international colloquium conference on Automata, languages and programming, ser. ICALP’10. Berlin, Heidelberg: Springer-Verlag, 2010, pp. 690–701. [Online]. Available: http://dl.acm.org/citation.cfm?id=1880918.1880993
[11] V. Ramos, D. Sousa-Rodrigues, and J. Louçã, "Second order swarm intelligence," in HAIS'13. 8th International Conference on Hybrid Artificial Intelligence Systems, ser. Lecture Notes in Computer Science, J.-S. Pan, M. Polycarpou, M. Woźniak, A. Carvalho, H. Quintián, and E. Corchado, Eds. Salamanca, Spain: Springer Berlin Heidelberg, Sep 2013, vol. 8073, pp. 411–420.
[12] W. Maass and C. M. Bishop, Pulsed Neural Networks. Cambridge, Massachusetts: MIT Press, 1998.
[13] E. M. Izhikevich, "Simple model of spiking neurons," IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569–1572, 2003. [Online]. Available: http://www.ncbi.nlm.nih.gov/pubmed/18244602
[14] C. Liu and J. Shapiro, "Implementing classical conditioning with spiking neurons," in Artificial Neural Networks – ICANN 2007, ser. Lecture Notes in Computer Science, J. M. de Sá, L. A. Alexandre, W. Duch, and D. Mandic, Eds. Springer Berlin Heidelberg, 2007, vol. 4668, pp. 400–410. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-74690-4_41
[15] J. Haenicke, E. Pamir, and M. P. Nawrot, "A spiking neuronal network model of fast associative learning in the honeybee," Frontiers in Computational Neuroscience, no. 149, 2012. [Online]. Available: http://www.frontiersin.org/computational_neuroscience/10.3389/conf.fncom.2012.55.00149/full
[16] L. I. Helgadottir, J. Haenicke, T. Landgraf, R. Rojas, and M. P. Nawrot, “Conditioned behavior in a robot controlled by a spiking neural network,” in International IEEE/EMBS Conference on Neural Engineering, NER, 2013, pp. 891–894.
[17] A. Cyr and M. Boukadoum, “Classical conditioning in different temporal constraints: an STDP learning rule for robots controlled by spiking neural networks,” pp. 257–272, 2012.
[18] X. Wang, Z. G. Hou, F. Lv, M. Tan, and Y. Wang, “Mobile robots’ modular navigation controller using spiking neural networks,” Neurocomputing, vol. 134, pp. 230–238, 2014.
[19] C. Hausler, M. P. Nawrot, and M. Schmuker, “A spiking neuron classifier network with a deep architecture inspired by the olfactory system of the honeybee,” in 2011 5th International IEEE/EMBS Conference on Neural Engineering, NER 2011, 2011, pp. 198–202.
[20] U. Wilensky, “Netlogo,” Evanston IL, USA, 1999. [Online]. Available: http://ccl.northwestern.edu/netlogo/
[21] C. Jimenez-Romero and J. Johnson, “Accepted abstract: Simulation of agents and robots controlled by spiking neural networks using netlogo,” in International Conference on Brain Engineering and Neuro-computing, Mykonos, Greece, Oct 2015.
[22] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge: Cambridge University Press, 2002.
[23] W. Gerstner, R. Kempter, J. L. van Hemmen, and H. Wagner, "A neuronal learning rule for sub-millisecond temporal coding," Nature, vol. 386, pp. 76–78, 1996.
[24] I. P. Pavlov, “Conditioned reflexes: An investigation of the activity of the cerebral cortex,” New York, 1927.
[25] E. J. H. Robinson, D. E. Jackson, M. Holcombe, and F. L. W. Ratnieks, “Insect communication: ‘no entry’ signal in ant foraging,” Nature, vol. 438, no. 7067, pp. 442–442, 11 2005. [Online]. Available: http://dx.doi.org/10.1038/438442a
[26] E. J. Robinson, D. Jackson, M. Holcombe, and F. L. Ratnieks, “No entry signal in ant foraging (hymenoptera: Formicidae): new insights from an agent-based model,” Myrmecological News, vol. 10, no. 120, 2007.
[27] D. Sousa-Rodrigues, J. Louçã, and V. Ramos, "From standard to second-order swarm intelligence phase-space maps," in 8th European Conference on Complex Systems, S. Thurner, Ed., Vienna, Austria, Sep 2011.
[28] V. Ramos, D. Sousa-Rodrigues, and J. Louçã, "Spatio-temporal dynamics on co-evolved stigmergy," in 8th European Conference on Complex Systems, S. Thurner, Ed., Vienna, Austria, 9 2011.
[29] S. Tisue and U. Wilensky, “Netlogo: A simple environment for modeling complexity,” in International conference on complex systems. Boston, MA, 2004, pp. 16–21.

David M.S. Rodrigues, "Reading the News Through its Structure: New Hybrid Connectivity Based Approaches". Figure – Two simplices a and b connected by the 2-dimensional face, the triangle {1;2;3}. In the analysis of the time-line of The Guardian newspaper (link), the system used feature vectors based on word frequencies and then computed similarity between documents from those feature vectors. This is a purely statistical approach that requires great computational power and becomes difficult for problems with large feature vectors and many documents: feature vectors with 100,000 or more items are common, and computing similarities between such documents becomes cumbersome. Instead of computing distance (or similarity) matrices between documents from feature vectors, the present approach explores the possibility of inferring the distance between documents from their Q-analysis description. Q-analysis provides a very natural notion of connectivity between the simplices of the structure; in the relation studied, documents are connected to each other through shared sets of tags entered by the journalists. Within this framework, eccentricity is defined as a measure of the relatedness of one simplex in relation to another [7].
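As a rough sketch of this idea, a document can be treated as a simplex, i.e. the set of tags attached to it, and relatedness computed from the dimension of the shared face. The exact eccentricity formula used in [7] may differ from the Atkin-style variant below, and the tag sets are invented for illustration:

```python
# Q-analysis-style relatedness between two documents represented as
# simplices (sets of journalist-entered tags). The shared face of a and
# b has dimension q = |a ∩ b| - 1; the eccentricity of a relative to b,
# ecc(a|b) = (dim(a) - q) / (q + 1), measures how much of a lies
# outside the face it shares with b (0 = fully contained, larger =
# less related).

def eccentricity(a, b):
    """Eccentricity of simplex (tag set) a relative to simplex b."""
    q = len(a & b) - 1          # dimension of the shared face
    dim_a = len(a) - 1          # dimension of simplex a
    if q < 0:                   # no shared tags: maximally eccentric
        return float("inf")
    return (dim_a - q) / (q + 1)

doc_a = {"politics", "europe", "economy", "austerity"}
doc_b = {"politics", "europe", "elections"}

print(eccentricity(doc_a, doc_b))   # (3 - 1) / (1 + 1) = 1.0
```

Because this only needs set intersections over tag sets, it avoids building the high-dimensional document-word similarity matrices discussed above.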

David M.S. Rodrigues and Vitorino Ramos, "Traversing News with Ant Colony Optimisation and Negative Pheromones" [PDF], accepted as preprint for oral presentation at the European Conference on Complex Systems, ECCS'14, in Lucca, Sept. 22-26, 2014, Italy.

Abstract: The past decade has seen the rapid development of the online newsroom. News published online have become the main outlet of news, surpassing traditional printed newspapers. This poses challenges to both the production and the consumption of this news. With so many sources of information available, it is important to find ways to cluster and organise the documents if one wants to understand this new system. Traditional approaches to the problem of clustering documents usually embed the documents in a suitable similarity space. Previous studies have reported on the impact of the similarity measures used for clustering of textual corpora [1]. These similarity measures are usually calculated for bag-of-words representations of the documents, which makes the final document-word matrix high dimensional. Feature vectors with more than 10,000 dimensions are common, and algorithms have severe problems with the high dimensionality of the data. A novel bio-inspired approach to the problem of traversing the news is presented. It finds Hamiltonian cycles over documents published by the newspaper The Guardian. A Second Order Swarm Intelligence algorithm based on Ant Colony Optimisation was developed [2, 3] that uses a negative pheromone to mark unrewarding paths with a "no-entry" signal. This approach follows recent findings of negative pheromone usage in real ants [4].

In this case study the corpus of data is represented as a bipartite relation between documents and keywords entered by the journalists to characterise the news. A new similarity measure between documents is presented, based on the Q-analysis description [5, 6, 7] of the simplicial complex formed between documents and keywords. The eccentricity between documents (two simplices) is then used as a novel measure of similarity between documents. The results show that the Second Order Swarm Intelligence algorithm performs better in benchmark problems of the travelling salesman problem, with faster convergence and optimal results. The addition of the negative pheromone as a no-entry signal improves the quality of the results. The application of the algorithm to the corpus of news of The Guardian creates a coherent navigation system among the news, allowing users to navigate the news published during a certain period of time in a semantic sequence instead of a time sequence. This work has broader application, as it can be applied to many cases where the data is mapped to bipartite relations (e.g. protein expressions in cells, sentiment analysis, brand awareness in social media, routing problems), since it highlights the connectivity of the underlying complex system.

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Hamiltonian cycles, Text Mining, Textual Corpora, Information Retrieval, Knowledge Discovery, Sentiment Analysis, Q-Analysis, Data Mining, Journalism, The Guardian.

References:

[1] Alexander Strehl, Joydeep Ghosh, and Raymond Mooney. Impact of similarity measures on web-page clustering. In Workshop on Artificial Intelligence for Web Search (AAAI 2000), pages 58-64, 2000.
[2] David M. S. Rodrigues, Jorge Louçã, and Vitorino Ramos. From standard to second-order Swarm Intelligence phase-space maps. In Stefan Thurner, editor, 8th European Conference on Complex Systems, Vienna, Austria, 9 2011.
[3] Vitorino Ramos, David M. S. Rodrigues, and Jorge Louçã. Second order Swarm Intelligence. In Jeng-Shyang Pan, Marios M. Polycarpou, Michał Woźniak, André C.P.L.F. Carvalho, Héctor Quintián, and Emilio Corchado, editors, HAIS'13. 8th International Conference on Hybrid Artificial Intelligence Systems, volume 8073 of Lecture Notes in Computer Science, pages 411-420. Springer Berlin Heidelberg, Salamanca, Spain, 9 2013.
[4] Elva J.H. Robinson, Duncan Jackson, Mike Holcombe, and Francis L.W. Ratnieks. No entry signal in ant foraging (hymenoptera: Formicidae): new insights from an agent-based model. Myrmecological News, 10(120), 2007.
[5] Ronald Harry Atkin. Mathematical Structure in Human Affairs. Heinemann Educational Publishers, 48 Charles Street, London, 1 edition, 1974.
[6] J. H. Johnson. A survey of Q-analysis, part 1: The past and present. In Proceedings of the Seminar on Q-analysis and the Social Sciences, University of Leeds, 9 1983.
[7] David M. S. Rodrigues. Identifying news clusters using Q-analysis and modularity. In Albert Diaz-Guilera, Alex Arenas, and Alvaro Corral, editors, Proceedings of the European Conference on Complex Systems 2013, Barcelona, 9 2013.

In order to solve hard combinatorial optimization problems (e.g. optimally scheduling students and teachers along a week plan over several different classes and classrooms), one way is to computationally mimic how ants forage the vicinity of their habitats searching for food. Among a myriad of possible routes, ants collectively emerge the solution that minimizes travel distance by using stigmergic signal traces, or pheromones, which also change dynamically under evaporation.

Current algorithms, however, make use only of a positive-feedback type of pheromone along their search: if the ants collectively visit a good low-distance route (a minimal pseudo-solution to the problem), they tend to reinforce that signal for their colleagues. Nothing wrong with that, on the contrary; but no one knows whether a lower-distance alternative route lies just around the corner. In this global search endeavour, like a snowballing effect, positive feedback tends to credit the exploitation of solutions but not the (also useful) exploration side. Promising potential solutions can thus crystallize and freeze, while a small change in some parts of the whole route could, on the other hand, successfully improve the global result.

Figure – Influence of negative pheromone on the kroA100.tsp problem (fig. 1, page 6); values on lines represent 1-ALPHA. A typical standard ACS (Ant Colony System) is represented here by the line with value 0.0, while better results are found by our approach when using positive feedback (0.95) along with negative feedback (0.05). Not only do we obtain better results, we also find them earlier.

There is, however, an advantage when a second type of pheromone (a negative-feedback one) co-evolves with the first type, and we decided to research its impact. What we found is that by using this second type of global feedback we can indeed speed up the search while achieving better results. In a way, it is like using two different types of evaporative traffic lights, green and red, co-evolving together. As a conclusion, we should indeed use a negative no-entry signal pheromone: in small amounts (0.05), but use it. Not only does this prevent the whole system from freezing on some solutions too soon, it also strikes a better compromise over the search space of potential routes. The pre-print article is available here at arXiv. The abstract and keywords follow:

Vitorino Ramos, David M. S. Rodrigues, Jorge Louçã, “Second Order Swarm Intelligence” [PDF], in Hybrid Artificial Intelligent Systems, Lecture Notes in Computer Science, Springer-Verlag, Volume 8073, pp. 411-420, 2013.

Abstract: An artificial Ant Colony System (ACS) algorithm to solve general-purpose combinatorial Optimization Problems (COP) that extends previous AC models [21] by the inclusion of a negative pheromone is here described. Several Travelling Salesman Problems (TSP) were used as benchmark. We show that by using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedbacks achieves better results than single positive-feedback systems. The algorithm was tested against known NP-complete combinatorial Optimization Problems, running on symmetrical TSPs. We show that the new algorithm compares favourably against these benchmarks, in accordance with recent biological findings by Robinson [26,27] and Grüter [28], where "no entry" signals and negative feedback allow a colony to quickly reallocate the majority of its foragers to superior food patches. This is the first time an extended ACS algorithm is implemented with these successful characteristics.

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Travelling Salesman Problems (TSP).
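A minimal sketch of the double-pheromone idea, using the 0.95/0.05 positive/negative balance discussed above; the deposit amount and evaporation rate below are illustrative, not the paper's actual parameters:

```python
# Double-pheromone edge update in an ACO-style search: a positive
# pheromone reinforces edges of good tours, a negative "no-entry"
# pheromone marks edges of unrewarding tours, and both evaporate so
# the blockade is only temporary.

ALPHA = 0.95   # weight of positive vs. negative feedback
RHO = 0.1      # evaporation rate, applied to both pheromone fields

def evaporate(pos, neg):
    """Both pheromone trails decay each iteration (traces fade)."""
    return pos * (1 - RHO), neg * (1 - RHO)

def deposit(pos, neg, on_good_tour, amount=1.0):
    """Reinforce an edge of a good tour, or mark it as no-entry."""
    if on_good_tour:
        pos += amount
    else:
        neg += amount
    return pos, neg

def desirability(pos, neg):
    """Combined signal an ant reads when choosing an edge:
    positive feedback attracts, negative feedback repels."""
    return ALPHA * pos - (1 - ALPHA) * neg

# An edge reinforced once, then left alone: evaporation erases the
# trace, so neither reinforcement nor blockade is permanent.
pos, neg = deposit(0.0, 0.0, on_good_tour=True)
for _ in range(10):
    pos, neg = evaporate(pos, neg)
print(round(pos, 3))   # 1.0 * 0.9**10 ≈ 0.349
```

The evaporation of the negative field is what makes the "no-entry" blockade temporary, as described in the abstract above.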

Figure – Hybrid Artificial Intelligent Systems (pp. 411-420, "Second Order Swarm Intelligence"), new LNAI (Lecture Notes in Artificial Intelligence) series volume 8073, Springer-Verlag book [original photo by my colleague David M.S. Rodrigues].

New work, new book. Last week one of our latest works came out, published by Springer. Edited by Jeng-Shyang Pan, Marios M. Polycarpou, Emilio Corchado et al., "Hybrid Artificial Intelligent Systems" comprises a full set of new papers in this hybrid area of Intelligent Computing (check the full article list at Springer). Our new paper "Second Order Swarm Intelligence" (pp. 411-420, Springer books link) was published in the Bio-inspired Models and Evolutionary Computation section.

Figure – A classic example of emergence: the exact shape of a termite mound is not reducible to the actions of individual termites, even though there are already computer models that can achieve it (check for more on "Stigmergic construction" or the full current blog Stigmergy tag).

"The world can no longer be understood like a chessboard… It's a Jackson Pollock painting" ~ Carne Ross, 2012.

[…] As pointed by Langton, there is more to life than mechanics – there is also dynamics. Life depends critically on principles of dynamical self-organization that have remained largely untouched by traditional analytic methods. There is a simple explanation for this – these self-organized dynamics are fundamentally non-linear phenomena, and non-linear phenomena in general depend critically on the interactions between parts: they necessarily disappear when parts are treated in isolation from one another, which is the basis for any analytic method. Rather, non-linear phenomena are most appropriately treated by a synthetic approach, where synthesis means “the combining of separate elements or substances to form a coherent whole”. In non-linear systems, the parts must be treated in each other’s presence, rather than independently from one another, because they behave very differently in each other’s presence than we would expect from a study of the parts in isolation. […] in Vitorino Ramos, 2002, http://arxiv.org/abs/cs/0412077.

What follows are passages from an important article on the consequences for science of the recent discovery of the Higgs boson. Written by Ashutosh Jogalekar, "The Higgs boson and the future of science" (link) appeared in the Scientific American blog section (July 2012). It starts by discussing reductionism, or how the Higgs boson points us to the culmination of reductionist thinking:

[…] And I say this with a suspicion that the Higgs boson may be the most fitting tribute to the limitations of what has been the most potent philosophical instrument of scientific discovery – reductionism. […]

[…] Yet as we enter the second decade of the twenty-first century, it is clear that reductionism as a principal weapon in our arsenal of discovery tools is no longer sufficient. Consider some of the most important questions facing modern science, almost all of which deal with complex, multi factorial systems. How did life on earth begin? How does biological matter evolve consciousness? What are dark matter and dark energy? How do societies cooperate to solve their most pressing problems? What are the properties of the global climate system? It is interesting to note at least one common feature among many of these problems; they result from the build-up rather than the breakdown of their operational entities. Their signature is collective emergence, the creation of attributes which are greater than the sum of their constituent parts. Whatever consciousness is for instance, it is definitely a result of neurons acting together in ways that are not obvious from their individual structures. Similarly, the origin of life can be traced back to molecular entities undergoing self-assembly and then replication and metabolism, a process that supersedes the chemical behaviour of the isolated components. The puzzle of dark matter and dark energy also have as their salient feature the behaviour of matter at large length and time scales. Studying cooperation in societies essentially involves studying group dynamics and evolutionary conflict. The key processes that operate in the existence of all these problems seem to almost intuitively involve the opposite of reduction; they all result from the agglomeration of molecules, matter, cells, bodies and human beings across a hierarchy of unique levels. In addition, and this is key, they involve the manifestation of unique principles emerging at every level that cannot be merely reduced to those at the underlying level. […]

[…] While emergence had been implicitly appreciated by scientists for a long time, its modern salvo was undoubtedly a 1972 paper in Science by the Nobel Prize winning physicist Philip Anderson (link) titled “More is Different” (PDF), a title that has turned into a kind of clarion call for emergence enthusiasts. In his paper Anderson (who incidentally first came up with the so-called Higgs mechanism) argued that emergence was nothing exotic; for instance, a lump of salt has properties very different from those of its highly reactive components sodium and chlorine. A lump of gold evidences properties like color that don’t exist at the level of individual atoms. Anderson also appealed to the process of broken symmetry, invoked in all kinds of fundamental events – including the existence of the Higgs boson – as being instrumental for emergence. Since then, emergent phenomena have been invoked in hundreds of diverse cases, ranging from the construction of termite hills to the flight of birds. The development of chaos theory beginning in the 60s further illustrated how very simple systems could give rise to very complicated and counter-intuitive patterns and behaviour that are not obvious from the identities of the individual components. […]

[…] Many scientists and philosophers have contributed to considered critiques of reductionism and an appreciation of emergence since Anderson wrote his paper. (…) These thinkers make the point that not only does reductionism fail in practice (because of the sheer complexity of the systems it purports to explain), but it also fails in principle on a deeper level. […]

[…] An even more forceful proponent of this contingency-based critique of reductionism is the complexity theorist Stuart Kauffman who has laid out his thoughts in two books. Just like Anderson, Kauffman does not deny the great value of reductionism in illuminating our world, but he also points out the factors that greatly limit its application. One of his favourite examples is the role of contingency in evolution and the object of his attention is the mammalian heart. Kauffman makes the case that no amount of reductionist analysis could tell you that the main function of the heart is to pump blood. Even in the unlikely case that you could predict the structure of hearts and the bodies that house them starting from the Higgs boson, such a deductive process could never tell you that of all the possible functions of the heart, the most important one is to pump blood. This is because the blood-pumping action of the heart is as much a result of historical contingency and the countless chance events that led to the evolution of the biosphere as it is of its bottom-up construction from atoms, molecules, cells and tissues. […]

[…] Reductionism then falls woefully short when trying to explain two things; origins and purpose. And one can see that if it has problems even when dealing with left-handed amino acids and human hearts, it would be in much more dire straits when attempting to account for say kin selection or geopolitical conflict. The fact is that each of these phenomena are better explained by fundamental principles operating at their own levels. […]

[…] Every time the end of science has been announced, science itself proved that claims of its demise were vastly exaggerated. Firstly, reductionism will always be alive and kicking since the general approach of studying anything by breaking it down into its constituents will continue to be enormously fruitful. But more importantly, it’s not so much the end of reductionism as the beginning of a more general paradigm that combines reductionism with new ways of thinking. The limitations of reductionism should be seen as a cause not for despair but for celebration since it means that we are now entering new, uncharted territory. […]

Fig. – (Organizational Complexity through History) Four forms behind the organization and evolution of all societies (David Ronfeldt, TIMN). Each form also seems to be triggered by major societal changes in communications and language. Oral speech enabled tribes (T), the written word enabled institutions (I), the printed word fostered regional and global markets (M), and the electric (digital) word is empowering worldwide networks (N). [in David Ronfeldt, “Tribes, Institutions, Markets, Networks: A framework about Societal Evolution“, RAND Corporation, Document Number: P-7967, (1996). PDF link]

[…] Organizational complexity is defined as the amount of differentiation that exists within different elements constituting the organization. This is often operationalized as the number of different professional specializations that exist within the organization. For example, a school would be considered a less complex organization than a hospital, since a hospital requires a large diversity of professional specialties in order to function. Organizational complexity can also be observed via differentiation in structure, authority and locus of control, and attributes of personnel, products, and technologies. Contingency theory states that an organization structures itself and behaves in a particular manner as an attempt to fit with its environment. Thus organizations are more or less complex as a reaction to environmental complexity. An organization’s environment may be complex because it is turbulent, hostile, diverse, technologically complex, or restrictive. An organization may also be complex as a result of the complexity of its underlying technological core. For example, a nuclear power plant is likely to have a more complex organization than a standard power plant because the underlying technology is more difficult to understand and control. There are numerous consequences of environmental and organizational complexity. Organizational members, faced with overwhelming and/or complex decisions, omit, tolerate errors, queue, filter, abstract, use multiple channels, escape, and chunk in order to deal effectively with the complexity. At an organizational level, an organization will respond to complexity by building barriers around its technical core; by smoothing input and output transactions; by planning and predicting; by segmenting itself and/or becoming decentralized; and by adopting rules.
Complexity science offers a broader view of organizational complexity – it maintains that all organizations are relatively complex, and that complex behavior is not necessarily the result of complex action on the part of a single individual; rather, complex behavior of the whole can be the result of loosely coupled organizational members behaving in simple ways, acting on local information. Complexity science posits that most organizational behavior is the result of numerous events occurring over extended periods of time, rather than the result of some smaller number of critical incidents. […] in Dooley, K. (2002), “Organizational Complexity,” International Encyclopedia of Business and Management, M. Warner (ed.), London: Thompson Learning, p. 5013-5022. (PDF link)

The Internet has given us a glimpse of the power of networks. We are just beginning to realize how we can use networks as our primary form of living and working. David Ronfeldt has developed the TIMN framework to explain this: Tribal (T); Institutional (I); Markets (M); Networks (N). The TIMN framework shows how we have evolved as a civilization: not a clean progression from one organizing mode to the next, but rather each new form building upon and changing the previous modes. He sees the network form not as a modifier of previous forms, but as a form in itself that can address issues the three other forms could not. This point is very important when it comes to things like implementing social business (a network mode) within corporations (institutional + market modes). Real network models (e.g. wirearchy) are new modes, not modifications of the old ones.

Another key point of this framework is that Tribes exist within Institutions, Markets and Networks. We never lose our affinity for community groups or family, but each mode brings new factors that influence our previous modes. For example, tribalism is alive and well in online social networks. It’s just not the same tribalism of several hundred years ago. Each transition also has its hazards. For instance, while tribal societies may result in nepotism, networked societies can lead to deception. Ronfeldt states that the initial tribal form informs the other modes and can have a profound influence as they evolve:

Balanced combination is apparently imperative: Each form (and its realm) builds on its predecessor(s). In the progression from T through T+I+M+N, the rise of a new form depends on the successes (and failures) achieved through the earlier forms. For a society to progress optimally through the addition of new forms, no single form should be allowed to dominate any other, and none should be suppressed or eliminated. A society’s potential to function well at a given stage, and to evolve to a higher level of complexity, depends on its ability to integrate these inherently contradictory forms into a well-functioning whole. A society can constrain its prospects for evolutionary growth by elevating a single form to primacy — as appears to be a tendency at times in market-mad America. [in David Ronfeldt, “Tribes, Institutions, Markets, Networks: A framework about Societal Evolution“, RAND Corporation, Document Number: P-7967, (1996). PDF link]

Finally, on these areas (far beyond the strict topic of organizational topology and complex networks), let me add two books. One is from José Fonseca, a researcher friend I first met in 2001, during a joint interview for the Portuguese Idéias & Negócios Magazine, for its 5th anniversary (old link) embracing innovation in Portugal. His book, entitled “Complexity & Innovation in Organizations” (above), was published by Routledge in December of that same year, 2001. The other is more recent and from Ralph Stacey: “Complexity and Organizational Reality: Uncertainty and the Need to Rethink Management After the Collapse of Investment Capitalism” (below), Routledge, 2010, even if Ralph has many other past seminal books on this topic. Both have worked together at the University of Hertfordshire.

Fig. – A Symbolical Head (phrenological chart) illustrating the natural language of the faculties. At the Society pages / Economic Sociology web page.

You have most probably noticed by now how Scoop.it is emerging as a powerful platform for those collecting interesting research papers. There are several good examples, but let me stress one entitled “Bounded Rationality and Beyond” (scoop.it web page) curated by Alessandro Cerboni (blog). On a difficult research theme, Alessandro is doing a great job collecting nice essays and wonderful articles, whenever he finds them. One of those articles I really appreciated was John Conlisk‘s “Why Bounded Rationality?“, bringing several important clues to the field, for those who (like me) work in the area. What follows is an excerpt from the article as well as part of his introductory section. The full (PDF) paper can be retrieved here:

In this survey, four reasons are given for incorporating bounded rationality in economic models. First, there is abundant empirical evidence that it is important. Second, models of bounded rationality have proved themselves in a wide range of impressive work. Third, the standard justifications for assuming unbounded rationality are unconvincing; their logic cuts both ways. Fourth, deliberation about an economic decision is a costly activity, and good economics requires that we entertain all costs. These four reasons, or categories of reasons, are developed in the following four sections. Deliberation cost will be a recurring theme.

Why bounded rationality? In four words (one for each section above): evidence, success, methodology, and scarcity. In more words: Psychology and economics provide wide-ranging evidence that bounded rationality is important (Section I). Economists who include bounds on rationality in their models have excellent success in describing economic behavior beyond the coverage of standard theory (Section II). The traditional appeals to economic methodology cut both ways; the conditions of a particular context may favor either bounded or unbounded rationality (Section III). Models of bounded rationality adhere to a fundamental tenet of economics, respect for scarcity. Human cognition, as a scarce resource, should be treated as such (Section IV). The survey stresses throughout that an appropriate rationality assumption is not something to decide once for all contexts. In principle, we might suppose there is an encompassing single theory which takes various forms of bounded and unbounded rationality as special cases. As with other model ingredients, however, we in practice want to work directly with the most convenient special case which does justice to the context. The evidence and models surveyed suggest that a sensible rationality assumption will vary by context, depending on such conditions as deliberation cost, complexity, incentives, experience, and market discipline. Beyond the four reasons given, there is one more reason for studying bounded rationality. It is simply a fascinating thing to do. We can mix some Puck with our Hamlet.

“I would like to thank flocks, herds, and schools for existing: nature is the ultimate source of inspiration for computer graphics and animation.” in Craig Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model“, (paper link) published in Computer Graphics, 21(4), July 1987, pp. 25-34. (ACM SIGGRAPH ’87 Conference Proceedings, Anaheim, California, July 1987.)
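Reynolds’ distributed behavioural model steers each boid by three local rules: separation, alignment and cohesion. A minimal sketch of one such update step (the weights and perception radius below are illustrative choices, not Reynolds’ original parameters):

```python
import math

def boids_step(boids, r=5.0, w_sep=0.1, w_ali=0.05, w_coh=0.01):
    """One update of the three steering rules.
    boids: list of dicts with 'pos' and 'vel' as (x, y) tuples."""
    new = []
    for b in boids:
        # neighbours within perception radius r
        nbrs = [o for o in boids if o is not b and
                math.dist(b['pos'], o['pos']) < r]
        vx, vy = b['vel']
        if nbrs:
            n = len(nbrs)
            cx = sum(o['pos'][0] for o in nbrs) / n   # centroid (cohesion)
            cy = sum(o['pos'][1] for o in nbrs) / n
            ax = sum(o['vel'][0] for o in nbrs) / n   # mean velocity (alignment)
            ay = sum(o['vel'][1] for o in nbrs) / n
            sx = sum(b['pos'][0] - o['pos'][0] for o in nbrs)  # separation
            sy = sum(b['pos'][1] - o['pos'][1] for o in nbrs)
            vx += w_sep*sx + w_ali*(ax - vx) + w_coh*(cx - b['pos'][0])
            vy += w_sep*sy + w_ali*(ay - vy) + w_coh*(cy - b['pos'][1])
        new.append({'pos': (b['pos'][0] + vx, b['pos'][1] + vy),
                    'vel': (vx, vy)})
    return new
```

Note that no boid sees the flock as a whole: each reacts only to its neighbours, yet globally coherent flocking emerges, which is exactly the point of the quote above.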

Presentation – “Spatio-Temporal Dynamics on Co-Evolved Stigmergy”, Vitorino Ramos, David M.S. Rodrigues, Jorge Louçã (slides, ECCS’11).

Ever tried to solve a problem whose own problem statement is changing constantly? Have a look at our approach:

Vitorino Ramos, David M.S. Rodrigues, Jorge Louçã, “Spatio-Temporal Dynamics on Co-Evolved Stigmergy“, in European Conference on Complex Systems, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

Abstract: Research on hard NP-complete Combinatorial Optimization Problems (COP’s) has focused in recent years on several robust bio-inspired meta-heuristics, like those involving Evolutionary Computation (EC) algorithmic paradigms. One particularly successful and well-known meta-heuristic approach is based on Swarm Intelligence (SI), i.e., the self-organized stigmergic-based property of a complex system whereby the collective behaviors of (unsophisticated) entities interacting locally with their environment cause coherent functional global patterns to emerge. This line of research, recognized as Ant Colony Optimization (ACO), uses a set of stochastic cooperating ant-like agents to find good solutions, using self-organized stigmergy as an indirect form of communication mediated by artificial pheromone, where agents deposit pheromone-signs on the edges of the problem-related graph complex network, encompassing a family of successful algorithmic variations such as: Ant Systems (AS), Ant Colony Systems (ACS), Max-Min Ant Systems (MaxMin AS) and Ant-Q.
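The stigmergic mechanism shared by these AS/ACS variants can be sketched in a few lines of code. The sketch below follows the standard Ant System form, where an edge’s attractiveness combines pheromone and inverse distance; the parameter values are illustrative only:

```python
import random

def choose_next(city, unvisited, tau, dist, alpha=1.0, beta=2.0):
    """Standard Ant System transition rule: an ant at `city` picks the next
    city with probability proportional to pheromone^alpha * (1/distance)^beta."""
    weights = [tau[(city, j)]**alpha * (1.0 / dist[(city, j)])**beta
               for j in unvisited]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]

def deposit(tau, tour, length, q=1.0, rho=0.1):
    """Evaporate all trails, then reinforce the edges of a completed tour:
    reinforcement is the positive feedback, evaporation the (global,
    delayed) negative feedback."""
    for e in tau:
        tau[e] *= (1.0 - rho)          # evaporation
    for a, b in zip(tour, tour[1:] + tour[:1]):
        tau[(a, b)] += q / length      # shorter tours deposit more
        tau[(b, a)] += q / length
```

Ants repeatedly build tours with `choose_next` and update the shared field with `deposit`; the pheromone matrix is the colony’s only memory.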

Albeit extremely successful, these algorithms rely mostly on positive feedbacks, causing excessive algorithmic exploitation of the entire combinatorial search space. This is particularly evident on well-known benchmarks such as the symmetrical Traveling Salesman Problem (TSP). As these systems comprise a large number of frequently similar components or events, the principal challenge is to understand how the components interact to produce a complex pattern, a feasible solution (in our case study, an optimal robust solution for hard NP-complete dynamic TSP-like combinatorial problems). A suitable approach is to first understand the role of two basic modes of interaction among the components of Self-Organizing (SO) Swarm-Intelligent-like systems: positive and negative feedback. While positive feedback promotes a snowballing auto-catalytic effect (e.g. trail pheromone upgrading over the network; exploitation of the search space), taking an initial change in a system and reinforcing that change in the same direction as the initial deviation (self-enhancement and amplification), allowing the entire colony to exploit some past and present solutions (environmental dynamic memory), negative feedback such as pheromone evaporation ensures that the overall learning system does not stabilize or freeze itself on a particular configuration (innovation; search space exploration). Although this kind of (global) delayed negative feedback, evaporation, is important for the many reasons given above, there are however strong indications that other negative feedbacks are present in nature, which could also play a role in increased convergence, namely implicit-like negative feedbacks. As in the case of positive feedbacks, there is no reason not to explore increasingly distributed and adaptive algorithmic variations where negative feedback is also imposed implicitly (not only explicitly) over each network edge, while the entire colony seeks better answers in due time.

In order to overcome this hard exploitation-exploration compromise over the search space, our present algorithmic approach follows the route of very recent biological findings showing that forager ants lay attractive trail pheromones to guide nest mates to food, but where the effectiveness of foraging networks is improved if pheromones can also be used to repel foragers from unrewarding routes. Increasing empirical evidence for such a negative trail pheromone exists, deployed by Pharaoh’s ants (Monomorium pharaonis) as a ‘no entry‘ signal to mark unrewarding foraging paths. The new algorithm comprises a second-order approach to Swarm Intelligence, as pheromone-based ‘no entry‘ signal cues were introduced, co-evolving with the standard pheromone distributions (collective cognitive maps) in the aforementioned known algorithms.
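One simple way to impose such an implicit ‘no entry’ signal is to let a second, repellent pheromone field discount the attractiveness of an edge. The sketch below illustrates the idea only; it is not the exact update rule of the paper, and all parameters are illustrative:

```python
import random

def choose_next_2nd_order(city, unvisited, tau_pos, tau_neg, dist,
                          alpha=1.0, beta=2.0, gamma=1.0):
    """Second-order variant: the attractive pheromone tau_pos pulls ants in,
    while the repellent pheromone tau_neg (the 'no entry' signal) pushes
    them away. Since both fields evaporate, the blockade is not permanent."""
    weights = []
    for j in unvisited:
        attract = tau_pos[(city, j)]**alpha * (1.0 / dist[(city, j)])**beta
        repel = (1.0 + tau_neg[(city, j)])**gamma
        weights.append(attract / repel)
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]

def mark_unrewarding(tau_neg, tour, q_neg=1.0):
    """Ants that completed a poor tour deposit repellent pheromone on its
    edges, implicitly steering the rest of the colony away from them."""
    for a, b in zip(tour, tour[1:] + tour[:1]):
        tau_neg[(a, b)] += q_neg
        tau_neg[(b, a)] += q_neg
```

The combined action of the two fields is the collective stigmergic memory referred to above: attractive trails mark what worked, repellent trails prune what did not, and evaporation keeps both revisable.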

To exhaustively test its adaptive response and robustness, we resorted to different dynamic optimization problems. Medium-sized and large-sized dynamic TSP problems were created. Settings and parameters such as environmental update frequencies, landscape-changing or network topological speed severity, and types of dynamics were tested. Results prove that the present co-evolved two-type pheromone swarm intelligence algorithm is able to quickly track increasingly swift changes on the dynamic TSP complex network, compared to standard algorithms.

Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Dynamical Symmetrical Traveling Salesman Problems (TSP).


Fig. – Recovery times over several dynamical stress tests at the fl1577 TSP problem (1577 node graph) – 460 iter max – Swift changes at every 150 iterations (20% = 314 nodes, 40% = 630 nodes, 60% = 946 nodes, 80% = 1260 nodes, 100% = 1576 nodes). [click to enlarge]

Presentation – “From Standard to Second Order Swarm Intelligence Phase-Space Maps”, David M.S. Rodrigues, Jorge Louçã, Vitorino Ramos (slides, ECCS’11).

David M.S. Rodrigues, Jorge Louçã, Vitorino Ramos, “From Standard to Second Order Swarm Intelligence Phase-space maps“, in European Conference on Complex Systems, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

Abstract: Standard Stigmergic approaches to Swarm Intelligence encompass the use of a set of stochastic cooperating ant-like agents to find optimal solutions, using self-organized Stigmergy as an indirect form of communication mediated by a singular artificial pheromone. Agents deposit pheromone-signs on the edges of the problem-related graph to give rise to a family of successful algorithmic approaches entitled Ant Systems (AS), Ant Colony Systems (ACS), among others. These mainly rely on positive feedbacks to search for an optimal solution in a large combinatorial space. The present work shows how, using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedbacks achieves better results than single positive-feedback systems. This follows the route of very recent biological findings showing that forager ants, while laying attractive trail pheromones to guide nest mates to food, also gain foraging effectiveness through pheromones that repel foragers from unrewarding routes. The algorithm presented here takes inspiration precisely from this biological observation.

The new algorithm was exhaustively tested on a series of well-known benchmarks over hard NP-complete Combinatorial Optimization Problems (COP’s), running on symmetrical Traveling Salesman Problems (TSP). Different network topologies and stress tests were conducted over small-sized TSP’s (eil51.tsp; eil78.tsp; kroA100.tsp), medium-sized (d198.tsp; lin318.tsp; pcb442.tsp; att532.tsp; rat783.tsp) as well as large-sized ones (fl1577.tsp; d2103.tsp) [numbers here referring to the number of nodes in the network]. We show that the new co-evolved stigmergic algorithm compared favorably against the benchmark. The algorithm was able to equal or substantially improve on every instance of those standard algorithms, not only in the realm of the Swarm Intelligence AS and ACS approaches, but also in other computational paradigms like Genetic Algorithms (GA), Evolutionary Programming (EP), as well as SOM (Self-Organizing Maps) and SA (Simulated Annealing). In order to understand in depth how a second co-evolved pheromone was able to drive the collective system to such results, a refined phase-space map was produced, mapping the pheromone ratio between a pure Ant Colony System (where no negative feedback besides pheromone evaporation is present) and the present second-order approach. The evaporation rates of the different pheromones were also studied and their influence on the outcomes of the algorithm is shown. A final discussion of the phase map is included. This work has implications for the way large combinatorial problems are addressed, as the double-feedback mechanism shows improvements over single positive-feedback mechanisms in terms of convergence speed and overall results.
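The evaporation study mentioned above rests on the usual exponential decay τ(t+1) = (1-ρ)·τ(t), with one rate per pheromone field. A tiny sketch of how the ratio between the two fields then evolves over time (the rates here are chosen arbitrarily, for illustration only):

```python
def pheromone_ratio(tau_pos=1.0, tau_neg=1.0,
                    rho_pos=0.05, rho_neg=0.20, steps=50):
    """Track the positive/negative pheromone ratio over time when both
    fields only evaporate (no new deposits). If the repellent pheromone
    evaporates faster, the ratio grows: blockades fade before trails do."""
    ratios = []
    for _ in range(steps):
        tau_pos *= (1.0 - rho_pos)
        tau_neg *= (1.0 - rho_neg)
        ratios.append(tau_pos / tau_neg)
    return ratios
```

Even this stripped-down view shows why the relative evaporation rates matter: they set how long a ‘no entry’ mark can veto an edge before the attractive trail dominates again.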

Keywords: Stigmergy, Co-Evolution, Self-Organization, Swarm Intelligence, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Traveling Salesman Problems (TSP), phase-space.

Fig. – Comparing convergence results between Standard algorithms vs. Second Order Swarm Intelligence, over TSP fl1577 (click to enlarge).

Remember those weather-TV-channel hurricane images over Central America captured by satellites? E.g. Floyd just off the Florida coast on September 14, 1999 (image at nationalgeographic.com). Well, you are pretty close. Here is an example of an ant colony death spiral, where a group of ants gets separated from their colony and starts following each other by scent in a circle, doing so until they all die of exhaustion. A deadlock. An ants’ circle deadlock.

So in order to teach a computer how to draw a circle without giving it any clue about what a circle is, what you have to do is exactly the same thing. You just follow a generative design line of bottom-up distributed pattern formation. And you keep replacing the word “computer” with the word “ant” in the title of this post, as many times as you can, back and forth, in a non-explicit manner. Using stigmergy. You just implicitly create simple rules, even ant-like, non-anthropomorphic ones, which end up in exactly that behaviour. Yes,… sometimes those “simple” rules are difficult to grasp. But you just keep doing it. Not with a simple “trial-and-error” method, of course. We have better tools for that. In fact, they have been present on planet Earth since the Big Bang: we call it Evolution. Now, from circles onward, a full array of problems, even hard ones, can be treated. Here are some examples.
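The “simple rule” behind the ant mill is nothing more than each ant steering toward the ant it follows. A toy sketch of that pursuit rule (a ring of agents with an illustrative step size), which locks the group into a shrinking closed loop, a circle drawn by no one in particular:

```python
import math

def mill_step(ants, speed=0.1):
    """Each ant moves a small step toward the ant in front of it
    (cyclically). Followed blindly, this purely local rule traps the
    group in a closed pursuit loop - the death spiral."""
    new = []
    n = len(ants)
    for i, (x, y) in enumerate(ants):
        tx, ty = ants[(i + 1) % n]          # the ant being followed
        d = math.hypot(tx - x, ty - y) or 1.0
        new.append((x + speed * (tx - x) / d, y + speed * (ty - y) / d))
    return new

# Start six ants roughly on a unit circle and let the rule run:
ants = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
        for k in range(6)]
for _ in range(100):
    ants = mill_step(ants)
```

No agent knows about circles; the loop is a property of the interaction, not of any individual rule.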

Figure – A comic strip by Randall Munroe (at xkcd.com – a webcomic of romance, sarcasm, math, and language) about Computational Complexity, the Travelling Salesman (TSP) problem, and – last, but not least – about crowd-sourcing the whole thing on eBay! … LOL

… hey wait, just before you ROTFL yourself on the floor, check this out. For some problem cases it might just work. Here is one of those recent cases: interactive, online games are being used to crack complex scientific conundrums, says a report in Nature. And the wisdom of the ‘multiplayer’ crowd is delivering a new set of search strategies for the prediction of protein structures. The problem at hand is nothing less than protein folding, not an easy one. You can check Nature‘s journal video here. The online game in question is known as Foldit (link – image below).

Figure – Solving puzzle #48 with FOLDIT, the online game.

There are a lot of consequences to this approach. Here is an excerpt from an article in the Spanish newspaper El País (in Malen Ruiz de Elvira, “Los humanos ganan a los ordenadores – Un juego en red para resolver un problema biológico obtiene mejores resultados que un programa informático“ [Humans beat computers – An online game for solving a biological problem obtains better results than a computer program], El País, Madrid – August 8, 2010):

[…] Thousands of online players, most of them non-specialists, have proved better at working out the shapes that proteins adopt than the most advanced computer programs, scientists at the University of Washington (in Seattle) have found. Figuring out how the long amino-acid chains of proteins fold in nature – their three-dimensional structure – is one of the great problems of present-day biology, to which many teams devote enormous computing resources. […] Yet computer prediction of a protein’s structure represents a very large challenge, because a huge number of possibilities must be analysed before reaching the solution, which corresponds to an optimal energy state. It is an optimization process. […] To test their skill, the scientists gave the players 10 specific protein-structure problems whose solutions were known but had not been made public. They found that in some of these cases, five to be precise, the result achieved by the best players was more accurate than Rosetta’s. In another three cases things ended in a draw, and in two cases the machine won. […] Moreover, the collaborations established between some of the players gave rise to a whole new assortment of strategies and algorithms, some of which have already been incorporated into the original computer program. “As interesting as Foldit’s predictions are the complexity, the variety and the creativity shown by the human search process,” write the authors of the study, among whom figure, something unheard of in a scientific paper, “the Foldit players”. […] “We are at the start of a new era, in which human and machine computation are mixed,” says Michael Kearns, an expert in so-called distributed thinking. […]
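The excerpt frames folding as the search for an optimal energy state, i.e. an optimization process. A toy sketch of such a search loop, using a generic simulated-annealing scheme over a hypothetical one-dimensional “energy” function (Rosetta’s actual methods, and the players’ strategies, are of course far more elaborate):

```python
import math
import random

def anneal(energy, x0=0.0, t0=2.0, cooling=0.99, steps=2000, seed=1):
    """Generic simulated annealing: propose a random move, accept it if it
    lowers the energy, or with Boltzmann probability exp(-d/t) otherwise.
    The temperature t cools geometrically, freezing the search over time."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)
        d = energy(cand) - energy(x)
        if d < 0 or rng.random() < math.exp(-d / t):
            x = cand
        if energy(x) < energy(best):
            best = x
        t *= cooling
    return best

# Hypothetical rugged landscape with its global minimum near x = 3:
toy = lambda x: (x - 3.0)**2 + 0.5 * math.sin(8 * x)
```

The sine term gives the quadratic bowl many shallow local minima, a (very) pale imitation of the rugged landscapes that make protein-structure prediction so hard, and that human players turned out to navigate so well.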

Video – Matthew Todd lecture at Google Tech Talk April, 2010 – Open Science: how can we crowdsource chemistry to solve important problems?

The idea, of course, is not new. All these distributed human learning systems started with the SETI@home project (link), originally launched in 1999 by the University of California, Berkeley. But I would like to draw your attention, instead, to some other works on it. First is a video by Matthew Todd (School of Chemistry, University of Sydney). His question is apparently simple: how can we crowdsource chemistry to solve important problems? (above). Second, a well-known introductory paper on crowdsourcing by Daren C. Brabham (2008), with several worldwide examples:

[…] Abstract: Crowdsourcing is an online, distributed problem-solving and production model that has emerged in recent years. Notable examples of the model include Threadless, iStockphoto, InnoCentive, the Goldcorp Challenge, and user-generated advertising contests. This article provides an introduction to crowdsourcing, both its theoretical grounding and exemplar cases, taking care to distinguish crowdsourcing from open source production. This article also explores the possibilities for the model, its potential to exploit a crowd of innovators, and its potential for use beyond for-profit sectors. Finally, this article proposes an agenda for research into crowdsourcing. […] in Daren C. Brabham, “Crowdsourcing as a Model for Problem Solving – An Introduction and Cases“, Convergence: The International Journal of Research into New Media Technologies, London, Los Angeles, New Delhi and Singapore, Vol 14(1): 75-90, 2008.

Animated Video – Lively RSA Animate [April 2010], adapted from Dan Pink‘s talk at the RSA (below), illustrates the hidden truths behind what really motivates us at home and in the workplace. [Inspired from the work of Economics professor Dan Ariely at MIT along with his colleagues].

What drives us? Some quotes: […] Once the task called for even rudimentary COGNITIVE skills, a larger reward led to poorer performance […] Once you get above rudimentary cognitive skills, rewards do not work that way [linearly]; this defies the laws of behavioural physics! […] But when a task gets more complicated, requiring some conceptual, creative thinking, these kinds of motivators do not work any more […] Higher incentives led to worse performance. […] Fact: money is a motivator. In a strange way. If you don’t pay enough, people won’t be motivated. But now there is another paradox. The best use of money is this: pay people enough to take the issue of money off the table. […] …Socialism…??

[…] Most upper-management and sales force personnel, as well as workers in many other jobs, are paid based on performance, which is widely perceived as motivating effort and enhancing productivity relative to non-contingent pay schemes. However, psychological research suggests that excessive rewards can in some cases produce supra-optimal motivation, resulting in a decline in performance. To test whether very high monetary rewards can decrease performance, we conducted a set of experiments at MIT, the University of Chicago, and rural India. Subjects in our experiment worked on different tasks and received performance-contingent payments that varied in amount from small to large relative to their typical levels of pay. With some important exceptions, we observed that high reward levels can have detrimental effects on performance. […] abstract, Dan Ariely, Uri Gneezy, George Loewenstein, and Nina Mazar, “Large Stakes and Big Mistakes“, Federal Reserve Bank of Boston Working paper no. 05-11, Research Center for Behavioral Economics and Decision-Making, US, July 2005. [PDF available here] (improved 2009 version below)

Video lecture – On the surprising science of motivation: analyst Daniel Pink examines the puzzle of motivation [Jul. 2009], starting with a fact that social scientists know but most managers don’t: Traditional rewards aren’t always as effective as we think. So maybe, there is a different way forward. [Inspired from the work of Economics professor Dan Ariely at MIT along with his colleagues].

[…] Payment-based performance is commonplace across many jobs in the marketplace. Many, if not most upper-management, sales force personnel, and workers in a wide variety of other jobs are rewarded for their effort based on observed measures of performance. The intuitive logic for performance-based compensation is to motivate individuals to increase their effort, and hence their output, and indeed there is some evidence that payment for performance can increase performance (Lazear, 2000). The expectation that increasing performance-contingent incentives will improve performance rests on two subsidiary assumptions: (1) that increasing performance-contingent incentives will lead to greater motivation and effort and (2) that this increase in motivation and effort will result in improved performance. The first assumption that transitory performance-based increases in pay will produce increased motivation and effort is generally accepted, although there are some notable exceptions. Gneezy and Rustichini (2000a), for example, have documented situations, both in laboratory and field experiments, in which people who were not paid at all exerted greater effort than those who were paid a small amount (see also Gneezy and Rustichini, 2000b; Frey and Jegen, 2001; Heyman and Ariely, 2004). These results show that in some situations paying a small amount in comparison to paying nothing seems to change the perceived nature of the task, which, if the amount of pay is not substantial, may result in a decline of motivation and effort.

Another situation in which effort may not respond in the expected fashion to a change in transitory wages is when workers have an earnings target that they apply narrowly. For example, Camerer, Babcock, Loewenstein and Thaler (1997) found that New York City cab drivers quit early on days when their hourly earnings were high and worked longer hours when their earnings were low. The authors speculated that the cab drivers may have had a daily earnings target beyond which their motivation to continue working dropped off. Although there appear to be exceptions to the generality of the positive relationship between pay and effort, our focus in this paper is on the second assumption – that an increase in motivation and effort will result in improved performance. The experiments we report address the question of whether increased effort necessarily leads to improved performance. Providing subjects with different levels of incentives, including incentives that were very high relative to their normal income, we examine whether, across a variety of different tasks, an increase in contingent pay leads to an improvement or decline in performance. We find that in some cases, and in fact most of the cases we examined, very high incentives result in a decrease in performance. These results provide a counterexample to the assumption that an increase in motivation and effort will always result in improved performance. […] in Dan Ariely, Uri Gneezy, George Loewenstein, and Nina Mazar, “Large Stakes and Big Mistakes“, Review of Economic Studies (2009) 75, 1-19 0034-6527/09. [PDF available here]

Now, these are not stories; these are facts. They are among the most robust findings in social science,… yet also among the most ignored [sic]. And they keep coming in. Such as the fallacy of the supply-and-demand model (March 2008). Anyway, enough good material (a simple paper with profound implications)… for one day. But hey, … oh, if you are still wondering what other paper inspired the specific drawings at minute 7′40″ and on, in the first video of this post, well, here it is: Kristina Shampan’er and Dan Ariely (2007), “How Small is Zero Price? The True Value of Free Products“, in Marketing Science, Vol. 26, No. 6, 742-757. [PDF available here]… Got it?!

[…] Legend has it that, seconds before reaching a curve, Juan Manuel Fangio would cast a fleeting glance at the leaves of the trees. If they were moving, he lifted his foot off the accelerator; if, on the contrary, there was no wind, he floored it. […], in Ángel Luis Menéndez, “Los abuelos de Alonso” [Alonso’s grandfathers], Público.es (link)

“If you want to be incrementally better: Be competitive. If you want to be exponentially better: Be cooperative“. ~ Anonymous

Two hunters decide to spend their weekend together. But soon a dilemma emerges between them. They can choose to hunt a stag (deer) together, or each hunt a rabbit individually, on their own. Chasing a deer, as we know, is quite demanding, requiring absolute cooperation between them. In fact, both friends need to stay focused for a long time and in position, not letting themselves be distracted or tempted by arbitrary passing rabbits. On the other hand, the stag hunt is far more beneficial to both, but that benefit comes at a cost: it requires a high level of trust between them. Somewhere along the way, each hunter worries that his partner may be diverted by a tempting jumping rabbit, thus jeopardizing the possibility of hunting the biggest prey together.

The original story comes from Jean-Jacques Rousseau, the French philosopher (above), while the dilemma is known in game theory as the “Stag Hunt Game” (stag = adult deer). The dilemma can take different quantifiable variations, assuming different values for R (Reward for cooperation), T (Temptation to defect), S (Sucker’s payoff) and P (Punishment for defection). However, in order to be in the right strategic Stag Hunt Game scenario we should assume R>T>P>S. A possible pay-off matrix, taking into account the two choices C or D (C = Cooperation; D = Defection), would be:

Choice |      C      |      D
   C   | (R=3, R=3)  | (S=0, T=2)
   D   | (T=2, S=0)  | (P=1, P=1)

Depending on how fitness is calculated, stag hunt games can also be part of a real Prisoner’s Dilemma, or even of Ultimatum games. As is clear from above, the highest pay-off comes when both hunters decide to cooperate (CC). In this case (first column, first row), both receive a reward of 3 points; that is, they both stay truly focused on hunting a big deer while forgetting everything else, namely rabbits. However - and here is exactly where the dilemma appears - both CC and DD are Nash equilibria! That is, at this point in the strategic landscape no player has anything to gain by changing only his own strategy unilaterally. The dilemma appears recurrently in biology, animal-animal interaction, human behaviour, social cooperation, co-evolution, in society in general, and so on. The philosopher David Hume also provided a series of examples that are stag hunts, from two individuals who must row a boat together to two neighbours who wish to drain a meadow. Other stories exist with very interesting variations and outcomes. Who does not know them?!
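The claim that both CC and DD are Nash equilibria can be checked mechanically against the pay-off table; a small sketch with the values R=3, T=2, P=1, S=0 used above:

```python
from itertools import product

# Pay-offs from the table above, as (row player, column player):
R, T, S, P = 3, 2, 0, 1
payoff = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def nash_equilibria(payoff, moves=('C', 'D')):
    """A profile is a pure-strategy Nash equilibrium if neither player can
    gain by unilaterally switching their own move."""
    eq = []
    for a, b in product(moves, repeat=2):
        row_ok = all(payoff[(a, b)][0] >= payoff[(x, b)][0] for x in moves)
        col_ok = all(payoff[(a, b)][1] >= payoff[(a, y)][1] for y in moves)
        if row_ok and col_ok:
            eq.append((a, b))
    return eq
```

Running `nash_equilibria(payoff)` returns both mutual cooperation and mutual defection, which is precisely the dilemma: the risk-free (D, D) and the rewarding (C, C) are each self-enforcing.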

The day before the last school classes, two kids decided to do something “cool”, conspiring to appear before their friends on the last day of school with mad and strange haircuts. Despite their shared purpose, however, a long, anguished and stressful night full of indecisiveness followed for both of them…

Figure – A swarm cognitive map (pheromone spatial distribution map) in 3D, at a specific time t. The artificial ant colony was evolved within 2 digital grey images based on the following work. The real physical “thing” can be seen here.

Vitorino Ramos, “The MC2 Project [Machines of Collective Conscience]: A possible walk, up to Life-like Complexity and Behaviour, from bottom, basic and simple bio-inspired heuristics – a walk, up into the morphogenesis of information”, UTOPIA Biennial Art Exposition, Cascais, Portugal, July 12-22, 2001.

Synergy (from the Greek word synergos), broadly defined, refers to combined or co-operative effects produced by two or more elements (parts or individuals). The definition is often associated with the holistic conviction that “the whole is greater than the sum of its parts” (Aristotle, in Metaphysics), or that the whole cannot exceed the sum of the energies invested in each of its parts (e.g. the first law of thermodynamics), even if it is more accurate to say that the functional effects produced by wholes are different from what the parts can produce alone. Synergy is a ubiquitous phenomenon in nature and human societies alike. One well-known example is provided by the emergence of self-organization in social insects, via direct (mandibular, antennation, chemical or visual contact, etc.) or indirect interactions. The latter types are more subtle, and were defined as stigmergy to explain task coordination and regulation in the context of nest reconstruction in Macrotermes termites. An example could be provided by two individuals who interact indirectly when one of them modifies the environment and the other responds to the new environment at a later time. In other words, stigmergy could be defined as a particular case of environmental or spatial synergy. Synergy can be viewed as the “quantity” with respect to which the whole differs from the mere aggregate. Typically these systems form a structure, configuration, or pattern of physical, biological, sociological, or psychological phenomena, so integrated as to constitute a functional unit with properties not derivable from its parts in summation (i.e. non-linear) – a Gestalt, in one word (the closest English word is perhaps system, configuration or whole). The system is purely holistic, and its properties are intrinsically emergent and auto-catalytic.

A typical example can be found in some social insect societies, namely in ant and termite colonies. Coordination and regulation of building activities in these societies do not depend on the workers themselves but are mainly achieved by the nest structure: a stimulating configuration triggers the response of a termite worker, transforming the configuration into another configuration that may in turn trigger another (possibly different) action performed by the same termite or by any other worker in the colony. Recruitment of social insects for particular tasks is another case of stigmergy. Self-organized trail laying by individual ants is a way of modifying the environment to communicate with nest mates that follow such trails. It appears that task performance by some workers decreases the need for further task performance: for instance, nest cleaning by some workers reduces the need for nest cleaning. Nest mates thus communicate with other nest mates by modifying the environment (cleaning the nest), and nest mates respond to the modified environment (by not engaging in nest cleaning).

Swarms of social insects construct trails and networks of regular traffic via a process of laying and following pheromone (a chemical substance). These patterns constitute what is known in brain science as a cognitive map. The main difference lies in the fact that insects write their spatial memories in the environment, while the mammalian cognitive map lies inside the brain – a parallel further justified by many researchers via direct comparison with the neural processes associated with the construction of cognitive maps in the hippocampus.
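This external, shared memory can be sketched in a few lines. The following is a minimal illustrative sketch (not the model from any of the papers cited here; grid size, evaporation rate and deposit amount are assumptions chosen for illustration): ants write to a pheromone grid, evaporation slowly erases old trails, and trail following is just a pheromone-weighted random choice among neighbouring cells.

```python
import random

# Illustrative sketch of a pheromone grid as external memory.
# SIZE, EVAPORATION and DEPOSIT are assumed parameters, not taken
# from any specific model discussed in the text.
SIZE, EVAPORATION, DEPOSIT = 20, 0.05, 1.0
grid = [[0.0] * SIZE for _ in range(SIZE)]

def deposit(x, y):
    """An ant marks its current cell, writing to the shared environment."""
    grid[y][x] += DEPOSIT

def evaporate():
    """Evaporation slowly forgets old trails -- the memory is dynamic."""
    for row in range(SIZE):
        for col in range(SIZE):
            grid[row][col] *= (1.0 - EVAPORATION)

def choose_next(x, y):
    """An ant follows the trail: neighbouring cells are picked with
    probability proportional to their pheromone level."""
    neighbours = [(nx, ny)
                  for nx in (x - 1, x, x + 1)
                  for ny in (y - 1, y, y + 1)
                  if (nx, ny) != (x, y) and 0 <= nx < SIZE and 0 <= ny < SIZE]
    # Small additive floor keeps exploration alive on unmarked terrain.
    weights = [grid[ny][nx] + 0.01 for nx, ny in neighbours]
    return random.choices(neighbours, weights=weights)[0]
```

The point of the sketch is the division of labour: no ant stores a map, yet the colony as a whole reads and writes one through the environment.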

But by far more crucial to the present project is how ants form piles of items such as dead bodies (corpses), larvae, or grains of sand. There again, stigmergy is at work: ants deposit items at initially random locations. When other ants perceive deposited items, they are stimulated to deposit items next to them. This cemetery clustering organization and brood sorting is a type of self-organization and adaptive behaviour, the final pattern of object spatial distribution being a reflection of what the colony feels and thinks about those objects, as if the colony were another organism (a meta-global organism).
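The classic formalization of this behaviour (after Deneubourg and colleagues) uses two threshold response functions: the probability of picking an item up falls as the local density of similar items rises, and the probability of dropping one rises with it. A minimal sketch, where f is the perceived local fraction of similar items and the constants k1=0.1, k2=0.3 echo the response-function parameters mentioned in the figure caption further below:

```python
# Illustrative pick-up/drop probability rules behind ant "cemetery"
# clustering (Deneubourg-style response functions). k1 and k2 are
# threshold constants; f is the locally perceived fraction of
# similar items, in [0, 1].

def p_pick(f, k1=0.1):
    """An unladen ant picks an item up more readily when few similar
    items surround it -- isolated items get removed from sparse areas."""
    return (k1 / (k1 + f)) ** 2

def p_drop(f, k2=0.3):
    """A laden ant drops its item more readily next to many similar
    items -- piles reinforce themselves."""
    return (f / (k2 + f)) ** 2
```

These two opposing tendencies are enough: small piles evaporate, large piles grow, and the global sorting emerges with no ant ever seeing the whole pattern.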

As forecast by Wilson [E.O. Wilson, The Insect Societies, Belknap Press, Cambridge, 1971], our understanding of individual insect behaviour, together with the sophistication with which we will be able to analyse their collective interaction, would advance to the point where we would one day possess a detailed, even quantitative, understanding of how individual "probability matrices" (their tendencies, feelings and inner thoughts) lead to mass action at the level of the colony (society) – that is, a truly "stochastic theory of mass behaviour", where the reconstruction of mass behaviours is possible from the behaviours of single colony members, and mainly from the analysis of relationships found at the basic level of interactions.

The idea behind the MC2 Machine is simple: to transpose, for the first time, the mammalian cognitive map to an environmental (spatial) one, allowing us to recognize what happens when a group of individuals (humans) try to organize different abstract concepts (words) in one habitat (via the Internet). Even if each of them is working alone in a particular sub-space of that "concept" habitat, simply rearranging notions at their own will, mapping "Sameness" into "Neighborness", not recognizing the whole process occurring simultaneously in their society, a global collective conscience emerges. Clusters of abstract notions emerge, exposing groups of similarity among the different concepts. The MC2 machine is then like a mirror of what happens inside the brains of multiple individuals trying to impose their own conscience onto the group.

Through an Internet site reflecting the "words habitat", the users (humans) choose, gather and reorganize some types of words and concepts. The overall movements of these word-objects are then mapped into a public space. Along this process, two shifts emerge: the virtual becomes the real, and personal, subjective and dispersed beliefs become a socially and politically significant element. That is, perception and action by themselves can evolve adaptive and flexible problem-solving mechanisms, or make communication emerge among many parts. The whole and its behaviours (i.e., the next layer in complexity – our socially significant element) emerge from the relationships of many parts, even if the latter are acting strictly within and according to some sub-level of basic and simple strategies, repeated ad infinitum.

The MC2 machine will then reveal what happens in many real-world situations: cooperation among individuals, altruism, egoism, radicalism, and also the resistance to that radicalism; the memory the society keeps of some extreme positions over time, but also the inevitable disappearance of those positions, giving rise to convergence towards the group's majority thought (common sense?), eliminating good or bad relations found so far among, in our case, words and abstract notions. Even though the machine composed of many human parts will "work" within this restricted context, she will reveal how some relationships among notions in our society (ideas) can only be found when, and only when, simpler ones are found first (the minimum layer of complexity), neglecting possible big steps by a minority group of visionary individuals. Is there (in our society) any need for a critical mass of knowledge in order to reach other layers of complexity? Roughly, she will reveal, for instance, how democracies can evolve and die over time, as many things do in our impermanent world.

Journalism is dying, they say. I do agree. And while the argument continues, many interested in the issue are now debating what the reason really is. The thing is… there is no single reason; there are many. Intricate ones. Do ponder on this: while newspapers face immense, omnipresent, real-time competition from TV channels, TV itself is also dying (while, unexpectedly… radio is surging). On many broadcast programs, TV anchors are now more important than the invited guests who (supposedly) worked hard over the years to produce that precise, innovative content. As in large supermarkets and great malls, on these channels the package has become more important than the content itself. The related editorial pressure for news quickness has become so intense and aggressive that contents are replaced every second without judgment and, once on air, are hardly described, discussed, opposed or dissected. So, at large, TV producers and CEOs think that people are no longer waiting for new, interesting content to appear; they are instead waiting for the anchor, who passes content down to them as if it were peanuts. Peanuts are good, but in excess – we all agree – they are damn awful. And many consume them so, as an old passive addiction. Which means that in the long run, nothing remains (a fact for both sides); and if they give me no opportunity at all to check content carefully, if I happen to be in the mood to… then I move on. By this precise, simple way, media cannibalizes itself.

We all know that attention span is getting narrower these days, and, e.g., yes… the great literature classics are no longer read. So, media CEOs say: "they have no time". But really… do mind that gap. Think twice. If the whole environment suddenly recognizes (this being one of the major questions – see below) that it is getting enough of peanuts (and it really is), it will urge for beef-steaks. In fact, eating 1000 void peanuts takes more time than consuming one large, good beef! And there is a difference… the beef remains in our body for several hours, not seconds.

It is promptly becoming a paradox, since media CEOs, in their blind competition, take refuge in saying that we readers have no time (when in mediocrity no solution is found, the easiest way is to repeat a mantra), while most of us keep zapping news as never before. However, they never realized that we keep zapping because no news – by these means – is of interest. It has all become the same. And once news items all appear the same, they all soon disappear from our minds. We all, in some respects, wonder: what really happened to research journalism, stories about new, complex issues, strong content, explained in detail but still delivered in simple, eloquent ways? Come on, this long-tailed, huge market niche, once yours, is now void!

Newspapers do have this wonderful singularity: they still have journalists (at least some, if they had enough vision to nourish them). They could provide insightful, detailed backup stories, open questions, or debate new ones as no one else can in the public space. Moreover, they have time from their consumers. That, at least, is what I am feeding back to The Guardian every Sunday when I put my money on the news bench in exchange for that newspaper, along with others like The Economist. But in the face of this overall cascade of news-without-sense turmoil, probably one of these days people will instead desire silence… or listening to their grandfather's knowledge, good sense, and long-lived emotion (which keeps increasing, believe me). They will relate to him as never before. Not to newspapers. At least he does provide content.

But once the medium is set (and in some way, though not all the way, the medium is the message, as postulated by Marshall McLuhan), the great gold-run will be on… guess what… content. And on relationships among content! Journalism will no longer be atomized. Or crystallized.

Fig. – Spatial distribution of 931 items (words taken from an article in the ABC Spanish newspaper) on a 61 x 61 non-parametric toroidal grid, at t=10^6. 91 ants used type 2 probability response functions, with k1=0.1 and k2=0.3. Some examples of independent clusters are: (A) anunció, bilbao, embargo, titulos, entre, hacer, necesídad, tras, vida, lider, cualquier, derechos, medida. (B) dirigentes, prensa, ciu. (C) discos, amigos, grandes. (D) hechos, piloto, miedo, tipo, cd, informes. (E) dificil, gobierno, justicia, crisis, voluntad, creó, elección, horas, frente, técnica, unas, tarde, familia, sargento, necesídad, red, obra… (among other word semantic clusters; check the paper below).

For long, the media decided to do nothing, while new media, including social media, was entering the plateau stronger than ever before. Let me give you one example. In order to understand how relations between news items could enhance newspaper reading and social awareness, back in 2002 I decided to make an experiment. Together with a colleague, we took one article from the Spanish ABC newspaper (photo above). The article was about Spanish political parties and corruption. It contained 931 words (snapshot above). In order to extract semantic meaning from it as a pre-processing computer analysis, we started by applying Latent Semantic Analysis (LSA). Then, Swarm Intelligence algorithms were developed in order to get a glimpse of the relations among all those words in the newspaper article. Guess what? Some words like "big", "friends" and "music discs" were segmented from the rest of the politically themed article (segregated onto a remote semantic "island"); that is, not only was a whole conceptual semantic atlas of that entire news section possible, but so was the discovery of unrelated issues (uncorrelated semantic "islands"). Now, just imagine if this happened within a newspaper social network, live, 24 hours a day, while people grab strongly correlated content and discuss it as it happens. One strong journal article could, in fact, evolve into social collective knowledge and awareness as never before. That, in reality, is something that classic journalism could use as an edge for its (nowadays awful) market approach: providing not only good content, but along with it an extra service not available anywhere else (which is, in some way, priceless) – the chance to provide correlated, real-time meta-content. Not one view, but many aggregated views. Edited, real-world, real-time, good-quality journalism has the potential of an "endless" price, namely these days.
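The first step of that kind of pre-processing can be illustrated in a few lines: represent each text as a term vector and compare texts by cosine similarity. This is only a toy sketch of the idea (the three miniature "documents" below are invented, reusing words from the figure caption above; full LSA would go on to apply a truncated SVD to the term-document matrix):

```python
import math
from collections import Counter

# Toy "documents" (assumed for illustration) echoing words from the
# clusters in the figure caption above.
docs = ["gobierno crisis justicia",
        "gobierno crisis eleccion",
        "discos amigos grandes"]

# Fixed vocabulary over all documents.
vocab = sorted({w for d in docs for w in d.split()})

def vector(doc):
    """Raw term-count vector over the shared vocabulary."""
    counts = Counter(doc.split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two term vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```

On these toy texts the two "political" documents score high against each other, while the "discos amigos grandes" one shares no terms with them and scores zero – the same segregation onto a remote semantic island described above, in miniature.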
On the other hand, what we now see is that news CEOs, along with some editors, still keep their minds on 19th-century journalism. Worse, due to their legitimate panic. Meanwhile, however, the world has indeed evolved.

[] Vitorino Ramos, Juan J. Merelo, Self-Organized Stigmergic Document Maps: Environment as a Mechanism for Context Learning, in AEB´2002 – 1st Spanish Conference on Evolutionary and Bio-Inspired Algorithms, E. Alba, F. Herrera, J.J. Merelo et al. (Eds.), pp. 284-293, Centro Univ. de Mérida, Mérida, Spain, 6-8 Feb. 2002.

Social insect societies, and more specifically ant colonies, are distributed systems that, in spite of the simplicity of their individuals, present a highly structured social organization. As a result of this organization, ant colonies can accomplish complex tasks that in some cases exceed the individual capabilities of a single ant. The study of ant colonies' behavior and of their self-organizing capabilities is of interest to knowledge retrieval/management and decision support systems sciences, because it provides models of distributed adaptive organization which are useful for solving difficult optimization, classification, and distributed control problems, among others. In the present work we overview some models derived from the observation of real ants, emphasizing the role played by stigmergy as a distributed communication paradigm, and we present a novel strategy to tackle unsupervised clustering as well as data retrieval problems. The present ant clustering system (ACLUSTER) avoids not only short-term-memory-based strategies but also the use of several artificial ant types (using different speeds), both present in some recent approaches. Moreover, to our knowledge, this is also the first application of ant systems to textual document clustering.

(to obtain the respective PDF file, follow the link above or visit chemoton.org)

Figure – Book cover of Toby Segaran’s, “Programming Collective Intelligence – Building Smart Web 2.0 Applications“, O’Reilly Media, 368 pp., August 2007.

{scopus online description} Want to tap the power behind search rankings, product recommendations, social bookmarking, and online matchmaking? This fascinating book demonstrates how you can build Web 2.0 applications to mine the enormous amount of data created by people on the Internet. With the sophisticated algorithms in this book, you can write smart programs to access interesting data-sets from other web sites, collect data from users of your own applications, and analyze and understand the data once you’ve found it. Programming Collective Intelligence takes you into the world of machine learning and statistics, and explains how to draw conclusions about user experience, marketing, personal tastes, and human behavior in general — all from information that you and others collect every day. Each algorithm is described clearly and concisely with code that can immediately be used on your web site, blog, Wiki, or specialized application.

{even if I don’t totally agree, here’s a somewhat “over-rated” description – especially on the scientific side – by someone “dwa”, link above} Programming Collective Intelligence is a new book from O’Reilly, written by Toby Segaran. The author graduated from MIT and is currently working at Metaweb Technologies. He develops ways to put large public data-sets into Freebase, a free online semantic database. You can find more information about him on his blog: http://blog.kiwitobes.com/. Web 2.0 cannot exist without Collective Intelligence. The “giants” use it everywhere: YouTube recommends similar movies, Last.fm knows what you would like to listen to, and Flickr knows which photos are your favorites, etc. This technology empowers intelligent search, clustering, building price models and ranking on the web. I cannot imagine a modern service without data analysis. That is why it is worth starting to read about it. There are many titles about collective intelligence, but recently I have read two: this one and “Collective Intelligence in Action“. Both are very pragmatic, but the O’Reilly one is more focused on the merits of CI. The code listings are much shorter (but the examples are written in Python, so that was easy). In general, comparing these books is like comparing Java vs. Python. If you would like to build a recommendation engine the “in Action”/Java way, you would have to read the whole book, attach extra JARs and design dozens of classes. The rapid Python way requires reading only 15 pages and voilà, you have got your first recommendations. It is awesome!
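To give a flavour of that "rapid Python way", here is a toy recommender in the spirit of the book's opening chapter: similarity between users from their shared ratings, then similarity-weighted scores for unseen items. The ratings data and names below are invented for illustration, not taken from the book:

```python
import math

# Invented toy ratings: user -> {item: score}.
ratings = {
    "alice": {"film_a": 5.0, "film_b": 3.0, "film_c": 4.0},
    "bob":   {"film_a": 4.5, "film_b": 2.5, "film_d": 5.0},
    "carol": {"film_a": 1.0, "film_b": 5.0, "film_c": 2.0},
}

def similarity(u, v):
    """Euclidean-distance-based similarity on co-rated items, in (0, 1]."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dist = math.sqrt(sum((ratings[u][i] - ratings[v][i]) ** 2 for i in shared))
    return 1.0 / (1.0 + dist)

def recommend(user):
    """Rank items the user has not rated by similarity-weighted average score."""
    scores, weights = {}, {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, r in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
                weights[item] = weights.get(item, 0.0) + sim
    return sorted(((scores[i] / weights[i], i) for i in scores), reverse=True)
```

A dozen lines of plain Python and you already have `recommend("alice")` proposing the film her most similar neighbour loved – which is roughly the contrast with the Java-framework route that the reviewer is making.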

So how about the rest of the book? There are still 319 pages! Further chapters cover: discovering groups, searching, ranking, optimization, document filtering, decision trees, price models and genetic algorithms. The book explains how to implement Simulated Annealing, k-Nearest Neighbors, a Bayesian Classifier and many more. Take a look at the table of contents (here: http://oreilly.com/catalog/9780596529321/preview.html); it does not list all the algorithms, but you can find more information there. Each chapter has about 20-30 pages. You do not have to read them all; you can choose the most important ones and still know what is going on. Every chapter contains a minimum amount of theoretical introduction; for total beginners it might not be enough. I recommend this book for students who have had a statistics course (not only IT or computing science); it will show you how to use your knowledge in practice – there are many inspiring examples. For those who do not know Python – do not be afraid – at the beginning you will find a short introduction to the language syntax. All listings are very short and well described by the author – sometimes line by line. The book also contains the necessary information about basic standard libraries responsible for XML processing or downloading web pages. If you would like to start learning about collective intelligence, I would strongly recommend reading “Programming Collective Intelligence” first, then “Collective Intelligence in Action”. The first one shows how easy it is to implement basic algorithms; the second shows you how to use existing open-source projects related to machine learning.

Abraham, Ajith; Grosan, Crina; Ramos, Vitorino (Eds.), Stigmergic Optimization, Studies in Computational Intelligence (series), Vol. 31, Springer-Verlag, ISBN: 3-540-34689-9, 295 p., Hardcover, 2006.

TABLE OF CONTENTS (short /full) / CHAPTERS:

[1] Stigmergic Optimization: Foundations, Perspectives and Applications.
[2] Stigmergic Autonomous Navigation in Collective Robotics.
[3] A general Approach to Swarm Coordination using Circle Formation.
[4] Cooperative Particle Swarm Optimizers: a powerful and promising approach.
[5] Parallel Particle Swarm Optimization Algorithms with Adaptive Simulated Annealing.
[6] Termite: a Swarm Intelligent Routing Algorithm for Mobile Wireless Ad-hoc Networks.
[7] Linear Multiobjective Particle Swarm Optimization.
[8] Physically realistic Self-Assembly Simulation system.
[9] Gliders and Riders: A Particle Swarm selects for coherent Space-time Structures in Evolving Cellular Automata.
[10] Stigmergic Navigation for Multi-agent Teams in Complex Environments.
[11] Swarm Intelligence: Theoretical proof that Empirical techniques are Optimal.
[12] Stochastic Diffusion search: Partial function evaluation in Swarm Intelligence Dynamic Optimization.

[...] People should learn how to play Lego with their minds. Concepts are building bricks [...] V. Ramos, 2002.
