
Fig. 1 – (click to enlarge) The optimal shortest path among N=1265 points depicting a Portuguese Navalheira crab, as a result of one of our latest Swarm-Intelligence based algorithms. The problem of finding the shortest path among N different points in space is NP-hard, known as the Travelling Salesman Problem (TSP), and is one of the major and hardest benchmarks in Combinatorial Optimization (link) and Artificial Intelligence. (V. Ramos, D. Rodrigues, 2012)

This summer my kids grabbed a tiny Portuguese Navalheira crab on the shore. After a small photo session and some baby-sitting with a lettuce leaf, it was time to release it again into the ocean. It not only survived my kids; it is now entitled to a new World Wide Web on-line life. After the Shortest path Sardine (link) with 1084 points, here is the Crab with 1265 points. The algorithm ran for as few as 110 iterations.

Fig. 2 – (click to enlarge) Our 1265 initial points depicting a TSP Portuguese Navalheira crab. Could you already envision a minimal tour between all these points?

As usual in the Travelling Salesman Problem (TSP) we start with a set of points, in our case 1265 points or cities (fig. 2). Given a list of cities and their pairwise distances, the task is to find the shortest possible tour that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods.
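Our swarm-intelligence algorithm itself is not reproduced here, but for readers who want to experiment, a classic baseline attacks the same problem with a greedy nearest-neighbour tour followed by 2-opt improvement. Below is a minimal Python sketch of that baseline (the random points stand in for the real crab coordinates); it is an illustration, not our actual method.

```python
import math
import random

def tour_length(points, tour):
    """Total length of a closed tour over a list of (x, y) points."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(points):
    """Greedy construction: always move to the closest unvisited city."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(points, tour):
    """Repeatedly reverse tour segments while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(points, candidate) < tour_length(points, tour):
                    tour, improved = candidate, True
    return tour

# Hypothetical input: random points standing in for the crab's cities.
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(50)]
t = two_opt(pts, nearest_neighbour(pts))
print(tour_length(pts, t))
```

Note that this exhaustive 2-opt scales poorly; for the full 1265-city instance, population-based or swarm methods like ours become attractive precisely because such local search alone gets trapped and slow.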

Fig. 3 – (click to enlarge) The shortest path Navalheira crab again, where the optimal contour path (in black, first figure above) over 1265 points (or cities) was filled in dark orange.

TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept of a city represents, for example, customers, soldering points, or DNA fragments, and the concept of distance represents travelling times or cost, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder.

What follows (fig. 4) is the original crab photo after image segmentation, just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem. The algorithm was then fed the extracted x,y coordinates of these data points (fig. 2) in order for it to discover the minimal path, in just 110 iterations. For extra details, pay a visit to the Shortest path Sardine (link) done earlier.

Fig. 4 – (click to enlarge) The original crab photo after some image processing as well as segmentation and just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem.
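For the curious, the point-extraction step can be sketched in a few lines: threshold the segmented image into a binary mask, take the pixel coordinates, subsample, and jitter with Gaussian noise. The file name, noise scale and sampling details below are assumptions for illustration, not the exact pipeline we used.

```python
import numpy as np
from PIL import Image

# Load the segmented image (hypothetical file name) and threshold it
# into a binary mask: True where the crab contour pixels are.
mask = np.asarray(Image.open("crab_segmented.png").convert("L")) > 128

# Collect the (x, y) coordinates of all contour pixels.
ys, xs = np.nonzero(mask)
coords = np.column_stack((xs, ys)).astype(float)

# Subsample down to roughly N cities and jitter with Gaussian noise,
# so the TSP instance is not just a regular pixel grid.
rng = np.random.default_rng(42)
n_cities = 1265
idx = rng.choice(len(coords), size=min(n_cities, len(coords)), replace=False)
cities = coords[idx] + rng.normal(scale=1.5, size=(len(idx), 2))

np.savetxt("crab_cities.txt", cities)  # x,y input for the TSP solver
```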

How wings are attached to the backs of Angels, Craig Welsh (1996) – Production by the National Film Board of Canada (nfb.ca): In this surreal exposition, we meet a man, obsessed with control. His intricate gadgets manipulate yet insulate, as his science dissects and reduces. How exactly are wings attached to the back of angels? In this invented world drained of emotion, where everything goes through the motions, he is brushed by indefinite longings. Whether he can transcend his obsessions and fears is the heart of the matter (from Vimeo).

Figure – A classic example of emergence: The exact shape of a termite mound is not reducible to the actions of individual termites. Even so, there are already computer models that can achieve it (check “Stigmergic construction” or the full Stigmergy tag on this blog for more).

“The world can no longer be understood like a chessboard… It’s a Jackson Pollock painting.” ~ Carne Ross, 2012.

[…] As pointed out by Langton, there is more to life than mechanics – there is also dynamics. Life depends critically on principles of dynamical self-organization that have remained largely untouched by traditional analytic methods. There is a simple explanation for this – these self-organized dynamics are fundamentally non-linear phenomena, and non-linear phenomena in general depend critically on the interactions between parts: they necessarily disappear when parts are treated in isolation from one another, which is the basis for any analytic method. Rather, non-linear phenomena are most appropriately treated by a synthetic approach, where synthesis means “the combining of separate elements or substances to form a coherent whole”. In non-linear systems, the parts must be treated in each other’s presence, rather than independently from one another, because they behave very differently in each other’s presence than we would expect from a study of the parts in isolation. […] in Vitorino Ramos, 2002, http://arxiv.org/abs/cs/0412077.

What follows are passages from an important article on the consequences for Science of the recent discovery of the Higgs boson. Written by Ashutosh Jogalekar, “The Higgs boson and the future of science” (link), the article appeared in the Scientific American blog section (July 2012). It starts by discussing reductionism, or how the Higgs boson points us to the culmination of reductionist thinking:

[…] And I say this with a suspicion that the Higgs boson may be the most fitting tribute to the limitations of what has been the most potent philosophical instrument of scientific discovery – reductionism. […]

[…] Yet as we enter the second decade of the twenty-first century, it is clear that reductionism as a principal weapon in our arsenal of discovery tools is no longer sufficient. Consider some of the most important questions facing modern science, almost all of which deal with complex, multifactorial systems. How did life on earth begin? How does biological matter evolve consciousness? What are dark matter and dark energy? How do societies cooperate to solve their most pressing problems? What are the properties of the global climate system? It is interesting to note at least one common feature among many of these problems: they result from the build-up rather than the breakdown of their operational entities. Their signature is collective emergence, the creation of attributes which are greater than the sum of their constituent parts. Whatever consciousness is, for instance, it is definitely a result of neurons acting together in ways that are not obvious from their individual structures. Similarly, the origin of life can be traced back to molecular entities undergoing self-assembly and then replication and metabolism, a process that supersedes the chemical behaviour of the isolated components. The puzzles of dark matter and dark energy also have as their salient feature the behaviour of matter at large length and time scales. Studying cooperation in societies essentially involves studying group dynamics and evolutionary conflict. The key processes that operate in the existence of all these problems seem to almost intuitively involve the opposite of reduction; they all result from the agglomeration of molecules, matter, cells, bodies and human beings across a hierarchy of unique levels. In addition, and this is key, they involve the manifestation of unique principles emerging at every level that cannot be merely reduced to those at the underlying level. […]

[…] While emergence had been implicitly appreciated by scientists for a long time, its modern salvo was undoubtedly a 1972 paper in Science by the Nobel Prize winning physicist Philip Anderson (link) titled “More is Different” (PDF), a title that has turned into a kind of clarion call for emergence enthusiasts. In his paper Anderson (who incidentally first came up with the so-called Higgs mechanism) argued that emergence was nothing exotic; for instance, a lump of salt has properties very different from those of its highly reactive components sodium and chlorine. A lump of gold evidences properties like color that don’t exist at the level of individual atoms. Anderson also appealed to the process of broken symmetry, invoked in all kinds of fundamental events – including the existence of the Higgs boson – as being instrumental for emergence. Since then, emergent phenomena have been invoked in hundreds of diverse cases, ranging from the construction of termite hills to the flight of birds. The development of chaos theory beginning in the 60s further illustrated how very simple systems could give rise to very complicated and counter-intuitive patterns and behaviour that are not obvious from the identities of the individual components. […]
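As an aside, the logistic map is perhaps the simplest way to see that last claim about chaos for yourself: a one-line quadratic recurrence whose long-run behaviour flips from a stable fixed point to fully chaotic as a single parameter r varies. A minimal sketch (parameter values chosen purely for illustration):

```python
def logistic_orbit(r, x0=0.2, n=50):
    """Iterate x -> r * x * (1 - x) and return the trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.8: the orbit settles down to a fixed point.
# r = 4.0: fully chaotic; nearby starting points diverge quickly.
print(logistic_orbit(2.8)[-3:])
print(logistic_orbit(4.0)[-3:])
```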

[…] Many scientists and philosophers have contributed to considered critiques of reductionism and an appreciation of emergence since Anderson wrote his paper. (…) These thinkers make the point that not only does reductionism fail in practice (because of the sheer complexity of the systems it purports to explain), but it also fails in principle on a deeper level. […]

[…] An even more forceful proponent of this contingency-based critique of reductionism is the complexity theorist Stuart Kauffman, who has laid out his thoughts in two books. Just like Anderson, Kauffman does not deny the great value of reductionism in illuminating our world, but he also points out the factors that greatly limit its application. One of his favourite examples is the role of contingency in evolution and the object of his attention is the mammalian heart. Kauffman makes the case that no amount of reductionist analysis could tell you that the main function of the heart is to pump blood. Even in the unlikely case that you could predict the structure of hearts and the bodies that house them starting from the Higgs boson, such a deductive process could never tell you that of all the possible functions of the heart, the most important one is to pump blood. This is because the blood-pumping action of the heart is as much a result of historical contingency and the countless chance events that led to the evolution of the biosphere as it is of its bottom-up construction from atoms, molecules, cells and tissues. […]

[…] Reductionism then falls woefully short when trying to explain two things: origins and purpose. And one can see that if it has problems even when dealing with left-handed amino acids and human hearts, it would be in much more dire straits when attempting to account for, say, kin selection or geopolitical conflict. The fact is that each of these phenomena is better explained by fundamental principles operating at its own level. […]

[…] Every time the end of science has been announced, science itself proved that claims of its demise were vastly exaggerated. Firstly, reductionism will always be alive and kicking since the general approach of studying anything by breaking it down into its constituents will continue to be enormously fruitful. But more importantly, it’s not so much the end of reductionism as the beginning of a more general paradigm that combines reductionism with new ways of thinking. The limitations of reductionism should be seen as a cause not for despair but for celebration since it means that we are now entering new, uncharted territory. […]

Figure (click to enlarge) – Time dependence of FAO Food Price Index from January 2004 to May 2011. Red dashed vertical lines correspond to beginning dates of “food riots” and protests associated with the major recent unrest in North Africa and the Middle East. The overall death toll is reported in parentheses [26-55]. Blue vertical line indicates the date, December 13, 2010, on which we submitted a report to the U.S. government, warning of the link between food prices, social unrest and political instability [56]. Inset shows FAO Food Price Index from 1990 to 2011. [From arXiv:1108.2455, page 3]

“Poverty is the parent of revolution and crime.” ~ Aristotle.

By crossing data on food prices and food price peaks, along with an ongoing trend of increasing prices, with the dates of riots around the world, three of my colleagues at NECSI – the New England Complex Systems Institute (link), Boston – found a specific food price threshold above which protests become likely. In doing so, they unveiled a model that accurately explained why the waves of unrest that swept the world in 2008 and 2011 crashed when they did. That was the past. The NECSI team, however, expects the perilous trend in rising food prices to continue (link). Even before extreme weather scrambled food prices this year, their 2011 report predicted that the next great breach would occur in August 2013, and that the risk of more worldwide rioting would follow. So, if trends hold, this complex-systems model says we are less than one year and counting from a fireball of global unrest.
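The heart of the model is a thresholding step: unrest becomes likely once the FAO Food Price Index crosses a critical value (around 210 in their analysis). The sketch below shows just that step; the index values are an invented toy series standing in for the real FAO data, not actual figures.

```python
# Flag months where the food price index exceeds the unrest threshold.
# The threshold value (~210) follows the Lagi et al. analysis; the
# index values below are an invented toy series, not real FAO data.
THRESHOLD = 210.0

toy_index = {
    "2007-12": 186.0, "2008-04": 214.0, "2008-06": 224.0,
    "2009-06": 152.0, "2010-12": 215.0, "2011-02": 238.0,
}

risky_months = [m for m, v in sorted(toy_index.items()) if v > THRESHOLD]
print(risky_months)  # months in which the model deems protests likely
```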

The abstract and PDF link to their work follow:

[…] Social unrest may reflect a variety of factors such as poverty, unemployment, and social injustice. Despite the many possible contributing factors, the timing of violent protests in North Africa and the Middle East in 2011 as well as earlier riots in 2008 coincides with large peaks in global food prices. We identify a specific food price threshold above which protests become likely. These observations suggest that protests may reflect not only long-standing political failings of governments, but also the sudden desperate straits of vulnerable populations. If food prices remain high, there is likely to be persistent and increasing global social disruption. Underlying the food price peaks we also found an ongoing trend of increasing prices. We extrapolate these trends and identify a crossing point to the domain of high impacts, even without price peaks, in 2012-2013. This implies that avoiding global food crises and associated social unrest requires rapid and concerted action. […] in Marco Lagi, Karla Z. Bertrand and Yaneer Bar-Yam, “The Food Crises and Political Instability in North Africa and the Middle East”, arXiv:1108.2455, August 10, 2011. [PDF link]

No, not the Grand Canyon, nor the Epstein & Axtell Sugarscape (link) this time, but a soundscape: a landscape made of sounds, or grooves. Look at this as an ancient form of encapsulating data. Taken by Chris Supranowitz, a researcher at The Institute of Optics at the University of Rochester (US), the image depicts a single groove on a vinyl record magnified 1000 times, using electron microscopy. Dark bits are the tops of the grooves, i.e. the uncut vinyl, while the even darker little bumps are dust on the record (e.g. centre right). For more images check SynthGear, and find out (image link) what they discovered when magnifying that image further still!

Remove one network edge and see what happens. Then, two… etc. This is the first illustration in Mark Buchanan’s “Nexus: Small Worlds and the Groundbreaking Science of Networks”, 2002 – Norton, New York (Prelude, page 17), representing a portion of the food web for the Benguela ecosystem, located off the western coast of South Africa (from Peter Yodzis). For a joint review of three general books on complex networks, including Barabási’s “Linked”, Duncan Watts’ “Small Worlds” and Buchanan’s “Nexus”, pay a visit to JASSS – the Journal of Artificial Societies and Social Simulation – and its ‘a review of three books’ entry by Frédéric Amblard (link).
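Buchanan’s thought experiment is easy to reproduce: build a small graph, delete one edge at a time, and watch when the network fragments. A minimal sketch using the networkx library (the toy food web below is invented for illustration, not Yodzis’ actual Benguela data):

```python
import networkx as nx

# Invented toy food web: nodes are species, edges are feeding links.
# (Illustrative only; not the actual Benguela ecosystem data.)
G = nx.Graph([
    ("phytoplankton", "zooplankton"), ("zooplankton", "anchovy"),
    ("zooplankton", "sardine"), ("anchovy", "hake"),
    ("sardine", "hake"), ("hake", "seal"), ("sardine", "seabird"),
])

# Remove each edge in turn and report whether the web fragments.
for edge in list(G.edges()):
    G.remove_edge(*edge)
    parts = nx.number_connected_components(G)
    print(f"removed {edge}: {parts} component(s)")
    G.add_edge(*edge)  # restore before testing the next edge
```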

[…] People should learn how to play Lego with their minds. Concepts are building bricks […] ~ V. Ramos, 2002.
