Video – Animated short film (by Shulamit Serfaty) based on Italo Calvino’s story “The distance from the moon”, in Le Cosmicomiche (Cosmicomics), 1st edition, Einaudi, Italy, 1965.

[…] In reality, from the top of the ladder, standing erect on the last rung, you could just touch the Moon if you held your arms up. We had taken the measurements carefully (we didn’t yet suspect that she was moving away from us); the only thing you had to be very careful about was where you put your hands. I always chose a scale that seemed fast (we climbed up in groups of five or six at a time), then I would cling first with one hand, then with both, and immediately I would feel ladder and boat drifting away from below me, and the motion of the Moon would tear me from the Earth’s attraction. Yes, the Moon was so strong that she pulled you up; you realized this the moment you passed from one to the other: you had to swing up abruptly, with a kind of somersault, grabbing the scales, throwing your legs over your head, until your feet were on the Moon’s surface. Seen from the Earth, you looked as if you were hanging there with your head down, but for you, it was the normal position, and the only odd thing was that when you raised your eyes you saw the sea above you, glistening, with the boat and the others upside down, hanging like a bunch of grapes from the vine. […] in Italo Calvino, “The distance from the moon”, Le Cosmicomiche (Cosmicomics), 1st edition, Einaudi, Italy, 1965.

Picture – (on the cover) “Calvino does what very few writers can do: he describes imaginary worlds with the most extraordinary precision and beauty…” – Gore Vidal, The New York Review of Books.

Finally, one of Pixar’s most recent animated short films, “La Luna”, released last year. Directed by Enrico Casarosa, Pixar, June 2011:

“What bothers us about primordial beauty is that it is no longer characteristic. Unspoiled places sadden us because they are, in an important sense, no longer true.” – Robert Adams.

Living and working mostly in Colorado for nearly 30 years, Robert Adams was concerned above all with a palimpsest of alterations unfolding in front of his camera across the plain American West. Even if imperceptible to so many, the landscape in turmoil was his medium. And it was there that he found out what beauty is not. In 1975, New Topographics encapsulated an evolving man-altered landscape in an exhibition that ended up signalling a pivotal moment in American landscape photography. His sensibility and aesthetic approach remain pertinent among us today. One need only replace random, lost inanimate landscapes with random, lonely people.

Recent research has increasingly focused on the relationship between human-human interaction, social networks (no, not Facebook) and other areas of human activity, such as health. Nicholas Christakis (Harvard Univ. research link) points out that people are interconnected, and that, as a result, their health is interconnected as well. This research engages two types of phenomena: the social, mathematical, and biological rules governing how social networks form (“Connection“) and the biological and social implications of how they operate to influence thoughts, feelings, and behaviours (“Contagion“), as in the self-organized stigmergy-like dynamics of Cognitive Collective Perception (link).

Above, Nicholas Christakis (in a 56 m. documentary lecture produced by The Floating University, Sept. 2011) discusses the obvious tension and delicate balance between agency (one individual’s choices and actions) and structure (our collective responsibility), where structure refers not only to our co-evolving, dynamic societal environment but also to the permanently unfolding, entangled topological structure of complex networks, such as human-human social networks, while asking: if you’re so free, why do you follow others? The documentary’s synopsis (YouTube link) states:

“If you think you’re in complete control of your destiny or even your own actions, you’re wrong. Every choice you make, every behaviour you exhibit, and even every desire you have finds its roots in the social universe. Nicholas Christakis explains why individual actions are inextricably linked to sociological pressures; whether you’re absorbing altruism performed by someone you’ll never meet or deciding to jump off the Golden Gate Bridge, collective phenomena affect every aspect of your life. By the end of the lecture Christakis has revealed a startling new way to understand the world that ranks sociology as one of the most vitally important social sciences.”

While cooperation is central to the success of human societies and is widespread, it nonetheless poses a challenge in both the social and biological sciences: how can this high level of cooperation be maintained in the face of possible exploitation? One answer involves networked interactions and population structure.

As perceived, the balance between homophily (where “birds of a feather flock together”) and heterophily (where genotypes are negatively correlated) requires further research. In fact, in humans, one of the most replicated findings in the social sciences is that people tend to associate with other people whom they resemble, a process known precisely as homophily. As Christakis points out, although phenotypic resemblance between friends might partly reflect the operation of social influence, our genotypes are not materially susceptible to change. Therefore, genotypic resemblance could result only from a process of selection. Such genotypic selection might in turn take several forms. In short, let me highlight two examples. What follows are two papers, as well as a quick reference (image below) to a recent general-audience book of his:

1) Rewiring your network fosters cooperation:

“Human populations are both highly cooperative and highly organized. Human interactions are not random but rather are structured in social networks. Importantly, ties in these networks often are dynamic, changing in response to the behavior of one’s social partners. This dynamic structure permits an important form of conditional action that has been explored theoretically but has received little empirical attention: People can respond to the cooperation and defection of those around them by making or breaking network links. Here, we present experimental evidence of the power of using strategic link formation and dissolution, and the network modification it entails, to stabilize cooperation in sizable groups. Our experiments explore large-scale cooperation, where subjects’ cooperative actions are equally beneficial to all those with whom they interact. Consistent with previous research, we find that cooperation decays over time when social networks are shuffled randomly every round or are fixed across all rounds. We also find that, when networks are dynamic but are updated only infrequently, cooperation again fails. However, when subjects can update their network connections frequently, we see a qualitatively different outcome: Cooperation is maintained at a high level through network rewiring. Subjects preferentially break links with defectors and form new links with cooperators, creating an incentive to cooperate and leading to substantial changes in network structure. Our experiments confirm the predictions of a set of evolutionary game theoretic models and demonstrate the important role that dynamic social networks can play in supporting large-scale human cooperation.”, abstract in D.G. Rand, S. Arbesman, and N.A. Christakis, “Dynamic Social Networks Promote Cooperation in Experiments with Humans,” PNAS: Proceedings of the National Academy of Sciences (October 2011). [full PDF];
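The mechanism described in the abstract above can be illustrated with a toy simulation. This is only a hedged sketch of the general idea (agents imitate better-scoring neighbours, and cooperators may cut ties to defectors and relink to cooperators), not the authors’ actual experimental protocol or payoff scheme; all parameter values here are arbitrary:

```python
import random

def simulate(n=30, rounds=40, rewire=True, seed=1):
    """Toy cooperation game on a random graph (illustrative only, not
    the paper's experimental design): cooperators pay 1 per neighbour
    and give each neighbour 2; agents imitate better-scoring neighbours;
    with rewiring on, each cooperator may cut a tie to a defector and
    relink to another cooperator every round."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.2:
                adj[i].add(j)
                adj[j].add(i)
    coop = [rng.random() < 0.6 for _ in range(n)]
    for _ in range(rounds):
        pay = [0.0] * n
        for i in range(n):
            if coop[i]:
                pay[i] -= len(adj[i])          # cost of cooperating
                for j in adj[i]:
                    pay[j] += 2.0              # benefit to each partner
        for i in range(n):                     # imitation step
            if adj[i]:
                j = rng.choice(sorted(adj[i]))
                if pay[j] > pay[i]:
                    coop[i] = coop[j]
        if rewire:                             # strategic link updating
            for i in range(n):
                if coop[i]:
                    bad = sorted(j for j in adj[i] if not coop[j])
                    new = sorted(k for k in range(n)
                                 if coop[k] and k != i and k not in adj[i])
                    if bad and new:
                        j, k = rng.choice(bad), rng.choice(new)
                        adj[i].discard(j); adj[j].discard(i)
                        adj[i].add(k); adj[k].add(i)
    return sum(coop) / n
```

Comparing `simulate(rewire=True)` against `simulate(rewire=False)` over several seeds lets one probe, in this toy setting, whether frequent link updating sustains a higher fraction of cooperators, as the experiments report for human subjects.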

Picture – (book cover) Along with James Fowler, Christakis has also authored a general-audience book on social networks: Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives, 2011 (book link). For a recent book review, access here.

2) We are surrounded by a sea of our friends’ genes:

“It is well known that humans tend to associate with other humans who have similar characteristics, but it is unclear whether this tendency has consequences for the distribution of genotypes in a population. Although geneticists have shown that populations tend to stratify genetically, this process results from geographic sorting or assortative mating, and it is unknown whether genotypes may be correlated as a consequence of nonreproductive associations or other processes. Here, we study six available genotypes from the National Longitudinal Study of Adolescent Health to test for genetic similarity between friends. Maps of the friendship networks show clustering of genotypes and, after we apply strict controls for population stratification, the results show that one genotype is positively correlated (homophily) and one genotype is negatively correlated (heterophily). A replication study in an independent sample from the Framingham Heart Study verifies that DRD2 exhibits significant homophily and that CYP2A6 exhibits significant heterophily. These unique results show that homophily and heterophily obtain on a genetic (indeed, an allelic) level, which has implications for the study of population genetics and social behavior. In particular, the results suggest that association tests should include friends’ genes and that theories of evolution should take into account the fact that humans might, in some sense, be metagenomic with respect to the humans around them.”, abstract in J.H. Fowler, J.E. Settle, and N.A. Christakis, “Correlated Genotypes in Friendship Networks,” PNAS: Proceedings of the National Academy of Sciences (January 2011). [full PDF].

Four different snapshots (click to enlarge) from one of my latest books, recently published in Japan: Ajith Abraham, Crina Grosan, Vitorino Ramos (Eds.), “Swarm Intelligence in Data Mining” (群知能と  データマイニング), Tokyo Denki University press [TDU], Tokyo, Japan, July 2012.

Figure – Attractor basins (fig.2 pp.6 on Mária Ercsey-Ravasz and Zoltán Toroczkai, “Optimization hardness as transient chaos in an analog approach to constraint satisfaction“, Nature Physics, vol. 7, p. 966-970, 2011.)

Mária Ercsey-Ravasz and Zoltán Toroczkai have proposed a way of mapping satisfiability problems to differential equations and a deterministic algorithm that solves them in polynomial continuous time at the expense of exponential energy functions (so the discrete approximation of the algorithm does not run in polynomial time, and an analogue system would need exponential resources).

The map assigns a phase space to a problem; the algorithm chooses random initial conditions from within that phase space.  In the graphs above and below, they pick a 2-d subspace of the phase space and for each initial point in that space they illustrate 1) the particular solution the algorithm finds, 2) the corresponding “solution cluster”, an equivalence class of solutions that identifies two solutions if they differ in exactly one variable assignment, and 3) the time it takes to solve the problem.  Each row adds another clause to satisfy.

The especially interesting part of the paper is the notion of an escape rate, the proportion of the trajectories still searching for a solution after a time t. In a companion paper (The Chaos Within Sudoku, Nature, August 2012), they show that the escape rate for Sudoku combinatorial instances correlates strongly with human judgements of hardness. This escape rate is similar to the Kolmogorov complexity in that it gives a notion of hardness to individual problem instances rather than to classes of problems. The full paper can be retrieved from arXiv: Mária Ercsey-Ravasz and Zoltán Toroczkai, “Optimization hardness as transient chaos in an analog approach to constraint satisfaction“, Nature Physics, vol. 7, p. 966-970, 2011. (at arXiv on August 2012).
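The escape-rate idea can be mimicked discretely. The sketch below is not the paper’s continuous-time analog dynamics; it merely estimates, for a toy random-walk SAT solver, the fraction of random trajectories still unsolved after t steps, which plays the same conceptual role as the escape rate:

```python
import random

def walk_unsolved_fraction(clauses, nvars, t, trials=200, seed=0):
    """Fraction of random local-search trajectories still without a
    satisfying assignment after t flips: a discrete, toy analogue of
    the paper's escape rate (not its actual analog dynamics)."""
    rng = random.Random(seed)

    def unsat(assign):
        # clauses are tuples of signed variable indices, e.g. (1, -3)
        return [c for c in clauses
                if not any((assign[abs(l)] > 0) == (l > 0) for l in c)]

    stuck = 0
    for _ in range(trials):
        assign = {v: rng.choice([-1, 1]) for v in range(1, nvars + 1)}
        for _ in range(t):
            bad = unsat(assign)
            if not bad:
                break
            # flip a random variable from a random violated clause
            v = abs(rng.choice(rng.choice(bad)))
            assign[v] = -assign[v]
        if unsat(assign):
            stuck += 1
    return stuck / trials

# a tiny satisfiable 2-SAT instance: (x1 v x2) & (~x1 v x3) & (~x2 v ~x3)
example = [(1, 2), (-1, 3), (-2, -3)]
```

Evaluating `walk_unsolved_fraction` for growing t traces out a decay curve; the paper’s point is that on hard instances the corresponding continuous-time decay is anomalously slow, and its rate tracks perceived hardness.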

Figure – Attractor basins for 3-XORSAT (fig.8 pp.18 on Mária Ercsey-Ravasz and Zoltán Toroczkai, “Optimization hardness as transient chaos in an analog approach to constraint satisfaction“, Nature Physics, vol. 7, p. 966-970, 2011.)

Figure (click to enlarge) – Orders of common functions (via Wikipedia): A list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case, c is a constant and n increases without bound. The slower-growing functions are generally listed first. For each case, several examples are given.
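As a quick numeric companion to that table, one can evaluate a few of the common growth orders side by side and watch the gaps widen as n grows:

```python
import math

# common running-time growth orders, evaluated side by side
growth = {
    "log n":   lambda n: math.log2(n),
    "n":       lambda n: float(n),
    "n log n": lambda n: n * math.log2(n),
    "n^2":     lambda n: float(n ** 2),
    "2^n":     lambda n: float(2 ** n),
}

for n in (8, 16, 32):
    print(n, {name: f(n) for name, f in growth.items()})
```

Already at n = 32 the exponential term dwarfs the polynomial ones, which is the whole point of the table: constants matter far less than the order of growth.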

“… words are not numbers, nor even signs. They are animals, alive and with a will of their own. Put together, they are invariably less or more than their sum. Words die in antisepsis. Asked to be neutral, they display allegiances and stubborn propensities. They assume the color of their new surroundings, like chameleons; they perversely develop echoes.” Guy Davenport, “Another Odyssey”, 1967. [above: painting by Mark Rothko – untitled]

Image – Reese Inman, DIVERGENCE II (2008), acrylic on panel, 30 x 30 in, in Remix (Boston, 2008), a solo exhibition of handmade computer art works by Reese Inman, Gallery NAGA in Boston.

Apophenia is the experience of seeing meaningful patterns or connections in random or meaningless data. The term was coined in 1958 by Klaus Conrad, who defined it as the “unmotivated seeing of connections” accompanied by a “specific experience of an abnormal meaningfulness”, but it has come to represent the human tendency to seek patterns in random information in general (such as with gambling). In statistics, apophenia is known as a Type I error – the identification of false patterns in data. It may be compared with a so-called false positive in other test situations. Two correlated terms are synchronicity and pareidolia (from Wikipedia):
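The “Type I error” reading of apophenia is easy to demonstrate: fix any significance threshold and pure noise will cross it at a predictable rate. A small sketch follows; the 0.36 cutoff is an approximation of the two-sided 5% critical correlation for samples of size 30:

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def false_pattern_rate(trials=2000, n=30, threshold=0.36, seed=0):
    """Fraction of pairs of *independent* random series whose sample
    correlation exceeds the threshold: patterns 'found' in pure noise.
    0.36 roughly approximates the two-sided 5% critical |r| for n = 30."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        ys = [rng.random() for _ in range(n)]
        if abs(pearson(xs, ys)) > threshold:
            hits += 1
    return hits / trials
```

Run it and roughly one pair in twenty of completely unrelated series looks “significantly” correlated, which is exactly the structural opening apophenia exploits.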

Synchronicity: Carl Jung coined the term synchronicity for the “simultaneous occurrence of two meaningful but not causally connected events” creating a significant realm of philosophical exploration. This attempt at finding patterns within a world where coincidence does not exist possibly involves apophenia if a person’s perspective attributes their own causation to a series of events. “Synchronicity therefore means the simultaneous occurrence of a certain psychic state with one or more external events which appear as meaningful parallels to a momentary subjective state”. (C. Jung, 1960).

Pareidolia: Pareidolia is a type of apophenia involving the perception of images or sounds in random stimuli, for example, hearing a ringing phone while taking a shower. The noise produced by the running water gives a random background from which the patterned sound of a ringing phone might be “produced”. A more common human experience is perceiving faces in inanimate objects; this phenomenon is not surprising in light of how much processing the brain does in order to memorize and recall the faces of hundreds or thousands of different individuals. In one respect, the brain is a facial recognition, storage, and recall machine – and it is very good at it. A by-product of this acumen at recognizing faces is that people see faces even where there is no face: the headlights and grill of an automobile can appear to be “grinning”, individuals around the world can see the “Man in the Moon”, and even children will identify a drawing consisting of only three circles and a line as a face – everyday examples of this.

Fig.1 – (click to enlarge) The optimal shortest path among N=1265 points depicting a Portuguese Navalheira crab as a result of one of our latest Swarm-Intelligence based algorithms. The problem of finding the shortest path among N different points in space is NP-hard, known as the Travelling Salesman Problem (TSP), being one of the major and hardest benchmarks in Combinatorial Optimization (link) and Artificial Intelligence. (V. Ramos, D. Rodrigues, 2012)

This summer my kids grabbed a tiny Portuguese Navalheira crab on the shore. After a small photo session and some baby-sitting with a lettuce leaf, it was time to release it back into the ocean. He not only survived my kids; he is now entitled to a new World Wide Web online life. After the shortest-path Sardine (link) with 1084 points, here is the Crab with 1265 points. The algorithm ran for as few as 110 iterations.

Fig. 2 – (click to enlarge) Our 1265 initial points depicting a TSP Portuguese Navalheira crab. Could you already envision a minimal tour between all these points?

As usual in Travelling Salesman Problems (TSP), we start with a set of points, in our case 1265 points or cities (fig. 2). Given a list of cities and their pairwise distances, the task is to find the shortest possible tour that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods.
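Our swarm-based algorithm itself is not reproduced here, but the TSP setting is easy to sketch with a classical baseline: greedy nearest-neighbour construction followed by 2-opt local improvement. This is a minimal illustrative stand-in, not the algorithm used for the crab:

```python
import math

def tour_length(pts, tour):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(pts, start=0):
    """Greedy construction: always visit the closest unvisited city."""
    left = set(range(len(pts))) - {start}
    tour = [start]
    while left:
        nxt = min(left, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        left.remove(nxt)
        tour.append(nxt)
    return tour

def two_opt(pts, tour):
    """Local improvement: reverse segments while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(pts, cand) < tour_length(pts, tour) - 1e-12:
                    tour, improved = cand, True
    return tour
```

On 1265 points this naive 2-opt would be slow, which is one reason population-based heuristics such as ant colony or other swarm methods are attractive for instances of this size.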

Fig. 3 – (click to enlarge) Again the shortest path Navalheira crab, where the optimal contour path (in black: first fig. above) with 1265 points (or cities) was filled in dark orange.

TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept of a city represents, for example, customers, soldering points, or DNA fragments, and the concept of distance represents travelling times, costs, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder.

What follows (fig. 4) is the original crab photo after image segmentation and just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem. The algorithm was then fed the extracted x,y coordinates of these data points (fig. 2) in order for it to discover the minimal path, in just 110 iterations. For extra details, pay a visit to the shortest-path Sardine (link) done earlier.

Fig. 4 – (click to enlarge) The original crab photo after some image processing as well as segmentation and just before adding Gaussian noise in order to retrieve several data points for the initial TSP problem.

How wings are attached to the backs of Angels, Craig Welsh (1996) – produced by the National Film Board of Canada. In this surreal exposition, we meet a man obsessed with control. His intricate gadgets manipulate yet insulate, as his science dissects and reduces. How exactly are wings attached to the backs of angels? In this invented world drained of emotion, where everything goes through the motions, he is brushed by indefinite longings. Whether he can transcend his obsessions and fears is the heart of the matter (from Vimeo).

Figure – A classic example of emergence: the exact shape of a termite mound is not reducible to the actions of individual termites, even if there are already computer models that can achieve it (check “Stigmergic construction” or the full current blog Stigmergy tag for more).

“The world can no longer be understood like a chessboard… It’s a Jackson Pollock painting.” ~ Carne Ross, 2012.

[…] As pointed by Langton, there is more to life than mechanics – there is also dynamics. Life depends critically on principles of dynamical self-organization that have remained largely untouched by traditional analytic methods. There is a simple explanation for this – these self-organized dynamics are fundamentally non-linear phenomena, and non-linear phenomena in general depend critically on the interactions between parts: they necessarily disappear when parts are treated in isolation from one another, which is the basis for any analytic method. Rather, non-linear phenomena are most appropriately treated by a synthetic approach, where synthesis means “the combining of separate elements or substances to form a coherent whole”. In non-linear systems, the parts must be treated in each other’s presence, rather than independently from one another, because they behave very differently in each other’s presence than we would expect from a study of the parts in isolation. […] in Vitorino Ramos, 2002, /0412077.

What follows are passages from an important article on the consequences for Science at the moment of the recent discovery of the Higgs boson. Written by Ashutosh Jogalekar, “The Higgs boson and the future of science” (link) the article appeared at the Scientific American blog section (July 2012). And it starts discussing reductionism or how the Higgs boson points us to the culmination of reductionist thinking:

[…] And I say this with a suspicion that the Higgs boson may be the most fitting tribute to the limitations of what has been the most potent philosophical instrument of scientific discovery – reductionism. […]

[…] Yet as we enter the second decade of the twenty-first century, it is clear that reductionism as a principal weapon in our arsenal of discovery tools is no longer sufficient. Consider some of the most important questions facing modern science, almost all of which deal with complex, multi factorial systems. How did life on earth begin? How does biological matter evolve consciousness? What are dark matter and dark energy? How do societies cooperate to solve their most pressing problems? What are the properties of the global climate system? It is interesting to note at least one common feature among many of these problems; they result from the build-up rather than the breakdown of their operational entities. Their signature is collective emergence, the creation of attributes which are greater than the sum of their constituent parts. Whatever consciousness is for instance, it is definitely a result of neurons acting together in ways that are not obvious from their individual structures. Similarly, the origin of life can be traced back to molecular entities undergoing self-assembly and then replication and metabolism, a process that supersedes the chemical behaviour of the isolated components. The puzzle of dark matter and dark energy also have as their salient feature the behaviour of matter at large length and time scales. Studying cooperation in societies essentially involves studying group dynamics and evolutionary conflict. The key processes that operate in the existence of all these problems seem to almost intuitively involve the opposite of reduction; they all result from the agglomeration of molecules, matter, cells, bodies and human beings across a hierarchy of unique levels. In addition, and this is key, they involve the manifestation of unique principles emerging at every level that cannot be merely reduced to those at the underlying level. […]

[…] While emergence had been implicitly appreciated by scientists for a long time, its modern salvo was undoubtedly a 1972 paper in Science by the Nobel Prize winning physicist Philip Anderson (link) titled “More is Different” (PDF), a title that has turned into a kind of clarion call for emergence enthusiasts. In his paper Anderson (who incidentally first came up with the so-called Higgs mechanism) argued that emergence was nothing exotic; for instance, a lump of salt has properties very different from those of its highly reactive components sodium and chlorine. A lump of gold evidences properties like color that don’t exist at the level of individual atoms. Anderson also appealed to the process of broken symmetry, invoked in all kinds of fundamental events – including the existence of the Higgs boson – as being instrumental for emergence. Since then, emergent phenomena have been invoked in hundreds of diverse cases, ranging from the construction of termite hills to the flight of birds. The development of chaos theory beginning in the 60s further illustrated how very simple systems could give rise to very complicated and counter-intuitive patterns and behaviour that are not obvious from the identities of the individual components. […]

[…] Many scientists and philosophers have contributed to considered critiques of reductionism and an appreciation of emergence since Anderson wrote his paper. (…) These thinkers make the point that not only does reductionism fail in practice (because of the sheer complexity of the systems it purports to explain), but it also fails in principle on a deeper level. […]

[…] An even more forceful proponent of this contingency-based critique of reductionism is the complexity theorist Stuart Kauffman who has laid out his thoughts in two books. Just like Anderson, Kauffman does not deny the great value of reductionism in illuminating our world, but he also points out the factors that greatly limit its application. One of his favourite examples is the role of contingency in evolution and the object of his attention is the mammalian heart. Kauffman makes the case that no amount of reductionist analysis could tell you that the main function of the heart is to pump blood. Even in the unlikely case that you could predict the structure of hearts and the bodies that house them starting from the Higgs boson, such a deductive process could never tell you that of all the possible functions of the heart, the most important one is to pump blood. This is because the blood-pumping action of the heart is as much a result of historical contingency and the countless chance events that led to the evolution of the biosphere as it is of its bottom-up construction from atoms, molecules, cells and tissues. […]

[…] Reductionism then falls woefully short when trying to explain two things; origins and purpose. And one can see that if it has problems even when dealing with left-handed amino acids and human hearts, it would be in much more dire straits when attempting to account for say kin selection or geopolitical conflict. The fact is that each of these phenomena are better explained by fundamental principles operating at their own levels. […]

[…] Every time the end of science has been announced, science itself proved that claims of its demise were vastly exaggerated. Firstly, reductionism will always be alive and kicking since the general approach of studying anything by breaking it down into its constituents will continue to be enormously fruitful. But more importantly, it’s not so much the end of reductionism as the beginning of a more general paradigm that combines reductionism with new ways of thinking. The limitations of reductionism should be seen as a cause not for despair but for celebration since it means that we are now entering new, uncharted territory. […]

Figure (click to enlarge) – Time dependence of FAO Food Price Index from January 2004 to May 2011. Red dashed vertical lines correspond to beginning dates of “food riots” and protests associated with the major recent unrest in North Africa and the Middle East. The overall death toll is reported in parentheses [26-55]. Blue vertical line indicates the date, December 13, 2010, on which we submitted a report to the U.S. government, warning of the link between food prices, social unrest and political instability [56]. Inset shows FAO Food Price Index from 1990 to 2011. [From arXiv:1108.2455, page 3]

“Poverty is the parent of revolution and crime.” ~ Aristotle.

By crossing data on food prices and food price peaks, with an ongoing trend of increasing prices, as well as the dates of riots around the world, three of my colleagues at NECSI – the New England Complex Systems Institute (link), Boston – found a specific food price threshold above which protests become likely. In doing so, they unveiled a model that accurately explained why the waves of unrest that swept the world in 2008 and 2011 crashed when they did. That was the past. The NECSI team, however, expects a perilous trend in rising food prices to continue (link). Even before the extreme weather scrambled food prices this year, their 2011 report predicted that the next great breach would occur in August 2013, and that the risk of more worldwide rioting would follow. So, if trends hold, this complex-systems model says we are less than one year and counting from a fireball of global unrest.

The abstract of their work and the PDF link follow:

[…] Social unrest may reflect a variety of factors such as poverty, unemployment, and social injustice. Despite the many possible contributing factors, the timing of violent protests in North Africa and the Middle East in 2011 as well as earlier riots in 2008 coincides with large peaks in global food prices. We identify a specific food price threshold above which protests become likely. These observations suggest that protests may reflect not only long-standing political failings of governments, but also the sudden desperate straits of vulnerable populations. If food prices remain high, there is likely to be persistent and increasing global social disruption. Underlying the food price peaks we also found an ongoing trend of increasing prices. We extrapolate these trends and identify a crossing point to the domain of high impacts, even without price peaks, in 2012-2013. This implies that avoiding global food crises and associated social unrest requires rapid and concerted action. […] in Marco Lagi, Karla Z. Bertrand and Yaneer Bar-Yam, “The Food Crises and Political Instability in North Africa and the Middle East“, arXiv:1108.2455, August 10, 2011. [PDF link]
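The threshold idea at the heart of the abstract can be sketched as a one-parameter classifier: choose the price level that best separates months with unrest from months without. This is a toy illustration with invented numbers, not NECSI’s actual fitting procedure or the FAO data:

```python
def best_threshold(prices, riot):
    """Pick the price level that best separates months with unrest from
    months without, classifying 'price above threshold => unrest' and
    maximizing accuracy (a toy version of a threshold fit)."""
    best, best_acc = None, -1.0
    for t in sorted(set(prices)):
        acc = sum((p > t) == r for p, r in zip(prices, riot)) / len(prices)
        if acc > best_acc:
            best, best_acc = t, acc
    return best, best_acc

# hypothetical illustrative numbers, NOT the FAO Food Price Index
prices = [140, 150, 160, 220, 230, 150, 145, 225, 240, 155]
riot   = [False, False, False, True, True, False, False, True, True, False]
```

On this made-up series the fitted threshold cleanly splits calm months from unrest months; the substantive claim of the paper is that the real FAO index admits a similar split, and that the underlying price trend is drifting toward it.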

No, not the Grand Canyon, nor the Epstein & Axtell Sugarscape (link) this time, but a soundscape: a landscape made of sounds, or grooves. Look at this as an ancient form of encapsulating data. Taken by Chris Supranowitz, a researcher at The Institute of Optics at the University of Rochester (US), the image depicts a single groove on a vinyl record magnified 1000 times, using electron microscopy. Dark bits are the tops of the grooves, i.e. the uncut vinyl, while the even darker little bumps are dust on the record (e.g. centre right). For more images check SynthGear, and find out (image link) what they discovered when they kept magnifying that image further still!

Remove one network edge and see what happens. Then two… etc. This is the first illustration in Mark Buchanan’s “Nexus: Small-Worlds and the Ground-breaking Science of Networks”, 2002 – Norton, New York (Prelude, page 17), representing a portion of the food web for the Benguela ecosystem, located off the western coast of South Africa (from Peter Yodzis). For a joint review of three general books on complex networks, including Barabási’s “Linked“, Duncan Watts’ “Small-Worlds” and Buchanan’s “Nexus”, pay a visit to JASSS – Journal of Artificial Societies and Social Simulation, ‘a review of three books’ entry by Frédéric Amblard (link).

Figure (click to enlarge) – Cover of one of my books, published last month (10 July 2012): “Swarm Intelligence in Data Mining”, recently translated and edited in Japan (by Tokyo Denki University press [TDU]). Cover image from (url). The title was translated into 群知能とデータマイニング. Funny, also, to see my own name translated into Japanese for the first time – I wonder if it’s Kanji. A brief synopsis follows:

(…) Swarm Intelligence (SI) is an innovative distributed intelligent paradigm for solving optimization problems that originally took its inspiration from biological examples of swarming, flocking and herding phenomena in vertebrates. Particle Swarm Optimization (PSO) incorporates swarming behaviours observed in flocks of birds, schools of fish, or swarms of bees, and even human social behaviour, from which the idea emerged. Ant Colony Optimization (ACO) deals with artificial systems that are inspired by the foraging behaviour of real ants, which are used to solve discrete optimization problems. Historically, the notion of finding useful patterns in data has been given a variety of names, including data mining, knowledge discovery, information extraction, etc. Data Mining is an analytic process designed to explore large amounts of data in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. In order to achieve this, data mining uses computational techniques from statistics, machine learning and pattern recognition. Data mining and Swarm Intelligence may seem not to have many properties in common. However, recent studies suggest that they can be used together for several real-world data mining problems, especially when other methods would be too expensive or difficult to implement. This book deals with the application of swarm intelligence methodologies in data mining. Addressing the various issues of swarm intelligence and data mining using different intelligent approaches is the novelty of this edited volume. This volume comprises 11 chapters, including an introductory chapter giving the fundamental definitions and some important research challenges. Chapters were selected on the basis of fundamental ideas/concepts rather than the thoroughness of techniques deployed. (…) (more)
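The PSO technique mentioned in the synopsis can be given a minimal, self-contained sketch. This is the textbook global-best variant with arbitrary parameter choices, not a chapter from the book:

```python
import random

def pso(f, dim=2, swarm=20, iters=100, seed=0,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best Particle Swarm Optimization, minimizing f.
    Each particle is pulled toward its own best-seen position (pbest)
    and the swarm-wide best (gbest), with inertia w on its velocity."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pval = [f(p) for p in pos]
    g = min(range(swarm), key=pval.__getitem__)
    gbest, gval = pbest[g][:], pval[g]       # swarm-wide best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

def sphere(x):
    """Simple convex test function with its minimum (0) at the origin."""
    return sum(c * c for c in x)
```

On the sphere function the swarm contracts quickly onto the origin; data-mining applications replace `f` with, say, a clustering or feature-selection objective.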

Video – TED lecture: Empathy, cooperation, fairness and reciprocity – caring about the well-being of others seems like a very human trait. But Frans de Waal shares some surprising videos of behavioural tests on primates and other mammals that show how many of these moral traits we all share. (TED, Nov. 2011, link).

Evolutionary explanations are built around the principle that all that natural selection can work with are the effects of behaviour – not the motivation behind it. This means there is only one logical starting point for evolutionary accounts, as explained by Trivers (2002, p. 6): “You begin with the effect of behaviour on actors and recipients; you deal with the problem of internal motivation, which is a secondary problem, afterwards. . . . [I]f you start with motivation, you have given up the evolutionary analysis at the outset.” ~ Frans B.M. de Waal, 2008.

Do animals have morals? And above all, did morality evolve? The question is pertinent in a broad range of quite different areas, such as Computer Science and Norm Generation (e.g. link for an MSc thesis) in bio-inspired Computation and Artificial Life, but here fresh answers come directly from Biology. Besides the striking video lecture above, what follows are two excerpts (abstract and conclusions) from a 2008 paper by Frans B.M. de Waal (Living Links Center lab., Emory University, link): de Waal, F.B.M. (2008). Putting the altruism back in altruism: The evolution of empathy. Ann. Rev. Psychol. 59: 279-300 (full PDF link):

(…) Abstract: Evolutionary theory postulates that altruistic behaviour evolved for the return-benefits it bears the performer. For return-benefits to play a motivational role, however, they need to be experienced by the organism. Motivational analyses should restrict themselves, therefore, to the altruistic impulse and its knowable consequences. Empathy is an ideal candidate mechanism to underlie so-called directed altruism, i.e., altruism in response to another’s pain, need, or distress. Evidence is accumulating that this mechanism is phylogenetically ancient, probably as old as mammals and birds. Perception of the emotional state of another automatically activates shared representations causing a matching emotional state in the observer. With increasing cognition, state-matching evolved into more complex forms, including concern for the other and perspective-taking. Empathy-induced altruism derives its strength from the emotional stake it offers the self in the other’s welfare. The dynamics of the empathy mechanism agree with predictions from kin selection and reciprocal altruism theory. (…)

(…) Conclusion: More than three decades ago, biologists deliberately removed the altruism from altruism. There is now increasing evidence that the brain is hardwired for social connection, and that the same empathy mechanism proposed to underlie human altruism (Batson 1991) may underlie the directed altruism of other animals. Empathy could well provide the main motivation making individuals who have exchanged benefits in the past continue doing so in the future. Instead of assuming learned expectations or calculations about future benefits, this approach emphasizes a spontaneous altruistic impulse and a mediating role of the emotions. It is summarized in the five conclusions below: 1. An evolutionarily parsimonious account (cf. de Waal 1999) of directed altruism assumes similar motivational processes in humans and other animals. 2. Empathy, broadly defined, is a phylogenetically ancient capacity. 3. Without the emotional engagement brought about by empathy, it is unclear what could motivate the extremely costly helping behavior occasionally observed in social animals. 4. Consistent with kin selection and reciprocal altruism theory, empathy favours familiar individuals and previous cooperators, and is biased against previous defectors. 5. Combined with perspective-taking abilities, empathy’s motivational autonomy opens the door to intentionally altruistic altruism in a few large-brained species. (…) in, de Waal, F.B.M. (2008). Putting the altruism back in altruism: The evolution of empathy. Ann. Rev. Psychol. 59: 279-300 (full PDF link).

Frans de Waal’s research does not end here, of course. He is a ubiquitous influence and writer on many related areas, such as: Cognition, Communication, Crowding/Conflict Resolution, Empathy and Altruism, Social Learning and Culture, Sharing and Cooperation and, last but not least, Behavioural Economics. All of his papers are freely available on-line, on a web page to which I vividly recommend a long visit.

Complex adaptive systems (CAS), including ecosystems, governments, biological cells, and markets, are characterized by intricate hierarchical arrangements of boundaries and signals. In ecosystems, for example, niches act as semi-permeable boundaries, and smells and visual patterns serve as signals; governments have departmental hierarchies with memoranda acting as signals; and so it is with other CAS. Despite a wealth of data and descriptions concerning different CAS, there remain many unanswered questions about “steering” these systems. In Signals and Boundaries, John Holland (Wikipedia entry) argues that understanding the origin of the intricate signal/border hierarchies of these systems is the key to answering such questions. He develops an overarching framework for comparing and steering CAS through the mechanisms that generate their signal/boundary hierarchies. Holland lays out a path for developing the framework that emphasizes agents, niches, theory, and mathematical models. He discusses, among other topics, theory construction; signal-processing agents; networks as representations of signal/boundary interaction; adaptation; recombination and reproduction; the use of tagged urn models (adapted from elementary probability theory) to represent boundary hierarchies; finitely generated systems as a way to tie the models examined into a single framework; the framework itself, illustrated by a simple finitely generated version of the development of a multi-celled organism; and Markov processes.

in, Introduction to John H. Holland, “Signals and Boundaries – Building blocks for Complex Adaptive Systems“, Cambridge, Mass. : ©MIT Press, 2012.

Figure – Brain wave patterns (gamma waves, above 40 Hz). Gamma waves – 40 Hz and above – are associated with higher mental activity such as problem solving, consciousness and fear. Beta waves – 13-39 Hz – with active thinking, active concentration, paranoia, cognition and arousal. Alpha waves – 7-13 Hz – with pre-sleep and pre-wake drowsiness and relaxation. Theta waves – 4-7 Hz – with deep meditation, relaxation, dreams and rapid eye movement (REM) sleep. Delta waves – 4 Hz and below – with loss of body awareness and deep dreamless sleep (source: Medical School, link).
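The band boundaries quoted in the caption can be written down as a small lookup; this is purely an illustrative sketch of mine (function name and exact thresholds simply follow the ranges above):

```python
def brain_wave_band(freq_hz):
    """Map an EEG frequency in Hz to its band, per the ranges in the caption."""
    if freq_hz >= 40:
        return "gamma"  # higher mental activity: problem solving, consciousness, fear
    if freq_hz >= 13:
        return "beta"   # active thinking and concentration, cognition, arousal
    if freq_hz >= 7:
        return "alpha"  # pre-sleep/pre-wake drowsiness, relaxation
    if freq_hz >= 4:
        return "theta"  # deep meditation, dreams, REM sleep
    return "delta"      # loss of body awareness, deep dreamless sleep

# Example: a 10 Hz rhythm falls in the alpha band.
band = brain_wave_band(10)
```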

Photo – Rover’s Self Portrait (link): this Picasso-like self portrait of NASA’s Curiosity rover was taken by its Navigation cameras, located on the now-upright mast. The cameras snapped pictures 360 degrees around the rover, while pointing down at the rover deck, up and straight ahead. Those images are shown here in a polar projection. Most of the tiles are thumbnails, or small copies of the full-resolution images that have not been sent back to Earth yet. Two of the tiles are full-resolution. Image credit: NASA/JPL-Caltech (August 9, 2012). [6000 x 4500 full size link].

video – tshirtOS is the world’s first wearable, shareable, programmable t-shirt. A working, digital t-shirt that can be programmed by an iOS app to do whatever you can think of (by CuteCircuit, link).

[...] People should learn how to play Lego with their minds. Concepts are building bricks [...] V. Ramos, 2002.
