
Photo – John von Neumann.

“There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.” ~ John von Neumann, in his 1949 University of Illinois lectures on the Theory and Organization of Complicated Automata [J. von Neumann, Theory of self-reproducing automata, 1949 Univ. of Illinois Lectures on the Theory and Organization of Complicated Automata, ed. A.W. Burks (University of Illinois Press, Urbana, IL, 1966).].

Picture – Book cover of “Surfaces and Essences” by Douglas Hofstadter and Emmanuel Sander (2013).

[…] Analogy is the core of all thinking. – This is the simple but unorthodox premise that Pulitzer Prize-winning author Douglas Hofstadter and French psychologist Emmanuel Sander defend in their new work. Hofstadter has been grappling with the mysteries of human thought for over thirty years. Now, with his trademark wit and special talent for making complex ideas vivid, he has partnered with Sander to put forth a highly novel perspective on cognition. We are constantly faced with a swirling and intermingling multitude of ill-defined situations. Our brain’s job is to try to make sense of this unpredictable, swarming chaos of stimuli. How does it do so? The ceaseless hail of input triggers analogies galore, helping us to pinpoint the essence of what is going on. Often this means the spontaneous evocation of words, sometimes idioms, sometimes the triggering of nameless, long-buried memories.

Why did two-year-old Camille proudly exclaim, “I undressed the banana!”? Why do people who hear a story often blurt out, “Exactly the same thing happened to me!” when it was a completely different event? How do we recognize an aggressive driver from a split-second glance in our rear-view mirror? What in a friend’s remark triggers the offhand reply, “That’s just sour grapes”?  What did Albert Einstein see that made him suspect that light consists of particles when a century of research had driven the final nail in the coffin of that long-dead idea? The answer to all these questions, of course, is analogy-making – the meat and potatoes, the heart and soul, the fuel and fire, the gist and the crux, the lifeblood and the wellsprings of thought. Analogy-making, far from happening at rare intervals, occurs at all moments, defining thinking from top to toe, from the tiniest and most fleeting thoughts to the most creative scientific insights.

Like Gödel, Escher, Bach before it, Surfaces and Essences will profoundly enrich our understanding of our own minds. By plunging the reader into an extraordinary variety of colorful situations involving language, thought, and memory, by revealing bit by bit the constantly churning cognitive mechanisms normally completely hidden from view, and by discovering in them one central, invariant core – the incessant, unconscious quest for strong analogical links to past experiences – this book puts forth a radical and deeply surprising new vision of the act of thinking. […] intro to “Surfaces and Essences – Analogy as the fuel and fire of thinking” by Douglas Hofstadter and Emmanuel Sander, Basic Books, NY, 2013 [link] (to be released May 1, 2013).

Photo – “David and Goliath” by Octavio Aburto, Cabo Pulmo National Park, Mexico (National Geographic photo contest 2012).

For several years, Octavio Aburto thought of one photo. Now, he has finally got it. The recently published photograph by Aburto, titled “David and Goliath” (it is in fact David Castro, one of his research colleagues, at the center of this stunning image), has been widely shared over the last few weeks. It was taken at Cabo Pulmo National Park (Mexico) and submitted to the National Geographic photo contest 2012. Here, he captures the sheer size of fish aggregations in perspective with a single human surrounded by abundant marine life. In a recent interview, he explains:

[…] … this “David and Goliath” image is speaking to the courtship behavior of one particular species of Jack fish. […] Many people say that a single image is worth a thousand words, but a single image can also represent thousands of data points and countless statistical analyses. One image, or a small series of images can tell a complicated story in a very simple way. […] The picture you see was taken November 1st, 2012. But this picture has been in my mind for three years — I have been trying to capture this image ever since I saw the behavior of these fish and witnessed the incredible tornado that they form during courtship. So, I guess you could say this image took almost three years. […], in mission-blue.org, Dec. 2012.

Video – Behind the scenes of the “David and Goliath” image. This photo was taken at Cabo Pulmo National Park and submitted to the National Geographic photo contest 2012. You can see more of his images of this place and of the Mexican seas on Octavio‘s web link.

Image – The frontispiece of William King Gregory’s two-volume Evolution Emerging. Gregory, 1951, Evolution Emerging: A Survey of Changing Patterns from Primeval Life to Man, vol. 2, p. 757; fig. 20.33; [courtesy of Mary DeJong, Mai Qaraman, and the American Museum of Natural History].

Recent research has increasingly focused on the relationship between human-human interaction, social networks (no, not the Facebook) and other areas of human activity, like health. Nicholas Christakis (Harvard Univ. research link) points out that people are inter-connected, and so, as well, is their health. This research engages two types of phenomena: the social, mathematical, and biological rules governing how social networks form (“Connection”) and the biological and social implications of how they operate to influence thoughts, feelings, and behaviours (“Contagion”), as in the self-organized stigmergy-like dynamics of Cognitive Collective Perception (link).

Above, Nicholas Christakis (in a 56 min. documentary lecture produced by The Floating University, Sept. 2011) discusses the obvious tension and delicate balance between agency (one individual’s choices and actions) and structure (our collective responsibility), where structure refers not only to our co-evolving, dynamic societal environment but also to the permanently unfolding, entangled topological structure of complex networks, such as human-human social networks, while asking: if you’re so free, why do you follow others? The documentary’s summary (YouTube link) states:

“If you think you’re in complete control of your destiny or even your own actions, you’re wrong. Every choice you make, every behaviour you exhibit, and even every desire you have finds its roots in the social universe. Nicholas Christakis explains why individual actions are inextricably linked to sociological pressures; whether you’re absorbing altruism performed by someone you’ll never meet or deciding to jump off the Golden Gate Bridge, collective phenomena affect every aspect of your life. By the end of the lecture Christakis has revealed a startling new way to understand the world that ranks sociology as one of the most vitally important social sciences.”

While cooperation is central to the success of human societies and is widespread, it nonetheless poses a challenge in both the social and biological sciences: how can this high level of cooperation be maintained in the face of possible exploitation? One answer involves networked interactions and population structure.

As perceived, the balance between homophily (where “birds of a feather flock together”) and heterophily (where genotypes are negatively correlated) does require further research. In fact, in humans, one of the most replicated findings in the social sciences is that people tend to associate with other people that they resemble, a process precisely known as homophily. As Christakis points out, although phenotypic resemblance between friends might partly reflect the operation of social influence, our genotypes are not materially susceptible to change. Therefore, genotypic resemblance could result only from a process of selection. Such genotypic selection might in turn take several forms. In short, let me stress two examples. What follows are two papers, as well as a quick reference (image below) to a recent general-audience book of his:

1) Rewiring your network fosters cooperation:

“Human populations are both highly cooperative and highly organized. Human interactions are not random but rather are structured in social networks. Importantly, ties in these networks often are dynamic, changing in response to the behavior of one’s social partners. This dynamic structure permits an important form of conditional action that has been explored theoretically but has received little empirical attention: People can respond to the cooperation and defection of those around them by making or breaking network links. Here, we present experimental evidence of the power of using strategic link formation and dissolution, and the network modification it entails, to stabilize cooperation in sizable groups. Our experiments explore large-scale cooperation, where subjects’ cooperative actions are equally beneficial to all those with whom they interact. Consistent with previous research, we find that cooperation decays over time when social networks are shuffled randomly every round or are fixed across all rounds. We also find that, when networks are dynamic but are updated only infrequently, cooperation again fails. However, when subjects can update their network connections frequently, we see a qualitatively different outcome: Cooperation is maintained at a high level through network rewiring. Subjects preferentially break links with defectors and form new links with cooperators, creating an incentive to cooperate and leading to substantial changes in network structure. Our experiments confirm the predictions of a set of evolutionary game theoretic models and demonstrate the important role that dynamic social networks can play in supporting large-scale human cooperation.”, abstract in D.G. Rand, S. Arbesman, and N.A. Christakis, “Dynamic Social Networks Promote Cooperation in Experiments with Humans,” PNAS: Proceedings of the National Academy of Sciences (October 2011). [full PDF];
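As a side note, the rewiring mechanism described in this abstract is easy to caricature in code. Below is a minimal sketch in Python, purely illustrative: it is not the authors’ experimental protocol or analysis, and the population size, payoffs, imitation rule and rewiring probability are all invented for the example. Agents earn payoffs from cooperating neighbours, occasionally imitate better-scoring neighbours, and sometimes break a link to a defector and form a new one to a cooperator.

# Toy sketch of cooperation on a dynamic network (illustrative only; not the
# protocol or code of Rand, Arbesman & Christakis 2011 -- all parameters invented).
import random

N_AGENTS = 30        # population size (assumption)
REWIRE_PROB = 0.3    # chance an agent may update one link per round (assumption)
ROUNDS = 50
BENEFIT, COST = 2.0, 1.0

random.seed(1)
agents = list(range(N_AGENTS))
coop = {a: random.random() < 0.5 for a in agents}       # True = cooperator
links = {a: set(random.sample([b for b in agents if b != a], 4)) for a in agents}
for a in agents:                                        # make the links symmetric
    for b in list(links[a]):
        links[b].add(a)

for r in range(ROUNDS):
    # cooperators pay COST per neighbour; everyone gains BENEFIT per cooperating neighbour
    payoff = {a: sum(BENEFIT for b in links[a] if coop[b])
                 - (COST * len(links[a]) if coop[a] else 0.0) for a in agents}
    # imitation: copy a random neighbour's strategy if it scored better
    for a in agents:
        if links[a]:
            b = random.choice(sorted(links[a]))
            if payoff[b] > payoff[a]:
                coop[a] = coop[b]
    # rewiring: break a link to a defector, attach to a cooperator instead
    for a in agents:
        if random.random() < REWIRE_PROB:
            defectors = [b for b in links[a] if not coop[b]]
            candidates = [b for b in agents if coop[b] and b != a and b not in links[a]]
            if defectors and candidates:
                d, c = random.choice(defectors), random.choice(candidates)
                links[a].discard(d); links[d].discard(a)
                links[a].add(c); links[c].add(a)
    print(f"round {r:2d}: cooperators = {sum(coop.values())}/{N_AGENTS}")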

Picture – (book cover) Along with James Fowler, Christakis has also authored a general-audience book on social networks: Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives, 2011 (book link). For a recent book review, see here.

2) We are surrounded by a sea of our friends’ genes:

“It is well known that humans tend to associate with other humans who have similar characteristics, but it is unclear whether this tendency has consequences for the distribution of genotypes in a population. Although geneticists have shown that populations tend to stratify genetically, this process results from geographic sorting or assortative mating, and it is unknown whether genotypes may be correlated as a consequence of nonreproductive associations or other processes. Here, we study six available genotypes from the National Longitudinal Study of Adolescent Health to test for genetic similarity between friends. Maps of the friendship networks show clustering of genotypes and, after we apply strict controls for population stratification, the results show that one genotype is positively correlated (homophily) and one genotype is negatively correlated (heterophily). A replication study in an independent sample from the Framingham Heart Study verifies that DRD2 exhibits significant homophily and that CYP2A6 exhibits significant heterophily. These unique results show that homophily and heterophily obtain on a genetic (indeed, an allelic) level, which has implications for the study of population genetics and social behavior. In particular, the results suggest that association tests should include friends’ genes and that theories of evolution should take into account the fact that humans might, in some sense, be metagenomic with respect to the humans around them.”, abstract in J.H. Fowler, J.E. Settle, and N.A. Christakis, “Correlated Genotypes in Friendship Networks,” PNAS: Proceedings of the National Academy of Sciences (January 2011). [full PDF].
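The basic statistical idea, testing whether friends’ genotypes look more alike than chance allows, can be sketched with a toy permutation test. The data below are randomly invented (this is not the Add Health or Framingham data, and the real study controls for population stratification, which this toy does not), so no homophily is expected and the p-value should hover around 0.5; with genuinely correlated genotypes it would drop.

import random

random.seed(2)

# Invented toy data: one marker per person (allele count 0-2) and a friendship list.
N = 200
genotype = [random.randint(0, 2) for _ in range(N)]
friends = [(i, (i + random.randint(1, 5)) % N) for i in range(N)]   # arbitrary pairs

def mean_similarity(pairs, geno):
    # Similarity of a pair = negative absolute difference in allele counts.
    return sum(-abs(geno[a] - geno[b]) for a, b in pairs) / len(pairs)

observed = mean_similarity(friends, genotype)

# Null distribution: shuffle genotypes over people while keeping the network fixed.
shuffled = genotype[:]
null = []
for _ in range(2000):
    random.shuffle(shuffled)
    null.append(mean_similarity(friends, shuffled))

p_value = sum(v >= observed for v in null) / len(null)
print(f"observed similarity = {observed:.3f}, permutation p-value = {p_value:.3f}")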

Video – TED lecture: Empathy, cooperation, fairness and reciprocity – caring about the well-being of others seems like a very human trait. But Frans de Waal shares some surprising videos of behavioural tests, on primates and other mammals, that show how many of these moral traits all of us share. (TED, Nov. 2011, link).

Evolutionary explanations are built around the principle that all that natural selection can work with are the effects of behaviour – not the motivation behind it. This means there is only one logical starting point for evolutionary accounts, as explained by Trivers (2002, p. 6): “You begin with the effect of behaviour on actors and recipients; you deal with the problem of internal motivation, which is a secondary problem, afterwards. . . . [I]f you start with motivation, you have given up the evolutionary analysis at the outset.” ~ Frans B.M. de Waal, 2008.

Do animals have morals? And above all, did morality evolve? The question is pertinent in a broad range of quite different areas, such as Computer Science and Norm Generation (e.g. link for an MSc thesis) in bio-inspired Computation and Artificial Life, but here fresh answers come directly from Biology. Besides the striking video lecture above, what follows are two excerpts (abstract and conclusions) from a 2008 paper by Frans B.M. de Waal (Living Links Center lab., Emory University, link): de Waal, F.B.M. (2008). Putting the altruism back in altruism: The evolution of empathy. Ann. Rev. Psychol. 59: 279-300 (full PDF link):

(…) Abstract: Evolutionary theory postulates that altruistic behaviour evolved for the return-benefits it bears the performer. For return-benefits to play a motivational role, however, they need to be experienced by the organism. Motivational analyses should restrict themselves, therefore, to the altruistic impulse and its knowable consequences. Empathy is an ideal candidate mechanism to underlie so-called directed altruism, i.e., altruism in response to another’s pain, need, or distress. Evidence is accumulating that this mechanism is phylogenetically ancient, probably as old as mammals and birds. Perception of the emotional state of another automatically activates shared representations causing a matching emotional state in the observer. With increasing cognition, state-matching evolved into more complex forms, including concern for the other and perspective-taking. Empathy-induced altruism derives its strength from the emotional stake it offers the self in the other’s welfare. The dynamics of the empathy mechanism agree with predictions from kin selection and reciprocal altruism theory. (…)

(…) Conclusion: More than three decades ago, biologists deliberately removed the altruism from altruism. There is now increasing evidence that the brain is hardwired for social connection, and that the same empathy mechanism proposed to underlie human altruism (Batson 1991) may underlie the directed altruism of other animals. Empathy could well provide the main motivation making individuals who have exchanged benefits in the past to continue doing so in the future. Instead of assuming learned expectations or calculations about future benefits, this approach emphasizes a spontaneous altruistic impulse and a mediating role of the emotions. It is summarized in the five conclusions below: 1. An evolutionarily parsimonious account (cf. de Waal 1999) of directed altruism assumes similar motivational processes in humans and other animals. 2. Empathy, broadly defined, is a phylogenetically ancient capacity. 3. Without the emotional engagement brought about by empathy, it is unclear what could motivate the extremely costly helping behavior occasionally observed in social animals. 4. Consistent with kin selection and reciprocal altruism theory, empathy favours familiar individuals and previous cooperators, and is biased against previous defectors. 5. Combined with perspective-taking abilities, empathy’s motivational autonomy opens the door to intentionally altruistic altruism in a few large-brained species. (…) in, de Waal, F.B.M. (2008). Putting the altruism back in altruism: The evolution of empathy. Ann. Rev. Psychol. 59: 279-300 (full PDF link).

Frans de Waal’s research does not end here, of course. He is a ubiquitous influence and writer in many related areas, such as: Cognition, Communication, Crowding/Conflict Resolution, Empathy and Altruism, Social Learning and Culture, Sharing and Cooperation and, last but not least, Behavioural Economics. All of his papers are freely available on-line, on a web page to which I do vividly recommend a long visit.

Complex adaptive systems (CAS), including ecosystems, governments, biological cells, and markets, are characterized by intricate hierarchical arrangements of boundaries and signals. In ecosystems, for example, niches act as semi-permeable boundaries, and smells and visual patterns serve as signals; governments have departmental hierarchies with memoranda acting as signals; and so it is with other CAS. Despite a wealth of data and descriptions concerning different CAS, there remain many unanswered questions about “steering” these systems. In Signals and Boundaries, John Holland (Wikipedia entry) argues that understanding the origin of the intricate signal/border hierarchies of these systems is the key to answering such questions. He develops an overarching framework for comparing and steering CAS through the mechanisms that generate their signal/boundary hierarchies. Holland lays out a path for developing the framework that emphasizes agents, niches, theory, and mathematical models. He discusses, among other topics, theory construction; signal-processing agents; networks as representations of signal/boundary interaction; adaptation; recombination and reproduction; the use of tagged urn models (adapted from elementary probability theory) to represent boundary hierarchies; finitely generated systems as a way to tie the models examined into a single framework; the framework itself, illustrated by a simple finitely generated version of the development of a multi-celled organism; and Markov processes.

in, Introduction to John H. Holland, “Signals and Boundaries – Building blocks for Complex Adaptive Systems“, Cambridge, Mass. : ©MIT Press, 2012.

Photo – Venation network of young Populus tremuloides (quaking aspen) leaf (4X). By Benjamin Blonder, David Elliott (2011), University of Arizona, Department of Ecology & Evolutionary Biology, Tucson, Arizona, USA (link).

(…) Pando (Latin for “I spread”), also known as The Trembling Giant, is a clonal colony of a single male Quaking Aspen (Populus tremuloides) determined to be a single living organism by identical genetic markers and one massive underground root system. The plant is estimated to weigh collectively 6,000,000 kg (6,600 short tons), making it the heaviest known organism. The root system of Pando, at an estimated 80,000 years old, is among the oldest known living organisms. (…) in, Pando (tree), Wikipedia [link].

Figure (click to enlarge) – Applying P(0)=0.6; r=4; N=100000; for(n=0;n<=N;n++) { P(n+1)=r*P(n)*(1-P(n)); } – Robert May’s population dynamics equation [1974-76] (do check Logistic maps) for several iterations (generations). After 780 iterations, P is attracted to 1 (max. population), and then suddenly, over the next generations, the very same population is almost extinguished.
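For anyone who wants to reproduce the figure, here is a minimal sketch of the same iteration in Python; the values of P(0), r and N are exactly the ones stated in the caption, the rest is just bookkeeping:

# Logistic map from the caption: P(n+1) = r * P(n) * (1 - P(n)), with P(0) = 0.6, r = 4.
P, r, N = 0.6, 4.0, 100000
trajectory = [P]
for n in range(N):
    P = r * P * (1 - P)
    trajectory.append(P)

# The caption reports a value very close to 1 (the maximum) around iteration 780,
# immediately followed by a near-extinction: r*P*(1-P) is tiny when P is close to 1.
for i in range(775, 790):
    print(i, round(trajectory[i], 6))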

Not only in research, but also in the everyday world of politics and economics, we would all be better off if more people realised that simple non-linear systems do not necessarily possess simple dynamical properties.” ~ Robert M. May, “Simple Mathematical models with very complicated Dynamics”, Nature, Vol. 261, p.459, June 10, 1976.

(…) The fact that the simple and deterministic equation (1) can possess dynamical trajectories which look like some sort of random noise has disturbing practical implications. It means, for example, that apparently erratic fluctuations in the census data for an animal population need not necessarily betoken either the vagaries of an unpredictable environment or sampling errors: they may simply derive from a rigidly deterministic population growth relationship such as equation (1). This point is discussed more fully and carefully elsewhere [1]. Alternatively, it may be observed that in the chaotic regime arbitrarily close initial conditions can lead to trajectories which, after a sufficiently long time, diverge widely. This means that, even if we have a simple model in which all the parameters are determined exactly, long term prediction is nevertheless impossible. In a meteorological context, Lorenz [15] has called this general phenomenon the “butterfly effect“: even if the atmosphere could be described by a deterministic model in which all parameters were known, the fluttering of a butterfly’s wings could alter the initial conditions, and thus (in the chaotic regime) alter the long term prediction. Fluid turbulence provides a classic example where, as a parameter (the Reynolds number) is tuned in a set of deterministic equations (the Navier-Stokes equations), the motion can undergo an abrupt transition from some stable configuration (for example, laminar flow) into an apparently stochastic, chaotic regime. Various models, based on the Navier-Stokes differential equations, have been proposed as mathematical metaphors for this process [15,40,41] . In a recent review of the theory of turbulence, Martin [42] has observed that the one-dimensional difference equation (1) may be useful in this context. Compared with the earlier models [15,40,41] it has the disadvantage of being even more abstractly metaphorical, and the advantage of having a spectrum of dynamical behaviour which is more richly complicated yet more amenable to analytical investigation. A more down-to-earth application is possible in the use of equation (1) to fit data [1,2,3,38,39,43] on biological populations with discrete, non-overlapping generations, as is the case for many temperate zone arthropods. (…) in pp. 13-14, Robert M. May, “Simple Mathematical models with very complicated Dynamics“, Nature, Vol. 261, p.459, June 10, 1976 [PDF link].
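May’s remark about the butterfly effect, that arbitrarily close initial conditions diverge in the chaotic regime, is also easy to see numerically. A minimal sketch, again with the logistic map at r = 4 and two starting points that differ by only 10^-9:

# Two trajectories of the chaotic logistic map (r = 4) starting almost identically.
r = 4.0
x, y = 0.600000000, 0.600000001   # initial conditions differing by 1e-9
for n in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(f"n={n:3d}  x={x:.6f}  y={y:.6f}  |x-y|={abs(x-y):.2e}")
# The gap grows roughly exponentially, so long-term prediction fails even though
# the rule is completely deterministic.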

“Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” ~ The Red Queen, in “Through the looking-glass, and what Alice found there“, Charles Lutwidge Dodgson, 1871.

Your move. Alice suddenly found herself in a strange new world, and quickly needed to adapt. As C.L. Dodgson (better known as Lewis Carroll) brilliantly puts it, all the running you can do does not suffice at all. This is a world (Wonderland) with different “physical” laws or “societal norms”. Surprisingly, those patterns also appear quite familiar to us, here, on planet Earth. As an example, the quote above is essentially the paradigm for biological co-evolution, in the form of the Red Queen effect.

In Wonderland (the first book), Alice follows the white rabbit, which ends up leading her into this strange habitat, where apparently “normal” “physical” laws do not apply. In this second book, however, Alice needs to overcome a series of great obstacles – structured as phases in a game of chess – in order to become a queen. As she moves on, several other enigmatic characters appear. Punctuated as well as surrounded by circular arguments and logical paradoxes, Alice must keep going in order to find the “other side of the mirror“.

There are other funny parallels, too. The story goes that Lewis Carroll gave a copy of “Alice in Wonderland” to Queen Victoria, who then asked him to send her his next book, as she had fancied the first one. The joke is that the next book was (!) … “An Elementary Treatise on Determinants, With Their Application to Simultaneous Linear Equations and Algebraic Equations (link)”. Lewis Carroll then went on to write a follow-on to Alice in Wonderland entitled “Through the Looking-Glass, and what Alice found there”, which features a chess board in its first pages, and where chess was used to give her, Queen Victoria, a glimpse of what Alice explored in this new world.

In fact, the diagram on the first pages contains not only the entire chapter structure of the novel but also how Alice moves on: basically, each move takes the reader to a new chapter (see below) representing it. The entire book can be found here in PDF format. Besides the beauty and philosophical value of Dodgson’s novel in itself, and its repercussions as a metaphor in present-day co-evolution research, this is quite probably the first “chess-literature” diagram ever composed. Now, of course, the pieces are not white and black, but white and red (note that the pieces on c1 – queen – and c6 – king – are white). Lewis Carroll’s novel then goes on like this: White pawn (Alice) to play, and win in eleven moves.

However, in order to enter this world you must follow its “rules”. “Chess” here is not normal, just as Wonderland was not normal to Alice’s eyes. Remember: if you do all the running you can do, you will find yourself in the same place. Better if you can run twice as fast! Lewis Carroll’s first words in his second book (in the preface / PDF link above) advise us:

(…) As the chess-problem, given on a previous page, has puzzled some of my readers, it may be well to explain that it is correctly worked out, so far as the moves are concerned. The alternation of Red and White is perhaps not so strictly observed as it might be, and the ‘castling’ of the three Queens is merely a way of saying that they entered the palace; but the ‘check’ of the White King at move 6, the capture of the Red Knight at move 7, and the final ‘check-mate’ of the Red King, will be found, by any one who will take the trouble to set the pieces and play the moves as directed, to be strictly in accordance with the laws of the game. (…) Lewis Carroll, Christmas, 1896.

Now, the solution could be delivered in various formats and languages. But here is the one I prefer: it was encoded in classic BASIC, running on a ZX Spectrum emulator. Here is an excerpt:

1750 LET t$="11. Alice takes Red Queen & wins(checkmate)": GO SUB 7000 (…)
9000 REM
9001 REM ** ZX SPECTRUM MANUAL Page 96 Chapter 14. **
9002 REM
9004 RESTORE 9000 (…)
9006 LET b=BIN 01111100: LET c=BIN 00111000: LET d=BIN 00010000
9010 FOR n=1 TO 6: READ p$: REM 6 pieces
9020 FOR f=0 TO 7: REM read piece into 8 bytes
9025 REM POKE USR p$+f stores byte f of the 8-byte bitmap into the user-defined graphic for character p$
9030 READ a: POKE USR p$+f,a
9040 NEXT f
9050 NEXT n: REM next piece
9100 REM bishop
9110 DATA "b",0,d,BIN 00101000,BIN 01000100
9120 DATA BIN 01101100,c,b,0
9130 REM king
9140 DATA "k",0,d,c,d
9150 DATA c,BIN 01000100,c,0
9160 REM rook
9170 DATA "r",0,BIN 01010100,b,c
9180 DATA c,b,b,0
9190 REM queen
9200 DATA "q",0,BIN 01010100,BIN 00101000,d
9210 DATA BIN 01101100,b,b,0
9220 REM pawn
9230 DATA "p",0,0,d,c
9240 DATA c,d,b,0
9250 REM knight
9260 DATA "n",0,d,c,BIN 01111000
9270 DATA BIN 00011000,c,b,0

(…) full code on [link]

This is BASIC-ally Alice’s story …

I saw them hurrying from either side, and each shade kissed another, without pausing;  Each by the briefest society satisfied. (Ants in their dark ranks, meet exactly so, rubbing each other’s noses, to ask perhaps; What luck they’ve had, or which way they should go.)” — Dante, Purgatorio, Canto XXVI.

Video documentary: A 15-minute program produced from February 1949 to April 1952, Kieran’s Kaleidoscope presented its writer and host in his well-acquainted role as the learned and witty guide to the complexities of human knowledge (Production Company: Almanac Films). This is probably the most genuinely entertaining of all the John Kieran‘s Kaleidoscope films. In Ant City (1949) [Internet Archive], produced by Paul F. Moss, the poor ants are anthropomorphized to the nth degree; we even hear the Wedding March when the “queen” and her drone fly away from the nest. Kieran‘s patter has never been more meandering; he sounds like a befuddled uncle narrating home movies. Clumsy but enjoyable.

[…] In conclusion, much elegant work has been done starting from activated mono-nucleotides. However, the prebiotic synthesis of a specific macromolecular sequence does not seem to be at hand, giving us the same problem we have with polypeptide sequences. Since there is no ascertained prebiotic pathway to their synthesis, it may be useful to try to conceive some working hypothesis. In order to do that, I would first like to consider a preliminary question about the proteins we have on our Earth: “Why these proteins … and not other ones?”. Discussing this question can in fact give us some clue as to how orderly sequences might have originated. […] A grain of sand in the Sahara – This is indeed a central question in our world of proteins. How have they been selected out? There is a well-known arithmetic at the basis of this question, (see for example De Duve, 2002) which says that for a polypeptide chain with 100 residues, 20^100 different chains are in principle possible: a number so large that it does not convey any physical meaning. In order to grasp it somewhat, consider that the proteins existing on our planet are of the order of a few thousand billions, let us say around 10^13 (and with all isomers and mutations we may arrive at a few orders of magnitude more). This sounds like a large number. However, the ratio between the possible (say 20^100) and the actual chains (say 10^15) corresponds approximately to the ratio between the radius of the universe and the radius of a hydrogen atom! Or, to use another analogy, nearer to our experience, a ratio many orders of magnitude greater than the ratio between all the grains of sand in the vast Sahara and a single grain. The space outside “our atom”, or our grain of sand, is the space of the “never-born proteins”, the proteins that are not with us – either because they didn’t have the chance to be formed, or because they “came” and were then obliterated. This arithmetic, although trivial, bears an important message: in order to reproduce our proteins we would have to hit the target of that particular grain of sand in the whole Sahara. Christian De Duve, in order to avoid this “sequence paradox” (De Duve, 2002), assumes that all started with short polypeptides – and this is in fact reasonable. However, the theoretically possible total number of long chains does not change if you start with short peptides instead of amino acids. The only way to limit the final number of possible chains would be to assume, for example, that peptide synthesis started only under a particular set of conditions of composition and concentration, thus bringing contingency into the picture. As a corollary, then, this set of proteins born as a product of contingency would have been the one that happened to start life. Probably there is no way of eliminating contingency from the aetiology of our set of proteins. […]

Figure – The ratio between the theoretical number of possible proteins and their actual number is many orders of magnitude greater than the ratio between all the grains of sand in the vast Sahara and a single grain of sand (caption on page 69).

[…] The other objection to the numerical meaning suggested by Figure (above) is that the maximum number of proteins is much smaller because a great number of chain configurations are prohibited for energetic reasons. This is reasonable. Let us then assume that 99.9999% of theoretically possible protein chains cannot exist because of energy reasons. This would leave only one protein out of one million, reducing the number of never-born proteins from, say, 10^60 to 10^54. Not a big deal. Of course one could also assume that the total number of energetically allowed proteins is extremely small, no larger than, say, 10^10. This cannot be excluded a priori, but is tantamount to saying that there is something very special about “our” proteins, namely that they are energetically special. Whether or not this is so can be checked experimentally as will be seen later in a research project aimed at this target. The assumption that “our” proteins have something special from the energetic point of view, would correspond to a strict deterministic view that claims that the pathway leading to our proteins was determined, that there was no other possible route. Someone adhering strictly to a biochemical anthropic principle might even say that these proteins are the way they are in order to allow life and the development of mankind on Earth. The contingency view would recite instead the following: if our proteins or nucleic acids have no special properties from the point of view of thermodynamics, then run the tape again and a different “grain of sand” might be produced – one that perhaps would not have supported life. Some may say at this point that proteins derive in any case from nucleic-acid templates – perhaps through a primitive genetic code. However, this is really no argument – it merely shifts the problem of the etiology of peptide chains to etiology of oligonucleotide chains, all arithmetic problems remaining more or less the same. […] pp. 68-70, in Pier Luigi Luisi, “The Emergence of Life: From Chemical Origins to Synthetic Biology“, Cambridge University Press, US, 2006.
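Luisi’s “grain of sand” arithmetic is easy to verify. Here is a small sketch using the orders of magnitude quoted in the excerpts above (20^100 possible chains of length 100, a generous 10^15 actual ones, and the assumption that only one chain in a million is energetically allowed); the point is simply that knocking six orders of magnitude off a roughly 130-order number changes nothing:

from math import log10

possible = 20 ** 100        # polypeptide chains of length 100 over 20 amino acids
actual = 10 ** 15           # generous estimate of existing chains (from the excerpt)

print("log10(possible chains)      =", round(log10(possible), 1))   # about 130.1
print("orders of magnitude gap     =", round(log10(possible) - log10(actual), 1))

# Discard 99.9999% of chains as energetically forbidden: only 6 orders of magnitude go away.
allowed = possible // 1_000_000
print("gap after energetic pruning =", round(log10(allowed) - log10(actual), 1))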

Picture – The European Conference on Complex Systems (ECCS’11 – link) in one of the main Austrian newspapers, Der Standard: “Die ganze Welt als Computersimulation” (“The whole world as a computer simulation”) (link), Klaus Taschwer, Der Standard, 14 September [click to enlarge – photo taken at the conference on Sept. 15, Vienna 2011].

Take Darwin, for example: would Caltech have hired Darwin? Probably not. He had only vague ideas about some of the mechanisms underlying biological Evolution. He had no way of knowing about genetics, and he lived before the discovery of mutations. Nevertheless, he did work out, from the top down, the notion of natural selection and the magnificent idea of the relationship of all living things.” Murray Gell-Mann in “Plectics“, excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995).

To be honest, I didn’t enjoy this title, but all of us have had our fair share with journalists now and then. After all, 99% of us don’t do computer simulation. We are all after the main principles, and their direct applications.

Over 5 days (12-16 Sept.), and with around 700 attendees, the Vienna 2011 conference revolved around major themes such as Complexity & Networks (XNet), Current Trends in Game Theory, Complexity in Energy Infrastructures, Emergent Properties in Natural and Artificial Complex Systems (EPNACS), Complexity and the Future of Transportation Systems, Econophysics, Cultural and Opinion Dynamics, Dynamics on and of Complex Networks, Frontiers in the Theory of Evolution, and – among many others – Dynamics of Human Interactions.

For those who know me (and they will definitely understand), I mainly attended the sessions underlined above, the last one (Frontiers in Evolution) being one of my favorites among all these ECCS years. All in all, the conference had high-quality works (each day there were about 3-4 works I definitely think should be followed up in the future), and those deserved more attention (my main criticism of the conference organization goes here). Naturally, the newspaper article also reflects on FuturICT, historically one of the major European scientific projects ever undertaken (along, probably, with the Geneva LHC), whose teams spread across Europe, including Portugal with a representative team of 7 members present at the conference, led by Jorge Louçã, the editor and organizer of the previous ECCS’10 held last year in Lisbon.

Video – “… they forgot to say: in principle!”. Ricard Solé addressing the topic of a Morphospace for Biological Computation at ECCS’11 (European Conference on Complex Systems), while keeping his good humor on.

Let me draw your attention anyway to 4 outstanding lectures. Peter Schuster (link), on the first day, dissected the sources of complexity in evolution, battling between, as he puts it, two paradoxes: (1) evolution is an enormously complex process, and (2) biological evolution on Earth proceeds from lower towards higher complexity. Earlier that morning, opening the conference, Murray Gell-Mann (link), who co-founded the Santa Fe Institute in 1984, gave a wonderful lecture on Generalized Entropies. Despite his age, the winner of the 1969 Nobel Prize in physics for his work on the theory of elementary particles gladly turned his interest in the 1990s to the theory of Complex Adaptive Systems (CAS). Next, Albert-László Barabási (link) tamed complexity in his lecture on Controlling Networks. Finally, on the last day, closing the conference in pure gold, Ricard Solé (link) addressed the topic of a Morphospace for Biological Computation, an amazing lecture on a powerful topic for which, I felt, he had too little time (20 minutes) for such a rich endeavor. By no means, however, did he lose his good humor during the talk (check my video above). Next year the conference will be held in Brussels, and just judging by the poster design, it promises. Go ants, go…!

Picture – The European Conference on Complex Systems (ECCS’12 – link) poster design for next year in Brussels.

Darwin by Peter Greenaway (1993) – Although British director Peter Greenaway is best known for feature films like The Cook, the Thief, His Wife and Her Lover, Prospero’s Books, and The Pillow Book, he has also completed several highly respected projects for television, including this 53-minute exploration (now free) of the life and work of Charles Darwin. Darwin is structured around 18 separate tableaux, each focusing on another chapter in the naturalist’s life, and each consisting of just one long uninterrupted shot. Other than the narrator’s voice-over, there is no dialogue.

It is very difficult to make good mistakes“, Tim Harford, July 2011.

TED talk (July 2011) by Tim Harford, a writer on economics who studies complex systems, exposing a surprising link among the successful ones: they were built through trial and error. In this sparkling talk from TEDGlobal 2011, he asks us to embrace our randomness and start making better mistakes [from TED]. Instead of the God complex, he proposes trial and error or, to be more precise, Genetic Algorithms and Evolutionary Computation (one of the examples in his talk is indeed the evolutionary optimal design of an airplane nozzle).

Now, we may ask: is it clear from the talk whether the nozzle was computationally designed using evolutionary search, as suggested by the imagery, or was the imagery designed to describe a process carried out in the laboratory? … as a colleague asked me the other day over Google Plus. A great question, since I believe it will not be clear to everyone watching that lecture.

It was clear to me from the beginning, though, for one simple reason: that is a well-known work in the Evolutionary Computation area, done by one of its pioneers, Professor Hans-Paul Schwefel from Germany, in 1974 I believe. Unfortunately, at least to my mind, Tim Harford did not mention the author, nor does he mention anywhere in his talk the entire Evolutionary Computation or Genetic Algorithms area, even though he makes a clear bridge between these concepts and the search for innovation. The optimal nozzle design was in fact first produced in Schwefel‘s PhD thesis (“Adaptive Mechanismen in der Biologischen Evolution und ihr Einfluß auf die Evolutiongeschwindigkeit“, roughly “Adaptive mechanisms in biological evolution and their influence on the speed of evolution”), and he arrived at these results by using a branch of Evolutionary Computation known as Evolution Strategies (ES) [here is a Wikipedia entry]. The objective was to achieve maximum thrust, and for that some parameters had to be adjusted, such as at which point the small aperture should be placed between the two entrances. What follows is a rather old YouTube video of the process:

The animation shows the evolution of the nozzle design from its initial configuration to the final one. After achieving such a design, it was a little difficult to understand why the surprising shape was good, and a team of physicists and engineers gathered to investigate, aiming to devise some explanation for the final nozzle configuration. Schwefel (later on with his German group) also investigated the algorithmic features of Evolution Strategies, which made possible different generalizations, such as a surplus of offspring, the use of non-elitist evolution strategies (the comma selection scheme), and the use of recombination, beyond the well-known mutation operator, to generate the offspring. Here are some related links and papers (link).
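For readers new to the area, here is a minimal sketch of a (μ, λ) Evolution Strategy in Python, showing the three ingredients just mentioned: an offspring surplus, non-elitist comma selection, and recombination plus Gaussian mutation. It is purely illustrative, not Schwefel’s original algorithm or code; a toy objective function stands in for the measured thrust of a real nozzle, and every parameter value is an assumption made up for the example.

import random

def fitness(x):
    # Toy stand-in for "measured thrust": maximised when all parameters reach 0.
    return -sum(v * v for v in x)

MU, LAMBDA = 5, 35                      # mu parents, lambda offspring (offspring surplus)
DIM, SIGMA, GENERATIONS = 10, 0.3, 60   # invented problem size and mutation step

random.seed(0)
parents = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(MU)]

for g in range(GENERATIONS):
    offspring = []
    for _ in range(LAMBDA):
        a, b = random.sample(parents, 2)
        child = [(ai + bi) / 2 for ai, bi in zip(a, b)]       # intermediate recombination
        child = [v + random.gauss(0, SIGMA) for v in child]   # Gaussian mutation
        offspring.append(child)
    # comma selection: parents are discarded, only the best offspring survive (non-elitist)
    offspring.sort(key=fitness, reverse=True)
    parents = offspring[:MU]
    if g % 10 == 0:
        print(f"gen {g:2d}  best fitness = {fitness(parents[0]):.4f}")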

Despite these details… I did enjoy the talk a lot, as well as his quote above. There is still a subtle difference between “trial and error” and “evolutionary search”, even if they are linked, but when Tim Harford makes a connection between innovation and Evolutionary Computation, it reminded me of the more recent (one decade now, perhaps) work of David Goldberg (IlliGAL – Illinois Genetic Algorithms Laboratory), another founding father of the area, now dedicated to innovation, learning, etc., much along these precise lines. Mostly his books: (2002) The Design of Innovation: Lessons from and for Competent Genetic Algorithms, Kluwer Academic Publishers, and (2006) The Entrepreneurial Engineer, Wiley.

Finally, let me add that there are other beautiful examples of evolutionary design. The one I love most, however (for several reasons, namely the powerful abstract message that it sends out into other conceptual fields), is this: a simple bridge. Enjoy, and for a few seconds do think about your own area of work.

Figure – Understanding the Brain as a Computational Network: significant neuronal motifs of size 3.  Most over-represented colored motifs of size 3 in the C. elegans complex neuronal network. Green: sensory neuron; blue: motor neuron; red: interneuron. Arrows represent direction that the signal travels between the two cells. (from Adami et al. 2011 [Ref. below])

Abstract: […] Complex networks can often be decomposed into less complex sub-networks whose structures can give hints about the functional organization of the network as a whole. However, these structural motifs can only tell one part of the functional story because in this analysis each node and edge is treated on an equal footing. In real networks, two motifs that are topologically identical but whose nodes perform very different functions will play very different roles in the network. Here, we combine structural information derived from the topology of the neuronal network of the nematode C. elegans with information about the biological function of these nodes, thus coloring nodes by function. We discover that particular colorations of motifs are significantly more abundant in the worm brain than expected by chance, and have particular computational functions that emphasize the feed-forward structure of information processing in the network, while evading feedback loops. Interneurons are strongly over-represented among the common motifs, supporting the notion that these motifs process and transduce the information from the sensor neurons towards the muscles. Some of the most common motifs identified in the search for significant colored motifs play a crucial role in the system of neurons controlling the worm’s locomotion. The analysis of complex networks in terms of colored motifs combines two independent data sets to generate insight about these networks that cannot be obtained with either data set alone. The method is general and should allow a decomposition of any complex networks into its functional (rather than topological) motifs as long as both wiring and functional information is available. […] from Qian J, Hintze A, Adami C (2011) Colored Motifs Reveal Computational Building Blocks in the C. elegans Brain, PLoS ONE 6(3): e17013. doi:10.1371/journal.pone.0017013
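To make the idea of colored motifs a little more concrete, here is a minimal sketch in Python. It enumerates connected 3-node sub-networks of a tiny made-up directed graph and tallies how often each combination of node colours appears; the node names, colours and edges are invented (this is not the C. elegans wiring data), and the real analysis also distinguishes motif topologies and compares counts against randomized networks, which this toy omits.

from itertools import combinations
from collections import Counter

# Tiny invented directed network: node -> set of targets (NOT the C. elegans data).
edges = {"s1": {"i1"}, "s2": {"i1", "i2"}, "i1": {"m1", "i2"},
         "i2": {"m1", "m2"}, "m1": set(), "m2": set()}
# Functional "colour" of each node, also invented.
colour = {"s1": "sensory", "s2": "sensory", "i1": "inter",
          "i2": "inter", "m1": "motor", "m2": "motor"}

def connected(triple):
    # A 3-node subgraph counts here if, ignoring edge direction, it is one connected piece.
    und = {(x, y) for x in triple for y in edges[x] if y in triple}
    und |= {(y, x) for (x, y) in und}
    start = triple[0]
    reach, frontier = {start}, [start]
    while frontier:
        x = frontier.pop()
        for (u, v) in und:
            if u == x and v not in reach:
                reach.add(v)
                frontier.append(v)
    return reach == set(triple)

counts = Counter()
for triple in combinations(edges, 3):
    if connected(triple):
        counts[tuple(sorted(colour[n] for n in triple))] += 1

for combo, n in counts.most_common():
    print(combo, n)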

[…] The role of evolution in producing these patterns is clear, said Adami. “Selection favors those motifs that impart high fitness to the organism, and suppresses those that work against the task at hand.” In this way, the efficient and highly functional motifs (such as the sensory neuron-interneuron-motor neuron motif) are very common in the nervous system, while those that would waste energy and give no benefit to, or even harm, the animal are not found in the network. “Adami and his team have used evolutionary computation to develop hypotheses about the evolution of neural circuits and find, for these nematode worms, that simplicity is the rule,” says George Gilchrist, program director in NSF’s Division of Environmental Biology (in the Directorate for Biological Sciences), which funds BEACON. “By including functional information about each node in the circuit, they have begun decoding the role of natural selection in shaping the architecture of neural circuits.” […] from Danielle J. Whittaker “Understanding the Brain as a Computational Network“, NSF, April 2011.

The following letter, entitled “Darwin among the machines”, was sent in June 1863 by Samuel Butler, the novelist (signing here as Cellarius), to the editor of the Press, Christchurch, New Zealand (13 June 1863). The article was quite probably the first to raise the possibility that machines are a kind of “mechanical life” undergoing constant evolution (something that is now happening in the realm of Evolutionary Computation, Evolution Strategies, and other types of Genetic Algorithms), and that eventually machines might supplant humans as the dominant species. What follows are some excerpts. The full letter can be accessed here. It is truly a historic, visionary document:

[…] “Our present business lies with considerations which may somewhat tend to humble our pride and to make us think seriously of the future prospects of the human race. If we revert to the earliest primordial types of mechanical life, to the lever, the wedge, the inclined plane, the screw and the pulley, or (for analogy would lead us one step further) to that one primordial type from which all the mechanical kingdom has been developed. (…) We refer to the question: What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race. Inferior in power, inferior in that moral quality of self-control, we shall look up to them as the acme of all that the best and wisest man can ever dare to aim at. (…) Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question. (…) For the present we shall leave this subject, which we present gratis to the members of the Philosophical Society. Should they consent to avail themselves of the vast field which we have pointed out, we shall endeavour to labour in it ourselves at some future and indefinite period.” […]

With an eye for detail and an easy style, Peter Miller explains why swarm intelligence has scientists buzzing.” — Steven Strogatz, author of Sync, and Professor of Mathematics, Cornell University.

From the introduction of Peter Miller, “Smart Swarms – How Understanding Flocks, Schools and Colonies Can Make Us Better at Communicating, Decision Making and Getting Things Done“. (…) The modern world may be obsessed with speed and productivity, but twenty-first century humans actually have much to learn from the ancient instincts of swarms. A fascinating new take on the concept of collective intelligence and its colourful manifestations in some of our most complex problems, Smart Swarm introduces a compelling new understanding of the real experts on solving our own complex problems relating to such topics as business, politics, and technology. Based on extensive globe-trotting research, this lively tour from National Geographic reporter Peter Miller introduces thriving throngs of ant colonies, which have inspired computer programs for streamlining factory processes, telephone networks, and truck routes; termites, used in recent studies for climate-control solutions; schools of fish, on which the U.S. military modelled a team of robots; and many other examples of the wisdom to be gleaned about the behaviour of crowds, among critters and corporations alike. In the tradition of James Surowiecki‘s The Wisdom of Crowds and the innovative works of Malcolm Gladwell, Smart Swarm is an entertaining yet enlightening look at small-scale phenomena with big implications for us all. (…)

(…) What do ants, bees, and birds know that we don’t? How can that give us an advantage? Consider: • Southwest Airlines used virtual ants to determine the best way to board a plane. • The CIA was inspired by swarm behavior to invent a more effective spy network. • Filmmakers studied flocks of birds as models for armies of Orcs in Lord of the Rings battle scenes. • Defense agencies sponsored teams of robots that can sense radioactivity, heat, or a chemical device as easily as a school of fish can locate food. Find out how “smart swarms” can teach us how to make better choices, create stronger networks, and organize our businesses more effectively than we ever thought possible. (…)

Drawing (Pedigree of Man, 1879) – Ernst Haeckel‘s “tree of life”, Darwin‘s metaphorical description of the pattern of universal common descent made literal by his greatest popularizer in the German scientific world. This is the English version of Ernst Haeckel‘s tree from the The Evolution of Man (published 1879), one of several depictions of a tree of life by Haeckel. “Man” is at the crown of the tree; for Haeckel, as for many early evolutionists, humans were considered the pinnacle of evolution.

[...] People should learn how to play Lego with their minds. Concepts are building bricks [...] V. Ramos, 2002.
