
Figure (click to enlarge) – Cover from one of my books, published last month (10 July 2012): “Swarm Intelligence in Data Mining”, recently translated and edited in Japan (by Tokyo Denki University press [TDU]). Cover image from Amazon.co.jp (url). The title was translated into 群知能とデータマイニング. Funny, also, to see my own name translated into Japanese for the first time – I wonder if it’s Kanji. A brief synopsis follows:

(…) Swarm Intelligence (SI) is an innovative distributed intelligent paradigm for solving optimization problems that originally took its inspiration from biological examples of swarming, flocking and herding phenomena in vertebrates. Particle Swarm Optimization (PSO) incorporates swarming behaviours observed in flocks of birds, schools of fish, or swarms of bees, and even human social behaviour, from which the idea emerged. Ant Colony Optimization (ACO) deals with artificial systems inspired by the foraging behaviour of real ants, which are used to solve discrete optimization problems. Historically, the notion of finding useful patterns in data has been given a variety of names, including data mining, knowledge discovery, information extraction, etc. Data Mining is an analytic process designed to explore large amounts of data in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. In order to achieve this, data mining uses computational techniques from statistics, machine learning and pattern recognition. Data mining and swarm intelligence may seem not to have many properties in common. However, recent studies suggest that they can be used together for several real-world data mining problems, especially when other methods would be too expensive or difficult to implement. This book deals with the application of swarm intelligence methodologies in data mining. Addressing the various issues of swarm intelligence and data mining using different intelligent approaches is the novelty of this edited volume. This volume comprises 11 chapters, including an introductory chapter giving the fundamental definitions and some important research challenges. Chapters were selected on the basis of fundamental ideas/concepts rather than the thoroughness of techniques deployed. (…) (more)
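To make the synopsis above a bit more concrete, here is a minimal Python sketch (mine, not from the book) of the canonical PSO velocity/position update it alludes to; the swarm size, the inertia and attraction coefficients (w, c1, c2) and the toy objective are all illustrative assumptions:

import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Random initial positions in [-5, 5]; velocities start at zero.
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's own best position
    gbest = min(pbest, key=f)[:]             # best position found by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + attraction to own best + attraction to swarm best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)     # toy objective, minimum at the origin
print(pso(sphere))                           # prints a point close to [0, 0]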

Video – TED lecture: Empathy, cooperation, fairness and reciprocity – caring about the well-being of others seems like a very human trait. But Frans de Waal shares some surprising videos of behavioural tests, on primates and other mammals, that show how many of these moral traits we all share. (TED, Nov. 2011, link).

Evolutionary explanations are built around the principle that all that natural selection can work with are the effects of behaviour – not the motivation behind it. This means there is only one logical starting point for evolutionary accounts, as explained by Trivers (2002, p. 6): “You begin with the effect of behaviour on actors and recipients; you deal with the problem of internal motivation, which is a secondary problem, afterwards. . . . [I]f you start with motivation, you have given up the evolutionary analysis at the outset.” ~ Frans B.M. de Waal, 2008.

Do animals have morals? And above all, did morality evolve? The question is pertinent in a broad range of quite different areas, such as Computer Science and Norm Generation (e.g. link for an MSc thesis) in bio-inspired Computation and Artificial Life, but here fresh answers come directly from Biology. Besides the striking video lecture above, what follows are two excerpts (abstract and conclusions) from a 2008 paper by Frans B.M. de Waal (Living Links Center lab, Emory University, link): de Waal, F.B.M. (2008). Putting the altruism back in altruism: The evolution of empathy. Ann. Rev. Psychol. 59: 279-300 (full PDF link):

(…) Abstract: Evolutionary theory postulates that altruistic behaviour evolved for the return-benefits it bears the performer. For return-benefits to play a motivational role, however, they need to be experienced by the organism. Motivational analyses should restrict themselves, therefore, to the altruistic impulse and its knowable consequences. Empathy is an ideal candidate mechanism to underlie so-called directed altruism, i.e., altruism in response to another’s pain, need, or distress. Evidence is accumulating that this mechanism is phylogenetically ancient, probably as old as mammals and birds. Perception of the emotional state of another automatically activates shared representations causing a matching emotional state in the observer. With increasing cognition, state-matching evolved into more complex forms, including concern for the other and perspective-taking. Empathy-induced altruism derives its strength from the emotional stake it offers the self in the other’s welfare. The dynamics of the empathy mechanism agree with predictions from kin selection and reciprocal altruism theory. (…)

(…) Conclusion: More than three decades ago, biologists deliberately removed the altruism from altruism. There is now increasing evidence that the brain is hardwired for social connection, and that the same empathy mechanism proposed to underlie human altruism (Batson 1991) may underlie the directed altruism of other animals. Empathy could well provide the main motivation making individuals who have exchanged benefits in the past to continue doing so in the future. Instead of assuming learned expectations or calculations about future benefits, this approach emphasizes a spontaneous altruistic impulse and a mediating role of the emotions. It is summarized in the five conclusions below: 1. An evolutionarily parsimonious account (cf. de Waal 1999) of directed altruism assumes similar motivational processes in humans and other animals. 2. Empathy, broadly defined, is a phylogenetically ancient capacity. 3. Without the emotional engagement brought about by empathy, it is unclear what could motivate the extremely costly helping behavior occasionally observed in social animals. 4. Consistent with kin selection and reciprocal altruism theory, empathy favours familiar individuals and previous cooperators, and is biased against previous defectors. 5. Combined with perspective-taking abilities, empathy’s motivational autonomy opens the door to intentionally altruistic altruism in a few large-brained species. (…) in, de Waal, F.B.M. (2008). Putting the altruism back in altruism: The evolution of empathy. Ann. Rev. Psychol. 59: 279-300 (full PDF link).

Frans de Waal’s research work does not end here, of course. He is a ubiquitous influence and writer across many related areas, such as: Cognition, Communication, Crowding/Conflict Resolution, Empathy and Altruism, Social Learning and Culture, Sharing and Cooperation and, last but not least, Behavioural Economics. All of his papers are freely available on-line, on a web page to which I vividly recommend a long visit.

Complex adaptive systems (CAS), including ecosystems, governments, biological cells, and markets, are characterized by intricate hierarchical arrangements of boundaries and signals. In ecosystems, for example, niches act as semi-permeable boundaries, and smells and visual patterns serve as signals; governments have departmental hierarchies with memoranda acting as signals; and so it is with other CAS. Despite a wealth of data and descriptions concerning different CAS, there remain many unanswered questions about “steering” these systems. In Signals and Boundaries, John Holland (Wikipedia entry) argues that understanding the origin of the intricate signal/border hierarchies of these systems is the key to answering such questions. He develops an overarching framework for comparing and steering CAS through the mechanisms that generate their signal/boundary hierarchies. Holland lays out a path for developing the framework that emphasizes agents, niches, theory, and mathematical models. He discusses, among other topics, theory construction; signal-processing agents; networks as representations of signal/boundary interaction; adaptation; recombination and reproduction; the use of tagged urn models (adapted from elementary probability theory) to represent boundary hierarchies; finitely generated systems as a way to tie the models examined into a single framework; the framework itself, illustrated by a simple finitely generated version of the development of a multi-celled organism; and Markov processes.

in, Introduction to John H. Holland, “Signals and Boundaries – Building Blocks for Complex Adaptive Systems”, Cambridge, Mass.: MIT Press, 2012.

Figure – Brain wave patterns (gamma waves above 40 Hz). Gamma waves – 40 Hz and above – are used for higher mental activity, such as problem solving, consciousness and fear. Beta waves – 13-39 Hz – active thinking, active concentration, paranoia, cognition and arousal. Alpha waves – 7-13 Hz – pre-sleep and pre-wake drowsiness, and relaxation. Theta waves – 4-7 Hz – deep meditation, relaxation, dreams and rapid eye movement (REM) sleep. Delta waves – 4 Hz and below – loss of body awareness and deep dreamless sleep (source: Medical School, link).
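As a side note, the band boundaries quoted in the caption can be written down as a small lookup table; a Python sketch (the exact cut-off frequencies vary across sources, so treat these ranges as the caption’s, not as canonical):

# Approximate EEG bands as listed in the caption above (boundaries vary by source).
BANDS = [("delta", 0, 4), ("theta", 4, 7), ("alpha", 7, 13),
         ("beta", 13, 40), ("gamma", 40, float("inf"))]

def band(freq_hz):
    # Return the EEG band name for a non-negative frequency in Hz.
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    raise ValueError("frequency must be non-negative")

print(band(10))   # alpha (relaxation)
print(band(45))   # gamma (higher mental activity)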

Photo – Rover’s Self-Portrait (link): this Picasso-like self-portrait of NASA’s Curiosity rover was taken by its Navigation cameras, located on the now-upright mast. The cameras snapped pictures 360 degrees around the rover while pointing down at the rover deck, up, and straight ahead. Those images are shown here in a polar projection. Most of the tiles are thumbnails, or small copies of the full-resolution images that have not been sent back to Earth yet. Two of the tiles are full-resolution. Image credit: NASA/JPL-Caltech (August 9, 2012). [6000 x 4500 full size link].

Figure (click to enlarge) – Applying P(0)=0.6; r=4; N=100000; for(n=0;n<=N;n++) { P(n+1)=r*P(n)*(1-P(n)); } – Robert May’s population dynamics equation [1974-76] (do check on logistic maps) for several iterations (generations). After 780 iterations, P is attracted to 1 (maximum population), and then suddenly, over the next generations, that very same population is almost extinguished.
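The caption’s pseudocode translates almost line-for-line into Python. A minimal sketch (mine) follows; it also tracks a second trajectory started 1e-9 away, illustrating the sensitivity to initial conditions May discusses in the excerpt below. Note that in the chaotic r = 4 regime floating-point rounding matters, so the near-1 spike reported around iteration 780 may land elsewhere on another machine:

# Logistic map: P(n+1) = r * P(n) * (1 - P(n)); r = 4 is the fully chaotic regime.
r = 4.0
p, q = 0.6, 0.6 + 1e-9          # two almost identical initial populations
for n in range(1, 101):
    p = r * p * (1 - p)
    q = r * q * (1 - q)
    if n % 20 == 0:
        # The gap grows roughly exponentially until the trajectories decouple.
        print(n, round(p, 6), round(q, 6), abs(p - q))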

“Not only in research, but also in the everyday world of politics and economics, we would all be better off if more people realised that simple non-linear systems do not necessarily possess simple dynamical properties.” ~ Robert M. May, “Simple Mathematical models with very complicated Dynamics”, Nature, Vol. 261, p. 459, June 10, 1976.

(…) The fact that the simple and deterministic equation (1) can possess dynamical trajectories which look like some sort of random noise has disturbing practical implications. It means, for example, that apparently erratic fluctuations in the census data for an animal population need not necessarily betoken either the vagaries of an unpredictable environment or sampling errors: they may simply derive from a rigidly deterministic population growth relationship such as equation (1). This point is discussed more fully and carefully elsewhere [1]. Alternatively, it may be observed that in the chaotic regime arbitrarily close initial conditions can lead to trajectories which, after a sufficiently long time, diverge widely. This means that, even if we have a simple model in which all the parameters are determined exactly, long term prediction is nevertheless impossible. In a meteorological context, Lorenz [15] has called this general phenomenon the “butterfly effect“: even if the atmosphere could be described by a deterministic model in which all parameters were known, the fluttering of a butterfly’s wings could alter the initial conditions, and thus (in the chaotic regime) alter the long term prediction. Fluid turbulence provides a classic example where, as a parameter (the Reynolds number) is tuned in a set of deterministic equations (the Navier-Stokes equations), the motion can undergo an abrupt transition from some stable configuration (for example, laminar flow) into an apparently stochastic, chaotic regime. Various models, based on the Navier-Stokes differential equations, have been proposed as mathematical metaphors for this process [15,40,41] . In a recent review of the theory of turbulence, Martin [42] has observed that the one-dimensional difference equation (1) may be useful in this context. Compared with the earlier models [15,40,41] it has the disadvantage of being even more abstractly metaphorical, and the advantage of having a spectrum of dynamical behaviour which is more richly complicated yet more amenable to analytical investigation. A more down-to-earth application is possible in the use of equation (1) to fit data [1,2,3,38,39,43] on biological populations with discrete, non-overlapping generations, as is the case for many temperate zone arthropods. (…) in pp. 13-14, Robert M. May, “Simple Mathematical models with very complicated Dynamics“, Nature, Vol. 261, p.459, June 10, 1976 [PDF link].

Video Documentary – Code Rush (www.clickmovement.org/coderush), produced in 2000 and broadcast on PBS, is an inside look at living and working in Silicon Valley at the height of the dot-com era. The film follows a group of Netscape engineers as they pursue what was, at the time, a revolutionary venture to save their company – giving away the software recipe for Netscape’s browser in exchange for integrating improvements created by outside software developers.

“(…) code (…) Why is it important for the world? Because it’s the blood of the organism that is our culture, now. It’s what makes everything go.”, Jamie Zawinski, Code Rush, 2000.

The year is early 1998, at the height of the dot-com era, and a small team of Netscape code writers frantically works to reconstruct the company’s Internet browser. In doing so, they will rewrite the rules of software development by giving away the recipe for their browser in exchange for integrating improvements created by outside, unpaid developers. The fate of the entire company may well rest on their shoulders. Broadcast on PBS, the film captures the human and technological dramas that unfold in the collision between science, engineering, code, and commerce.

Video – Water has Memory (from Oasis HD, Canada; link): just a liquid or much more? Many researchers are convinced that water is capable of “memory” by storing information and retrieving it. The possible applications are innumerable: limitless retention and storage capacity and the key to discovering the origins of life on our planet. Research into water is just beginning.

Water capable of processing information, as well as serving as a huge potential “container” for data media – that would be something remarkable. This theory was first proposed by the late French immunologist Jacques Benveniste, in a controversial article published in 1988 in Nature, as a way of explaining how homeopathy works (link). Benveniste’s theory has continued to be championed by some and disputed by others. The video clip above, from the Oasis HD Channel, shows some fascinating recent experiments with water “memory” from the Aerospace Institute of the University of Stuttgart in Germany. The results with the different types of flowers immersed in water are particularly evocative.

This line of research also reminds me of an old and quite interesting paper by a colleague, Chrisantha Fernando. Together with Sampsa Sojakka, he showed that waves produced on the surface of water can be used as the medium for a Wolfgang Maass “Liquid State Machine” (link) that pre-processes inputs, so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Amazingly, water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. What follows is the abstract of their paper, entitled “Pattern Recognition in a Bucket”, as well as a PDF link to it:

Figure – Typical wave patterns for the XOR task. Top-Left: [0 1] (right motor on), Top-Right: [1 0] (left motor on), Bottom-Left: [1 1] (both motors on), Bottom-Right: [0 0] (still water). Sobel-filtered and thresholded images on the right. (From Fig. 3 in Chrisantha Fernando and Sampsa Sojakka, “Pattern Recognition in a Bucket”, ECAL proc., European Conference on Artificial Life, 2003.)

[…] Abstract. This paper demonstrates that the waves produced on the surface of water can be used as the medium for a “Liquid State Machine” that pre-processes inputs so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Interference between waves allows non-linear parallel computation upon simultaneous sensory inputs. Temporal patterns of stimulation are converted to spatial patterns of water waves upon which a linear discrimination can be made. Whereas Wolfgang Maass’ Liquid State Machine requires fine tuning of the spiking neural network parameters, water has inherent self-organising properties such as strong local interactions, time-dependent spread of activation to distant areas, inherent stability to a wide variety of inputs, and high complexity. Water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. An analogy is made between water molecules and neurons in a recurrent neural network. […] in Chrisantha Fernando and Sampsa Sojakka, “Pattern Recognition in a Bucket”, ECAL proc., European Conference on Artificial Life, 2003. [PDF link]
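Water itself is hard to reproduce on a page, but the computational idea the paper exploits – a fixed, random, nonlinear “liquid” that expands inputs so a simple linear readout can separate them – fits in a few lines. This is a generic reservoir-style sketch of mine, under my own assumptions, not the authors’ setup:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])                  # XOR: not linearly separable in X itself

# Fixed random "liquid": a nonlinear projection standing in for the water waves.
W = rng.normal(size=(2, 50))
b = rng.normal(size=50)
H = np.tanh(X @ W + b)                      # wave-like features of the four inputs

# Linear readout (the "simple perceptron"): least squares on the liquid features.
F = np.c_[H, np.ones(4)]                    # add a bias column
w_out, *_ = np.linalg.lstsq(F, y, rcond=None)
print((F @ w_out > 0.5).astype(int))        # [0 1 1 0] - XOR solved by a linear readout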

Fig. – (Organizational complexity through history) Four forms behind the organization and evolution of all societies (David Ronfeldt’s TIMN). Each form also seems to be triggered by major societal changes in communications and language: oral speech enabled tribes (T), the written word enabled institutions (I), the printed word fostered regional and global markets (M), and the electric (digital) word is empowering worldwide networks (N). [in David Ronfeldt, “Tribes, Institutions, Markets, Networks: A Framework About Societal Evolution”, RAND Corporation, Document Number: P-7967, (1996). PDF link]

[…] Organizational complexity is defined as the amount of differentiation that exists within different elements constituting the organization. This is often operationalized as the number of different professional specializations that exist within the organization. For example, a school would be considered a less complex organization than a hospital, since a hospital requires a large diversity of professional specialties in order to function. Organizational complexity can also be observed via differentiation in structure, authority and locus of control, and attributes of personnel, products, and technologies. Contingency theory states that an organization structures itself and behaves in a particular manner as an attempt to fit with its environment. Thus organizations are more or less complex as a reaction to environmental complexity. An organization’s environment may be complex because it is turbulent, hostile, diverse, technologically complex, or restrictive. An organization may also be complex as a result of the complexity of its underlying technological core. For example, a nuclear power plant is likely to have a more complex organization than a standard power plant because the underlying technology is more difficult to understand and control. There are numerous consequences of environmental and organizational complexity. Organizational members, faced with overwhelming and/or complex decisions, omit, tolerate errors, queue, filter, abstract, use multiple channels, escape, and chunk in order to deal effectively with the complexity. At an organizational level, an organization will respond to complexity by building barriers around its technical core; by smoothing input and output transactions; by planning and predicting; by segmenting itself and/or becoming decentralized; and by adopting rules.
Complexity science offers a broader view of organizational complexity – it maintains that all organizations are relatively complex, and that complex behavior is not necessarily the result of complex action on the behalf of a single individual’s effort; rather, complex behavior of the whole can be the result of loosely coupled organizational members behaving in simple ways, acting on local information. Complexity science posits that most organizational behavior is the result of numerous events occurring over extended periods of time, rather than the result of some smaller number of critical incidents. […] in Dooley, K. (2002), “Organizational Complexity,” International Encyclopedia of Business and Management, M. Warner (ed.), London: Thompson Learning, p. 5013-5022. (PDF link)

The Internet has given us a glimpse of the power of networks. We are just beginning to realize how we can use networks as our primary form of living and working. David Ronfeldt has developed the TIMN framework to explain this – Tribal (T); Institutional (I); Markets (M); Networks (N). The TIMN framework shows how we have evolved as a civilization. It has not been a clean progression from one organizing mode to the next; rather, each new form built upon and changed the previous mode. He sees the network form not as a modifier of previous forms, but as a form in itself that can address issues the three other forms could not. This point is very important when it comes to things like implementing social business (a network mode) within corporations (institutional + market modes). Real network models (e.g. wirearchy) are new modes, not modifications of the old ones.

Another key point of this framework is that Tribes exist within Institutions, Markets and Networks. We never lose our affinity for community groups or family, but each mode brings new factors that influence our previous modes. For example, tribalism is alive and well in online social networks. It’s just not the same tribalism of several hundred years ago. Each transition also has its hazards. For instance, while tribal societies may result in nepotism, networked societies can lead to deception. Ronfeldt states that the initial tribal form informs the other modes and can have a profound influence as they evolve:

Balanced combination is apparently imperative: Each form (and its realm) builds on its predecessor(s). In the progression from T through T+I+M+N, the rise of a new form depends on the successes (and failures) achieved through the earlier forms. For a society to progress optimally through the addition of new forms, no single form should be allowed to dominate any other, and none should be suppressed or eliminated. A society’s potential to function well at a given stage, and to evolve to a higher level of complexity, depends on its ability to integrate these inherently contradictory forms into a well-functioning whole. A society can constrain its prospects for evolutionary growth by elevating a single form to primacy — as appears to be a tendency at times in market-mad America. [in David Ronfeldt, “Tribes, Institutions, Markets, Networks: A framework about Societal Evolution“, RAND Corporation, Document Number: P-7967, (1996). PDF link]

Finally, on these areas (far beyond the strict topic of organizational topology and complex networks), let me add two books. One is from José Fonseca, a friend and researcher I first met in 2001, during a joint interview for the Portuguese Idéias & Negócios magazine on its 5th anniversary (old link), embracing innovation in Portugal. His book, entitled “Complexity & Innovation in Organizations” (above), was published in December of that year, 2001, by Routledge. The other is more recent, from Ralph Stacey: “Complexity and Organizational Reality: Uncertainty and the Need to Rethink Management After the Collapse of Investment Capitalism” (below), Routledge, 2010 – even if Ralph has many other past seminal books on this topic. Both have worked together at the University of Hertfordshire.


Figure – Paul Klee painted this work in 1930-1931. He entitled it “Super Schach” (Super Chess). In his own way, he was a remarkable visionary. It’s all about patterns, and relations between them, frequently perceived in a few milliseconds. While playing, sometimes, I see the board almost like this: as if only a few pieces were there, and the important squares on the board at that precise moment were painted in vivid blue. Tilting toward us in a special manner. Calling us, like a pointillist painting does.

“The problem for me is the playoff will begin at 3AM my time. So you all better send lots of coffee which I don’t drink :)” – @brandnewAMIT: “have red bull instead, it’ll give you wings :)” – “I’m probably the most boring GM ever. I don’t smoke or drink, not even red bull. Always try to eat healthy food and work out.” ~ Susan Polgar over Twitter on May 28, 2012. (link)

“Whoever denies the high physical effort of a tournament player doesn’t know what he’s talking about. Many examinations prove that heart, frequency of breathing, blood pressure and skin are subjected to great strain, weight losses appear during a tournament – so chess players need a special way of life with regular training, practice of other keep-fit activities and healthy diet.”, Dr Willi Weyer, speech on the 100th anniversary of the German chess federation in Bad Lauterberg on 12 March 1977.

[…] Legend has it that, seconds before reaching a curve, Juan Manuel Fangio would cast a fleeting glance at the leaves of the trees. If they moved, he lifted his foot off the accelerator; if, on the contrary, there was no wind, he floored it. […], in Ángel Luis Menéndez, “Los abuelos de Alonso”, Público.es (link)

In a few hours, today, one of the most dramatic, high-tension “F1 car races” ever will start. Though it’s not only about sport: it will be about science, art, drift, spatial aesthetics and psychology as well… all taken to their very extreme. And, at the limit, it will end up being about how two very different people and human characters behave while confronting each other through puzzling millisecond brainwaves. While the hot race is on, I will be watching the board carefully, of course, but mainly – let me add – their faces. It’s their human side during an intense battle that ultimately interests me, and always draws my focal attention.

For those who are not used to dealing with immeasurable pressure and stress while the clock is ticking fast, or who never educated themselves to perform with “grace under fire” – something unique in an athlete, far beyond technical know-how – it will probably be hard to understand, among other things, why F1 and chess have so many things in common and are, in fact, so close to each other. Breathing is fundamental in many sports. In chess, it’s crucial.

It may be hard to imagine, for instance, that a regular chess player under enormous stress can lose up to 4 kg in one single important game, where everything is at stake. Not counting the exponentially increasing adrenaline levels he must endure (e.g. 772% in the figure below), sometimes for long periods of time – contrary to other, surprise yourself, “soft” sports. It’s brutal: “(…) It’s chess. Many don’t think of it as a sport, because nobody moves. But Chess Masters will tell you it can be more brutal than boxing (…)”, in CBS 60 Minutes, “Mozart of Chess” (YouTube link), the CBS News piece on Magnus Carlsen, last year.

Photo – Anand looking pensively at Gelfand at the end of the drawn 12th game, which sent the match to the final tiebreaks. [Allow me to add one of his possible thoughts: “It’s all about quick, risky moves now, Boris. No more chicken play-to-draw games.” (Source: http://moscow2012.fide.com/en/ )]

The apparently illogical link between chess, F1 drivers and F-16 jet pilots at war does not end here, however. Unfortunately, it also comes from the dark side. Increasing rumors state that all these areas might be related by the use of PEDs, by some cheaters. The answer, again, comes from what these areas of human activity desperately need: fast strategic and tactical responses, as well as the imagination to surprise the adversary, while maintaining extreme accuracy. All three characteristics tied together – think of it… that’s something very hard and uncommonly rare to find in us, humans. For curiosity, just have a look at what a UK steroids company states:

(…) There have been reports of performance enhancing drugs being used in the game of chess. Now anabolic steroids and a ripped physique will not increase your mental capacity, but some drugs can be used to control blood pressure and mediate heart beat, allowing a more controlled and balanced state of mind. Testing in Formula One has even been taken to a new level. Since F1 brought its drug testing standards up to the level required by the World Doping Agency, drug testing has been more thorough and more frequent. (…) in (link).

Performance-enhancing drugs (PEDs) have indeed been reported, mainly in the form of beta-blockers. Beta-blockers are known to slow the heart rate, as well as adrenaline, while keeping the other brain functions on, and quick. In F1, on F-16 fighters, in chess, as well as in science, and in some of our regular daily digital software life (yes, Nasdaq high-frequency trading systems are targets now), digital doping is also possible. Injecting performance-enhancing code unfortunately seems to be a current trend. Just recently I witnessed this myself in an online Iterated Prisoner’s Dilemma contest.
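For readers who never entered one, here is a minimal sketch of the loop an Iterated Prisoner’s Dilemma tournament runs, with the standard payoff values; the two strategies are illustrative stand-ins of mine, not the contest’s actual entrants:

# Standard IPD payoffs per round: (my score, opponent score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    return opp_hist[-1] if opp_hist else "C"     # cooperate first, then mirror

def always_defect(my_hist, opp_hist):
    return "D"

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1); h2.append(m2)
    return score1, score2

print(play(tit_for_tat, always_defect))          # (199, 204): TFT loses only round 1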

Allow me, however, to drag you onto the healthy, positive side and draw your attention to an MSc thesis entitled “Practical Recommendations to Chess Players from Sports Science” (PDF link), which I have recommended to friends for many years now. From coffee (pp. 9) to beta-blockers (pp. 13), Kevin O’Connell (University of Essex; MSc Sports Science dissertation, 1997) discusses several important issues. Here are some brief excerpts from his thesis:

(…) In connection with which I find it interesting to recall remarks made to me by a couple of chess players, one who used to play as a striker for the Norwegian national soccer team before being forcibly retired by a cruciate ligament injury and by a Chilean tennis professional who made it into the world’s top 50 as a tennis player, that chess is a harder sport, physically, than either of their other occupations. (…) Andreassi (1995) reported evidence that brain activity is influenced by cardiac events, “for example, the decreased HR that occurs under instructions to detect signals leads to a decrease in the inhibitory influence of baroreceptors on cortical function, resulting in enhanced brain activity and improved performance.” (…)

(…) It is clear that fatigue is a major contributory cause of error in chess and that two of the five main metabolic causes of human fatigue (Newsholme, 1995) are potentially relevant. These are the decrease of blood glucose concentration and an increase in the concentration ratio of the free tryptophan to branched chain amino acids in the bloodstream. (…) In brief, the central fatigue hypothesis (after Newsholme, 1995) runs as follows. During exercise there is an elevation in the blood adrenaline level and a decrease in that of insulin which results in fatty acid mobilization from adipose tissue, consequently increasing the level of fatty acids in plasma (formula one racing drivers, who experience similar adrenaline levels to chess players, have been noted for their ‘milky’ plasma). (…)

Figure – (…) The heart rates measured by Hollinsky’s team included peaks in excess of 220/min and a single maximum of 223/min. Table 2 shows the HR and blood pressure graph for a player 27 years old and rated 2064 over the course of one game, from six p.m. until its conclusion just after midnight. Not surprisingly, at least to chess players, the peak HR is reached in the time-pressure phase towards the end of the sixth hour of play (…) (from K. O’Connell’s MSc thesis, link above).

Today, however, at the Tretyakov State Art Gallery in Moscow (link), all this will happen quite fast, in mere seconds, while the whole world watches live. The reigning champion Viswanathan Anand (defending his title) will have to fight it out in a rapid chess tiebreaker against challenger Boris Gelfand, after a tied 6-6 result in the World Chess Championship match. Both now arrive at a situation where the match cannot be prolonged much further. To start with, there will be four games under rapid chess rules, with 25 minutes for each player and a ten-second increment after every move. In case of a 2-2 result, the two will play two blitz games with five minutes each and a three-second increment per move.

Between them, Anand and Gelfand have in the past played 28 times in rapid chess; the Indian has won eight, lost one and drawn the rest. In blitz, they have played seven games, with three wins for Anand and the rest drawn. Today there will be up to five such blitz matches if the tie persists, and finally an Armageddon game will be played, with five minutes for White and four for Black, White being forced to win should this arise. The whole fast race can be followed live at http://moscow2012.fide.com/en/ while the board, analysis, chat, etc. are here at http://livechess.chessdom.com/site/ . Games will start at 10:00 AM CET. I would guess both players are having their “beauty sleep” right now.

Today, the world chess champion will be known. It’s about all of us, humans, taken to our creative limits. Like Fangio, even when the pressure is considerably high, always take a look at the trees surrounding you. If you blink, you will just miss it.

Give yourself an exercise. Have a glimpse again at Klee’s Super Schach. Starting from the bottom-left corner, count 6 squares to the right. Then look precisely above, at the second row. What’s that?! See it?!

“Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” ~ The Red Queen, in “Through the Looking-Glass, and What Alice Found There”, Charles Lutwidge Dodgson, 1871.

Your move. Alice suddenly found herself in a strange new world, and quickly needed to adapt. As C.L. Dodgson (better known as Lewis Carroll) brilliantly puts it, all the running you can do does not suffice at all. This is a world (Wonderland) with different “physical” laws or “societal norms”. Surprisingly, those patterns also appear quite familiar to us, here, on planet Earth. As an example, the quote above is essentially the paradigm for biological co-evolution, in the form of the Red Queen effect.

In Wonderland (the first book), Alice follows the White Rabbit, which ends up leading her into this strange habitat, where apparently “normal” “physical” laws do not apply. In this second book, however, Alice needs to overcome a series of great obstacles – structured as phases in a game of chess – in order to become a queen. As she moves on, several other enigmatic characters appear. Punctuated as well as surrounded by circular arguments and logical paradoxes, Alice must keep on, in order to find the “other side of the mirror”.

There are other funny parallel moves, too. The story goes that Lewis Carroll gave a copy of “Alice in Wonderland” to Queen Victoria, who then asked him to send her his next book, as she had fancied the first one. The joke is that the next book was (!) … “An Elementary Treatise on Determinants, With Their Application to Simultaneous Linear Equations and Algebraic Equations” (link). Lewis Carroll then went on to write a follow-up to Alice in Wonderland entitled “Through the Looking-Glass, and What Alice Found There”, which features a chess board in its first pages, and where chess was used to give her, Queen Victoria, a glimpse of what Alice explored in this new world.

In fact, the diagram on the first pages contains not only the novel’s entire chapter structure but also how Alice moves on: basically, each chess move takes the reader to a new chapter (see below) representing it. The entire book can be found here in PDF format. Besides the beauty and philosophical value of Dodgson’s novel in itself, and its repercussions as a metaphor in today’s co-evolution research, this is quite probably the first “chess-literature” diagram ever composed. Now, of course, the pieces are not white and black, but white and red (note that the pieces on c1 – queen – and c6 – king – are white). Lewis Carroll’s novel then goes on like this: White pawn (Alice) to play, and win in eleven moves.

However, in order to enter this world you must follow its “rules”. “Chess” here is not normal, just as Wonderland was not normal to Alice’s eyes. Remember: if you do all the running you can do, you will find yourself in the same place. Better if you can run twice as fast! Lewis Carroll’s first words in his second book (in the preface; PDF link above) advise us:

(…) As the chess-problem, given on a previous page, has puzzled some of my readers, it may be well to explain that it is correctly worked out, so far as the moves are concerned. The alternation of Red and White is perhaps not so strictly observed as it might be, and the ‘castling’ of the three Queens is merely a way of saying that they entered the palace; but the ‘check’ of the White King at move 6, the capture of the Red Knight at move 7, and the final ‘check-mate’ of the Red King, will be found, by any one who will take the trouble to set the pieces and play the moves as directed, to be strictly in accordance with the laws of the game. (…) Lewis Carroll, Christmas, 1896.

Now, the solution could be delivered in various languages, but here is one I prefer: it was encoded in classic BASIC, running on a ZX Spectrum emulator. Here is an excerpt:

1750 LET t$="11. Alice takes Red Queen & wins(checkmate)": GO SUB 7000 (…)
9000 REM
9001 REM ** ZX SPECTRUM MANUAL Page 96 Chapter 14. **
9002 REM
9004 RESTORE 9000 (…)
9006 LET b=BIN 01111100: LET c=BIN 00111000: LET d=BIN 00010000
9010 FOR n=1 TO 6: READ p$: REM 6 pieces
9020 FOR f=0 TO 7: REM read piece into 8 bytes
9030 READ a: POKE USR p$+f,a
9040 NEXT f
9100 REM bishop
9110 DATA "b",0,d,BIN 00101000,BIN 01000100
9120 DATA BIN 01101100,c,b,0
9130 REM king
9140 DATA "k",0,d,c,d
9150 DATA c,BIN 01000100,c,0
9160 REM rook
9170 DATA "r",0,BIN 01010100,b,c
9180 DATA c,b,b,0
9190 REM queen
9200 DATA "q",0,BIN 01010100,BIN 00101000,d
9210 DATA BIN 01101100,b,b,0
9220 REM pawn
9230 DATA "p",0,0,d,c
9240 DATA c,d,b,0
9250 REM knight
9260 DATA "n",0,d,c,BIN 01111000
9270 DATA BIN 00011000,c,b,0

(…) full code on [link]

This is BASIC-ally Alice’s story …

Fig. – A Symbolical Head (phrenological chart) illustrating the natural language of the faculties. At the Society pages / Economic Sociology web page.

You have quite probably noticed by now how Scoop.it is emerging as a powerful platform for those collecting interesting research papers. There are several good examples, but let me stress one entitled “Bounded Rationality and Beyond” (scoop.it web page), curated by Alessandro Cerboni (blog). On a difficult research theme, Alessandro is doing a great job collecting nice essays and wonderful articles whenever he finds them. One of those articles I really appreciated was John Conlisk‘s “Why Bounded Rationality?“, delivering several important clues to the field, for those who (like me) work in the area. What follows is an excerpt from the article, as well as part of his introductory section. The full (PDF) paper can be retrieved here:

In this survey, four reasons are given for incorporating bounded rationality in economic models. First, there is abundant empirical evidence that it is important. Second, models of bounded rationality have proved themselves in a wide range of impressive work. Third, the standard justifications for assuming unbounded rationality are unconvincing; their logic cuts both ways. Fourth, deliberation about an economic decision is a costly activity, and good economics requires that we entertain all costs. These four reasons, or categories of reasons, are developed in the following four sections. Deliberation cost will be a recurring theme.

Why bounded rationality? In four words (one for each section above): evidence, success, methodology, and scarcity. In more words: Psychology and economics provide wide-ranging evidence that bounded rationality is important (Section I). Economists who include bounds on rationality in their models have excellent success in describing economic behavior beyond the coverage of standard theory (Section II). The traditional appeals to economic methodology cut both ways; the conditions of a particular context may favor either bounded or unbounded rationality (Section III). Models of bounded rationality adhere to a fundamental tenet of economics, respect for scarcity. Human cognition, as a scarce resource, should be treated as such (Section IV). The survey stresses throughout that an appropriate rationality assumption is not something to decide once for all contexts. In principle, we might suppose there is an encompassing single theory which takes various forms of bounded and unbounded rationality as special cases. As with other model ingredients, however, we in practice want to work directly with the most convenient special case which does justice to the context. The evidence and models surveyed suggest that a sensible rationality assumption will vary by context, depending on such conditions as deliberation cost, complexity, incentives, experience, and market discipline. Beyond the four reasons given, there is one more reason for studying bounded rationality. It is simply a fascinating thing to do. We can mix some Puck with our Hamlet.

NOTE – What follows (full content and graphics) is a current OPEN LETTER FOR SCIENCE IN SPAIN. This open letter is the result of a consensus between the Confederation of Spanish Scientific Societies, Comisiones Obreras (I+D+i), the Federation of Young Researchers and the grassroots Investigación Digna. It will be delivered, together with the names of the signatories, to the Spanish Prime Minister and the members of the Spanish Congress and Senate. What is going on is unfortunately not different in countries like Italy, Portugal, Ireland or Greece (second graphic). Do check the Investigación Digna site for the original in Spanish:

OPEN LETTER FOR SPANISH SCIENCE

In the next few weeks, and contravening the recommendation from the European Commission stating that public deficit control measures should not affect Research and Development (R&D) and innovation, the Spanish Government and Parliament could approve a State Budget that would cause considerable long-term damage to the already weakened Spanish research system, contributing to its collapse. This would imply the maintenance of an obsolete economic model that is not competitive and is especially vulnerable to all kinds of economic and political contingencies. Given the above, we ask the political representatives:

– To avoid a new reduction of the investment in R&D and innovation. In the last few years, the investment in R&D (chapter 46 of the State Budget) has suffered a cut of 4.2% in 2010 and 7.38% in 2011; for 2012, a further 8.65% cut is being considered (where the percentages refer to the cut with respect to the previous year). If the budget cut for 2012 is ratified, during those years the Public Research Organisms would have suffered an accumulated 30% reduction of the resources coming from the State Budget. Investment in R&D was 1.39% of GDP in 2010 and it is estimated that in 2011 it was less than 1.35%. In the mid-term, it should reach the mean EU-27 value of 2.3% and converge toward the European Council goal of 3%.

– To include R&D and innovation among the “priority sectors” allowing hiring in public research organisms, universities and technological centers during the fiscal year 2012. This will avoid a “brain drain” that would take decades to reverse.

“The Spanish production model (…) is exhausted; it is therefore necessary to promote a change through investment in research and innovation as a way to achieve a knowledge-based economy that guarantees more balanced, diversified and sustainable growth.” These words, extracted from the Preamble of the Law of Science, Technology and Innovation, were approved in May 2011 by 99% of the members of the Spanish Congress and Senate, constituting a tacit National Agreement on the need to prioritize R&D and innovation. The diagnosis is unequivocal and the solution has been identified. What is missing is for political leaders to rise to their responsibilities by fulfilling this commitment. The approval of the 2012 budget by the Spanish Government and Parliament in the next few weeks is the moment to demonstrate that commitment.

The budget cuts currently being considered for R&D and innovation would cause grave long-term damage to the already weakened Spanish research system, both to its infrastructure and its human resources. This would imply a loss in competitiveness, as has been recognized by the European Council. In the March 2, 2012 memorandum, the “European Council confirms research and innovation as drivers of growth and jobs (…). EU Heads of State and Government have today stressed (…) that Europe’s growth strategy and its comprehensive response to the challenges it is facing (…) requires the boost of innovation, research and development, (…) since they are a vital component of Europe’s future competitiveness and growth.” (MEMO/12/153). Given the above, we urge the Spanish political leaders to take the following considerations into account.

HUMAN RESOURCES IN R&D

The Royal Decree-Law 20/2011 of urgent measures to correct the public deficit (BOE-A-2011-20638, Dec. 31st, 2011, Art. 3) establishes that “the hiring of personnel (…) will be restricted to sectors considered to be a priority”. It also says that during the year 2012, none of the permanent positions left vacant by retirees will be filled, except in sectors considered to be a priority.

The preamble of the Law of Science, Technology and Innovation cited above establishes that R&D and innovation are a priority. Therefore, the Royal Decree-Law 20/2011 allows the reactivation of public hiring in R&D, essential to strengthen research institutions. During the last few years, these institutions have suffered a drastic decrease in the number of new positions. For all public research organisms and the Spanish Research Council, and including all research levels (from laboratory personnel to research professors), the number of new positions amounted to a total of 681, 589, 106, 50 and 55 for the years 2007, 2008, 2009, 2010 and 2011, respectively. The Government’s intention is to have zero positions in 2012. The situation is unsustainable: overall, the permanent staff at the public research organisms has an average age of 50-55 years, reaching 58 years at the Spanish Research Council. The number of researchers on the permanent staff is shrinking at an accelerated rate because, over the last few years, the positions left open by retirements have not been filled. Meanwhile, the rest of the research staff is relegated, in the best scenario, to a concatenation of short-term contracts. The result is an important loss of competitiveness, because forming a research group and obtaining funding require a degree of stability that a great number of researchers at the peak of their scientific productivity do not enjoy, either inside or outside the civil service. In fact, it is urgent that the hiring system for researchers follows a more flexible model that allows the planning of human resources, indispensable to make strategic plans viable. Otherwise, the established goals will never be achieved and the abandonment of research lines will imply an important loss of investment. For example, CSIC, the largest public research organism, constituted by 133 centers, received during the years 2010 and 2011 less than 20% of the minimum personnel requirements established in its strategic plan (Plan de Actuación 2010-2014). The other public research organisms are in a similar situation, or even worse.

The lack of stability in the human resources policy of the Spanish R&D system damages its credibility and undermines its competitiveness. The “Ramón y Cajal Programme” is a good example (but it is not the only one). Nationwide, this program is the flagship of the Spanish research system in terms of human resources. It was established in 2001 with a commitment that is, and always has been, to offer the possibility of tenure to the researchers in the program who pass the two evaluations established within the 5-year trial period (during the second and fourth years): it is the Spanish “tenure track”. However, only 37% of the researchers from the 2006 call who have passed all evaluations have become tenured (compared to 90% from 2001). The rate is significantly smaller for researchers from the 2007 call, whose contracts will be finishing in the next few months. On average, researchers who have completed (or are about to complete) their contracts are 42 years old, have dedicated 17 years to research, lead their research activities, have extensive international experience and participate in a wide network of international collaborations. There are many other researchers with a similar profile in the same position. It is urgent that the Spanish research system fulfills the commitments of its current tenure track, and that it is modified to allow the planning of human resources that makes the tenure-track hiring model viable (the so-called access contract established by the Law of Science is far from being a tenure track).

The characteristics of scientific research require decades for the formation of a skilled workforce. Spain does not harbour an R&D private sector that can absorb and take advantage of highly qualified researchers. This human resource, which has been trained thanks to a considerable national investment and is best prepared to contribute to the shift to a knowledge-based economy, will have no choice but to emigrate or leave research altogether. The country faces a multi-generational “brain drain” (from researchers starting their PhDs to those in their mid-forties). Spain also risks undermining the interest in science of the younger generations (now children and teenagers). Within a few years, Spain may have no choice but to import scientists. It will only be able to do so with costly offers that can compete with those of science-leading countries, whose human resources policies will have much greater credibility. If Spain does not take urgent action to preserve its scientific workforce of the highest quality, the research system will take decades to recover, dragging down the desired shift to a knowledge-based economy.

INVESTMENT IN R&D

Investment in R&D needs to converge with the EU-27 average value and approach the 3% of GDP goal set by the European Council’s Lisbon Strategy. Investment in R&D was 1.39% of GDP in 2010 and it is estimated that in 2011 it was less than 1.35%. While the leading economies in the EU are near or above 2.5% (with three countries above 3%), the bailed-out countries, or those that have suffered political intervention, are well below 2.3% (the average investment in R&D in the EU-27). Coincidence? Evidently not: none of the economically healthy countries in the leading group of the EU has allowed itself to fall behind in R&D.

Investment in R&D must be stable and independent of political and economic cycles. The lack of stability, an endemic evil of the Spanish research system, causes a loss of effectiveness and credibility. In the last few years, the investment in R&D (chapter 46 of the State Budget) has suffered a cut of 4.2% in 2010 and 7.38% in 2011; for 2012, a further 8.65% cut is being considered (where the percentages refer to the cut with respect to the previous year). Spain follows a cyclical policy for R&D, which makes the country even more vulnerable when the economy is in crisis, cutting off possible means of recovery. Contrarily, many research-leading countries have adopted an anti-cyclical policy, increasing investment in R&D as the economy shrinks. In 2012, France has announced a stimulus package of €35,000 M for research, while Germany, a champion of austerity, is raising the budget of its main research organizations by 5% until 2015 (including the Max Planck Institute and the Deutsche Forschungsgemeinschaft, the German Research Foundation). Furthermore, on March 2, 2012, the European Commission, with the support of the Spanish government, proposed to significantly increase the European investment in R&D, from €55,000 M in 2007-2013 to €80,000 M in 2014-2020 (MEMO/12/153).

A knowledge-based economy will only be successful if it guarantees the stability of the research system in terms of financial and human resources, and if there is a private sector committed to research and innovation. To promote the latter, the European Investment Bank and the European Commission created in 2007 the Risk Sharing Finance Facility (RSFF). However, if Spain does not prevent the loss of researchers, the Spanish research system will take decades to recover due to a double factor: Spanish private companies will not find qualified research staff to take advantage of these European financial resources, nor will Spanish public research institutions have a workforce to benefit from the economic grants from the European Commission (€ 80,000 M in 2014-2020).

The change to a knowledge-based economy, which could take decades to achieve, should not be measured in legislative terms and requires a National Agreement that shields it from political and economic cycles. It is a matter of national importance and should be considered a priority. In the words of the Minister of Economy and Competitiveness, Luis de Guindos: “we are going to make R&D the base for the future development of the Spanish economy (…) and benefit from the human resources we have and develop a research career” (Plenary Session of the Congress, 02/21/2012).

Political leaders must be coherent with the message they are sending to Spanish society and to other countries and investors: they cannot keep up the rhetoric of a change to a knowledge-based economy while every step they take is in the opposite direction, inevitably producing serious short- and long-term damage to the scientific infrastructure and its human resources, which can only lead us to a knowledge-borrowed economy with little know-how. “If you think education is expensive, try ignorance” (Derek Bok).

 

The above artificial ecosystem investigates the characteristics of the simulated environment through the use of agents reactive to pheromone trails. Pheromones spread through the fluid and are transported by it. The configuration of the reefs will therefore develop in areas with less chance of pheromone stagnation (done in Processing, by Alessandro Zomparelli, 2012).
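A minimal sketch (mine, in Python rather than Zomparelli’s Processing) of the deposit-diffuse-evaporate loop such pheromone-reactive agent simulations typically run; the grid size, rates and agent count are all illustrative assumptions:

import random

W, H = 40, 40                                    # toroidal pheromone grid
grid = [[0.0] * W for _ in range(H)]
agents = [(random.randrange(W), random.randrange(H)) for _ in range(30)]

def step(deposit=1.0, diffusion=0.2, evaporation=0.05):
    global agents
    # 1. Agents deposit pheromone, then climb toward the strongest neighbour
    #    cell (a little random noise keeps them exploring).
    moved = []
    for x, y in agents:
        grid[y][x] += deposit
        nbrs = [((x + dx) % W, (y + dy) % H)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        moved.append(max(nbrs, key=lambda c: grid[c[1]][c[0]] + 0.1 * random.random()))
    agents = moved
    # 2. Pheromone diffuses to the four neighbours, then evaporates everywhere.
    new = [[grid[y][x] * (1 - diffusion) for x in range(W)] for y in range(H)]
    for y in range(H):
        for x in range(W):
            share = grid[y][x] * diffusion / 4
            for nx, ny in ((x + 1) % W, y), ((x - 1) % W, y), (x, (y + 1) % H), (x, (y - 1) % H):
                new[ny][nx] += share
    for y in range(H):
        grid[y] = [v * (1 - evaporation) for v in new[y]]

for _ in range(100):
    step()
print(max(max(row) for row in grid))             # strength of the strongest trail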

“Well summed up”, says Hélder Barbosa over Twitter about this cartoon (@HelSimao). I agree. It’s quite easy to “translate” from apples to oranges. However, to do something really new first requires know-how, a well-honed technique, full command of the state of the art, and then the hardest part – soul; … imagination and courage. As well as testing the new idea, indefinitely, for months if necessary, with great patience. At this point, ironically, science needs inner faith, or the courage to drop everything in the trash can if it’s not good enough and start all over again. Not all scientists share all of these traits. Some are doomed to produce low-impact papers all their lives, translating the work of others from apples to oranges.

[Cartoon is from VADLO, a growing search engine in Biology, as well as in all branches of life sciences, including Molecular Biology, Cell Biology, Structural Biology, Evolutionary Biology, Genetics, Genomics, Proteomics, Botany, Zoology, Biochemistry, Biophysics, Biotechnology, Biostatistics, Pharmacology and Biomedical research.]

“I would like to thank flocks, herds, and schools for existing: nature is the ultimate source of inspiration for computer graphics and animation.” in Craig Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model”, (paper link) published in Computer Graphics, 21(4), July 1987, pp. 25-34. (ACM SIGGRAPH ’87 Conference Proceedings, Anaheim, California, July 1987.)

“There is an entire genealogy to be written from the point of view of the challenge posed by insect coordination, by “swarm intelligence.” Again and again, poetic, philosophical, and biological studies ask the same question: how does this “intelligent,” global organization emerge from a myriad of local, “dumb” interactions?” — Alex Galloway and Eugene Thacker, The Exploit.

[…] The interest in swarms was intimately connected to the research on emergence and “superorganisms” that arose during the early years of the twentieth century, especially in the 1920s. Even though the author of the notion of superorganisms was the now somewhat discredited writer Herbert Spencer,63 who introduced it in 1898, the idea was fed into contemporary discourse surrounding swarms and emergence through myrmecologist William Morton Wheeler. In 1911 Wheeler had published his classic article “The Ant Colony as an Organism” (in Journal of Morphology), and similar interests continued to be expressed in his subsequent writings. His ideas became well known in the 1990s in discussions concerning artificial life and holistic swarm-like organization. For writers such as Kevin Kelly, mentioned earlier in this chapter, Wheeler’s ideas regarding superorganisms stood as the inspiration for the hype surrounding emergent behavior.64 Yet the actual context of his paper was a lecture given at the Marine Biological Laboratory at Woods Hole in 1910.65 As Charlotte Sleigh points out, Wheeler saw himself as continuing the work of holistic philosophers, and later, in the 1910s and 1920s, found affinities with Bergson’s philosophy of temporality as well.66 In 1926, when emergence had already been discussed in terms of, for example, emergent evolution, evolutionary naturalism, creative synthesis, organicism, and emergent vitalism, Wheeler noted that this phenomenon seemed to challenge the basic dualisms of determinism versus freedom, mechanism versus vitalism, and the many versus the one.67 An animal phenomenon thus presented a crisis for the fundamental philosophical concepts that did not seem to apply to such a transversal mode of organization, or agencement to use the term that Wheeler coined. It was a challenge to philosophy and simultaneously to the physical, chemical, psychological, and social sciences, a phenomenon that seemed to cut through these seemingly disconnected spheres of reality.

In addition to Wheeler, one of the key writers on emergence – again also for Kelly in his Out of Control 68 – was C. Lloyd Morgan, whose Emergent Evolution (1927) proposed to see evolution in terms of emergent “relatedness”. Drawing on Bergson and Whitehead, Morgan rejected a mechanistic dissecting view that the interactions of entities “whether physical or mental” always resulted only in “mixings” that could be seen beforehand. Instead he proposed that the continuity of the mechanistic relations were supplemented with sudden changes at times. At times reminiscent of Lucretius’s view that there is a basic force, clinamen, that is the active differentiating principle of the world, Morgan focused on how qualitative changes in direction could affect the compositions and aggregates. He was interested in the question of the new and how novelty is possible. In his curious modernization of Spinoza, Morgan argued for the primacy of relations – or “relatedness,” to be accurate.69 Instead of speaking of agencies or activities, which implied a self-enclosed view of interactions, in Emergent Evolution Morgan propagated in a way an ethological view of the world. Entities and organisms are characterized by relatedness, the tendency to relate to their environment and, for example, other organisms. So actually, what emerge are relations:

“If it be asked: What is it that you claim to be emergent? the brief reply is: Some new kind of relation. Revert to the atom, the molecule, the thing (e.g. a crystal), the organism, the person. At each ascending step there is a new entity in virtue of some new kind of relation, or set of relations, within it, or, as I phrase it, intrinsic to it. Each exhibits also new ways of acting on, and reacting to, other entities. There are new kinds of extrinsic relatedness”.70

The evolutionary levels of mind, life, and matter are in this scheme intimately related, with the lower levels continuously affording the emergence of so-called higher functions, like those of humans. Different levels of relatedness might not have any understanding of the relations that define other levels of existence, but still these other levels with their relations affect the other levels. Morgan tried, nonetheless, to steer clear of the idealistic notions of humanism that promoted the human mind as representing a superior stage in emergence. His stance was much closer to a certain monism in which mind and matter are continuously in some kind of intimate correspondence whereby even the simplest expressions of life participate in a wider field of relatedness. In Emergent Evolution Morgan described relations as completely concrete. He emphasized that the issue is not only about relations in terms but as much about terms in relation, with concrete situations, or events, stemming from their relations.71 In a way, other views on emergence put similar emphasis on the priority of relations, expressing a kind of radical empiricism in the vein of William James. Drawing on E. G. Spaulding’s 1918 study The New Rationalism, Wheeler noted the unpredictable potentials in connectionism: a connected whole is more than (or at least not reducible to) its constituent parts, implying the impossibility to find causal determination of aggregates. Whereas existing sciences might be able to recognize and track down certain relationships that they have normalized or standardized, the relations might still produce properties that are beyond those of the initial conditions – and thus also demand a vector of analysis that parts from existing theories – dealing with properties that open up only in relation to themselves (as a “law unto themselves”).72 Instead, a more complicated mode of development was at hand, in which aggregates, or agencements, simultaneously involved various levels of reality. This also implied that aggregates, emergent orders, have no one direction but are constituted of relations that extend in various directions:

“We must also remember that most authors artificially isolate the emergent whole and fail to emphasize the fact that its parts have important relations not only with one another but also with the environment and that these external relations may contribute effectively towards producing both the whole and its novelty”.73 […]

in (passage from) Jussi Parikka, “Insect Media: An Archaeology of Animals and Technology”, Chapter II – Genesis of Form: Insect Architecture and Swarms, (section) Emergence and Relatedness: A Radical Empiricism – take one, pp. 51-53, University of Minnesota Press, Minneapolis, 2011.

“I saw them hurrying from either side, and each shade kissed another, without pausing; Each by the briefest society satisfied. (Ants in their dark ranks, meet exactly so, rubbing each other’s noses, to ask perhaps; What luck they’ve had, or which way they should go.)” — Dante, Purgatorio, Canto XXVI.

Video documentary: A 15-minute program produced from February 1949 to April 1952, Kieran’s Kaleidoscope presented its writer and host in his familiar role as the learned and witty guide to the complexities of human knowledge (production company: Almanac Films). This is probably the most genuinely entertaining of all the John Kieran’s Kaleidoscope films. In Ant City (1949) [Internet Archive], produced by Paul F. Moss, the poor ants are anthropomorphized to the nth degree; we even hear the Wedding March when the “queen” and her drone fly away from the nest. Kieran’s patter has never been more meandering; he sounds like a befuddled uncle narrating home movies. Clumsy but enjoyable.

[…] In conclusion, much elegant work has been done starting from activated mono-nucleotides. However, the prebiotic synthesis of a specific macromolecular sequence does not seem to be at hand, giving us the same problem we have with polypeptide sequences. Since there is no ascertained prebiotic pathway to their synthesis, it may be useful to try to conceive some working hypothesis. In order to do that, I would first like to consider a preliminary question about the proteins we have on our Earth: “Why these proteins … and not other ones?”. Discussing this question can in fact give us some clue as to how orderly sequences might have originated. […] A grain of sand in the Sahara – This is indeed a central question in our world of proteins. How have they been selected out? There is a well-known arithmetic at the basis of this question, (see for example De Duve, 2002) which says that for a polypeptide chain with 100 residues, 20^100 different chains are in principle possible: a number so large that it does not convey any physical meaning. In order to grasp it somewhat, consider that the proteins existing on our planet are of the order of a few thousand billions, let us say around 10^13 (and with all isomers and mutations we may arrive at a few orders of magnitude more). This sounds like a large number. However, the ratio between the possible (say 20^100) and the actual chains (say 10^15) corresponds approximately to the ratio between the radius of the universe and the radius of a hydrogen atom! Or, to use another analogy, nearer to our experience, a ratio many orders of magnitude greater than the ratio between all the grains of sand in the vast Sahara and a single grain. The space outside “our atom”, or our grain of sand, is the space of the “never-born proteins”, the proteins that are not with us – either because they didn’t have the chance to be formed, or because they “came” and were then obliterated. This arithmetic, although trivial, bears an important message: in order to reproduce our proteins we would have to hit the target of that particular grain of sand in the whole Sahara. Christian De Duve, in order to avoid this “sequence paradox” (De Duve, 2002), assumes that all started with short polypeptides – and this is in fact reasonable. However, the theoretically possible total number of long chains does not change if you start with short peptides instead of amino acids. The only way to limit the final number of possible chains would be to assume, for example, that peptide synthesis started only under a particular set of conditions of composition and concentration, thus bringing contingency into the picture. As a corollary, then, this set of proteins born as a product of contingency would have been the one that happened to start life. Probably there is no way of eliminating contingency from the aetiology of our set of proteins. […]

Figure – The ratio between the theoretical number of possible proteins and their actual number is many orders of magnitude greater than the ratio between all the grains of sand in the vast Sahara and a single grain (caption on page 69).

[…] The other objection to the numerical meaning suggested by Figure (above) is that the maximum number of proteins is much smaller because a great number of chain configurations are prohibited for energetic reasons. This is reasonable. Let us then assume that 99.9999% of theoretically possible protein chains cannot exist because of energy reasons. This would leave only one protein out of one million, reducing the number of never-born proteins from, say, 10^60 to 10^54. Not a big deal. Of course one could also assume that the total number of energetically allowed proteins is extremely small, no larger than, say, 10^10. This cannot be excluded a priori, but is tantamount to saying that there is something very special about “our” proteins, namely that they are energetically special. Whether or not this is so can be checked experimentally as will be seen later in a research project aimed at this target. The assumption that “our” proteins have something special from the energetic point of view, would correspond to a strict deterministic view that claims that the pathway leading to our proteins was determined, that there was no other possible route. Someone adhering strictly to a biochemical anthropic principle might even say that these proteins are the way they are in order to allow life and the development of mankind on Earth. The contingency view would recite instead the following: if our proteins or nucleic acids have no special properties from the point of view of thermodynamics, then run the tape again and a different “grain of sand” might be produced – one that perhaps would not have supported life. Some may say at this point that proteins derive in any case from nucleic-acid templates – perhaps through a primitive genetic code. However, this is really no argument – it merely shifts the problem of the etiology of peptide chains to etiology of oligonucleotide chains, all arithmetic problems remaining more or less the same. […] pp. 68-70, in Pier Luigi Luisi, “The Emergence of Life: From Chemical Origins to Synthetic Biology“, Cambridge University Press, US, 2006.
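Luisi’s arithmetic is easy to check. The short Python script below merely restates, in orders of magnitude, the figures quoted in the excerpt above (20^100 possible chains, roughly 10^15 actual ones, and the 99.9999% energetic cut):

```python
from math import log10

# Luisi's "grain of sand" arithmetic, restated in orders of magnitude.
possible = 20 ** 100        # conceivable 100-residue chains, 20 amino acids each
actual = 10 ** 15           # generous estimate of existing chains, isomers included

print(f"log10(possible) = {log10(possible):.1f}")          # ~130.1
print(f"possible/actual = 10^{log10(possible) - 15:.0f}")  # ~10^115

# The energetic objection: discard 99.9999% of chains (keep one in a million).
# The exponent barely moves: 10^60 never-born proteins become 10^54.
print(f"10^60 * 10^-6 = 10^{log10(10**60 * 1e-6):.0f}")
```

Whatever reasonable numbers one plugs in, the space of never-born proteins dwarfs the space of existing ones by dozens of orders of magnitude, which is exactly Luisi’s point.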

Did you just mention privatization, “increase in productivity” and self-interest as a solution? Well, the answer depends a lot on whether you are in a pre- or post-equilibrium physical state. The distribution curve in question is more or less a Bell curve. So maybe it is time for all of us to strike a proper balance here, having a brief look at it from a recent scientific perspective.

Let us consider over-exploitation. Imagine a situation where multiple herders share a common parcel of land, on which they are each entitled to let their cows graze. In Hardin‘s (1968) example (see his seminal paper below), it is in each herder’s interest to put the next (and each succeeding) cow he acquires onto the land, even if the quality of the common is damaged for all as a result through overgrazing. The herder receives all of the benefits from an additional cow, while the damage to the common is shared by the entire group. If all herders make this individually rational economic decision, the common will be depleted or even destroyed, to the detriment of all: over-exploitation.
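The asymmetry Hardin describes (full private benefit, shared damage) can be made concrete with a toy payoff model. What follows is a minimal sketch, not Hardin’s own formalism; the number of herders and the per-cow benefit and damage figures are illustrative assumptions only:

```python
# Toy version of Hardin's commons: a cow yields a fixed private benefit to
# its owner, while the grazing damage it causes is split among all herders.
# All numbers are illustrative assumptions, not from Hardin (1968).

N_HERDERS = 10
BENEFIT_PER_COW = 1.0    # private gain from adding one cow
DAMAGE_PER_COW = 3.0     # total damage that cow inflicts on the common

def herder_marginal_payoff():
    """What one herder gains by adding one more cow."""
    return BENEFIT_PER_COW - DAMAGE_PER_COW / N_HERDERS

def group_marginal_payoff():
    """What the whole group gains from that same cow."""
    return BENEFIT_PER_COW - DAMAGE_PER_COW

print(f"individually rational to add a cow? {herder_marginal_payoff() > 0}")  # True  (+0.7)
print(f"collectively rational?              {group_marginal_payoff() > 0}")   # False (-2.0)
```

As long as the total damage per cow exceeds the benefit but is divided among enough herders, each herder’s marginal payoff stays positive while the group’s is negative; iterate the decision and the common is grazed away.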

Video – “Balance“: Wolfgang and Christoph Lauenstein (Directors), Germany, 1989. Academy Award for Best Animated Short (1989).

This huge dilemma, known as “The Tragedy of the Commons”, arises from a situation in which multiple individuals, acting independently and rationally consulting their own self-interest, will ultimately deplete a shared limited resource, even when it is clear that it is not in anyone’s long-term interest for this to happen. In my own “self-interest”, allow me to start this post directly with a key passage, followed by two videos and a final abstract. The first paper below is in fact the seminal Garrett Hardin article, titled precisely “The Tragedy of the Commons”, written in December 1968 and first published in the journal Science (Science 162, 1243-1248, full PDF). One of its key passages goes like this. Hardin asks:

[…] In a welfare state, how shall we deal with the family, the religion, the race, or the class (or indeed any distinguishable and cohesive group) that adopts overbreeding as a policy to secure its own aggrandizement (13)? To couple the concept of freedom to breed with the belief that everyone born has an equal right to the commons is to lock the world into a tragic course of action. […]

So the question is: driven by rational choice, are we, as humanity, all doomed to over-exploit our common resources? Will we all end up in a situation where any tiny move drives us into disaster, as the last seconds of the animated short movie above so clearly and brilliantly illustrate?

Fortunately, the answer is no, according to recent research. Not only has Hardin‘s work been criticized on the grounds of historical inaccuracy and for failing to distinguish between common property and open-access resources (Wikipedia entry); there is also subsequent work by Elinor Ostrom and others suggesting that using Hardin‘s work to argue for the privatization of resources is an “overstatement” of the case.

Video – Elinor Ostrom: “Beyond the tragedy of commons“. Stockholm whiteboard seminars. (video lecture, 8:26 min.)

In fact, according to Ostrom‘s work on the study of common-pool resources (CPR), awarded the 2009 Nobel Memorial Prize in Economic Sciences, there are eight design principles for stable local common-pool resource management that make it possible to avoid the present dilemma. Among her works, one I definitely recommend reading is her Presidential address to the American Political Science Association, delivered back in 1997 and entitled “A Behavioral Approach to the Rational Choice Theory of Collective Action” (The American Political Science Review, Vol. 92, No. 1, pp. 1-22, Mar. 1998). Her impressive paper starts like this:

[…] Extensive empirical evidence and theoretical developments in multiple disciplines stimulate a need to expand the range of rational choice models to be used as a foundation for the study of social dilemmas and collective action. After an introduction to the problem of overcoming social dilemmas through collective action, the remainder of this article is divided into six sections. The first briefly reviews the theoretical predictions of currently accepted rational choice theory related to social dilemmas. The second section summarizes the challenges to the sole reliance on a complete model of rationality presented by extensive experimental research. In the third section, I discuss two major empirical findings that begin to show how individuals achieve results that are “better than rational” by building conditions where reciprocity, reputation, and trust can help to overcome the strong temptations of short-run self-interest. The fourth section raises the possibility of developing second-generation models of rationality, the fifth section develops an initial theoretical scenario, and the final section concludes by examining the implications of placing reciprocity, reputation, and trust at the core of an empirically tested, behavioral theory of collective action. […]

[...] People should learn how to play Lego with their minds. Concepts are building bricks [...] V. Ramos, 2002.
