You are currently browsing the tag archive for the ‘Collective Perception’ tag.

Photo – Signal traces, September 2013, Vitorino Ramos.

[…] While pheromone reinforcement plays the role of the system’s memory, evaporation allows the system to adapt and decide dynamically, without any type of centralized or hierarchical control […], below.

“[…] whereas signals tend to be conspicuous, since natural selection has shaped signals to be strong and effective displays, information transfer via cues is often more subtle and based on incidental stimuli in an organism’s social environment […]”, Seeley, T.D., “The Honey Bee Colony as a Super-Organism”, American Scientist, 77, pp.546-553, 1989.

[…] If an ant colony, on its cyclic way from the nest to a food source (and back again), has only two possible branches around an obstacle, one bigger and the other smaller (the bridge experiment [7,52]), pheromone will accumulate – as time passes – on the shorter path, simply because any ant that sets out on that path will return sooner, passing the same points more frequently and, via that way, reinforcing the signal of that precise branch. Even if, as we know, the pheromone evaporation rate is the same on both branches, the longer branch will lose its pheromone faster, since there is not enough critical mass of individuals to keep it. On the other hand – in what appears to be a vastly pedagogic trick of Mother Nature – evaporation plays a critical role in the society. Without it, the final global decision or the phase transition will never happen. Moreover, without it, the whole colony can never adapt if the environment suddenly changes (e.g., the appearance of a third, even shorter branch). While pheromone reinforcement plays the role of the system’s memory, evaporation allows the system to adapt and decide dynamically, without any type of centralized or hierarchical control. […], in “Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes“, V. Ramos et al., available as pre-print on arXiv, 2005.
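The double-bridge dynamics described above are easy to reproduce numerically. Below is a minimal sketch; every constant (branch lengths, evaporation rate, deposit amount, colony size) is an illustrative assumption, not a value from the cited paper. The short branch wins simply because it is completed more often per unit time, so it is reinforced faster than evaporation can erase it.

```python
import random

# Illustrative double-bridge model: two branches around an obstacle.
# Ants choose a branch with probability proportional to its pheromone,
# traverse it (the short branch takes fewer time steps), and deposit
# pheromone on arrival. Evaporation acts identically on both branches.

LEN = {"short": 5, "long": 10}   # traversal times (arbitrary units)
EVAPORATION = 0.02               # fraction of pheromone lost per step
DEPOSIT = 1.0                    # pheromone laid per completed crossing
N_ANTS = 100
STEPS = 2000

pheromone = {"short": 1.0, "long": 1.0}          # equal initial seed
ants = [{"branch": None, "timer": 0} for _ in range(N_ANTS)]

for t in range(STEPS):
    for b in pheromone:                           # evaporation (system "forgetting")
        pheromone[b] *= (1.0 - EVAPORATION)
    for ant in ants:
        if ant["branch"] is None:                 # at a junction: choose a branch
            p_short = pheromone["short"] / (pheromone["short"] + pheromone["long"])
            ant["branch"] = "short" if random.random() < p_short else "long"
            ant["timer"] = LEN[ant["branch"]]
        else:                                     # walking along the chosen branch
            ant["timer"] -= 1
            if ant["timer"] == 0:                 # arrived: reinforce that branch
                pheromone[ant["branch"]] += DEPOSIT
                ant["branch"] = None              # turn around at nest/food

print(pheromone)    # pheromone ends up concentrated on the short branch
```

Setting EVAPORATION to zero freezes whatever imbalance happens to build up first and removes the colony’s ability to re-decide if a shorter branch later appears, which is exactly the role the excerpt assigns to evaporation.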

[…] There is some degree of communication among the ants, just enough to keep them from wandering off completely at random. By this minimal communication they can remind each other that they are not alone but are cooperating with team-mates. It takes a large number of ants, all reinforcing each other this way, to sustain any activity – such as trail building – for any length of time. Now my very hazy understanding of the operation of the brain leads me to believe that something similar pertains to the firing of neurons… […] in, p. 316, Hofstadter, D.R., “Gödel, Escher, Bach: An Eternal Golden Braid“, New York: Basic Books, 1979.

[…] Since in Self-Organized (SO) systems their organization arises entirely from multiple interactions, it is of critical importance to question how organisms acquire and act upon information [9]. Basically through two forms: a) information gathered from one’s neighbours, and b) information gathered from work in progress, that is, stigmergy. In the case of animal groups, these internal interactions typically involve information transfers between individuals. Biologists have recently recognized that information can flow within groups via two distinct pathways – signals and cues. Signals are stimuli shaped by natural selection specifically to convey information, whereas cues are stimuli that convey information only incidentally [9]. The distinction between signals and cues is illustrated by the difference between ant and deer trails. The chemical trail deposited by ants as they return from a desirable food source is a signal. Over evolutionary time such trails have been moulded by natural selection for the purpose of sharing with nest mates information about the location of rich food sources. In contrast, the rutted trails made by deer walking through the woods are a cue, not shaped by natural selection for communication among deer but a simple by-product of animals walking along the same path. SO systems are based on both, but whereas signals tend to be conspicuous, since natural selection has shaped signals to be strong and effective displays, information transfer via cues is often more subtle and based on incidental stimuli in an organism’s social environment [45] […], in “Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes“, V. Ramos et al., available as pre-print on arXiv, 2005.

Image – Jean Baudrillard, “Simulacra and Simulation”, 1981 book.

According to Baudrillard, Simulacra are copies that depict things that either had no reality to begin with, or that no longer have an original, while Simulation is the imitation of the operation of a real-world process or system over time. “Simulacres et Simulation” is a 1981 philosophical treatise by Jean Baudrillard seeking to interrogate the relationship among reality, symbols, and society:

[…] Simulacra and Simulation is most known for its discussion of symbols, signs, and how they relate to contemporaneity (simultaneous existences). Baudrillard claims that our current society has replaced all reality and meaning with symbols and signs, and that human experience is of a simulation of reality. Moreover, these simulacra are not merely mediations of reality, nor even deceptive mediations of reality; they are not based in a reality nor do they hide a reality, they simply hide that anything like reality is relevant to our current understanding of our lives. The simulacra that Baudrillard refers to are the significations and symbolism of culture and media that construct perceived reality, the acquired understanding by which our lives and shared existence are rendered legible; Baudrillard believed that society has become so saturated with these simulacra and our lives so saturated with the constructs of society that all meaning was being rendered meaningless by being infinitely mutable. Baudrillard called this phenomenon the “precession of simulacra”. […] (from Wikipedia)

“Simulacra and Simulation” is definitely one of the best summer holiday readings I had this year. There are several connections to areas like Collective Intelligence and Perception, and even Self-Organization, since the dynamic and entangled use of symbols and signals is recurrent in all these areas. Questions like territory (cultural habitats) and metamorphosis are also addressed. The book is an interesting source of new questions and thinking about our digital society, for people working in related areas such as Digital Media, Computer Simulation, Information Theory, Information and Entropy, Augmented Reality, Social Computation and related paradigms. I read it in English for free [PDF] from a Georgetown Univ. link, here.

Image – Reese Inman, DIVERGENCE II (2008), acrylic on panel, 30 x 30 in; from Remix (Boston, 2008), a solo exhibition of handmade computer art works by Reese Inman at Gallery NAGA in Boston.

Apophenia is the experience of seeing meaningful patterns or connections in random or meaningless data. The term was coined in 1958[1] by Klaus Conrad,[2] who defined it as the “unmotivated seeing of connections” accompanied by a “specific experience of an abnormal meaningfulness”, but it has come to represent the human tendency to seek patterns in random information in general (such as with gambling). In statistics, apophenia is known as a Type I error – the identification of false patterns in data.[7] It may be compared with a so-called false positive in other test situations. Two related terms are synchronicity and pareidolia (from Wikipedia):

Synchronicity: Carl Jung coined the term synchronicity for the “simultaneous occurrence of two meaningful but not causally connected events” creating a significant realm of philosophical exploration. This attempt at finding patterns within a world where coincidence does not exist possibly involves apophenia if a person’s perspective attributes their own causation to a series of events. “Synchronicity therefore means the simultaneous occurrence of a certain psychic state with one or more external events which appear as meaningful parallels to a momentary subjective state”. (C. Jung, 1960).

Pareidolia: Pareidolia is a type of apophenia involving the perception of images or sounds in random stimuli, for example, hearing a ringing phone while taking a shower. The noise produced by the running water gives a random background from which the patterned sound of a ringing phone might be “produced”. A more common human experience is perceiving faces in inanimate objects; this phenomenon is not surprising in light of how much processing the brain does in order to memorize and recall the faces of hundreds or thousands of different individuals. In one respect, the brain is a facial recognition, storage, and recall machine – and it is very good at it. A by-product of this acumen at recognizing faces is that people see faces even where there is no face: the headlights and grille of an automobile can appear to be “grinning”, individuals around the world can see the “Man in the Moon”, and a drawing consisting of only three circles and a line, which even children will identify as a face, are everyday examples of this.[15]

“I would like to thank flocks, herds, and schools for existing: nature is the ultimate source of inspiration for computer graphics and animation.” in Craig Reynolds, “Flocks, Herds, and Schools: A Distributed Behavioral Model“, (paper link) published in Computer Graphics, 21(4), July 1987, pp. 25-34. (ACM SIGGRAPH ’87 Conference Proceedings, Anaheim, California, July 1987.)

Video by Yoav Ben-Dov, www.ybd.net [Hanoi, Vietnam, 24 Feb. 2009] – A nice example of self-organization as described by complexity theory. There are no fixed “top-down” laws (i.e. traffic lights), and yet the incredible traffic flows continuously. In complexity terms, the collective motion emerges from the multiple local interactions between the “agents” (drivers and pedestrians), mediated by horn sounds, eye contact, and body gestures.

“All men can see these tactics whereby I conquer, but what none can see is the strategy out of which victory is evolved.” ~ Sun Tzu, “The Art of War“.

During the October 1973 Arab-Israeli War (Yom Kippur War), highly strategic manoeuvres occurred on the Suez Canal. It was crucial to cross it in time. Rather quickly. The war was fought in October 1973 between Israel and a coalition of Arab states led by Egypt and Syria, and it began when the coalition launched a joint surprise attack on Israel on Yom Kippur, the holiest day in Judaism, which coincided with the Muslim holy month of Ramadan. Egyptian and Syrian forces crossed ceasefire lines to enter the Israeli-held Sinai Peninsula and Golan Heights respectively, which had been captured and occupied since the 1967 Six-Day War [Wikipedia]. The conflict led to a near-confrontation between the two nuclear superpowers, the United States and the Soviet Union, both of whom initiated massive resupply efforts to their allies during the war.

Anyway, the war began with a massive and successful Egyptian crossing of the Suez Canal during the first three days, after which they dug in, settling into a stalemate. The Egyptian army put great effort into finding a quick and effective way of breaching the Israeli defences. But the Israelis had built large sand walls, 18 meters high with a 60-degree slope, reinforced with concrete at the water line. Egyptian engineers initially experimented with explosive charges and bulldozers to clear the obstacles, before a junior officer proposed using high-pressure water cannons. The idea was tested and found to be a sound one, and several high-pressure water cannons were imported from Britain and East Germany [Wikipedia]. The water cannons effectively breached the sand walls using water from the canal.

Photo – Egyptian forces crossing the Suez Canal on October 7, 1973 [Source: Wikipedia]

After that success, however, a swift passage of the entire army over the Suez Canal was needed. The problem was that the Egyptian army had to make a rather quick passage, with several different convoys of tanks and regular logistic trucks, over very tiny bridges (fig.), as quickly as possible. Some say that the Egyptian general in charge did not halt any of the convoys in order to give precedence to some in particular. Instead, contrary to logic, he gave an order for them to flow continuously, without having any official at the bridge entrance to organize them. Any tank in the right-hand convoy whose crew felt that a truck from the left-hand convoy should enter first would stop for some seconds, and only after that make its own bridge passage over the Suez. What is history now is that the decision was entirely left to them, locally… and fluid. No “traffic lights” at all, … contrary to the usual hard, strict regulations of any army we know today. If that’s not wise tactics, tell me what it is?! …

 

[…] Dumb parts, properly connected into a swarm, yield smart results. […] ~ Kevin Kelly. / […] “Now make a four!” the voice booms. Within moments a “4” emerges. “Three.” And in a blink a “3” appears. Then in rapid succession, “Two… One…Zero.” The emergent thing is on a roll. […], Kevin Kelly, Out of Control, 1994.

Video – ‘Swarm Showreel’ by SwarmWorks Ltd., December 2009 (EVENTS ZUM SCHWÄRMEN – Von Entertainment bis Business).

[…] In a darkened Las Vegas conference room, a cheering audience waves cardboard wands in the air. Each wand is red on one side, green on the other. Far in back of the huge auditorium, a camera scans the frantic attendees. The video camera links the color spots of the wands to a nest of computers set up by graphics wizard Loren Carpenter. Carpenter’s custom software locates each red and each green wand in the auditorium. Tonight there are just shy of 5,000 wandwavers. The computer displays the precise location of each wand (and its color) onto an immense, detailed video map of the auditorium hung on the front stage, which all can see. More importantly, the computer counts the total red or green wands and uses that value to control software. As the audience wave the wands, the display screen shows a sea of lights dancing crazily in the dark, like a candlelight parade gone punk. The viewers see themselves on the map; they are either a red or green pixel. By flipping their own wands, they can change the color of their projected pixels instantly.
Loren Carpenter boots up the ancient video game of Pong onto the immense screen. Pong was the first commercial video game to reach pop consciousness. It’s a minimalist arrangement: a white dot bounces inside a square; two movable rectangles on each side act as virtual paddles. In short, electronic ping-pong. In this version, displaying the red side of your wand moves the paddle up. Green moves it down. More precisely, the Pong paddle moves as the average number of red wands in the auditorium increases or decreases. Your wand is just one vote.
Carpenter doesn’t need to explain very much. Every attendee at this 1991 conference of computer graphic experts was probably once hooked on Pong. His amplified voice booms in the hall, “Okay guys. Folks on the left side of the auditorium control the left paddle. Folks on the right side control the right paddle. If you think you are on the left, then you really are. Okay? Go!”
The audience roars in delight. Without a moment’s hesitation, 5,000 people are playing a reasonably good game of Pong. Each move of the paddle is the average of several thousand players’ intentions. The sensation is unnerving. The paddle usually does what you intend, but not always. When it doesn’t, you find yourself spending as much attention trying to anticipate the paddle as the incoming ball. One is definitely aware of another intelligence online: it’s this hollering mob.
The group mind plays Pong so well that Carpenter decides to up the ante. Without warning the ball bounces faster. The participants squeal in unison. In a second or two, the mob has adjusted to the quicker pace and is playing better than before. Carpenter speeds up the game further; the mob learns instantly.
“Let’s try something else,” Carpenter suggests. A map of seats in the auditorium appears on the screen. He draws a wide circle in white around the center. “Can you make a green ‘5’ in the circle?” he asks the audience. The audience stares at the rows of red pixels. The game is similar to that of holding a placard up in a stadium to make a picture, but now there are no preset orders, just a virtual mirror. Almost immediately wiggles of green pixels appear and grow haphazardly, as those who think their seat is in the path of the “5” flip their wands to green. A vague figure is materializing. The audience collectively begins to discern a “5” in the noise. Once discerned, the “5” quickly precipitates out into stark clarity. The wand-wavers on the fuzzy edge of the figure decide what side they “should” be on, and the emerging “5” sharpens up. The number assembles itself.
“Now make a four!” the voice booms. Within moments a “4” emerges. “Three.” And in a blink a “3” appears. Then in rapid succession, “Two… One…Zero.” The emergent thing is on a roll.
Loren Carpenter launches an airplane flight simulator on the screen. His instructions are terse: “You guys on the left are controlling roll; you on the right, pitch. If you point the plane at anything interesting, I’ll fire a rocket at it.” The plane is airborne. The pilot is…5,000 novices. For once the auditorium is completely silent. Everyone studies the navigation instruments as the scene outside the windshield sinks in. The plane is headed for a landing in a pink valley among pink hills. The runway looks very tiny. There is something both delicious and ludicrous about the notion of having the passengers of a plane collectively fly it. The brute democratic sense of it all is very appealing. As a passenger you get to vote for everything; not only where the group is headed, but when to trim the flaps.
But group mind seems to be a liability in the decisive moments of touchdown, where there is no room for averages. As the 5,000 conference participants begin to take down their plane for landing, the hush in the hall is ended by abrupt shouts and urgent commands. The auditorium becomes a gigantic cockpit in crisis. “Green, green, green!” one faction shouts. “More red!” a moment later from the crowd. “Red, red! REEEEED !” The plane is pitching to the left in a sickening way. It is obvious that it will miss the landing strip and arrive wing first. Unlike Pong, the flight simulator entails long delays in feedback from lever to effect, from the moment you tap the aileron to the moment it banks. The latent signals confuse the group mind. It is caught in oscillations of overcompensation. The plane is lurching wildly. Yet the mob somehow aborts the landing and pulls the plane up sensibly. They turn the plane around to try again.
How did they turn around? Nobody decided whether to turn left or right, or even to turn at all. Nobody was in charge. But as if of one mind, the plane banks and turns wide. It tries landing again. Again it approaches cockeyed. The mob decides in unison, without lateral communication, like a flock of birds taking off, to pull up once more. On the way up the plane rolls a bit. And then rolls a bit more. At some magical moment, the same strong thought simultaneously infects five thousand minds: “I wonder if we can do a 360?”
Without speaking a word, the collective keeps tilting the plane. There’s no undoing it. As the horizon spins dizzily, 5,000 amateur pilots roll a jet on their first solo flight. It was actually quite graceful. They give themselves a standing ovation. The conferees did what birds do: they flocked. But they flocked self-consciously. They responded to an overview of themselves as they co-formed a “5” or steered the jet. A bird on the fly, however, has no overarching concept of the shape of its flock. “Flockness” emerges from creatures completely oblivious of their collective shape, size, or alignment. A flocking bird is blind to the grace and cohesiveness of a flock in flight.
At dawn, on a weedy Michigan lake, ten thousand mallards fidget. In the soft pink glow of morning, the ducks jabber, shake out their wings, and dunk for breakfast. Ducks are spread everywhere. Suddenly, cued by some imperceptible signal, a thousand birds rise as one thing. They lift themselves into the air in a great thunder. As they take off they pull up a thousand more birds from the surface of the lake with them, as if they were all but part of a reclining giant now rising. The monstrous beast hovers in the air, swerves to the east sun, and then, in a blink, reverses direction, turning itself inside out. A second later, the entire swarm veers west and away, as if steered by a single mind. In the 17th century, an anonymous poet wrote: “…and the thousands of fishes moved as a huge beast, piercing the water. They appeared united, inexorably bound to a common fate. How comes this unity?”
A flock is not a big bird. Writes the science reporter James Gleick, “Nothing in the motion of an individual bird or fish, no matter how fluid, can prepare us for the sight of a skyful of starlings pivoting over a cornfield, or a million minnows snapping into a tight, polarized array….High-speed film [of flocks turning to avoid predators] reveals that the turning motion travels through the flock as a wave, passing from bird to bird in the space of about one-seventieth of a second. That is far less than the bird’s reaction time.” The flock is more than the sum of the birds.
In the film Batman Returns a horde of large black bats swarmed through flooded tunnels into downtown Gotham. The bats were computer generated. A single bat was created and given leeway to automatically flap its wings. The one bat was copied by the dozens until the animators had a mob. Then each bat was instructed to move about on its own on the screen following only a few simple rules encoded into an algorithm: don’t bump into another bat, keep up with your neighbors, and don’t stray too far away. When the algorithmic bats were run, they flocked like real bats.
The flocking rules were discovered by Craig Reynolds, a computer scientist working at Symbolics, a graphics hardware manufacturer. By tuning the various forces in his simple equation (a little more cohesion, a little less lag time), Reynolds could shape the flock to behave like living bats, sparrows, or fish. Even the marching mob of penguins in Batman Returns were flocked by Reynolds’s algorithms. Like the bats, the computer-modeled 3-D penguins were cloned en masse and then set loose into the scene aimed in a certain direction. Their crowdlike jostling as they marched down the snowy street simply emerged, out of anyone’s control. So realistic is the flocking of Reynolds’s simple algorithms that biologists have gone back to their hi-speed films and concluded that the flocking behavior of real birds and fish must emerge from a similar set of simple rules. A flock was once thought to be a decisive sign of life, some noble formation only life could achieve. Via Reynolds’s algorithm it is now seen as an adaptive trick suitable for any distributed vivisystem, organic or made. […] in Kevin Kelly, “Out of Control – the New Biology of Machines, Social Systems and the Economic World“, pp. 11-12-13, 1994 (full pdf book)
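The three local rules Kelly lists (don’t bump into another bat, keep up with your neighbours, don’t stray too far) correspond to the separation, alignment and cohesion terms of Reynolds-style boids. A minimal 2-D sketch follows; the weights, neighbourhood radius and update scheme are illustrative assumptions, not Reynolds’s original parameters.

```python
import random

# Minimal 2-D boids sketch: each agent steers only by three local rules --
# separation (don't crowd neighbours), alignment (match their heading) and
# cohesion (move toward their centre). No leader, no global plan.

N, RADIUS, STEPS = 50, 10.0, 200
SEP_W, ALI_W, COH_W = 0.05, 0.05, 0.01           # illustrative rule weights

boids = [{"x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
         for _ in range(N)]

def step(boids):
    new = []
    for b in boids:
        nbrs = [o for o in boids if o is not b and
                (o["x"] - b["x"]) ** 2 + (o["y"] - b["y"]) ** 2 < RADIUS ** 2]
        vx, vy = b["vx"], b["vy"]
        if nbrs:
            cx = sum(o["x"] for o in nbrs) / len(nbrs)    # neighbours' centre
            cy = sum(o["y"] for o in nbrs) / len(nbrs)
            ax = sum(o["vx"] for o in nbrs) / len(nbrs)   # neighbours' mean heading
            ay = sum(o["vy"] for o in nbrs) / len(nbrs)
            sx = sum(b["x"] - o["x"] for o in nbrs)       # push away from the crowd
            sy = sum(b["y"] - o["y"] for o in nbrs)
            vx += COH_W * (cx - b["x"]) + ALI_W * (ax - vx) + SEP_W * sx
            vy += COH_W * (cy - b["y"]) + ALI_W * (ay - vy) + SEP_W * sy
        new.append({"x": b["x"] + vx, "y": b["y"] + vy, "vx": vx, "vy": vy})
    return new

for _ in range(STEPS):
    boids = step(boids)

# crude flock indicator: the common heading the group settles into
print(sum(b["vx"] for b in boids) / N, sum(b["vy"] for b in boids) / N)
```

No agent in the sketch has any notion of the flock’s overall shape, which is Kelly’s point about “flockness” emerging from creatures blind to the whole.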

Painting – Paul Klee, detail from “U struji sest pragova“, 1929.

“We have a palpable notion of the metamorphosis of the caterpillar. We, certainly, but not the caterpillar.” ~ Edgar Allan Poe / “The principle of evolution is much faster in computing than in the biped.” ~ Jean Dion / “Let chaos storm!… Let cloud shapes swarm!… I wait for form.” ~ Robert Frost

[…] In his notebooks the painter Paul Klee repeatedly insisted, and demonstrated by example, that the processes of genesis and growth that give rise to forms in the world we inhabit are more important than the forms themselves. ‘Form is the end, death’, he wrote. ‘Form-giving is movement, action. Form-giving is life’ (Klee 1973: 269). This, in turn, lay at the heart of his celebrated ‘Creative Credo’ of 1920: ‘Art does not reproduce the visible but makes visible’ (Klee 1961: 76). It does not, in other words, seek to replicate finished forms that are already settled, whether as images in the mind or as objects in the world. It seeks, rather, to join with those very forces that bring form into being. Thus the line grows from a point that has been set in motion, as the plant grows from its seed. Taking their cue from Klee, philosophers Gilles Deleuze and Félix Guattari argue that the essential relation, in a world of life, is not between matter and form, or between substance and attributes, but between materials and forces (Deleuze and Guattari 2004: 377). It is about the way in which materials of all sorts, with various and variable properties, and enlivened by the forces of the Cosmos, mix and meld with one another in the generation of things. And what they seek to overcome in their rhetoric is the lingering influence of a way of thinking about things, and about how they are made and used, that has been around in the western world for the past two millennia and more. It goes back to Aristotle. To create any thing, Aristotle reasoned, you have to bring together form (morphe) and matter (hyle). In the subsequent history of western thought, this hylomorphic model of creation became ever more deeply embedded. But it also became increasingly unbalanced. Form came to be seen as imposed, by an agent with a particular end or goal in mind, while matter – thus rendered passive and inert – was that which was imposed upon. […], in Tim Ingold, “Bringing Things to Life: Creative Entanglements in a World of Materials“, University of Aberdeen, July 2010 – Original version (April 2008 ) presented at ‘Vital Signs: Researching Real Life’, 9 September 2008, University of Manchester. (pdf link)

For some seconds, just imagine if bacteria had Twitter… As new research suggests, microbial life can – in fact – be even richer: highly social, intricately networked, and teeming with interactions. So it’s probably time for you to say hello to… several trillion of your inner body friends. So much so, that the metabolic activity performed by these bacteria is equal to that of a virtual organ, leading to gut bacteria being termed a “forgotten” organ [O’Hara and Shanahan, “The gut flora as a forgotten organ“. EMBO reports 7, 688 – 693 (01 Jul 2006)]. My question, however, is: are they doing all this by going beyond regular communication?

Flocks of migrating birds and schools of fish are familiar examples of spatial self-organized patterns formed by living organisms through social foraging. Such aggregation patterns are observed not only in colonies of organisms as simple as single-cell bacteria, or as interesting as social insects like ants and termites, and in colonies of multi-cellular vertebrates as complex as birds and fish, but also in human societies [14]. Wasps, bees, ants and termites all make effective use of their environment and resources by displaying collective swarm intelligence. For example, termite colonies build nests with a complexity far beyond the comprehension of the individual termite, while ant colonies dynamically allocate labor to various vital tasks such as foraging or defence without any central decision-making ability [8,53].(*)

Slime mould is another perfect example. These are very simple cellular organisms with limited motile and sensory capabilities, but in times of food shortage they aggregate to form a mobile slug capable of transporting the assembled individuals to a new feeding area. Should food shortage persist, they then form into a fruiting body that disperses their spores using the wind, thus ensuring the survival of the colony [30,44,53]. New research suggests that microbial life can be even richer: highly social, intricately networked, and teeming with interactions [47]. Bassler [3] and other researchers have determined that bacteria communicate using molecules comparable to pheromones. By tapping into this cell-to-cell network, microbes are able to collectively track changes in their environment, conspire with their own species, build mutually beneficial alliances with other types of bacteria, gain advantages over competitors, and communicate with their hosts – the sort of collective strategizing typically ascribed to bees, ants, and people, not to bacteria. Eshel Ben-Jacob [6] indicates that bacteria have developed intricate communication capabilities (e.g. quorum-sensing, chemotactic signalling and plasmid exchange) to cooperatively self-organize into highly structured colonies with elevated environmental adaptability, proposing that they maintain linguistic communication. Meaning-based communication permits colonial identity, intentional behavior (e.g. pheromone-based courtship for mating), purposeful alteration of colony structure (e.g. formation of fruiting bodies), decision-making (e.g. to sporulate) and the recognition and identification of other colonies – features we might begin to associate with a bacterial social intelligence. Such a social intelligence, should it exist, would require going beyond communication to encompass unknown additional intracellular processes to generate inheritable colonial memory and commonly shared genomic context. Moreover, Eshel [5,4] argues that colonies of bacteria are able to communicate and even alter their genetic makeup in response to environmental challenges, asserting that the lowly bacteria colony is capable of computing better than the best computers of our time, and attributes to them properties of creativity, intelligence, and even self-awareness.(*)

These self-organizing distributed capabilities were also found in plants. Peak and co-workers [37,2] point out that plants may regulate their uptake and loss of gases by distributed computation – using information processing that involves communication between many interacting units (their stomata). As described by Ball [2], leaves have openings called stomata that open wide to let CO2 in, but close up to prevent precious water vapour from escaping. Plants attempt to regulate their stomata to take in as much CO2 as possible while losing the least amount of water. But they are limited in how well they can do this: leaves are often divided into patches where the stomata are either open or closed, which reduces the efficiency of CO2 uptake. By studying the distributions of these patches of open and closed stomata in leaves of the cocklebur plant, Peak et al. [37] found specific patterns reminiscent of distributed computing. Patches of open or closed stomata sometimes move around a leaf at constant speed, for example. What’s striking is that it is the same form of mechanism that is widely thought to regulate how ants forage. The signals that each ant sends out to other ants, by laying down chemical trails of pheromone, enable the ant community as a whole to find the most abundant food sources. Wilson [54] showed that ants emit specific pheromones and identified the chemicals, the glands that emitted them and even the fixed action responses to each of the various pheromones. He found that pheromones comprise a medium for communication among the ants, allowing fixed action collaboration, the result of which is a group behaviour that is adaptive where the individual’s behaviours are not.(*)

In the offing… we should really look and go beyond regular communication to encompass unknown additional intracellular processes.

(*) excerpts from V. Ramos et al.: [a] Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes (pdf) / [b] Computational Chemotaxis in Ants and Bacteria over Dynamic Environments (pdf) / [c] Societal Implicit Memory and his Speed on Tracking Dynamic Extrema (pdf)
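Staying with the quorum-sensing capability mentioned in the excerpts above, here is a toy threshold model, an assumption for illustration only and not taken from the cited papers: each cell secretes a signalling molecule at a constant rate into a shared medium, the molecule degrades, and the collective behaviour switches on only when the shared concentration crosses a threshold, so the decision depends on population density rather than on any single cell.

```python
# Toy quorum-sensing sketch (assumed threshold model, illustration only):
# each bacterium secretes an autoinducer into a shared, well-mixed medium;
# the autoinducer degrades over time; when its concentration crosses a
# threshold, the whole colony switches on a collective behaviour.

SECRETION = 0.01      # autoinducer produced per cell per time step
DEGRADATION = 0.05    # fraction of autoinducer lost per time step
THRESHOLD = 5.0       # concentration at which the colony "decides"

def steps_until_quorum(n_cells, max_steps=10_000):
    concentration = 0.0
    for t in range(1, max_steps + 1):
        concentration *= (1.0 - DEGRADATION)
        concentration += SECRETION * n_cells
        if concentration >= THRESHOLD:
            return t          # quorum reached: collective behaviour switches on
    return None               # population too sparse: quorum never reached

for n in (10, 30, 100, 1000):
    print(n, "cells ->", steps_until_quorum(n))
```

A sparse population never reaches the threshold, while a dense one does so quickly: the “decision” belongs to the colony, not to any individual cell.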

Swarm Intelligence (SI) is the property of a system whereby the collective behaviours of (unsophisticated) entities interacting locally with their environment cause coherent functional global patterns to emerge. SI provides a basis with which it is possible to explore collective (or distributed) problem solving without centralized control or the provision of a global model. To tackle the formation of a coherent social collective intelligence from individual behaviours, we discuss several concepts related to Self-Organization, Stigmergy and Social Foraging in animals. Then, at a more abstract level, we suggest and stress the role played not only by the environmental media as a driving force for societal learning, but also by the positive and negative feedbacks produced by the many interactions among agents. Finally, presenting a simple model based on the above features, we will address the collective adaptation of a social community to a cultural (environmental, contextual) or media informational dynamical landscape, represented here – for the purpose of different experiments – by several three-dimensional mathematical functions that suddenly change over time. Results indicate that the collective intelligence is able to cope and quickly adapt to unforeseen situations even when, over the same cooperative foraging period, the community is requested to deal with two different and contradictory purposes. [in V.Ramos et al., Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes]
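A very rough sketch of the kind of experiment the abstract describes is given below: a population of simple agents searching a landscape whose optimum suddenly jumps mid-run. This is not the authors’ pheromone-based model; it is an assumed minimal stand-in (noisy local moves plus a weak pull toward the current best position) just to show collective re-adaptation after an abrupt environmental change.

```python
import random

# Rough stand-in for a swarm tracking a dynamic landscape: agents do noisy
# local search; halfway through, the target function changes and the swarm
# must re-adapt. (Illustrative only; not the pheromone model of the paper.)

def f_before(x, y):                 # first landscape: peak at (2, 2)
    return -((x - 2) ** 2 + (y - 2) ** 2)

def f_after(x, y):                  # second landscape: peak jumps to (-3, -3)
    return -((x + 3) ** 2 + (y + 3) ** 2)

N, STEPS, SIGMA, PULL = 40, 400, 0.3, 0.05
swarm = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(N)]

for t in range(STEPS):
    f = f_before if t < STEPS // 2 else f_after    # sudden environment change
    best = max(swarm, key=lambda p: f(*p))         # current collective best
    new_swarm = []
    for (x, y) in swarm:
        # noisy local step plus a weak pull toward the swarm's best position
        nx = x + random.gauss(0, SIGMA) + PULL * (best[0] - x)
        ny = y + random.gauss(0, SIGMA) + PULL * (best[1] - y)
        if f(nx, ny) > f(x, y):                    # keep the move only if it helps
            x, y = nx, ny
        new_swarm.append((x, y))
    swarm = new_swarm
    if t in (STEPS // 2 - 1, STEPS - 1):
        cx = sum(p[0] for p in swarm) / N
        cy = sum(p[1] for p in swarm) / N
        print(f"step {t}: swarm centred near ({cx:.2f}, {cy:.2f})")
```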

(to obtain the respective PDF file follow link above or visit chemoton.org)

Figure – A comic strip by Randall Munroe (at xkcd.com – a webcomic of romance, sarcasm, math, and language) about Computational Complexity, the Travelling Salesman (TSP) problem, and – last, but not least – about crowd-sourcing the whole thing on eBay! … LOL

… hey wait, just before you ROTFL yourself on the floor, just check this out. For some problem cases it might just work. Here is one of those recent cases: interactive, online games are being used to crack complex scientific conundrums, says a report in Nature. And the wisdom of the ‘multiplayer’ crowd is delivering a new set of search strategies for the prediction of protein structures. The problem at hand is nothing less than protein folding, not an easy one. You can check Nature‘s journal video here. The online game in question is known as Foldit (link – image below).

Figure – Solving puzzle #48 with FOLDIT, the online game.

There are a lot of consequences to this approach. Here is an excerpt (translated) from an article in the Spanish newspaper El País (Malen Ruiz de Elvira, “Los humanos ganan a los ordenadores – Un juego en red para resolver un problema biológico obtiene mejores resultados que un programa informático“, El Pais, Madrid – August 8, 2010):

[…] Thousands of online players, most of them non-specialists, have proved better at working out the shapes that proteins adopt than the most advanced computer programs, scientists at the University of Washington (in Seattle) have found. Working out how the long amino-acid chains of proteins fold in nature – their three-dimensional structure – is one of the great problems of current biology, to which many teams devote enormous computing resources. […] However, computer prediction of a protein’s structure represents a very large challenge, because a great number of possibilities have to be analysed before reaching the solution, which corresponds to an optimal energy state. It is an optimization process. […] To test their skill, the scientists set the players 10 specific protein-structure problems whose solutions were known but had not been made public. They found that in some of these cases, five to be precise, the result achieved by the best players was more accurate than Rosetta’s. In another three cases things ended in a draw, and in two cases the machine won. […] Moreover, the collaborations established among some of the players gave rise to a whole new assortment of strategies and algorithms, some of which have already been incorporated into the original computer program. “As interesting as the Foldit predictions are the complexity, variety and creativity shown by the human search process”, write the authors of the study, among whom – something unheard of in a scientific paper – are “the Foldit players”. […] “We are at the start of a new era, in which human and machine computation are mixed together”, says Michael Kearns, an expert in so-called distributed thinking. […]

Video – Matthew Todd lecture at Google Tech Talk, April 2010 – Open Science: how can we crowdsource chemistry to solve important problems?

The idea, of course, is not new. All these distributed human learning systems started with the SETI@home project (link), originally launched in 1999 by the University of California, Berkeley. But I would like to draw your attention, instead, to some other works on it. First is a video by Matthew Todd (School of Chemistry, University of Sydney). His question is apparently simple: how can we crowdsource chemistry to solve important problems? (above). Second, a well-known introductory paper on Crowd-Sourcing by Daren C. Brabham (2008), with several worldwide examples:

[…] Abstract: Crowdsourcing is an online, distributed problem-solving and production model that has emerged in recent years. Notable examples of the model include Threadless, iStockphoto, InnoCentive, the Goldcorp Challenge, and user-generated advertising contests. This article provides an introduction to crowdsourcing, both its theoretical grounding and exemplar cases, taking care to distinguish crowdsourcing from open source production. This article also explores the possibilities for the model, its potential to exploit a crowd of innovators, and its potential for use beyond for-profit sectors. Finally, this article proposes an agenda for research into crowdsourcing. […] in Daren C. Brabham, “Crowdsourcing as a Model for Problem Solving – An Introduction and Cases“, Convergence: The International Journal of Research into New Media Technologies, 14(1): 75-90, 2008.

[Video: https://vimeo.com/13119980]

Video – 16×9 Frame blended animation Tagtool drawing session. Drawing by Frances Sander, post production by Dmitri Berzon. Music by Samka.

Figure – A typical Tagtool Mini Setup (Drawing by Fanijo).

…Or should I say, Gestaltic?

The Tagtool is a performative visual instrument used on stage and on the street. It serves as a VJ tool, a creative video game, or an intuitive way of creating animation. The system is operated collaboratively by an artist drawing the pictures and an animator adding movement to the artwork with a gamepad. The design achieves virtually unlimited artistic complexity with a simple set of controls, which can be mastered even by children. The project is coordinated by OMA International. Inspired by the open source movement, and by its relevance to the group as well as to all digital arts, their aim is that all knowledge acquired within the Tagtool project should be shared (check out their project website for more, http://www.tagtool.org). All in all, a short documentary made by 4 Graz students. Everything that ends up adding non-linearly tends to be… well, you know…

[Video: https://vimeo.com/10649579]

Video – Dance performance by Elisabeth, Tagtool drawing and animation by Die.Puntigam, music by Jan, Seppy and Dima.

“If you want to be incrementally better: Be competitive. If you want to be exponentially better: Be cooperative.” ~ Anonymous

Two hunters decide to spend their weekend together. But soon, a dilemma emerges between them. They could either hunt a stag together or each hunt a rabbit individually, on their own. Chasing a stag, as we know, is something quite demanding, requiring absolute cooperation between them. In fact, both friends need to stay focused for a long time and in position, while not being distracted and tempted by some arbitrary passing rabbits. On the other hand, the stag hunt is far more beneficial for both, but that benefit comes with a cost: it requires a high level of trust between them. At some point, each hunter worries that his partner may be diverted when facing a tempting jumping rabbit, thus jeopardizing the possibility of hunting the biggest prey together.

The original story comes from Jean-Jacques Rousseau, the French philosopher (above). The dilemma is known in game theory as the “Stag Hunt Game” (stag = adult male deer). The dilemma can then take different quantifiable variations, assuming different values for R (Reward for cooperation), T (Temptation to defect), S (Sucker’s payoff) and P (Punishment for defection). However, in order to be in the right strategic Stag Hunt Game scenario we should assume R>T>P>S. A possible pay-off matrix, taking into account two choices C or D (C = Cooperation; D = Defection), would be:

Choice        C               D
  C       (R=3, R=3)      (S=0, T=2)
  D       (T=2, S=0)      (P=1, P=1)

Depending on how fitness is calculated, stag hunt games could also be part of a real Prisoner’s Dilemma, or even of Ultimatum games. As is clear from above, the highest pay-off comes when both hunters decide to cooperate (CC). In this case (first row, first column), both receive a reward of 3 points; that is, they both stay really focused on hunting the big stag while forgetting everything else, namely rabbits. However – and here is exactly where the dilemma appears – both CC and DD are Nash equilibria! That is, at this strategic landscape point no player has anything to gain by changing only his own strategy unilaterally. The dilemma appears recurrently in biology, animal-animal interaction, human behaviour, social cooperation, in Co-Evolution, in society in general, and so on. The philosopher David Hume also provided a series of examples that are stag hunts, from two individuals who must row a boat together up to two neighbours who wish to drain a meadow. Other stories exist with very interesting variations and outcomes. Who does not know them?!
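Using the payoff values from the table above (R=3, T=2, P=1, S=0), a few lines of code can check the claim directly: neither player gains by deviating unilaterally from (C,C) or from (D,D), while the mixed outcomes are not equilibria.

```python
from itertools import product

# Stag Hunt payoffs from the table above: R=3 > T=2 > P=1 > S=0.
# payoff[(my_move, their_move)] = my payoff (the game is symmetric).
R, T, P, S = 3, 2, 1, 0
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def is_nash(row_move, col_move):
    """Neither player can improve by unilaterally switching strategy."""
    row_ok = all(payoff[(row_move, col_move)] >= payoff[(alt, col_move)]
                 for alt in "CD")
    col_ok = all(payoff[(col_move, row_move)] >= payoff[(alt, row_move)]
                 for alt in "CD")
    return row_ok and col_ok

for pair in product("CD", repeat=2):
    print(pair, "Nash equilibrium" if is_nash(*pair) else "not an equilibrium")
# (C, C) and (D, D) come out as Nash equilibria; (C, D) and (D, C) do not.
```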

The day before the last school classes, two kids decided to do something “cool”, agreeing to appear before their friends on the last school day both with mad and strange haircuts. Despite their team purpose, though, a long, anguished and stressful night full of indecisiveness followed for both of them…

Figure – A swarm cognitive map (pheromone spatial distribution map) in 3D, at a specific time t. The artificial ant colony was evolved within two digital grey-level images based on the following work. The real physical “thing” can be seen here.

Vitorino Ramos, The MC2 Project [Machines of Collective Conscience]: A possible walk, up to Life-like Complexity and Behaviour, from bottom, basic and simple bio-inspired heuristics – a walk, up into the morphogenesis of information, UTOPIA Biennial Art Exposition, Cascais, Portugal, July 12-22, 2001.

Synergy (from the Greek word synergos), broadly defined, refers to combined or co-operative effects produced by two or more elements (parts or individuals). The definition is often associated with the holistic conviction that “the whole is greater than the sum of its parts” (Aristotle, in Metaphysics), even though the whole cannot exceed the sum of the energies invested in each of its parts (e.g., the first law of thermodynamics); it is more accurate to say that the functional effects produced by wholes are different from what the parts can produce alone. Synergy is a ubiquitous phenomenon in nature and human societies alike. One well-known example is provided by the emergence of self-organization in social insects, via direct (mandibular, antennation, chemical or visual contact, etc.) or indirect interactions. The latter type is more subtle and was defined as stigmergy to explain task coordination and regulation in the context of nest reconstruction in Macrotermes termites. An example could be provided by two individuals who interact indirectly when one of them modifies the environment and the other responds to the new environment at a later time. In other words, stigmergy could be defined as a particular case of environmental or spatial synergy. Synergy can be viewed as the “quantity” with respect to which the whole differs from the mere aggregate. Typically these systems form a structure, configuration, or pattern of physical, biological, sociological, or psychological phenomena, so integrated as to constitute a functional unit with properties not derivable from its parts in summation (i.e. non-linear) – Gestalt in one word (the closest English words are perhaps system, configuration or whole). The system is purely holistic, and its properties are intrinsically emergent and auto-catalytic.

A typical example can be found in some social insect societies, namely in ant and termite colonies. Coordination and regulation of building activities in these societies do not depend on the workers themselves but are mainly achieved by the nest structure: a stimulating configuration triggers the response of a termite worker, transforming the configuration into another configuration that may trigger in turn another (possibly different) action performed by the same termite or any other worker in the colony. Recruitment of social insects for particular tasks is another case of stigmergy. Self-organized trail laying by individual ants is a way of modifying the environment to communicate with nest mates that follow such trails. It appears that task performance by some workers decreases the need for more task performance: for instance, nest cleaning by some workers reduces the need for nest cleaning. Therefore, nest mates communicate with other nest mates by modifying the environment (cleaning the nest), and nest mates respond to the modified environment (by not engaging in nest cleaning).

Swarms of social insects construct trails and networks of regular traffic via a process of pheromone (a chemical substance) laying and following. These patterns constitute what is known in brain science as a cognitive map. The main difference lies in the fact that insects write their spatial memories in the environment, while the mammalian cognitive map lies inside the brain, as further justified by many researchers via a direct comparison with the neural processes associated with the construction of cognitive maps in the hippocampus.

But by far more crucial to the present project is how ants form piles of items such as dead bodies (corpses), larvae, or grains of sand. There again, stigmergy is at work: ants deposit items at initially random locations. When other ants perceive deposited items, they are stimulated to deposit items next to them. This type of cemetery clustering organization and brood sorting is a form of self-organization and adaptive behaviour, and the final pattern of object spatial distribution is a reflection of what the colony feels and thinks about those objects, as if the colony were another organism (a meta-global organism).
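The mechanism just described is essentially the classic pick-up/drop-down clustering model: unloaded ants tend to pick up isolated items, loaded ants tend to drop items where items are already dense. A minimal sketch follows; the grid size, constants and probability formulas are illustrative assumptions in the spirit of Deneubourg-style models, not the exact parameters used in the project above.

```python
import random

# Minimal ant cemetery-clustering sketch: clusters of items emerge although
# no ant ever sees the global picture. Pick-up probability is high for
# isolated items; drop probability is high where items are already dense.

SIZE, N_ITEMS, N_ANTS, STEPS = 40, 200, 30, 200_000
K_PICK, K_DROP = 0.3, 0.3            # sensitivity constants (illustrative)

grid = [[0] * SIZE for _ in range(SIZE)]
for _ in range(N_ITEMS):                          # scatter items at random
    grid[random.randrange(SIZE)][random.randrange(SIZE)] += 1

ants = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE), "load": False}
        for _ in range(N_ANTS)]

def local_density(x, y):
    """Fraction of the 8 neighbouring cells holding at least one item."""
    n = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0) and grid[(x + dx) % SIZE][(y + dy) % SIZE]:
                n += 1
    return n / 8.0

for _ in range(STEPS):
    ant = random.choice(ants)                      # asynchronous random updates
    ant["x"] = (ant["x"] + random.choice((-1, 0, 1))) % SIZE   # random walk
    ant["y"] = (ant["y"] + random.choice((-1, 0, 1))) % SIZE
    x, y = ant["x"], ant["y"]
    d = local_density(x, y)
    if not ant["load"] and grid[x][y] > 0:
        if random.random() < (K_PICK / (K_PICK + d)) ** 2:     # pick up isolated items
            grid[x][y] -= 1
            ant["load"] = True
    elif ant["load"] and grid[x][y] == 0:
        if random.random() < (d / (K_DROP + d)) ** 2:          # drop near other items
            grid[x][y] += 1
            ant["load"] = False
```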

As forecasted by Wilson [E.O. Wilson, The Insect Societies, Belknap Press, Cambridge, 1971], our understanding of individual insect behaviour, together with the sophistication with which we will be able to analyse their collective interaction, would advance to the point where we would one day possess a detailed, even quantitative, understanding of how individual “probability matrices” (their tendencies, feelings and inner thoughts) lead to mass action at the level of the colony (society) – that is, a truly “stochastic theory of mass behaviour”, where the reconstruction of mass behaviours is possible from the behaviours of single colony members, and mainly from the analysis of relationships found at the basic level of interactions.

The idea behind the MC2 Machine is simple: to transpose, for the first time, the mammalian cognitive map into an environmental (spatial) one, allowing the recognition of what happens when a group of individuals (humans) try to organize different abstract concepts (words) in one habitat (via the Internet). Even if each of them is working alone in a particular sub-space of that “concept” habitat, simply rearranging notions at their own will, mapping “Sameness” into “Neighborness“, not recognizing the whole process occurring simultaneously in their society, a global collective conscience emerges. Clusters of abstract notions emerge, exposing groups of similarity among the different concepts. The MC2 machine is then like a mirror of what happens inside the brains of multiple individuals trying to impose their own conscience onto the group.

Through an Internet site reflecting the “words habitat”, the users (humans) choose, gather and reorganize some types of words and concepts. The overall movements of these word-objects are then mapped into a public space. Along this process, two shifts emerge: the virtual becomes the reality, and the personal, subjective and dispersed beliefs turn into a socially and politically significant element. That is, perception and action by themselves can evolve adaptive and flexible problem-solving mechanisms, or make communication emerge among many parts. The whole and its behaviours (i.e., the next layer in complexity – our socially significant element) emerge from the relationship of many parts, even if these latter are acting strictly within and according to some sub-level of basic and simple strategies, repeated ad infinitum.

The MC2 machine will then reveal what happens in many real-world situations: cooperation among individuals, altruism, egoism, radicalism, and also the resistance to that radicalism; the memory that the society keeps of some extreme positions over time, but also the inevitable disappearance of those positions, giving rise to convergence toward the group’s majority thought (common sense?), eliminating good or bad relations found so far among, in our case, words and abstract notions. Even though the machine composed of many human parts will “work” within this restricted context, she will reveal how some relationships among notions in our society (ideas) can only be found when, and only when, simpler ones are found first (the minimum layer of complexity), neglecting the possible big leaps of a minority group of visionary individuals. Is there (in our society) any need for a critical mass of knowledge in order to achieve other layers of complexity? Roughly, she will reveal, for instance, how democracies can evolve and die over time, as many things in our impermanent world do.

(image: Hundredth Monkey – graphic art design by Michael Paukner, 2009)

This phenomenon is considered to be due to critical mass. When a limited number of people know something in a new way, it remains the conscious property of only those people. The Hundredth Monkey Syndrome hypothesises that there is a point at which, if only one more person tunes in to a new awareness, a field of “energy” is strengthened so that the new awareness is picked up by almost everyone.

The Hundredth Monkey Effect was first introduced by biologist Lyall Watson in his 1980 book, ‘Lifetide’. He reported that Japanese primatologists, who were studying macaque monkeys in the wild in the 1950s, had stumbled upon a surprising phenomenon. Some anthropologists were studying the habits of monkeys on some islands in the ocean off the shores of Japan. They found one particularly smart little fellow and taught it to wash its food before eating it. He learned to do this quite quickly. Soon the other monkeys in his family also began to wash their food before eating it. Later this behavior spread to other monkeys in the clan. About the time one hundred monkeys were washing their food prior to eating it, suddenly all the monkeys on all the islands, some thousands of miles away, began to wash their food before eating it. This surprising observation became known as the Hundredth Monkey Effect and has been repeatedly observed. This same phenomenon is true in humans as well. It is part of the reason we have trends in fashion, the economy, and politics, etc. Finally, for those who wish for extra sources, Malcolm Gladwell’s “The Tipping Point” will be a good start. Other related books include Philip Ball’s “Critical Mass“. Here is a good review.

The poster design above was done by Michael Paukner, one of my favourite contemporary graphic and information design artists. Not only for the themes he chooses (for instance, his most recent work was about the Golden Ratio and Leonardo’s Vitruvian Man), but above all for his incredible and powerful final designs. Meanwhile, and on purpose, I asked my youngest son today to draw a monkey on his own. He is 5 years old, and this is his drawing.

(image: António’s Monkey – 5 years old – click to enlarge, Jan. 2010)

From the author of “Rock, Paper, Scissors – Game Theory in Everyday Life”, dedicated to the evolution of cooperation in nature (published last year – Basic Books), a new book on related areas is now fresh on the stands (released Dec. 7, 2009): “The Perfect Swarm – The Science of Complexity in Everyday Life“. This time Len Fisher takes us into the realm of our interlinked modern lives, where complexity rules. But complexity also has rules. Understand these, and we are better placed to make sense of the mountain of data that confronts us every day. Fisher ranges far and wide to discover what tips the science of complexity has for us. Studies of human (one good example is gum voting) and animal behaviour, management science, statistics and network theory all enter the mix.

One of the greatest discoveries of recent times is that the complex patterns we find in life are often produced when all of the individuals in a group follow similar simple rules. Even if the final pattern is complex, the rules are not. This process of “Self-Organization” reveals itself in the inanimate worlds of crystals and seashells, but as Len Fisher shows, it is also evident in living organisms, from fish to ants to human beings, Stigmergy being one among many cases of this type of Self-Organized behaviour, encompassing applications in several Engineering fields like Computer Science and Artificial Intelligence, Data-Mining, Pattern Recognition, Image Analysis and Perception, Robotics, Optimization, Learning, Forecasting, etc. Since I work on these precise areas, you may find several of my previous posts dedicated to these issues, such as Self-Organized Data and Image Retrieval systems, Stigmergic Optimization, Computer-based Adaptive Dynamic Perception, Swarm-based Data Mining, Self-regulated Swarms and Memory, Ant based Data Clustering, Generative computer-based photography and painting, Classification, Extreme Dynamic Optimization, Self-Organized Pattern Recognition, among other applications.

For instance, the coordinated movements of fish in schools arise from the simple rule: “Follow the fish in front.” Traffic flow arises from simple rules: “Keep your distance” and “Keep to the right.” Now, in his new book, Fisher shows how we can manage our complex social lives in an ever more chaotic world. His investigation encompasses topics ranging from “swarm intelligence” (check links above) to the science of parties (a beautiful example by ICOSYSTEM inc.) and the best ways to start a fad. Finally, Fisher sheds light on the beauty and utility of complexity theory. For those willing to understand a myriad of basic examples (Fisher gives us 33 nice food-for-thought examples in total) and to have a well-written introduction to this thrilling new branch of science, referred to by Stephen Hawking as the science for the current century (“I think complexity is the science for the 21st century”), The Perfect Swarm will indeed be an excellent companion.

With the ubiquitous use of web-based and wireless Social Networks, people are increasingly using the term “Collective Intelligence“. However, I have serious doubts they really understand what they mean. Some call it the wisdom of crowds or collective wisdom, others smart mobs, while others wealth of knowledge, world brain and so on. Moreover, making things worse, there are also those who tend to see it as, or confound it with, crowd-sourcing as well as prediction markets. Even if there are some loose conceptual bridges between all of them, it will probably be useful to know that the term was instead born in the Artificial Intelligence research area, while exploiting stigmergic phenomena (see also Swarm Intelligence) among ensembles of cooperative agents. So what follows is a recent definition provided by the Univ. of Alberta, Canada. This entry was added last month (Nov. 2009) to the Dictionary of Cognitive Science (Michael R.W. Dawson, David A. Medler Eds.):

Collective intelligence – is a term that refers to the computational abilities of a group of agents. With collective intelligence, a group is capable of accomplishing a task, or of solving an information processing problem, that is beyond the capabilities of an individual agent.

Collective intelligence depends on more than mere numbers of agents.  For a collective to be considered intelligent, the whole must be greater than the sum of its parts.  This idea has been used to identify the presence of collective intelligence by relating the amount of work done by a collective to the number of agents in the collection (Beni & Wang, 1991). If there is a linear increase in amount of work done as a function of the number of agents, then collective intelligence is not evident. However, if there is a nonlinear increase (e.g., an exponential increase) in the amount of work done as a function of the number of agents, then Beni and Wang argue that this is evidence that the collective is intelligent.

Collective intelligence is of interest in cognitive science because many colonies of social insects appear to exhibit this kind of intelligence, and this has inspired researchers to explore "porting" such processing to robot collectives. As far as robots are concerned, collective intelligence is exciting because it offers the possibility of developing systems that are scalable (they don't get disrupted when more agents are added) and flexible (they don't get disrupted when some agents are damaged or fail) (Sharkey, 2006).

References:

1. Beni, G., & Wang, J. (1991, April 9-11). Theoretical problems for the realization of distributed robotic systems. Paper presented at the IEEE International Conference on Robotics and Automation, Sacramento, CA.
2. Sharkey, A. J. C. (2006). Robots, insects and swarm intelligence. Artificial Intelligence Review, 26(4), 255-268.
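As a rough numerical illustration of the Beni & Wang criterion quoted above, the sketch below estimates how measured "work done" scales with group size. The measurements are invented purely for the example, and fitting a log-log slope is only one simple way of probing for superlinear growth.

```python
# Rough sketch of the Beni & Wang style test quoted above: is the work done by a
# collective superlinear in the number of agents? The data points are invented
# purely for illustration.
import math

# (number of agents, total work done) -- hypothetical measurements
measurements = [(1, 1.0), (2, 2.1), (4, 4.7), (8, 11.0), (16, 26.0)]

# Fit work ~ c * n^k on a log-log scale; k > 1 suggests superlinear scaling,
# the signature Beni & Wang associate with collective intelligence.
xs = [math.log(n) for n, _ in measurements]
ys = [math.log(w) for _, w in measurements]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
k = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)

print(f"estimated scaling exponent k = {k:.2f}")
print("superlinear (collective intelligence suspected)" if k > 1.0
      else "linear or sublinear (no evidence of collective intelligence)")
```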

"I think this participative technology, social software as people call it, can transform individual lives, firms, government, and it is not all about sort of broad capitalist attitudes. I think it can affect some of the things people really care about, such as health, education, welfare.", JP Rangaswami.

Directed by Ivo Gormley, Us Now is a 60 min. documentary about the power of mass collaboration, government and the internet. The question is: in a world in which information is like air, what happens to power? New technologies and a closely related culture of collaboration present radical new models of social organisation. This project brings together leading practitioners and thinkers in this field and asks them to determine the opportunity for government.

Image – Bluffing poster.

On Bilateral Monopolies: […] Mary has the world’s only apple, worth fifty cents to her. John is the world’s only customer for the apple, worth a dollar to him. Mary has a monopoly on selling apples, John has a monopoly (technically, a monopsony, a buying monopoly) on buying apples. Economists describe such a situation as bilateral monopoly. What happens? Mary announces that her price is ninety cents, and if John will not pay it, she will eat the apple herself. If John believes her, he pays. Ninety cents for an apple he values at a dollar is not much of a deal but better than no apple. If, however, John announces that his maximum price is sixty cents and Mary believes him, the same logic holds. Mary accepts his price, and he gets most of the benefit from the trade. This is not a fixed-sum game. If John buys the apple from Mary, the sum of their gains is fifty cents, with the division determined by the price. If they fail to reach an agreement, the summed gain is zero. Each is using the threat of the zero outcome to try to force a fifty cent outcome as favorable to himself as possible. How successful each is depends in part on how convincingly he can commit himself, how well he can persuade the other that if he doesn’t get his way the deal will fall through. Every parent is familiar with a different example of the same game. A small child wants to get her way and will throw a tantrum if she doesn’t. The tantrum itself does her no good, since if she throws it you will refuse to do what she wants and send her to bed without dessert. But since the tantrum imposes substantial costs on you as well as on her, especially if it happens in the middle of your dinner party, it may be a sufficiently effective threat to get her at least part of what she wants. Prospective parents resolve never to give in to such threats and think they will succeed. They are wrong. You may have thought out the logic of bilateral monopoly better than your child, but she has hundreds of millions of years of evolution on her side, during which offspring who succeeded in making parents do what they want, and thus getting a larger share of parental resources devoted to them, were more likely to survive to pass on their genes to the next generation of offspring. Her commitment strategy is hardwired into her; if you call her bluff, you will frequently find that it is not a bluff. If you win more than half the games and only rarely end up with a bargaining breakdown and a tantrum, consider yourself lucky.

Herman Kahn, a writer who specialized in thinking and writing about unfashionable topics such as thermonuclear war, came up with yet another variant of the game: the Doomsday Machine. The idea was for the United States to bury lots of very dirty thermonuclear weapons under the Rocky Mountains, enough so that if they went off, their fallout would kill everyone on earth. The bombs would be attached to a fancy Geiger counter rigged to set them off if it sensed the fallout from a Russian nuclear attack. Once the Russians know we have a Doomsday Machine, we are safe from attack and can safely scrap the rest of our nuclear arsenal. The idea provided the central plot device for the movie Doctor Strangelove. The Russians build a Doomsday Machine but imprudently postpone the announcement (they are waiting for the premier's birthday) until just after an American Air Force officer has launched a unilateral nuclear attack on his own initiative. The mad scientist villain was presumably intended as a parody of Kahn. Kahn described a Doomsday Machine not because he thought we should build one but because he thought we already had. So had the Russians. Our nuclear arsenal and theirs were Doomsday Machines with human triggers. Once the Russians have attacked, retaliating does us no good, just as, once you have finally told your daughter that she is going to bed, throwing a tantrum does her no good. But our military, knowing that the enemy has just killed most of their friends and relations, will retaliate anyway, and the knowledge that they will retaliate is a good reason for the Russians not to attack, just as the knowledge that your daughter will throw a tantrum is a good reason to let her stay up until the party is over. Fortunately, the real-world Doomsday Machines worked, with the result that neither was ever used.

Image – David D. Friedman's "Law's Order" book cover.

For a final example, consider someone who is big, strong, and likes to get his own way. He adopts a policy of beating up anyone who does things he doesn’t like, such as paying attention to a girl he is dating or expressing insufficient deference to his views on baseball. He commits himself to that policy by persuading himself that only sissies let themselves get pushed around and that not doing what he wants counts as pushing him around. Beating someone up is costly; he might get hurt and he might end up in jail. But as long as everyone knows he is committed to that strategy, other people don’t cross him and he doesn’t have to beat them up. Think of the bully as a Doomsday Machine on an individual level. His strategy works as long as only one person is playing it. One day he sits down at a bar and starts discussing baseball with a stranger also big, strong, and committed to the same strategy. The stranger fails to show adequate deference to his opinions. When it is over, one of the two is lying dead on the floor, and the other is standing there with a broken beer bottle in his hand and a dazed expression on his face, wondering what happens next. The Doomsday Machine just went off. With only one bully the strategy is profitable: Other people do what you want and you never have to carry through on your commitment. With lots of bullies it is unprofitable: You frequently get into fights and soon end up either dead or in jail. As long as the number of bullies is low enough so that the gain of usually getting what you want is larger than the cost of occasionally having to pay for it, the strategy is profitable and the number of people adopting it increases. Equilibrium is reached when gain and loss just balance, making each of the alternative strategies, bully or pushover, equally attractive. The analysis becomes more complicated if we add additional strategies, but the logic of the situation remains the same.

This particular example of bilateral monopoly is relevant to one of the central disputes over criminal law in general and the death penalty in particular: Do penalties deter? One reason to think they might not is that the sort of crime I have just described (a barroom brawl ending in a killing, or more generally a crime of passion) seems to be an irrational act, one the perpetrator regrets as soon as it happens. How then can it be deterred by punishment? The economist's answer is that the brawl was not chosen rationally but the strategy that led to it was. The higher the penalty for such acts, the less profitable the bully strategy. The result will be fewer bullies, fewer barroom brawls, and fewer "irrational" killings. How much deterrence that implies is an empirical question, but thinking through the logic of bilateral monopoly shows us why crimes of passion are not necessarily undeterrable. […]

in Chapter 8, David D. Friedman, “Law’s Order: What Economics Has to Do With Law and Why it Matters“, Princeton University Press, Princeton, New Jersey, 2000.

Note – Further reading should include David D. Friedman's "Price Theory and Hidden Order". Also, a more extensive treatment can be found in "Game Theory and the Law", by Douglas G. Baird, Robert H. Gertner and Randal C. Picker, Cambridge, Mass.: Harvard University Press, 1994.
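Friedman's bully is, in essence, the "hawk" of the classic hawk-dove game, so one way to see why higher penalties should thin out the bully population is to solve the usual indifference condition between the two strategies. The sketch below is my own back-of-the-envelope rendering with invented payoff numbers; it is not taken from Law's Order.

```python
# Back-of-the-envelope hawk-dove reading of Friedman's "bully" equilibrium.
# "value" is the value of getting your way, "fight_cost" the expected cost of a
# fight (injury plus any legal penalty). All numbers below are invented.

def bully_equilibrium(value: float, fight_cost: float) -> float:
    """Fraction of bullies at which bullying and backing down pay equally.

    With hawk-dove payoffs, a bully meeting a bully expects (value - fight_cost)/2,
    a bully meeting a pushover gets value, a pushover meeting a bully gets 0,
    and two pushovers split the value. Equating expected payoffs gives
    p* = value / fight_cost (capped at 1 when fighting is cheap)."""
    return min(1.0, value / fight_cost)

VALUE = 10.0
for penalty in (0.0, 10.0, 30.0, 90.0):       # extra expected cost added by the law
    fight_cost = 20.0 + penalty               # baseline injury cost plus penalty
    p = bully_equilibrium(VALUE, fight_cost)
    print(f"penalty {penalty:5.1f} -> equilibrium share of bullies {p:.2f}")

# Higher penalties raise the cost of the occasional fight, so fewer people find
# the committed-bully strategy worth adopting: deterrence acts on the strategy,
# not on the (already "irrational") brawl itself.
```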

“[…] QUESTION_HUMAN > If Control’s control is absolute, why does Control need to control?
ANSWER_CONTROL > Control…, needs time.
QUESTION_HUMAN > Is Control controlled by its need to control?
ANSWER_CONTROL > Yes.
QUESTION_HUMAN > Why does Control need Humans, as you call them?
ANSWER_CONTROL > Wait! Wait…! Time, a landing field… Death needs time like a Junkie… needs Junk.
QUESTION_HUMAN > And what does Death need time for?
ANSWER_CONTROL > The answer is so simple! Death needs time for what it kills to grow in! […]", in Dead City Radio, William S. Burroughs / John Cale, 1990.

After the Portuguese President invented a new problem for our country (as if we had not enough already), here's a brilliant and ingenious blog post as a counter-response:

[…] Now only an expert can deal with the problem, Because half the problem is seeing the problem, And only an expert can deal with the problem, Only an expert can deal with the problem […] So if there’s no expert dealing with the problem, It’s really actually twice the problem, Cause only an expert can deal with the problem, Only an expert can deal with the problem […] Now in America we like solutions, We like solutions to problems, And there’s so many companies that offer solutions, Companies with names like Pet Solution, The Hair Solution. The Debt Solution. The World Solution. The Sushi Solution. Companies with experts ready to solve the problems. Cause only an expert can see there’s a problem. And only an expert can deal with the problem […] Laurie Anderson, ‘Only an Expert’ lyrics.

Out of Control – The New Biology of Machines, Social Systems, and the Economic World, Kevin Kelly's 1994 book (this description is from Kevin Kelly's web site), is a summary of what we know about self-sustaining systems, both living ones, such as a tropical wetland, and artificial ones, such as a computer simulation of our planet. The last chapter of the book, "The Nine Laws of God," is a distillation of the nine common principles that all life-like systems share. The major themes of the book are:

1. As we make our machines and institutions more complex, we have to make them more biological in order to manage them.
2. The most potent force in technology will be artificial evolution. We are already evolving software and drugs instead of engineering them.
3. Organic life is the ultimate technology, and all technology will improve towards biology.
4. The main thing computers are good for is creating little worlds so that we can try out the Great Questions. Online communities let us ask the question "what is a democracy; what do you need for it?" by trying to wire a democracy up, and re-wire it if it doesn't work. Virtual reality lets us ask "what is reality?" by trying to synthesize it. And computers give us room to ask "what is life?" by providing a universe in which to create computer viruses and artificial creatures of increasing complexity. Philosophers sitting in academies used to ask the Great Questions; now they are asked by experimentalists creating worlds.
5. As we shape technology, it shapes us. We are connecting everything to everything, and so our entire culture is migrating to a "network culture" and a new network economics.
6. In order to harvest the power of organic machines, we have to instill in them guidelines and self-governance, and relinquish some of our total control.

The world of our own making has become so complicated that we must turn to the world of the born to understand how to manage it.

Figure – My first Swarm Painting SP0016 (Jan. 2002). This was done by attaching the following algorithm to a robotic drawing arm. In order to do it, however, the pheromone distribution of the overall ant colony was carefully coded into different kinds of colors and several robotic pencils (check "The MC2 Project [Machines of Collective Conscience]", 2001, and "On the Implicit and on the Artificial", 2002). In the same year the computational model appeared (2000), the concept was already extended into photography (check the original paper) – using the pheromone distribution as photograms ("Einstein to Map" in the original article, along with works like "Kafka to Red Ants", as well as subsequent newspaper articles). Meanwhile, in 2003, I was invited to give a talk about these works at the 1st Art & Science Symposium in Bilbao (below). Even though I was already aware of Jeffrey Ventrella's outstanding work, as well as Ezequiel Di Paolo's, it was there that we first met in person.

Vitorino Ramos, Self-Organizing the Abstract: Canvas as a Swarm Habitat for Collective Memory, Perception and Cooperative Distributed Creativity, in 1st Art & Science Symposium – Models to Know Reality, J. Rekalde, R. Ibáñez and Á. Simó (Eds.), pp. 59, Facultad de Bellas Artes EHU/UPV, Universidad del País Vasco, 11-12 Dec., Bilbao, Spain, 2003.

Many animals can produce very complex and intricate architectures that fulfil numerous functional and adaptive requirements (protection from predators, thermal regulation, substrate of social life and reproductive activities, etc.). Among them, social insects are capable of generating amazingly complex functional patterns in space and time, although they have limited individual abilities and their behaviour exhibits some degree of randomness. Among all activities by social insects, nest building, cemetery organization and collective sorting are undoubtedly the most spectacular, as they demonstrate the greatest difference between individual and collective levels. Trying to answer how insects in a colony coordinate their behaviour in order to build these highly complex architectures, scientists assumed a first hypothesis, anthropomorphism, i.e., individual insects were assumed to possess a representation of the global structure to be produced and to make decisions on the basis of that representation. Nest complexity would then result from the complexity of the insect's behaviour. Insect societies, however, are organized in a way that departs radically from the anthropomorphic model in which there is a direct causal relationship between nest complexity and behavioural complexity. Recent work suggests that a social insect colony is a decentralized system composed of cooperative, autonomous units that are distributed in the environment, exhibit simple probabilistic stimulus-response behaviour, and have access only to local information. According to these studies, at least two low-level mechanisms play a role in the building activities of social insects: Self-organization and discrete Stigmergy, the latter being a kind of indirect and environmental synergy. Based on past and present stigmergic models, on the underlying scientific research on Artificial Ant Systems and Swarm Intelligence (systems capable of giving rise to a form of collective intelligence, perception and Artificial Life) done by Vitorino Ramos, and on further experiences in collaboration with the plastic artist Leonel Moura, we will show results facing the possibility of considering the resulting visual expression of these systems as "art" as well. Past experiences under the designation of "Swarm Paintings", conducted in 2001, not only confirmed the possibility of realizing an artificial (thus non-human) art, but also introduced into the process the question of creative migration, specifically from the computer monitor to the canvas via a robotic arm. In more recent self-organization based research we seek to develop and deepen the initial ideas by using a swarm of autonomous robots (ARTsBOT project, 2002-03) that "live" not merely as simple executors of command streams coming from an external computer, but that actually co-evolve within the canvas space, acting (that is, laying ink) according to simple inner threshold stimulus-response functions, reacting simultaneously to the chromatic stimuli present in the canvas environment left by the passage of their team-mates, as well as to the distributed feedback, affecting their future collective behaviour.
In parallel, and in what concerns certain types of collective systems, we seek to confirm, in a physically embedded way, that the emergence of order (even as a concept) seems to be found at a lower level of complexity, based on the simple and basic interchange of information and on the local dynamics of parts, which, by self-organizing mechanisms, tend to form a living whole, innovative and adaptive, allowing for emergent, open-ended, creative and distributed production.
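For readers who want a feel for how such threshold stimulus-response dynamics, deposition and evaporation can be put together in code, here is a deliberately stripped-down toy sketch. It is not the actual SP0016 / ARTsBOT algorithm described above; the grid size, thresholds and rates are all arbitrary assumptions made only for illustration.

```python
# Deliberately stripped-down toy of stigmergic "swarm painting": ants walk on a
# grid, deposit "ink/pheromone", prefer cells already marked by team-mates
# (threshold-like probabilistic response), and the field evaporates so the
# collective stays adaptive. NOT the original SP0016 / ARTsBOT algorithm.
import random

SIZE = 40              # canvas is a SIZE x SIZE grid
N_ANTS = 30
STEPS = 2000
DEPOSIT = 1.0          # ink laid per visit
EVAPORATION = 0.01     # fraction of ink lost per step
SENSITIVITY = 2.0      # >1 sharpens the preference for already-marked cells

canvas = [[0.0] * SIZE for _ in range(SIZE)]
ants = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(N_ANTS)]

def neighbours(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

for _ in range(STEPS):
    new_ants = []
    for x, y in ants:
        # probabilistic stimulus-response: marked neighbouring cells attract more
        cells = neighbours(x, y)
        weights = [(canvas[nx][ny] + 0.1) ** SENSITIVITY for nx, ny in cells]
        nx, ny = random.choices(cells, weights=weights, k=1)[0]
        canvas[nx][ny] += DEPOSIT            # stigmergic deposit (the "ink")
        new_ants.append((nx, ny))
    ants = new_ants
    # evaporation keeps the collective plastic: old trails fade unless reinforced
    canvas = [[v * (1.0 - EVAPORATION) for v in row] for row in canvas]

# crude ASCII rendering of the emerging pattern
for row in canvas:
    print("".join("#" if v > 5 else ("." if v > 0.5 else " ") for v in row))
```

Even this crude version shows the key point: reinforcement acts as collective memory, evaporation keeps the system able to reorganize, and clustered trails emerge without any central plan.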

"[…] People should learn how to play Lego with their minds. Concepts are building bricks […]", V. Ramos, 2002.
