“There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.” ~ John von Neumann, in his 1949 University of Illinois lectures on the Theory and Organization of Complicated Automata [J. von Neumann, Theory of Self-Reproducing Automata, 1949 Univ. of Illinois Lectures on the Theory and Organization of Complicated Automata, ed. A.W. Burks (University of Illinois Press, Urbana, IL, 1966).].
“I don’t do drugs. I am drugs” ~ Salvador Dalí.
The photo, which dates from 1969, depicts the 65-year-old Catalan surrealist Salvador Dalí emerging from a Paris subway station led by his trusty giant anteater. Surrealism’s aim was to “resolve the previously contradictory conditions of dream and reality.” Artists painted unnerving, illogical scenes with photographic precision, created strange creatures from everyday objects and developed painting techniques that allowed the unconscious to express itself. [from Wikipedia, link above].
Interesting how this Samuel Beckett (1906–1989) quote about his work is so close to research on Artificial Life (aLife), and to how Christopher Langton (link) approached the field in its initial stages, going back and forth with his lambda parameter (“Life emerges at the Edge of Chaos”) back in the 80s. According to Langton’s findings, at the edge between several ordered states and the chaotic regime (λ ≈ 0.273) the information passing through the system is maximal, a condition thought to support life-like computation. Will not wait for Godot. Here:
“Beckett was intrigued by chess because of the way it combined the free play of imagination with a rigid set of rules, presenting what the editors of the Faber Companion to Samuel Beckett call a “paradox of freedom and restriction”. That is a very Beckettian notion: the idea that we are simultaneously free and unfree, capable of beauty yet doomed. Chess, especially in the endgame when the board’s opening symmetry has been wrecked and the courtiers eliminated, represents life reduced to essentials – to a struggle to survive.”(*)
(*) from Stephen Moss, “Samuel Beckett’s obsession with chess: how the game influenced his work”, The Guardian, 29 August 2013. [link]
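Langton’s lambda parameter, mentioned above, is simple to compute: for a cellular automaton’s transition table, it is the fraction of entries that map a neighbourhood to a non-quiescent state. A minimal sketch – the random rule table below is hypothetical, not one of the rules from Langton’s experiments:

```python
import random

def langton_lambda(rule_table, quiescent=0):
    """Langton's lambda: the fraction of transitions in a CA rule
    table that do NOT map to the quiescent state."""
    non_quiescent = sum(1 for out in rule_table.values() if out != quiescent)
    return non_quiescent / len(rule_table)

# Hypothetical example: a 1-D CA with k=2 states and radius r=1,
# so neighbourhoods are 3-cell tuples (8 entries in the table).
random.seed(42)
table = {(a, b, c): random.choice([0, 1])
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}
print(round(langton_lambda(table), 3))
```

Sweeping lambda from 0 toward 1 and watching the dynamics change from frozen to chaotic is essentially what Langton did; his λ ≈ 0.273 sits at the transition.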
[...] Analogy is the core of all thinking. – This is the simple but unorthodox premise that Pulitzer Prize-winning author Douglas Hofstadter and French psychologist Emmanuel Sander defend in their new work. Hofstadter has been grappling with the mysteries of human thought for over thirty years. Now, with his trademark wit and special talent for making complex ideas vivid, he has partnered with Sander to put forth a highly novel perspective on cognition. We are constantly faced with a swirling and intermingling multitude of ill-defined situations. Our brain’s job is to try to make sense of this unpredictable, swarming chaos of stimuli. How does it do so? The ceaseless hail of input triggers analogies galore, helping us to pinpoint the essence of what is going on. Often this means the spontaneous evocation of words, sometimes idioms, sometimes the triggering of nameless, long-buried memories.
Why did two-year-old Camille proudly exclaim, “I undressed the banana!”? Why do people who hear a story often blurt out, “Exactly the same thing happened to me!” when it was a completely different event? How do we recognize an aggressive driver from a split-second glance in our rear-view mirror? What in a friend’s remark triggers the offhand reply, “That’s just sour grapes”? What did Albert Einstein see that made him suspect that light consists of particles when a century of research had driven the final nail in the coffin of that long-dead idea? The answer to all these questions, of course, is analogy-making – the meat and potatoes, the heart and soul, the fuel and fire, the gist and the crux, the lifeblood and the wellsprings of thought. Analogy-making, far from happening at rare intervals, occurs at all moments, defining thinking from top to toe, from the tiniest and most fleeting thoughts to the most creative scientific insights.
Like Gödel, Escher, Bach before it, Surfaces and Essences will profoundly enrich our understanding of our own minds. By plunging the reader into an extraordinary variety of colorful situations involving language, thought, and memory, by revealing bit by bit the constantly churning cognitive mechanisms normally completely hidden from view, and by discovering in them one central, invariant core – the incessant, unconscious quest for strong analogical links to past experiences – this book puts forth a radical and deeply surprising new vision of the act of thinking. [...] intro to “Surfaces and Essences – Analogy as the fuel and fire of thinking” by Douglas Hofstadter and Emmanuel Sander, Basic Books, NY, 2013 [link] (to be released May 1, 2013).
Photo – Oscar Niemeyer (1907-2012) photographed by Ludovic Lent for L’Express, France.
“First were the thick stone walls, the arches, then the domes and vaults – of the architect, searching out for wider spaces. Now it is concrete-reinforced that gives our imagination flight with its soaring spans and uncommon cantilevers. Concrete, to which architecture is integrated, through which it is able to discard the foregone conclusions of rationalism, with its monotony and repetitious solutions. A concern for beauty, a zest for fantasy, and an ever-present element of surprise bear witness that today’s architecture is not a minor craft bound to straight-edge rules, but an architecture imbued with technology: light, creative and unfettered, seeking out its architectural scene.” ~ Oscar Niemeyer, acceptance speech, Pritzker Architecture Prize (1988).
“What bothers us about primordial beauty is that it is no longer characteristic. Unspoiled places sadden us because they are, in an important sense, no longer true.” – Robert Adams.
Living and working mostly in Colorado for nearly 30 years, Robert Adams was chiefly concerned with a palimpsest of alterations unfolding in front of his camera in the plain American West. Even if imperceptible to many, the landscape in turmoil was his medium. And it was there he found out what beauty is not. In 1975, New Topographics encapsulated this evolving man-altered landscape in an exhibition that ended up signalling a pivotal moment in American landscape photography. His sensibility and aesthetic approach remain pertinent today. One needs only to replace random, lost inanimate landscapes with random, lonely people.
“… words are not numbers, nor even signs. They are animals, alive and with a will of their own. Put together, they are invariably less or more than their sum. Words die in antisepsis. Asked to be neutral, they display allegiances and stubborn propensities. They assume the color of their new surroundings, like chameleons; they perversely develop echoes.” Guy Davenport, “Another Odyssey”, 1967. [above: painting by Mark Rothko - untitled]
Figure – A classic example of emergence: the exact shape of a termite mound is not reducible to the actions of individual termites, even though there are already computer models that can achieve it (for more, check “Stigmergic construction” or this blog’s Stigmergy tag).
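The mechanism behind such models is stigmergy: agents respond to traces left in the environment, and those traces bias further building. A toy sketch of that positive feedback – a hypothetical one-dimensional “deposition” model, not one of the actual termite simulations:

```python
import random

def stigmergic_build(steps=2000, size=20, boost=5.0, seed=1):
    """Toy stigmergy: an agent drops material on a row of cells, but
    cells already holding material attract further deposits, so a few
    tall 'pillars' emerge from behaviour that starts out uniform."""
    rng = random.Random(seed)
    heights = [0] * size
    for _ in range(steps):
        # pick a cell with probability proportional to 1 + boost*height
        weights = [1 + boost * h for h in heights]
        cell = rng.choices(range(size), weights=weights)[0]
        heights[cell] += 1
    return heights

print(stigmergic_build())
```

No cell is special a priori; the tallest pillars are selected by the history of the building process itself, which is the point of the figure’s caption.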
“The world can no longer be understood like a chessboard… It’s a Jackson Pollock painting” ~ Carne Ross, 2012.
[...] As pointed by Langton, there is more to life than mechanics – there is also dynamics. Life depends critically on principles of dynamical self-organization that have remained largely untouched by traditional analytic methods. There is a simple explanation for this – these self-organized dynamics are fundamentally non-linear phenomena, and non-linear phenomena in general depend critically on the interactions between parts: they necessarily disappear when parts are treated in isolation from one another, which is the basis for any analytic method. Rather, non-linear phenomena are most appropriately treated by a synthetic approach, where synthesis means “the combining of separate elements or substances to form a coherent whole”. In non-linear systems, the parts must be treated in each other’s presence, rather than independently from one another, because they behave very differently in each other’s presence than we would expect from a study of the parts in isolation. [...] in Vitorino Ramos, 2002, http://arxiv.org/abs/cs/0412077.
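The point about parts behaving differently in each other’s presence has a textbook illustration: in a predator–prey system, each variable in isolation just grows or just decays, yet the nonlinear coupling terms produce oscillations that neither part shows alone. A minimal Euler sketch, with illustrative parameter values not taken from the cited paper:

```python
def predator_prey(steps=5000, dt=0.01):
    """Euler integration of a Lotka-Volterra system. Prey alone would
    grow exponentially; predators alone would die out; only the
    nonlinear coupling terms (x*y) produce the joint oscillations."""
    x, y = 2.0, 1.0                  # prey, predators
    prey_history = []
    for _ in range(steps):
        dx = x * (1.0 - y)           # prey: growth minus predation
        dy = y * (x - 1.0)           # predators: decay unless fed
        x, y = x + dt * dx, y + dt * dy
        prey_history.append(x)
    return prey_history
```

Dropping either coupling term (setting the other species constant) kills the oscillation – an analytic, parts-in-isolation study would never predict it.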
What follows are passages from an important article on the consequences for Science of the recent discovery of the Higgs boson. Written by Ashutosh Jogalekar, “The Higgs boson and the future of science” (link), the article appeared in the Scientific American blog section (July 2012). It starts by discussing reductionism, or how the Higgs boson points us to the culmination of reductionist thinking:
[...] And I say this with a suspicion that the Higgs boson may be the most fitting tribute to the limitations of what has been the most potent philosophical instrument of scientific discovery – reductionism. [...]
[...] Yet as we enter the second decade of the twenty-first century, it is clear that reductionism as a principal weapon in our arsenal of discovery tools is no longer sufficient. Consider some of the most important questions facing modern science, almost all of which deal with complex, multifactorial systems. How did life on earth begin? How does biological matter evolve consciousness? What are dark matter and dark energy? How do societies cooperate to solve their most pressing problems? What are the properties of the global climate system? It is interesting to note at least one common feature among many of these problems: they result from the build-up rather than the breakdown of their operational entities. Their signature is collective emergence, the creation of attributes which are greater than the sum of their constituent parts. Whatever consciousness is, for instance, it is definitely a result of neurons acting together in ways that are not obvious from their individual structures. Similarly, the origin of life can be traced back to molecular entities undergoing self-assembly and then replication and metabolism, a process that supersedes the chemical behaviour of the isolated components. The puzzles of dark matter and dark energy also have as their salient feature the behaviour of matter at large length and time scales. Studying cooperation in societies essentially involves studying group dynamics and evolutionary conflict. The key processes that operate in the existence of all these problems seem almost intuitively to involve the opposite of reduction; they all result from the agglomeration of molecules, matter, cells, bodies and human beings across a hierarchy of unique levels. In addition, and this is key, they involve the manifestation of unique principles emerging at every level that cannot be merely reduced to those at the underlying level. [...]
[...] While emergence had been implicitly appreciated by scientists for a long time, its modern salvo was undoubtedly a 1972 paper in Science by the Nobel Prize winning physicist Philip Anderson (link) titled “More is Different” (PDF), a title that has turned into a kind of clarion call for emergence enthusiasts. In his paper Anderson (who incidentally first came up with the so-called Higgs mechanism) argued that emergence was nothing exotic; for instance, a lump of salt has properties very different from those of its highly reactive components sodium and chlorine. A lump of gold evidences properties like color that don’t exist at the level of individual atoms. Anderson also appealed to the process of broken symmetry, invoked in all kinds of fundamental events – including the existence of the Higgs boson – as being instrumental for emergence. Since then, emergent phenomena have been invoked in hundreds of diverse cases, ranging from the construction of termite hills to the flight of birds. The development of chaos theory beginning in the 60s further illustrated how very simple systems could give rise to very complicated and counter-intuitive patterns and behaviour that are not obvious from the identities of the individual components. [...]
[...] Many scientists and philosophers have contributed to considered critiques of reductionism and an appreciation of emergence since Anderson wrote his paper. (…) These thinkers make the point that not only does reductionism fail in practice (because of the sheer complexity of the systems it purports to explain), but it also fails in principle on a deeper level. [...]
[...] An even more forceful proponent of this contingency-based critique of reductionism is the complexity theorist Stuart Kauffman, who has laid out his thoughts in two books. Just like Anderson, Kauffman does not deny the great value of reductionism in illuminating our world, but he also points out the factors that greatly limit its application. One of his favourite examples is the role of contingency in evolution, and the object of his attention is the mammalian heart. Kauffman makes the case that no amount of reductionist analysis could tell you that the main function of the heart is to pump blood. Even in the unlikely case that you could predict the structure of hearts and the bodies that house them starting from the Higgs boson, such a deductive process could never tell you that of all the possible functions of the heart, the most important one is to pump blood. This is because the blood-pumping action of the heart is as much a result of historical contingency and the countless chance events that led to the evolution of the biosphere as it is of its bottom-up construction from atoms, molecules, cells and tissues. [...]
[...] Reductionism then falls woefully short when trying to explain two things; origins and purpose. And one can see that if it has problems even when dealing with left-handed amino acids and human hearts, it would be in much more dire straits when attempting to account for say kin selection or geopolitical conflict. The fact is that each of these phenomena are better explained by fundamental principles operating at their own levels. [...]
[...] Every time the end of science has been announced, science itself proved that claims of its demise were vastly exaggerated. Firstly, reductionism will always be alive and kicking since the general approach of studying anything by breaking it down into its constituents will continue to be enormously fruitful. But more importantly, it’s not so much the end of reductionism as the beginning of a more general paradigm that combines reductionism with new ways of thinking. The limitations of reductionism should be seen as a cause not for despair but for celebration since it means that we are now entering new, uncharted territory. [...]
“It takes you 500,000 microseconds just to click a mouse. But if you’re a Wall Street algorithm and you’re five microseconds behind, you’re a loser.” ~ Kevin Slavin.
TED video lecture – Kevin Slavin (link) argues that we’re living in a world designed for – and increasingly controlled by – algorithms. In this riveting talk from TEDGlobal, he shows how these complex computer programs determine espionage tactics, stock prices, movie scripts, and architecture. And he warns that we are writing code we can’t understand, with implications we can’t control. Kevin Slavin navigates the “algoworld”, the expanding space in our lives that’s determined and run by algorithms (link at TED).
“Coders are now habitat providers for the rest of the world.” ~ Vitorino Ramos, via Twitter, July 17, 2012 (link).
Video lecture – Casey Reas (reas.com) at Eyeo2012 (uploaded 2 days ago on Vimeo): From a visual and conceptual point of view, the tension between order and chaos is a fertile space to explore. For over one hundred years, visual artists have focused on both in isolation and in tandem. As artists started to use software in the 1960s, the nature of this exploration expanded. This presentation features a series of revealing examples, historical research into the topic as developed for Reas’ upcoming co-authored book “10 PRINT CHR$(205.5+RND(1)); : GOTO 10” (MIT Press, 2012, book link; cover above), and a selection of Casey’s artwork that relies on the relationship between chance operations and strict rules.
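The book’s title is itself a complete Commodore 64 BASIC program: it picks one of the two PETSCII diagonal characters (CHR$ 205 or 206) at random and prints it forever, filling the screen with a maze. A rough Python rendition, using ASCII \ and / in place of the PETSCII diagonals:

```python
import random

# 10 PRINT CHR$(205.5+RND(1)); : GOTO 10  -- rendered in Python.
def ten_print(rows=8, cols=32, seed=0):
    """Build a 'maze' by choosing one of two diagonals per cell,
    the same chance operation under a strict rule that Reas discusses."""
    rng = random.Random(seed)
    return "\n".join(
        "".join(rng.choice("\\/") for _ in range(cols))
        for _ in range(rows)
    )

print(ten_print())
```

One random bit per cell, one rigid grid: the whole order-versus-chaos tension of the talk in two characters.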
“Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” ~ The Red Queen, in “Through the Looking-Glass, and What Alice Found There”, Charles Lutwidge Dodgson, 1871.
Your move. Alice suddenly found herself in a strange new world, and quickly needed to adapt. As C. L. Dodgson (better known as Lewis Carroll) brilliantly puts it, all the running you can do does not suffice at all. This is a world (Wonderland) with different “physical” laws or “societal norms”. Surprisingly, those patterns also appear quite familiar to us here on planet Earth. The quote above, for example, became the paradigm for biological co-evolution, in the form of the Red Queen effect.
In Wonderland (the first book), Alice follows the White Rabbit, who ends up leading her into this strange habitat where apparently “normal” “physical” laws do not apply. In this second book, however, Alice needs to overcome a series of great obstacles – structured as phases in a game of chess – in order to become a queen. As she moves on, several other enigmatic characters appear. Punctuated as well as surrounded by circular arguments and logical paradoxes, Alice must keep going in order to find the “other side of the mirror”.
There are other funny parallel moves, too. The story goes that Lewis Carroll gave a copy of “Alice in Wonderland” to Queen Victoria, who then asked him to send her his next book, as she had fancied the first one. The joke is that the book was (!) … “An Elementary Treatise on Determinants, With Their Application to Simultaneous Linear Equations and Algebraic Equations (link)”. Lewis Carroll then went on to write a follow-up to Alice in Wonderland entitled “Through the Looking-Glass, and What Alice Found There”, which features a chessboard in its first pages, where chess is used to give her, Queen Victoria, a glimpse of what Alice explored in this new world.
In fact, the diagram on the first pages contains not only the entire chapter structure of the novel but also how Alice moves on: basically, each move takes the reader to a new chapter (see below) representing it. The entire book can be found here in PDF format. Besides the beauty and philosophical value of Dodgson’s novel in itself, and its repercussions as a metaphor in present-day co-evolution research, this is quite probably the first “chess-literature” diagram ever composed. Now, of course, the pieces are not white and black, but white and red (note that the pieces on c1 – queen – and c6 – king – are white). Lewis Carroll’s novel then goes on like this: White pawn (Alice) to play, and win in eleven moves.
However, in order to enter this world you must follow its “rules”. “Chess” here is not normal, just as Wonderland was not normal to Alice’s eyes. Remember: if you do all the running you can do, you will find yourself in the same place. Better if you can run twice as fast! Lewis Carroll’s first words in his second book (in the preface / PDF link above) advise us:
(…) As the chess-problem, given on a previous page, has puzzled some of my readers, it may be well to explain that it is correctly worked out, so far as the moves are concerned. The alternation of Red and White is perhaps not so strictly observed as it might be, and the ‘castling’ of the three Queens is merely a way of saying that they entered the palace; but the ‘check’ of the White King at move 6, the capture of the Red Knight at move 7, and the final ‘check-mate’ of the Red King, will be found, by any one who will take the trouble to set the pieces and play the moves as directed, to be strictly in accordance with the laws of the game. (…) Lewis Carroll, Christmas, 1896.
1750 LET t$="11. Alice takes Red Queen & wins (checkmate)": GO SUB 7000 (…)
9001 REM ** ZX SPECTRUM MANUAL Page 96 Chapter 14. **
9004 RESTORE 9000 (…)
9006 LET b=BIN 01111100: LET c=BIN 00111000: LET d=BIN 00010000
9010 FOR n=1 TO 6: READ p$: REM 6 pieces
9020 FOR f=0 TO 7: REM read piece into 8 bytes
9030 READ a: POKE USR p$+f,a
9040 NEXT f
9100 REM bishop
9110 DATA "b",0,d,BIN 00101000,BIN 01000100
9120 DATA BIN 01101100,c,b,0
9130 REM king
9140 DATA "k",0,d,c,d
9150 DATA c,BIN 01000100,c,0
9160 REM rook
9170 DATA "r",0,BIN 01010100,b,c
9180 DATA c,b,b,0
9190 REM queen
9200 DATA "q",0,BIN 01010100,BIN 00101000,d
9210 DATA BIN 01101100,b,b,0
9220 REM pawn
9230 DATA "p",0,0,d,c
9240 DATA c,d,b,0
9250 REM knight
9260 DATA "n",0,d,c,BIN 01111000
9270 DATA BIN 00011000,c,b,0
(…) full code on [link]
This is BASIC-ally Alice’s story …
Picture – (click to enlarge) We are all on a huge spacecraft full of water … the big blue marble. The new OMEGA watch campaign, Planet Ocean, features a giant swarm of sardines in the deep blue ocean along with a well-known quote from Buzz Aldrin, the astronaut (Wien, Sept. 2011).
“Standing on the Moon looking back at Earth – this lovely place you just came from – you see all the colours, and you know what they represent. Having left the water planet, with all that water brings to Earth in terms of colour and abundance of life, the absence of water and atmosphere on the desolate surface of the Moon gives rise to a stark contrast.” ~ Buzz Aldrin, astronaut.
“It is very difficult to make good mistakes”, Tim Harford, July 2011.
TED talk (July 2011) by Tim Harford, a writer on economics who studies complex systems, exposing a surprising link among successful ones: they were built through trial and error. In this sparkling talk from TEDGlobal 2011, he asks us to embrace our randomness and start making better mistakes [from TED]. Instead of the God complex, he proposes trial and error or, to be more precise, Genetic Algorithms and Evolutionary Computation (one of the examples in his talk is indeed the evolutionary optimal design of an airplane nozzle).
Now, we may ask: is it clear from the talk whether the nozzle was computationally designed using evolutionary search, as the imagery suggests, or was the imagery designed to describe the process in the laboratory? … as a colleague asked me the other day over Google Plus. A great question, since I believe it will not be clear to everyone watching that lecture.
Though it was clear to me from the beginning, for one simple reason: that is a well-known work in the Evolutionary Computation area, done by one of its pioneers, Professor Hans-Paul Schwefel from Germany, in 1974 I believe. Unfortunately, at least to my mind, Tim Harford did not mention the author, nor does he mention anywhere in his talk the entire Evolutionary Computation or Genetic Algorithms area, even though he makes a clear bridge between these concepts and the search for innovation. The optimal nozzle design was in fact first produced in Schwefel’s PhD thesis (“Adaptive Mechanismen in der biologischen Evolution und ihr Einfluß auf die Evolutionsgeschwindigkeit”), and he arrived at these results by using a branch of Evolutionary Computation known as Evolution Strategies (ES) [here is a Wikipedia entry]. The objective was to achieve maximum thrust, and for that several parameters had to be adjusted, such as the point at which the small aperture should be placed between the two openings. What follows is a rather old YouTube video of the process:
The animation shows the evolution of a nozzle design from its initial configuration to the final one. After such a design was achieved, it was a little difficult to understand why the surprising shape was good, and a team of physicists and engineers gathered to investigate and devise an explanation for the final nozzle configuration. Schwefel (later on with his German group) also investigated the algorithmic features of Evolution Strategies, which made possible several generalizations, such as creating a surplus of offspring, the use of non-elitist evolution strategies (the comma selection scheme), and the use of recombination beyond the well-known mutation operator to generate offspring. Here are some related links and papers (link).
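For the flavour of the method, here is a minimal (1+1)-Evolution Strategy sketch: one parent, one Gaussian-mutated child per generation, and the better of the two survives. This is only illustrative – Schwefel’s nozzle experiments used a far richer, variable-length encoding of segment diameters, and the objective below is a toy function, not a thrust model:

```python
import random

def one_plus_one_es(fitness, x0, sigma=1.0, iters=200, seed=0):
    """Minimal (1+1)-ES: mutate the single parent with Gaussian noise
    of fixed step size sigma; keep the child only if it is no worse.
    Minimisation is assumed (lower fitness is better)."""
    rng = random.Random(seed)
    x = list(x0)
    fx = fitness(x)
    for _ in range(iters):
        child = [xi + rng.gauss(0, sigma) for xi in x]
        fc = fitness(child)
        if fc <= fx:               # survivor selection: keep improvements
            x, fx = child, fc
    return x, fx

# Toy stand-in objective (a sphere function), not a thrust model.
best, val = one_plus_one_es(lambda v: sum(t * t for t in v), [5.0, -3.0])
print(best, val)
```

Real Evolution Strategies add self-adapted step sizes, offspring surpluses and recombination – exactly the generalizations Schwefel’s group studied – but the select-the-better-of-two loop above is the core that links “trial and error” to evolutionary search.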
Details aside … I did enjoy the talk a lot, as well as his quote above. There is still a subtle difference between “trial and error” and “evolutionary search”, even if they are linked, but when Tim Harford makes a connection between innovation and Evolutionary Computation, it reminds me of the more recent (a decade now, perhaps) work of David Goldberg (IlliGAL – Illinois Genetic Algorithms Laboratory). Another founding father of the area, he is now dedicated to innovation, learning, etc., much along these precise lines. See mostly his books: The Design of Innovation: Lessons from and for Competent Genetic Algorithms (Kluwer Academic Publishers, 2002) and The Entrepreneurial Engineer (Wiley, 2006).
Finally, let me add that there are other beautiful examples of evolutionary design. The one I love most – for several reasons, namely the powerful abstract message it sends out into other conceptual fields – is this: a simple bridge. Enjoy, and for some seconds do think about your own area of work.
“To destroy variety at a scale, we need variety at another scale“, Yavni Bar-Yam, ICCS’11 – Int. Conference on Complex Systems, Boston, June 2011.
In “a process oriented externalist solution to the hard problem” (A Process View of Reality, 2008), an 8-page comic series (pdf link) about the Mind-Body problem, or David Chalmers’ Hard Problem, Riccardo Manzotti asks: [...] How can the conscious mind emerge out of physical stuff like the brain? Apparently, Science faces an unsolvable problem: the hard problem states that there is an unbridgeable gap between our conscious experience and the scientific description of the world. The modern version of the mind-body problem arose when the scholars of the seventeenth century suggested that reality is divided into the mental domain and the physical domain [...]. In the next 7 pages, Manzotti comes up with a possible solution, not far from what Science has been doing since the 1950s: avoiding reductionism.