
Figure – Vitorino Ramos, citation record, January 2016.

2016 – As of now, a total of 1567 citations across 74 works (including 3 books) on GOOGLE SCHOLAR (https://scholar.google.com/citations?user=gSyQ-g8AAAAJ&hl=en) [with a Hirsch h-index of 19, and an average of 160.2 citations per work in my top five] plus 900 citations across 57 works on the new RESEARCH GATE site (https://www.researchgate.net/profile/Vitorino_Ramos).

Refs.: Science, Artificial Intelligence, Swarm Intelligence, Data-Mining, Big-Data, Evolutionary Computation, Complex Systems, Image Analysis, Pattern Recognition, Data Analysis.

Video – Animaris Gubernare (AG) is one of Theo Jansen’s most recent Strandbeest (strandbeest.com) machine animals. Born in October 2010, AG died out in October 2011. It had two external (rolling) wind stomachs which served as anchors against strong winds.

Since 1990, using only plastic tubes, lemonade bottles and air pistons as logic gates, powered by wind, Theo Jansen has produced some quite incredible machine animals. His creatures are designed to move – and even survive – on their own. In some cases he has resorted to Evolutionary Computation (more) as a means to optimize their shapes so they can better survive hard storms and salt water. He briefly explains:

“(…) Since 1990 I have been occupied creating new forms of life. Not pollen or seeds but plastic yellow tubes are used as the basic material of this new nature. I make skeletons that are able to walk on the wind, so they don’t have to eat. Over time, these skeletons have become increasingly better at surviving the elements such as storm and water and eventually I want to put these animals out in herds on the beaches, so they will live their own lives (…)”, Theo Jansen, in Strandbeest (strandbeest.com).

But he goes a step further. Not only does he develop sensors (for water detection), but also a full brain: a binary step counter made of plastic tubes, which can change its pattern of zeroes over time and adapt. Have a look (minute 6, second 33)… :

Video – Jansen‘s lecture at TED talks, March 2007 (Monterey, California). Theo Jansen creates kinetic sculptures that walk using wind power (featured in a few previous short sifts); here he explains how he makes them work. Incredibly, he has devised a way to optimize the shape of the machines’ parts and gait using a genetic algorithm running on a PC, and has actually made logic gates out of the air pistons making up the machines. His work attests to a truly jaw-dropping intelligence.

“It is very difficult to make good mistakes“, Tim Harford, July 2011.

TED talk (July 2011) by Tim Harford, a writer on economics who studies complex systems, exposing a surprising link among the successful ones: they were built through trial and error. In this sparkling talk from TEDGlobal 2011, he asks us to embrace our randomness and start making better mistakes [from TED]. Instead of the God complex, he proposes trial and error, or to be more precise, Genetic Algorithms and Evolutionary Computation (one of the examples in his talk is indeed the evolutionary optimal design of an airplane nozzle).

Now, we may ask: is it clear to you from the talk whether the nozzle was computationally designed using evolutionary search, as suggested by the imagery, or was the imagery designed to describe a process carried out in the laboratory? …as a colleague asked me the other day over Google Plus. A great question, since, I believe, it will not be clear to everyone watching that lecture.

Though, it was clear to me from the beginning, for one simple reason. That is a well-known work in the Evolutionary Computation area, done by one of its pioneers, Professor Hans-Paul Schwefel from Germany, in 1974 I believe. Unfortunately, at least to me I must say, Tim Harford did not mention the author, nor does he mention anywhere in his talk the entire Evolutionary Computation or Genetic Algorithms area, even though he makes a clear bridge between these concepts and the search for innovation. The optimal nozzle design was in fact produced for the first time in Schwefel‘s PhD thesis (“Adaptive Mechanismen in der Biologischen Evolution und ihr Einfluß auf die Evolutionsgeschwindigkeit“), and he arrived at these results by using a branch of Evolutionary Computation known as Evolution Strategies (ES) [here is a Wikipedia entry]. The objective was to achieve maximum thrust, and for that some parameters had to be adjusted, such as the point between the two entrances at which the small aperture should be placed. What follows is a rather old video from YouTube showing the process:

The animation shows the evolution of a nozzle design from its initial configuration to the final one. After such a design was achieved, it was a little difficult to understand why this surprising design was good, and a team of physicists and engineers gathered to investigate, aiming at some explanation for the final nozzle configuration. Schwefel (later on with his German group) also investigated the algorithmic features of Evolution Strategies, which made possible different generalizations such as creating a surplus of offspring, the use of non-elitist evolution strategies (the comma selection scheme), and the use of recombination beyond the well-known mutation operator to generate the offspring. Here are some related links and papers (link).
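Since the talk never names the method, a concrete sketch may help. Below is a minimal (μ,λ) Evolution Strategy in Python, illustrating exactly the features just mentioned: a surplus of offspring and non-elitist comma selection. The “nozzle” encoding and the surrogate fitness are toy stand-ins of my own, of course, not Schwefel’s original experimental setup:

```python
# Minimal (mu, lambda) Evolution Strategy -- a toy illustration only.
# A candidate "nozzle" is just a vector of segment diameters; the fitness
# below is an invented placeholder, not a real thrust computation.
import random

N_SEGMENTS = 10      # hypothetical number of nozzle segments
MU, LAMBDA = 5, 35   # parents, offspring: a surplus of offspring each generation
SIGMA = 0.1          # mutation step size

def fitness(diameters):
    """Toy surrogate for 'thrust': reward smooth profiles with a narrow throat."""
    roughness = sum((diameters[i + 1] - diameters[i]) ** 2
                    for i in range(len(diameters) - 1))
    throat = min(diameters)
    return -roughness - (throat - 0.3) ** 2   # higher is better

def mutate(parent, sigma):
    """Gaussian perturbation of every segment diameter (kept positive)."""
    return [max(0.05, d + random.gauss(0.0, sigma)) for d in parent]

population = [[random.uniform(0.2, 1.0) for _ in range(N_SEGMENTS)]
              for _ in range(MU)]
for generation in range(200):
    # each offspring mutates a randomly chosen parent (mutation only, no recombination)
    offspring = [mutate(random.choice(population), SIGMA) for _ in range(LAMBDA)]
    # comma selection: the next parents come from the offspring ONLY (non-elitist)
    offspring.sort(key=fitness, reverse=True)
    population = offspring[:MU]

print("best toy fitness:", fitness(population[0]))
```

Swapping that selection step for the union of parents and offspring would turn this into the elitist (μ+λ) plus-selection variant.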

Despite these details… I did enjoy the talk a lot, as well as his quote above. There is still a subtle difference between “trial and error” and “evolutionary search”, even if the two are linked, but when Tim Harford makes a connection between innovation and Evolutionary Computation, it reminds me of the recent (one decade now, perhaps) work of David Goldberg (IlliGAL – Illinois Genetic Algorithms Laboratory). Another founding father of the area, now dedicated to innovation, learning, etc., much along these precise lines. See mostly his books: (2002) The Design of Innovation: Lessons from and for Competent Genetic Algorithms, Kluwer Academic Publishers, and (2006) The Entrepreneurial Engineer, Wiley.

Finally, let me add that there are other beautiful examples of Evolutionary Design. The one I love most, however (for several reasons, namely the powerful abstract message that it sends out into other conceptual fields), is this: a simple bridge. Enjoy, and for some seconds do think about your own area of work.

Figure – Application of Mathematical Morphology opening and closing operators of increasing size to different digital images (from Fig. 2, page 5).

[] Vitorino Ramos, Pedro Pina, Exploiting and Evolving R{n} Mathematical Morphology Feature Spaces, in Ronse Ch., Najman L., Decencière E. (Eds.), Mathematical Morphology: 40 Years On, pp. 465-474, Springer Verlag, Dordrecht, The Netherlands, 2005.

(abstract) A multidisciplinary methodology that goes from the extraction of features to the classification of a set of different Portuguese granites is presented in this paper. The set of tools to extract the features that characterize the polished surfaces of the granites is mainly based on mathematical morphology. The classification methodology is based on a genetic algorithm capable of searching the input feature space used by the nearest-neighbour rule classifier. Results show that it is adequate to perform feature reduction and simultaneously improve the recognition rate. Moreover, the present methodology represents a robust strategy to understand the proper nature of the textures studied and their discriminant features.

(to obtain the respective PDF file follow link above or visit chemoton.org)
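For readers who want the gist of the GA-plus-classifier combination, here is a minimal, self-contained sketch of the idea (synthetic data and parameters of my own choosing; this is not the paper’s implementation): a binary chromosome masks which features a 1-NN classifier may use, and leave-one-out accuracy serves as fitness.

```python
# GA-driven feature selection for a 1-NN classifier -- an illustrative sketch.
import random

random.seed(1)
N_FEATURES, N_SAMPLES = 12, 60
# Synthetic dataset: only the first 3 features carry class information.
data, labels = [], []
for _ in range(N_SAMPLES):
    y = random.randint(0, 1)
    x = ([random.gauss(y, 0.5) for _ in range(3)] +
         [random.gauss(0, 1) for _ in range(N_FEATURES - 3)])
    data.append(x)
    labels.append(y)

def nn_accuracy(mask):
    """Leave-one-out accuracy of 1-NN using only the features where mask[i] == 1."""
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    hits = 0
    for i in range(N_SAMPLES):
        dists = [(sum((data[i][f] - data[j][f]) ** 2 for f in feats), j)
                 for j in range(N_SAMPLES) if j != i]
        hits += labels[min(dists)[1]] == labels[i]
    return hits / N_SAMPLES

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(20)]
for generation in range(30):
    pop.sort(key=nn_accuracy, reverse=True)
    parents = pop[:10]                        # truncation selection
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_FEATURES)   # one-point crossover
        child = a[:cut] + b[cut:]
        flip = random.randrange(N_FEATURES)     # one-bit mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    pop = parents + children

best = max(pop, key=nn_accuracy)
print("selected features:", [i for i, m in enumerate(best) if m],
      "accuracy:", nn_accuracy(best))
```

On this toy data the GA typically keeps the three informative features and drops most of the noisy ones, which is precisely the feature-reduction effect the abstract describes.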

The following letter, entitled “Darwin among the machines“, was sent in June 1863 by Samuel Butler, the novelist (signing here as Cellarius), to the editor of the Press, Christchurch, New Zealand (13 June, 1863). The article was quite probably the first to raise the possibility that machines were a kind of “mechanical life” undergoing constant evolution, something that is now happening in the realm of Evolutionary Computation and Evolution Strategies, along with other types of Genetic Algorithms, and that eventually machines might supplant humans as the dominant species. What follows are some excerpts. The full letter can be found here. It is, truly, a historic visionary document:

[…] “Our present business lies with considerations which may somewhat tend to humble our pride and to make us think seriously of the future prospects of the human race. If we revert to the earliest primordial types of mechanical life, to the lever, the wedge, the inclined plane, the screw and the pulley, or (for analogy would lead us one step further) to that one primordial type from which all the mechanical kingdom has been developed. (…) We refer to the question: What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race. Inferior in power, inferior in that moral quality of self-control, we shall look up to them as the acme of all that the best and wisest man can ever dare to aim at. (…) Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question. (…) For the present we shall leave this subject, which we present gratis to the members of the Philosophical Society. Should they consent to avail themselves of the vast field which we have pointed out, we shall endeavour to labour in it ourselves at some future and indefinite period.” […]

Figure – High-frequency financial trading world-wide map showing optimal hotspots, from (Fig. 2, p. 5) in A.D. Wissner-Gross and C.E. Freer, “Relativistic Statistical Arbitrage“, Physical Review E 82, 056104, 2010. (ABS.:) Recent advances in high-frequency financial trading have made light propagation delays between geographically separated exchanges relevant. Here we show that there exist optimal locations from which to coordinate the statistical arbitrage of pairs of spacelike separated securities, and calculate a representative map of such locations on Earth. Furthermore, trading local securities along chains of such intermediate locations results in a novel econophysical effect, in which the relativistic propagation of tradable information is effectively slowed or stopped by arbitrage.

All those tiny blue circles above come together into a real financial treasure map! In case you wonder, below is part of my own financial treasure DNA map (of course, blurred and noised on purpose. Just don’t ask me what type of noise this is. Hint: it’s not salt & pepper!). Meanwhile, check this out… and guess what? We have got company… :)

[…] Golden Networking’s High-Frequency Trading Happy Hour, December 7th, 2010, will bring the high-frequency trading community together to listen to Adam Afshar, President and CEO, Hyde Park Global Investments, Milind Sharma, CEO, QuantZ Capital Management, and Peter van Kleef, CEO, Lakeview Arbitrage, on “How to Get High-Frequency Trading Right First Time” […] Mr. Afshar is Hyde Park Global’s President and Chief Executive Officer. He has over two decades of financial industry experience including 12 years at Bear Stearns where he was a Managing Director, overseeing long/short multi asset portfolios for both onshore and offshore clients. Hyde Park Global Investments is a 100% robotic investment and trading firm based on Artificial Intelligence (AI). The system is built primarily on Genetic Algorithms (GA) and other Evolutionary models to identify mispricings, arbitrage and patterns in electronic financial markets. Additionally, Hyde Park Global Investments has developed programs applying natural language processing and sentiment analytics to trade equities based on machine readable news. Hyde Park Global employs no analysts, portfolio managers or traders, ONLY scientists and engineers. Mr. Afshar has a BA in Economics from Wofford College and received his MBA from the University of Chicago, Booth School of Business. […] Mr. Sharma is Chief Executive Officer, QuantZ Capital Management. He ran the LTMN desk in Global Arbitrage & Trading at RBC where he served as Portfolio Manager for Quant EMN, Short Term & Event Driven portfolios [up to $700mm gross]. In his capacity as Director & Senior Proprietary Trader at Deutsche, he managed Quant EMN portfolios of significant size & contributed to the broader prop mandate in Cap Structure Arb & with LBOs. Prior to that he was co-founder of Quant Strategies (previously R&P) at BlackRock (MLIM), where his investment role spanned a dozen quantitatively managed funds & separate accounts with approx $30B in AUM pegged to the models. Prior to MLIM, he was Manager of the Risk Analytics and Research Group at Ernst & Young LLP where he was co-architect of Raven (one of the earliest derivatives pricing/ validation engines) & co-created the 1st model for pricing cross-currency puttable Bermudan swaptions. […] in How to Get High-Frequency Trading Right First Time, NY, Dec.2, 2010 + www.hfthappyhour.com .

“Chaos theory has a bad name, conjuring up images of unpredictable weather, economic crashes and science gone wrong. But there is a fascinating and hidden side to Chaos, one that scientists are only now beginning to understand. It turns out that chaos theory answers a question that mankind has asked for millennia – how did we get here?

In this 2010 BBC Four documentary, “The Secret Life of Chaos“, Professor Jim Al-Khalili sets out to uncover one of the great mysteries of science – how does a universe that starts off as dust end up with intelligent life? How does order emerge from disorder? It’s a mind-bending, counter-intuitive and, for many people, deeply troubling idea. But Professor Al-Khalili reveals the science behind much of the beauty and structure in the natural world and discovers that, far from being magic or an act of God, it is in fact an intrinsic part of the laws of physics. Amazingly, it turns out that the mathematics of chaos can explain how and why the universe creates exquisite order and pattern. The natural world is full of awe-inspiring examples of the way nature transforms simplicity into complexity. From trees to clouds to humans – after watching this film you’ll never be able to look at the world in the same way again.” (description at YouTube).

[1-hour documentary in 6 parts: Part I (above), Part II, Part III, Part IV, Part V and Part VI. Even if you have no time, do not miss part 6. I mean, really do not miss it. Enjoy!]

From the author of “Rock, Paper, Scissors – Game Theory in Everyday Life“, dedicated to the evolution of cooperation in nature (published last year – Basic Books), a new book on related areas is now fresh on the stands (released Dec. 7, 2009): “The Perfect Swarm – The Science of Complexity in Everyday Life“. This time Len Fisher takes us into the realm of our interlinked modern lives, where complexity rules. But complexity also has rules. Understand these, and we are better placed to make sense of the mountain of data that confronts us every day. Fisher ranges far and wide to discover what tips the science of complexity has for us. Studies of human (one good example is gum voting) and animal behaviour, management science, statistics and network theory all enter the mix.

One of the greatest discoveries of recent times is that the complex patterns we find in life are often produced when all of the individuals in a group follow similar simple rules. Even if the final pattern is complex, the rules are not. This process of “self-organization” reveals itself in the inanimate worlds of crystals and seashells, but, as Len Fisher shows, it is also evident in living organisms, from fish to ants to human beings, with Stigmergy being one among many cases of this type of self-organized behaviour, encompassing applications in several engineering fields like computer science and Artificial Intelligence, Data Mining, Pattern Recognition, Image Analysis and Perception, Robotics, Optimization, Learning and Forecasting. Since I work in these precise areas, you may find several of my previous posts dedicated to these issues, such as Self-Organized Data and Image Retrieval systems, Stigmergic Optimization, Computer-based Adaptive Dynamic Perception, Swarm-based Data Mining, Self-regulated Swarms and Memory, Ant-based Data Clustering, Generative computer-based photography and painting, Classification, Extreme Dynamic Optimization, Self-Organized Pattern Recognition, among other applications.

For instance, the coordinated movements of fish in schools arise from the simple rule: “Follow the fish in front.” Traffic flow arises from simple rules: “Keep your distance” and “Keep to the right” (see the little sketch below). Now, in his new book, Fisher shows how we can manage our complex social lives in an ever more chaotic world. His investigation encompasses topics ranging from “swarm intelligence” (check the links above) to the science of parties (a beautiful example by ICOSYSTEM inc.) and the best ways to start a fad. Finally, Fisher sheds light on the beauty and utility of complexity theory. For those willing to understand a myriad of basic examples (Fisher gives us 33 nice food-for-thought examples in total) and to have a well-written introduction to this thrilling new branch of science, referred to by Stephen Hawking as the science for the current century (“I think complexity is the science for the 21st century”), The Perfect Swarm will indeed be an excellent companion.
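To make the “simple rules, complex pattern” point concrete, here is a bare-bones flocking sketch in Python, in the spirit of the classic boids model: each agent only aligns with and moves towards its nearby neighbours, yet the group as a whole ends up travelling in a common direction. All parameters are invented for illustration; this is not code from Fisher’s book.

```python
# Complex collective pattern from two simple local rules: alignment + cohesion.
import math
import random

random.seed(0)
WORLD = 100.0
agents = [{"x": random.random() * WORLD, "y": random.random() * WORLD,
           "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
          for _ in range(30)]

def step(agents, radius=15.0, align=0.05, cohere=0.005):
    for a in agents:
        nbrs = [b for b in agents if b is not a and
                (a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2 < radius ** 2]
        if nbrs:
            # rule 1: steer towards the neighbours' average heading (alignment)
            avx = sum(b["vx"] for b in nbrs) / len(nbrs)
            avy = sum(b["vy"] for b in nbrs) / len(nbrs)
            # rule 2: steer towards the neighbours' average position (cohesion)
            cx = sum(b["x"] for b in nbrs) / len(nbrs)
            cy = sum(b["y"] for b in nbrs) / len(nbrs)
            a["vx"] += align * (avx - a["vx"]) + cohere * (cx - a["x"])
            a["vy"] += align * (avy - a["vy"]) + cohere * (cy - a["y"])
    for a in agents:  # move, wrapping around a toroidal world
        a["x"] = (a["x"] + a["vx"]) % WORLD
        a["y"] = (a["y"] + a["vy"]) % WORLD

for t in range(500):
    step(agents)
# after a while the individual headings collapse onto a shared direction
print("mean heading (radians):",
      math.atan2(sum(a["vy"] for a in agents), sum(a["vx"] for a in agents)))
```

Nothing in the code mentions a “school” or a “flock”; the global pattern emerges purely from the two local rules.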

Figure – Book cover of Toby Segaran’s, “Programming Collective Intelligence – Building Smart Web 2.0 Applications“, O’Reilly Media, 368 pp., August 2007.

{scopus online description} Want to tap the power behind search rankings, product recommendations, social bookmarking, and online matchmaking? This fascinating book demonstrates how you can build Web 2.0 applications to mine the enormous amount of data created by people on the Internet. With the sophisticated algorithms in this book, you can write smart programs to access interesting data-sets from other web sites, collect data from users of your own applications, and analyze and understand the data once you’ve found it. Programming Collective Intelligence takes you into the world of machine learning and statistics, and explains how to draw conclusions about user experience, marketing, personal tastes, and human behavior in general — all from information that you and others collect every day. Each algorithm is described clearly and concisely with code that can immediately be used on your web site, blog, Wiki, or specialized application.

{even if I don’t totally agree, here’s an “over-rated” description – especially on the scientific side – by someone “dwa”; link above} Programming Collective Intelligence is a new book from O’Reilly, written by Toby Segaran. The author graduated from MIT and is currently working at Metaweb Technologies. He develops ways to put large public data-sets into Freebase, a free online semantic database. You can find more information about him on his blog: http://blog.kiwitobes.com/. Web 2.0 cannot exist without Collective Intelligence. The “giants” use it everywhere: YouTube recommends similar movies, Last.fm knows what you would like to listen to, and Flickr knows which photos are your favorites, etc. This technology empowers intelligent search, clustering, building price models and ranking on the web. I cannot imagine a modern service without data analysis. That is the reason why it is worth starting to read about it. There are many titles about collective intelligence, but recently I have read two: this one and “Collective Intelligence in Action“. Both are very pragmatic, but the O’Reilly one is more focused on the merits of CI. The code listings are much shorter (the examples are written in Python, so that was easy). In general, comparing these books is like comparing Java vs. Python. If you would like to build a recommendation engine the “in Action”/Java way, you would have to read the whole book, attach extra JARs and design dozens of classes. The rapid Python way requires reading only 15 pages and voilà, you have got your first recommendations. It is awesome!

So how about the rest of the book? There are still 319 pages! Further chapters cover: discovering groups, searching, ranking, optimization, document filtering, decision trees, price models and genetic algorithms. The book explains how to implement Simulated Annealing, k-Nearest Neighbors, the Bayesian Classifier and many more. Take a look at the table of contents (here: http://oreilly.com/catalog/9780596529321/preview.html); it does not list all the algorithms, but you can find more information there. Each chapter has about 20-30 pages. You do not have to read them all; you can choose the most important ones and still know what is going on. Every chapter contains a minimal amount of theoretical introduction, which for total beginners might not be enough. I recommend this book for students who have had a statistics course (not only IT or computing science); this book will show you how to use your knowledge in practice – there are many inspiring examples. For those who do not know Python – do not be afraid – at the beginning you will find a short introduction to the language syntax. All listings are very short and well described by the author – sometimes line by line. The book also contains the necessary information about basic standard libraries responsible for XML processing or downloading web pages. If you would like to start learning about collective intelligence, I would strongly recommend reading “Programming Collective Intelligence” first, then “Collective Intelligence in Action”. The first one shows how easy it is to implement basic algorithms; the second one shows you how to use existing open-source projects related to machine learning.
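To give a flavour of that “15 pages and voilà” claim, here is a minimal recommendation sketch in the spirit of the book’s opening chapter: score how similar two people’s ratings are, then weight other people’s ratings by that similarity to rank unseen items. The ratings dictionary is invented for illustration, and the function names are mine, not the book’s.

```python
# Similarity-based recommendations: a tiny collaborative-filtering sketch.
from math import sqrt

ratings = {
    "Alice": {"Film A": 5, "Film B": 3, "Film C": 4},
    "Bob":   {"Film A": 4, "Film B": 2, "Film C": 5, "Film D": 4},
    "Carol": {"Film B": 5, "Film D": 1},
}

def similarity(prefs, a, b):
    """Euclidean-distance similarity over co-rated items (1.0 = identical tastes)."""
    shared = [item for item in prefs[a] if item in prefs[b]]
    if not shared:
        return 0.0
    dist2 = sum((prefs[a][i] - prefs[b][i]) ** 2 for i in shared)
    return 1.0 / (1.0 + sqrt(dist2))

def recommend(prefs, who):
    """Rank items 'who' has not seen by a similarity-weighted average of others' ratings."""
    scores, weights = {}, {}
    for other in prefs:
        if other == who:
            continue
        sim = similarity(prefs, who, other)
        for item, rating in prefs[other].items():
            if item not in prefs[who]:
                scores[item] = scores.get(item, 0.0) + sim * rating
                weights[item] = weights.get(item, 0.0) + sim
    return sorted(((scores[i] / weights[i], i) for i in scores), reverse=True)

print(recommend(ratings, "Alice"))   # ranks "Film D" using Bob's and Carol's tastes
```

Swapping the Euclidean measure for Pearson correlation (as the book also does) only changes the similarity function; the recommendation logic stays the same.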

Figure – My first Swarm Painting, SP0016 (Jan. 2002). This was done by attaching the following algorithm to a robotic drawing arm. In order to do it, however, the pheromone distribution of the overall ant colony was carefully coded into different kinds of colors and several robotic pencils (check “The MC2 Project [Machines of Collective Conscience]“, 2001, and “On the Implicit and on the Artificial“, 2002). In the same year the computational model appeared (2000), the concept was already extended into photography (check the original paper) – using the pheromone distribution as photograms (“Einstein to Map” in the original article, along with works like “Kafka to Red Ants”, as well as subsequent newspaper articles). Meanwhile, in 2003, I was invited to give a talk on these works at the 1st Art & Science Symposium in Bilbao (below). Even though I was already aware of Jeffrey Ventrella’s outstanding work, as well as Ezequiel Di Paolo’s, it was there that we first met in person.

[] Vitorino Ramos, Self-Organizing the Abstract: Canvas as a Swarm Habitat for Collective Memory, Perception and Cooperative Distributed Creativity, in 1st Art & Science Symposium – Models to Know Reality, J. Rekalde, R. Ibáñez and Á. Simó (Eds.), pp. 59, Facultad de Bellas Artes EHU/UPV, Universidad del País Vasco, 11-12 Dec., Bilbao, Spain, 2003.

Many animals can produce very complex intricate architectures that fulfil numerous functional and adaptive requirements (protection from predators, thermal regulation, substrate of social life and reproductive activities, etc.). Among them, social insects are capable of generating amazingly complex functional patterns in space and time, although they have limited individual abilities and their behaviour exhibits some degree of randomness. Among all activities by social insects, nest building, cemetery organization and collective sorting are undoubtedly the most spectacular, as they demonstrate the greatest difference between individual and collective levels. Trying to answer how insects in a colony coordinate their behaviour in order to build these highly complex architectures, scientists assumed a first hypothesis, anthropomorphism, i.e., individual insects were assumed to possess a representation of the global structure to be produced and to make decisions on the basis of that representation. Nest complexity would then result from the complexity of the insect’s behaviour. Insect societies, however, are organized in a way that departs radically from the anthropomorphic model in which there is a direct causal relationship between nest complexity and behavioural complexity. Recent work suggests that a social insect colony is a decentralized system composed of cooperative, autonomous units that are distributed in the environment, exhibit simple probabilistic stimulus-response behaviour, and have access only to local information. According to these studies, at least two low-level mechanisms play a role in the building activities of social insects: self-organization and discrete Stigmergy, the latter being a kind of indirect and environmental synergy. Based on past and present stigmergic models, and on the underlying scientific research on Artificial Ant Systems and Swarm Intelligence – systems capable of emerging a form of collective intelligence, perception and Artificial Life – done by Vitorino Ramos, and on further experiences in collaboration with the plastic artist Leonel Moura, we will show results facing the possibility of considering as “art”, as well, the resulting visual expression of these systems. Past experiences under the designation of “Swarm Paintings”, conducted in 2001, not only confirmed the possibility of realizing an artificial art (thus non-human), but also introduced into the process the questioning of creative migration, specifically from the computer monitors to the canvas via a robotic arm. In more recent self-organization-based research we seek to develop and deepen the initial ideas by using a swarm of autonomous robots (ARTsBOT project 2002-03) that “live” without being mere executors of order streams coming from an external computer, but instead actually co-evolve within the canvas space, acting (that is, laying ink) according to simple inner threshold stimulus-response functions, reacting simultaneously to the chromatic stimulus present in the canvas environment left by the passage of their team-mates, as well as to the distributed feedback, affecting their future collective behaviour.
In parallel, and in what respects certain types of collective systems, we seek to confirm, in a physically embedded way, that the emergence of order (even as a concept) seems to be found at a lower level of complexity, based on simple and basic interchange of information and on the local dynamics of parts, which, by self-organizing mechanisms, tend to form a living whole, innovative and adaptive, allowing for emergent open-ended creative and distributed production.
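For readers curious about the mechanics behind such swarm paintings, here is a deliberately bare-bones stigmergy sketch: artificial ants wander a grid “canvas”, lay pheromone (read: ink), and probabilistically prefer cells already marked by their team-mates, while the pheromone slowly evaporates. It is only loosely inspired by the works above; every parameter and the ASCII rendering are illustrative choices of mine, not the actual painting algorithm.

```python
# Stigmergic ants on a grid canvas: deposition, attraction, evaporation.
import random

random.seed(42)
SIZE, N_ANTS, STEPS = 40, 25, 2000
DEPOSIT, EVAPORATION = 1.0, 0.995

grid = [[0.0] * SIZE for _ in range(SIZE)]               # pheromone / "ink" field
ants = [[random.randrange(SIZE), random.randrange(SIZE)] for _ in range(N_ANTS)]

for t in range(STEPS):
    for ant in ants:
        x, y = ant
        # look at the 8 neighbouring cells (toroidal canvas)
        nbrs = [((x + dx) % SIZE, (y + dy) % SIZE)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        # stimulus-response: marked cells are proportionally more attractive
        weights = [1.0 + grid[nx][ny] for nx, ny in nbrs]
        ant[0], ant[1] = random.choices(nbrs, weights=weights)[0]
        grid[ant[0]][ant[1]] += DEPOSIT                  # lay ink where the ant stands
    for row in grid:                                     # environmental decay
        for j in range(SIZE):
            row[j] *= EVAPORATION

# crude ASCII rendering of the emergent trail pattern
for row in grid:
    print("".join("#" if cell > 2.0 else "." for cell in row))
```

The positive feedback (ink attracts more ink) balanced against evaporation is what lets coherent trails and clusters emerge with no central plan, which is the essence of stigmergy.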

 


a) Dynamic Optimization Problems (DOP) tackled by Swarm Intelligence (here, a quick snapshot of the dynamic environment)


b) Swarm adaptive response over time, under severe dynamics, over the dynamic environment on the left (a).

Figs. – Check the animated pictures here. (a) A 3D toroidal fast-changing landscape describing a Dynamic Optimization (DO) control problem (8 frames in total). (b) A self-organized swarm emerging a characteristic flocking migration behaviour, surpassing some local optima in intermediate steps over the 3D toroidal landscape in (a), describing a Dynamic Optimization (DO) control problem. Over each foraging step, the swarm self-regulates its population and keeps tracking the extrema (44 frames in total).

 [] Vitorino Ramos, Carlos Fernandes, Agostinho C. Rosa, On Self-Regulated Swarms, Societal Memory, Speed and Dynamics, in Artificial Life X – Proc. of the Tenth Int. Conf. on the Simulation and Synthesis of Living Systems, L.M. Rocha, L.S. Yaeger, M.A. Bedau, D. Floreano, R.L. Goldstone and A. Vespignani (Eds.), MIT Press, ISBN 0-262-68162-5, pp. 393-399, Bloomington, Indiana, USA, June 3-7, 2006.

PDF paper.

Wasps, bees, ants and termites all make effective use of their environment and resources by displaying collective “swarm” intelligence. Termite colonies – for instance – build nests with a complexity far beyond the comprehension of the individual termite, while ant colonies dynamically allocate labor to various vital tasks such as foraging or defense without any central decision-making ability. Recent research suggests that microbial life can be even richer: highly social, intricately networked, and teeming with interactions, as found in bacteria. What is striking in these observations is that both ant colonies and bacteria have similar natural mechanisms based on Stigmergy and Self-Organization in order to emerge coherent and sophisticated patterns of global foraging behavior. Keeping in mind the above characteristics, we propose a Self-Regulated Swarm (SRS) algorithm which hybridizes the advantageous characteristics of Swarm Intelligence – the emergence of a societal environmental memory or cognitive map via collective pheromone laying in the landscape (properly balancing the exploration/exploitation nature of our dynamic search strategy) – with a simple evolutionary mechanism that, through a direct reproduction procedure linked to local environmental features, is able to self-regulate the above exploratory swarm population, speeding it up globally. In order to test its adaptive response and robustness, we have resorted to different dynamic multimodal complex functions as well as to Dynamic Optimization Control problems, measuring reaction speeds and performance. Final comparisons were made with standard Genetic Algorithms (GAs), Bacterial Foraging strategies (BFOA), as well as with recent Co-Evolutionary approaches. SRSs were able to demonstrate quick adaptive responses, while outperforming the results obtained by the other approaches. Additionally, some successful behaviors were found: SRS was able to maintain a number of different solutions while adapting to unforeseen situations, even when, over the same cooperative foraging period, the community is requested to deal with two different and contradictory purposes; the possibility to spontaneously create and maintain different sub-populations on different peaks, emerging different exploratory corridors with intelligent path-planning capabilities; the ability to request new agents (division of labor) over dramatically changing periods, and to economize those foraging resources over periods of intermediate stabilization. Finally, results illustrate that the present SRS collective swarm of bio-inspired ant-like agents is able to track about 65% of moving peaks traveling up to ten times faster than the velocity of a single individual composing that precise swarm tracking system. This emerged behavior is probably one of the most interesting ones achieved by the present work.
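As a rough intuition for how reproduction linked to local environmental features can make a swarm track a moving optimum, here is a heavily simplified sketch (emphatically not the SRS algorithm itself; the landscape, thresholds and rates are all invented): agents forage on a landscape whose single peak drifts over time, agents on rich ground reproduce, agents on poor ground tend to die, and the population as a whole self-regulates and follows the peak.

```python
# Self-regulating foraging swarm on a drifting landscape -- a toy sketch.
import random

random.seed(7)
WORLD = 50.0
peak = [25.0, 25.0]                       # position of the (moving) optimum

def fitness(x, y):
    """Single smooth peak; higher closer to the optimum."""
    return max(0.0, 10.0 - ((x - peak[0]) ** 2 + (y - peak[1]) ** 2) ** 0.5)

agents = [[random.uniform(0, WORLD), random.uniform(0, WORLD)] for _ in range(60)]
for t in range(300):
    peak[0] = (peak[0] + 0.2) % WORLD     # the optimum drifts: a dynamic problem
    next_gen = []
    for x, y in agents:
        x = (x + random.gauss(0, 1.0)) % WORLD    # random local foraging step
        y = (y + random.gauss(0, 1.0)) % WORLD
        f = fitness(x, y)
        if f > 1.0 or random.random() < 0.3:      # poor ground: agent likely dies
            next_gen.append([x, y])
        if f > 6.0 and len(next_gen) < 200:       # rich ground: agent reproduces
            next_gen.append([x + random.gauss(0, 0.5), y + random.gauss(0, 0.5)])
    agents = next_gen or [[random.uniform(0, WORLD), random.uniform(0, WORLD)]]

cx = sum(a[0] for a in agents) / len(agents)
cy = sum(a[1] for a in agents) / len(agents)
print("peak (%.1f, %.1f) vs swarm centre (%.1f, %.1f), population %d"
      % (peak[0], peak[1], cx, cy, len(agents)))
```

The point of the sketch is the self-regulation: the population swells when the swarm sits on the peak and shrinks when the peak escapes, echoing (in a much cruder form) the effect the abstract reports, where it is additionally mediated by pheromone fields.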

 

With the current, dramatic and ongoing need of Africa for contemporary maps (currently, Google promises to launch its first exhaustive world-wide open-access digital cartography of the African continent very soon), back in 1999-2000 we turned a very simple idea into a research project (at my previous lab, CVRM IST). Instead of producing new maps in the regular standard way, which is costly (especially for African countries) as well as time-consuming (imagine the amount of money and time needed to cover the whole continent with high-resolution aerial photos), the idea then was to hybridize, through an automatic procedure (with the help of Artificial Intelligence), new current data coming from satellites with old data coming from the computational analysis of images of old colonial maps. For instance, old roads segmented in old maps will help us find the new ones coming from the current satellite images, as well as those that were lost. The same goes for bridges, buildings, numbers, letters on the map, etc. However, in order to do this, several preparatory steps were needed. One of those crucial steps was to obtain (segment – known to be one of the hardest procedures in image processing) the old roads, buildings and airports in the old maps. Back in 1999-2000, while dealing with several tasks in this research project (AUTOCARTIS – Automatic Methods for Updating Cartographic Maps), I started to think of using evolutionary computation to tackle and surpass this precise problem, in what would later become one of the first usages of Genetic Algorithms in image analysis. The result can be checked below. Meanwhile, the experience gained with AUTOCARTIS later proved useful not only for old digital books (Visão Magazine, March 2002), but also for helping us find water on Mars (in the MARS EXPRESS European project – Expresso newspaper, May 2003), of which CVRM lab was one of the European partners. Quite often in life, simple ideas (I owe this one to Prof. Fernando Muge and Prof. Pedro Pina) are the best ones. This is particularly true in science.

Figure – One original image (left – map of Luanda, Angola) and two segmentation examples, rivers and roads respectively, obtained through the proposed Genetic Algorithm (low-resolution images). [At the same time, this precise map of Luanda was used by me, along with the face of Einstein, to benchmark several dynamic adaptive-perception-versus-memory image experiments via ant-like artificial life systems, over what I then entitled Digital Image Habitats.]

[] Vitorino Ramos, Fernando Muge, Map Segmentation by Colour Cube Genetic K-Mean Clustering, Proc. of ECDL´2000 – 4th European Conference on Research and Advanced Technology for Digital Libraries, J. Borbinha and T. Baker (Eds.), ISBN 3-540-41023-6, Lecture Notes in Computer Science, Vol. 1923, pp. 319-323, Springer-Verlag -Heidelberg, Lisbon, Portugal, 18-20 Sep. 2000.

Segmentation of a colour image composed of different kinds of texture regions can be a hard problem, namely computing exact texture fields and deciding the optimum number of segmentation areas in an image when it contains similar and/or non-stationary texture fields. In this work, a method is described for evolving adaptive procedures for these problems. In many real-world applications, data clustering constitutes a fundamental issue whenever behavioural or feature domains can be mapped into topological domains. We formulate the segmentation problem upon such images as an optimisation problem and adopt the evolutionary strategy of Genetic Algorithms for the clustering of small regions in colour feature space. The present approach embeds k-Means unsupervised clustering methods into Genetic Algorithms, namely for guiding this latter Evolutionary Algorithm in its search for the optimal or sub-optimal data partition, a task that, as we know, requires a non-trivial search because of its NP-complete nature. To solve this task, the appropriate genetic coding is also discussed, since this is a key aspect of the implementation. Our purpose is to demonstrate the efficiency of Genetic Algorithms in automatic and unsupervised texture segmentation. Some examples on colour maps are presented and the overall results discussed.

(to obtain the respective PDF file follow link above or visit chemoton.org)
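The genetic coding question the abstract mentions can be made concrete with a small sketch: let each chromosome encode K candidate centroids in RGB colour space, let fitness be the (negated) total distance of pixels to their nearest centroid, and let a GA search for a good colour partition. The synthetic pixels and all parameters below are mine, for illustration only; this is not the paper’s implementation.

```python
# Genetic K-means-style colour clustering on synthetic "map" pixels.
import random

random.seed(3)
K = 3
# synthetic pixels scattered around three hidden colours
true_colours = [(200, 40, 40), (40, 160, 60), (230, 220, 200)]
pixels = [tuple(min(255, max(0, c + random.randint(-25, 25)))
                for c in random.choice(true_colours))
          for _ in range(300)]

def fitness(chrom):
    """Negated sum of squared distances to the nearest centroid (higher = better)."""
    return -sum(min(sum((p[i] - c[i]) ** 2 for i in range(3)) for c in chrom)
                for p in pixels)

def random_chrom():
    """A chromosome = K centroids, seeded from actual pixel colours."""
    return [random.choice(pixels) for _ in range(K)]

pop = [random_chrom() for _ in range(20)]
for generation in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    while len(children) < 10:
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
        if random.random() < 0.3:                            # mutation: re-seed a centroid
            child[random.randrange(K)] = random.choice(pixels)
        children.append(child)
    pop = parents + children

print("best centroids found:", max(pop, key=fitness))
```

With the three hidden colours well separated, the evolved centroids land close to them; segmenting the image then reduces to assigning each pixel to its nearest centroid, exactly the k-means decision rule.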

Figure – Evolutionary Lego cranes and bridges. Check here for a small animation as well as a long MPEG video.

These, I believe, will be the solution for the first floating Europe-Africa bridge preliminary project (linking the Portuguese-Spanish border between Faro and Huelva to a point near Tangier, Morocco, spanning the entire Strait of Gibraltar), or for the many such projects coming in the near future, in case they need to robustly and adaptively span even longer distances:

[…] Creating artificial life forms through evolutionary robotics faces a “chicken and egg” problem: Learning to control a complex body is dominated by inductive biases specific to its sensors and effectors, while building a body which is controllable is conditioned on the pre-existence of a brain. The idea of co-evolution of bodies and brains is becoming popular, but little work has been done in evolution of physical structure because of the lack of a general framework for doing it. Evolution of creatures in simulation has been constrained by the “reality gap” which implies that resultant objects are not buildable. The work we present takes a step in the problem of body evolution by applying evolutionary techniques to the design of structures assembled out of parts. Evolution takes place in a simulator we designed, which computes forces and stresses and predicts failure for 2-dimensional Lego structures. The final printout of our program is a schematic assembly, which can then be built physically. We demonstrate its functionality in several different evolved organisms […]. in, Computer Evolution of Buildable Objects, Pablo Funes and Jordan Pollack, 1997.

[…] The type of rationality we assume in economics – perfect, logical, deductive rationality – is extremely useful in generating solutions to theoretical problems. But it demands much of human behavior – much more in fact than it can usually deliver. If we were to imagine the vast collection of decision problems economic agents might conceivably deal with as a sea or an ocean, with the easier problems on top and more complicated ones at increasing depth, then deductive rationality would describe human behavior accurately only within a few feet of the surface. For example, the game Tic-Tac-Toe is simple, and we can readily find a perfectly rational, minimax solution to it. But we do not find rational “solutions” at the depth of Checkers; and certainly not at the still modest depths of Chess and Go.

There are two reasons for perfect or deductive rationality to break down under complication. The obvious one is that beyond a certain complicatedness, our logical apparatus ceases to cope – our rationality is bounded. The other is that in interactive situations of complication, agents can not rely upon the other agents they are dealing with to behave under perfect rationality, and so they are forced to guess their behavior. This lands them in a world of subjective beliefs, and subjective beliefs about subjective beliefs. Objective, well-defined, shared assumptions then cease to apply. In turn, rational, deductive reasoning–deriving a conclusion by perfect logical processes from well-defined premises – itself cannot apply. The problem becomes ill-defined.

As economists, of course, we are well aware of this. The question is not whether perfect rationality works, but rather what to put in its place. How do we model bounded rationality in economics? Many ideas have been suggested in the small but growing literature on bounded rationality; but there is not yet much convergence among them. In the behavioral sciences this is not the case. Modern psychologists are in reasonable agreement that in situations that are complicated or ill-defined, humans use characteristic and predictable methods of reasoning. These methods are not deductive, but inductive. […] The system that emerges under inductive reasoning will have connections both with evolution and complexity. […]

in, Inductive Reasoning and Bounded Rationality (The El Farol Problem), by W. Brian Arthur, 1994.
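Arthur made this concrete with his famous El Farol bar problem, named in the title above: 100 people enjoy the bar only if fewer than 60 attend, so each week everyone must inductively guess the attendance from history. Here is a minimal simulation sketch of that setup; the particular predictors, parameters and scoring rule are my own illustrative choices, not Arthur’s original specification.

```python
# El Farol: agents reason inductively with competing attendance predictors.
import random

random.seed(11)
N, CAPACITY, WEEKS = 100, 60, 200
history = [random.randint(0, N) for _ in range(5)]   # seed attendance history

# a small "ecology" of simple predictors of next week's attendance
PREDICTORS = [
    lambda h: h[-1],                    # same as last week
    lambda h: sum(h[-3:]) / 3.0,        # three-week average
    lambda h: h[-2],                    # two weeks ago (cycle detector)
    lambda h: N - h[-1],                # mirror image of last week
    lambda h: min(N, 1.1 * h[-1]),      # trend follower
]

# each agent holds a random subset of predictors and a running error for each
agents = [{"preds": random.sample(range(len(PREDICTORS)), 3), "errors": {}}
          for _ in range(N)]

for week in range(WEEKS):
    attendance = 0
    for a in agents:
        # act on whichever of your predictors has been most accurate so far
        best = min(a["preds"], key=lambda i: a["errors"].get(i, 0.0))
        if PREDICTORS[best](history) < CAPACITY:   # looks quiet enough: go
            attendance += 1
    for a in agents:                               # update predictor track records
        for i in a["preds"]:
            err = abs(PREDICTORS[i](history) - attendance)
            a["errors"][i] = 0.9 * a["errors"].get(i, 0.0) + 0.1 * err
    history.append(attendance)

print("mean attendance over the final 50 weeks:", sum(history[-50:]) / 50.0)
```

In runs of this toy version, attendance tends to fluctuate around the 60-person threshold with no central coordination, echoing Arthur’s well-known result: the ecology of predictors self-organizes, which is exactly the bridge to evolution and complexity he points to in the closing lines quoted above.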

[...] People should learn how to play Lego with their minds. Concepts are building bricks [...] V. Ramos, 2002.
