Ever tried to solve a problem whose own problem statement is constantly changing? Have a look at our approach:
Vitorino Ramos, David M.S. Rodrigues, Jorge Louçã, “Spatio-Temporal Dynamics on Co-Evolved Stigmergy”, in European Conference on Complex Systems, ECCS’11, Vienna, Austria, Sept. 12-16, 2011.
Abstract: Research on hard NP-complete Combinatorial Optimization Problems (COPs) has focused in recent years on several robust bio-inspired meta-heuristics, such as those involving Evolutionary Computation (EC) algorithmic paradigms. One particularly successful and well-known meta-heuristic approach is based on Swarm Intelligence (SI), i.e., the self-organized, stigmergy-based property of a complex system whereby the collective behaviours of (unsophisticated) entities interacting locally with their environment cause coherent functional global patterns to emerge. This line of research, known as Ant Colony Optimization (ACO), uses a set of stochastic, cooperating ant-like agents to find good solutions, relying on self-organized stigmergy as an indirect form of communication mediated by artificial pheromone, where agents deposit pheromone signs on the edges of the problem-related graph. It encompasses a family of successful algorithmic variations such as Ant Systems (AS), Ant Colony Systems (ACS), Max-Min Ant Systems (Max-Min AS) and Ant-Q.
Albeit extremely successful, these algorithms rely mostly on positive feedback, causing excessive algorithmic exploitation over the entire combinatorial search space. This is particularly evident on well-known benchmarks such as the symmetrical Traveling Salesman Problem (TSP). Since these systems comprise a large number of frequently similar components or events, the principal challenge is to understand how the components interact to produce a complex, feasible-solution pattern (in our case study, an optimal and robust solution for hard NP-complete dynamic TSP-like combinatorial problems). A suitable approach is to first understand the role of the two basic modes of interaction among the components of Self-Organizing (SO) Swarm-Intelligent-like systems: positive and negative feedback. Positive feedback promotes a snowballing, auto-catalytic effect (e.g., trail-pheromone reinforcement over the network; exploitation of the search space), taking an initial change in a system and reinforcing that change in the same direction as the initial deviation (self-enhancement and amplification), allowing the entire colony to exploit some past and present solutions (environmental dynamic memory). Negative feedback, such as pheromone evaporation, ensures that the overall learning system does not stabilize or freeze on a particular configuration (innovation; search-space exploration). Although this kind of (global) delayed negative feedback (evaporation) is important, for the many reasons given above, there are however strong indications that other negative feedbacks are present in nature which could also play a role in improving convergence, namely implicit negative feedbacks. As in the case of positive feedback, there is no reason not to explore increasingly distributed and adaptive algorithmic variations where negative feedback is also imposed implicitly (not only explicitly) over each network edge, while the entire colony seeks better answers in due time.
In order to overcome this hard exploitation-exploration compromise over the search space, our present algorithmic approach follows the route of very recent biological findings showing that forager ants lay attractive trail pheromones to guide nest mates to food, but that the effectiveness of foraging networks is improved if pheromones can also be used to repel foragers from unrewarding routes. Increasing empirical evidence exists for such a negative trail pheromone, deployed by Pharaoh’s ants (Monomorium pharaonis) as a ‘no entry’ signal to mark unrewarding foraging paths. The new algorithm comprises a second-order approach to Swarm Intelligence, as pheromone-based ‘no entry’ signal cues were introduced, co-evolving with the standard pheromone distributions (collective cognitive maps) of the aforementioned algorithms.
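For readers who like to see the mechanics, here is a minimal sketch of how such a co-evolved, two-pheromone rule could be wired into a standard ACO edge-selection and update cycle. All names (tau_plus, tau_minus, choose_next_city, etc.) and parameter values are illustrative assumptions, not the paper’s actual implementation:

```python
import random

def choose_next_city(current, unvisited, tau_plus, tau_minus, dist,
                     alpha=1.0, beta=2.0, gamma=1.0):
    """Pick the next TSP city: attractive pheromone (tau_plus) raises an
    edge's weight, repulsive 'no entry' pheromone (tau_minus) lowers it."""
    weights = []
    for j in unvisited:
        attract = tau_plus[current][j] ** alpha
        visibility = (1.0 / dist[current][j]) ** beta
        repel = 1.0 / (1.0 + tau_minus[current][j]) ** gamma
        weights.append(attract * visibility * repel)
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]

def update_pheromones(tau_plus, tau_minus, good_tours, bad_edges, rho=0.1):
    """Positive feedback: deposit on the edges of good tours.
    Negative feedback: global evaporation on both maps, plus explicit
    'no entry' marks laid on unrewarding edges."""
    for i in tau_plus:
        for j in tau_plus[i]:
            tau_plus[i][j] *= (1.0 - rho)     # evaporation (delayed, global)
            tau_minus[i][j] *= (1.0 - rho)
    for tour, length in good_tours:
        for i, j in zip(tour, tour[1:] + tour[:1]):
            tau_plus[i][j] += 1.0 / length    # reinforce edges of good tours
    for i, j in bad_edges:
        tau_minus[i][j] += 1.0                # repel foragers from bad routes
```

In this reading, the attractive map drives the positive feedback (exploitation), while evaporation and the repulsive ‘no entry’ map supply the explicit and implicit negative feedbacks discussed above.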
To exhaustively test its adaptive response and robustness, we resorted to different dynamic optimization problems. Medium- and large-sized dynamic TSP instances were created. Settings and parameters such as environmental update frequency, severity of landscape or network-topology change, and type of dynamics were tested. Results show that the present co-evolved, two-pheromone swarm intelligence algorithm is able to quickly track increasingly swift changes on the dynamic TSP complex network when compared to standard algorithms.
Keywords: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization Problems, Dynamic Symmetrical Traveling Salesman Problems (TSP).
Fig. – Recovery times over several dynamical stress tests on the fl1577 TSP problem (1577-node graph) – 460 iterations max – swift changes every 150 iterations (20% = 314 nodes, 40% = 630 nodes, 60% = 946 nodes, 80% = 1260 nodes, 100% = 1576 nodes).
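As an aside, a dynamic TSP stress test of the kind described above can be approximated by perturbing a chosen fraction of city coordinates at fixed intervals. The sketch below assumes this simple coordinate-jitter scheme and hypothetical names; it is not the benchmark generator actually used in the paper:

```python
import random

def perturb_tsp(coords, fraction, scale=0.1):
    """Move a given fraction of the cities to nearby random positions,
    producing the next 'frame' of a dynamic TSP instance."""
    coords = list(coords)
    n_moved = int(round(fraction * len(coords)))
    for idx in random.sample(range(len(coords)), n_moved):
        x, y = coords[idx]
        coords[idx] = (x + random.uniform(-scale, scale),
                       y + random.uniform(-scale, scale))
    return coords

# e.g. disturb 20% of a 1577-city instance every 150 iterations, 460 in total
coords = [(random.random(), random.random()) for _ in range(1577)]
for it in range(460):
    if it > 0 and it % 150 == 0:
        coords = perturb_tsp(coords, fraction=0.20)
    # ... run one colony iteration on the current coords ...
```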
From the author of “Rock, Paper, Scissors – Game Theory in Everyday Life”, dedicated to the evolution of cooperation in nature (published last year by Basic Books), a new book on related areas is now fresh on the stands (released Dec. 7, 2009): “The Perfect Swarm – The Science of Complexity in Everyday Life“. This time Len Fisher takes us into the realm of our interlinked modern lives, where complexity rules. But complexity also has rules. Understand these, and we are better placed to make sense of the mountain of data that confronts us every day. Fisher ranges far and wide to discover what tips the science of complexity has for us. Studies of human behaviour (one good example is Gum voting) and animal behaviour, management science, statistics and network theory all enter the mix.
One of the greatest discoveries of recent times is that the complex patterns we find in life are often produced when all of the individuals in a group follow similar simple rules. Even if the final pattern is complex, the rules are not. This process of “Self-Organization” reveals itself in the inanimate worlds of crystals and seashells but, as Len Fisher shows, it is also evident in living organisms, from fish to ants to human beings. Stigmergy is one among many cases of this type of Self-Organized behaviour, with applications in several engineering fields such as Computer Science and Artificial Intelligence, Data-Mining, Pattern Recognition, Image Analysis and Perception, Robotics, Optimization, Learning, Forecasting, etc. Since I work in precisely these areas, you may find several of my previous posts dedicated to these issues, such as Self-Organized Data and Image Retrieval systems, Stigmergic Optimization, Computer-based Adaptive Dynamic Perception, Swarm-based Data Mining, Self-regulated Swarms and Memory, Ant based Data Clustering, Generative computer-based photography and painting, Classification, Extreme Dynamic Optimization, Self-Organized Pattern Recognition, among other applications.
For instance, the coordinated movements of fish in schools arise from the simple rule: “Follow the fish in front.” Traffic flow arises from simple rules: “Keep your distance” and “Keep to the right.” Now, in his new book, Fisher shows how we can manage our complex social lives in an ever more chaotic world. His investigation encompasses topics ranging from “swarm intelligence” (check the links above) to the science of parties (a beautiful example by ICOSYSTEM inc.) and the best ways to start a fad. Finally, Fisher sheds light on the beauty and utility of complexity theory. For those willing to explore a myriad of basic examples (Fisher gives us 33 nice food-for-thought examples in total) and to have a well-written introduction to this thrilling new branch of science, referred to by Stephen Hawking as the science for the current century (“I think complexity is the science for the 21st century”), The Perfect Swarm will indeed be an excellent companion.
Video – ABB FlexPicker Robots (Source: http://www.botjunkie.com/ + http://www.abb.com/)
As well as something at the lower, pre-processing engineering level, also involving Pattern Recognition, Image Analysis and Classification. Not for brownies, cookies or sausages: since this is summer time, it relates to clams and bivalves in general. From the video, everything appears to be rather easy. But it is not.

a) Dynamic Optimization Problems (DOP) tackled by Swarm Intelligence (here, a quick snapshot of the dynamic environment)

b) Swarm adaptive response over time, under severe dynamics, on the dynamic environment shown in (a).
Figs. – Check the animated pictures here. (a) A 3D toroidal, fast-changing landscape describing a Dynamic Optimization (DO) Control Problem (8 frames in total). (b) A self-organized swarm emerging a characteristic flocking migration behaviour, surpassing some local optima in intermediate steps over the 3D toroidal landscape (left), describing a Dynamic Optimization (DO) Control Problem. At each foraging step, the swarm self-regulates its population and keeps tracking the extrema (44 frames in total).
[] Vitorino Ramos, Carlos Fernandes, Agostinho C. Rosa, On Self-Regulated Swarms, Societal Memory, Speed and Dynamics, in Artificial Life X – Proc. of the Tenth Int. Conf. on the Simulation and Synthesis of Living Systems, L.M. Rocha, L.S. Yaeger, M.A. Bedau, D. Floreano, R.L. Goldstone and A. Vespignani (Eds.), MIT Press, ISBN 0-262-68162-5, pp. 393-399, Bloomington, Indiana, USA, June 3-7, 2006.
PDF paper.
Wasps, bees, ants and termites all make effective use of their environment and resources by displaying collective “swarm” intelligence. Termite colonies – for instance – build nests with a complexity far beyond the comprehension of the individual termite, while ant colonies dynamically allocate labor to various vital tasks such as foraging or defense without any central decision-making ability. Recent research suggests that microbial life can be even richer: highly social, intricately networked, and teeming with interactions, as found in bacteria. What is striking about these observations is that both ant colonies and bacteria rely on similar natural mechanisms, based on Stigmergy and Self-Organization, in order to emerge coherent and sophisticated patterns of global foraging behavior. Keeping in mind the above characteristics, we propose a Self-Regulated Swarm (SRS) algorithm which hybridizes the advantageous characteristics of Swarm Intelligence, such as the emergence of a societal environmental memory or cognitive map via collective pheromone laying in the landscape (properly balancing the exploration/exploitation nature of our dynamic search strategy), with a simple Evolutionary mechanism that, through a direct reproduction procedure linked to local environmental features, is able to self-regulate the above exploratory swarm population, speeding it up globally. In order to test its adaptive response and robustness, we resorted to different dynamic multimodal complex functions as well as to Dynamic Optimization Control problems, measuring reaction speeds and performance. Final comparisons were made with standard Genetic Algorithms (GAs), Bacterial Foraging strategies (BFOA), as well as with recent Co-Evolutionary approaches. The SRS was able to demonstrate quick adaptive responses while outperforming the results obtained by the other approaches. Additionally, some successful behaviors were found: the SRS was able to maintain a number of different solutions while adapting to unforeseen situations, even when, over the same cooperative foraging period, the community is requested to deal with two different and contradictory purposes; it could spontaneously create and maintain different sub-populations on different peaks, emerging different exploratory corridors with intelligent path-planning capabilities; and it showed the ability to recruit new agents (division of labor) during periods of dramatic change, economizing those foraging resources over periods of intermediate stabilization. Finally, results illustrate that the present SRS collective swarm of bio-inspired ant-like agents is able to track about 65% of moving peaks traveling at up to ten times the velocity of a single individual composing that swarm tracking system. This emergent behavior is probably one of the most interesting results achieved by the present work.
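A rough sketch of the self-regulation idea described in this abstract (reproduction and removal of agents tied to local environmental quality) might look as follows; function and parameter names (self_regulate, birth_thr, death_thr) are hypothetical assumptions, not the published SRS code:

```python
import random

def self_regulate(agents, pheromone, fitness, birth_thr=0.8, death_thr=0.2):
    """Illustrative self-regulation step: agents on rewarding spots may spawn
    a neighbour, agents on poor spots may be removed, so the colony grows
    under dramatic change and shrinks (economizes) when the search stabilizes."""
    next_gen = []
    for pos in agents:
        quality = fitness(pos) + pheromone.get(pos, 0.0)
        if quality < death_thr and len(agents) > 10:
            continue                          # drop agent: economize resources
        next_gen.append(pos)
        if quality > birth_thr and random.random() < 0.5:
            next_gen.append(pos)              # local reproduction (recruitment)
    return next_gen
```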
Abraham, Ajith; Grosan, Crina; Ramos, Vitorino (Eds.), Stigmergic Optimization, Studies in Computational Intelligence (series), Vol. 31, Springer-Verlag, ISBN: 3-540-34689-9, 295 p., Hardcover, 2006.
TABLE OF CONTENTS (short /full) / CHAPTERS:
[1] Stigmergic Optimization: Foundations, Perspectives and Applications.
[2] Stigmergic Autonomous Navigation in Collective Robotics.
[3] A general Approach to Swarm Coordination using Circle Formation.
[4] Cooperative Particle Swarm Optimizers: a powerful and promising approach.
[5] Parallel Particle Swarm Optimization Algorithms with Adaptive Simulated Annealing.
[6] Termite: a Swarm Intelligent Routing algorithm for Mobile Wireless ad-hoc Networks.
[7] Linear Multiobjective Particle Swarm Optimization.
[8] Physically realistic Self-Assembly Simulation system.
[9] Gliders and Riders: A Particle Swarm selects for coherent Space-time Structures in Evolving Cellular Automata.
[10] Stigmergic Navigation for Multi-agent Teams in Complex Environments.
[11] Swarm Intelligence: Theoretical proof that Empirical techniques are Optimal.
[12] Stochastic Diffusion search: Partial function evaluation in Swarm Intelligence Dynamic Optimization.
Fig. – (Above) A 3D toroidal, fast-changing landscape describing a Dynamic Optimization (DO) Control Problem (8 frames in total). (Below) A self-organized swarm emerging a characteristic flocking migration behaviour, surpassing some local optima in intermediate steps over the 3D toroidal landscape (above), describing a Dynamic Optimization (DO) Control Problem. At each foraging step, the swarm self-regulates its population and keeps tracking the extrema (44 frames in total). [extra details + PDF]
[] Vitorino Ramos, Carlos Fernandes, Agostinho C. Rosa, Ajith Abraham, Computational Chemotaxis in Ants and Bacteria over Dynamic Environments, in CEC’07 – Congress on Evolutionary Computation, IEEE Press, USA, ISBN 1-4244-1340-0, pp. 1009-1017, Sep. 2007.
Chemotaxis can be defined as an innate behavioural response by an organism to a directional stimulus, in which bacteria and other single-cell or multicellular organisms direct their movements according to certain chemicals in their environment. This is important for bacteria in finding food (e.g., glucose) by swimming towards the highest concentration of food molecules, or in fleeing from poisons. Based on self-organized computational approaches and similar stigmergic concepts, we derive a novel swarm-intelligent algorithm. What is striking about these observations is that both eusocial insects, such as ant colonies, and bacteria rely on similar natural mechanisms based on stigmergy in order to emerge coherent and sophisticated patterns of global collective behaviour. Keeping in mind the above characteristics, we present a simple model to tackle the collective adaptation of a social swarm based on real ant colony behaviors (the SSA algorithm) for tracking extrema in dynamic environments and highly multimodal complex functions described in the well-known De Jong test suite. Later, for the purpose of comparison, a recent model of artificial bacterial foraging (the BFOA algorithm) based on similar stigmergic features is described and analyzed. Final results indicate that the SSA collective intelligence is able to cope with and quickly adapt to unforeseen situations, even when, over the same cooperative foraging period, the community is requested to deal with two different and contradictory purposes, while outperforming BFOA in adaptive speed. Results indicate that the present approach deals well with severe Dynamic Optimization problems.
(to obtain the respective PDF file, follow the link above or visit chemoton.org)
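For illustration only, the basic chemotactic run-and-tumble move underlying bacterial-foraging-style algorithms such as BFOA can be sketched in a few lines; the names and step size below are assumptions for the sketch, not the CEC’07 implementation:

```python
import math
import random

def chemotactic_step(pos, heading, f, step=0.05):
    """One run-and-tumble move: keep swimming while the local 'nutrient'
    value f improves, otherwise tumble to a random new direction."""
    new_pos = (pos[0] + step * math.cos(heading),
               pos[1] + step * math.sin(heading))
    if f(new_pos) > f(pos):           # concentration increased: keep running
        return new_pos, heading
    return pos, random.uniform(0.0, 2.0 * math.pi)  # tumble: new heading
```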
Video – Thousands of starlings gathering in flocks, flying in formation while emerging complex patterns over S.W. Scotland (more photos & video by/at Fresh Pics, 2007). Here, an artificial version with different purposes. They are not birds, but an entirely different new animal.
[…] In contrast to negative feedback, positive feedback (PF) generally promotes changes in the system (the majority of self-organizing SO systems use them). The explosive growth of the human population provides a familiar example of the effect of positive feedback. The snowballing, autocatalytic effect of PF takes an initial change in a system (due to amplification of fluctuations; a minimal and natural local cluster of objects could be a starting point) and reinforces that change in the same direction as the initial deviation. Self-enhancement, amplification, facilitation, and autocatalysis are all terms used to describe positive feedback [9]. Another example is provided by the clustering or aggregation of individuals. Many birds, such as seagulls, nest in large colonies. Group nesting evidently provides individuals with certain benefits, such as better detection of predators or greater ease in finding food. The mechanism in this case is imitation (1): birds preparing to nest are attracted to sites where other birds are already nesting, and the behavioral rule could be synthesized as “I nest close to where you nest”. The key point is that the aggregation of nesting birds at a particular site is not purely a consequence of each bird being attracted to the site per se. Rather, the aggregation evidently arises primarily because each bird is attracted to others (check for further references in [7,9]). In social insect societies, PF could be illustrated by the pheromone reinforcement on trails, allowing the entire colony to exploit some past and present solutions. Generally, as in the above cases, positive feedback is imposed implicitly on the system and locally by each one of the constituent units. Fireflies flashing in synchrony [49] follow the rule, “I signal when you signal”; fish traveling in schools abide by the rule, “I go where you go”; and so forth. In humans, the “infectious” quality of a yawn or laughter is a familiar example of positive feedback of the form, “I do what you do”. Seeing a person yawning (2), or even just thinking of yawning, can trigger a yawn [9]. There is, however, one associated risk, generally when PF acts alone without the presence of negative feedbacks, which per se can play a critical role in keeping this snowballing effect under control, providing inhibition to offset the amplification and helping to shape it into a particular pattern. Indeed, the amplifying nature of PF means that it has the potential to produce destructive explosions or implosions in any process where it plays a role. Thus the behavioral rule may be more complicated than initially suggested, possessing both an autocatalytic as well as an antagonistic aspect. In the case of fish [9], the minimal behavioral rule could be “I nest where others nest, unless the area is overcrowded” (HEY !! here we go again to the El Farol Bar problem!). In this case both positive and negative feedback may be coded into the behavioral rules of the fish. Finally, in other cases one finds that the inhibition arises automatically, often simply from physical constraints. […]
in V. Ramos et al., “Social Cognitive Maps, Swarm Collective Perception and Distributed Search on Dynamic Landscapes”.
(1) See also on this subject the seminal sociological work of Gabriel Tarde; Tarde, G., Les Lois de l’Imitation, Eds. du Seuil (2001), 1st Edition, Eds. Alcan, Paris, 1890.
(2) Similarly, Milgram et al. (Milgram, Bickman and Berkowitz, “Note on the Drawing Power of Crowds of Different Size”, Journal of Personality and Social Psychology, 13, 1969) found that if one person stood in a Manhattan street gazing at a sixth-floor window, 20% of pedestrians looked up; if five people stood gazing, then 80% of people looked up.
(to obtain the respective PDF file follow this link or visit chemoton.org)
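As a closing illustration, the “I nest where others nest, unless the area is overcrowded” rule quoted above combines both feedback modes and can be captured in a toy site-choice function; everything below (names, the capacity value) is an illustrative assumption, not taken from the cited paper:

```python
import random

def choose_site(site_counts, capacity=50):
    """Toy 'I nest where others nest, unless the area is overcrowded' rule:
    attraction grows with current occupancy (positive feedback, imitation)
    but falls to zero as a site approaches capacity (negative feedback)."""
    weights = {s: (1.0 + n) * max(0.0, 1.0 - n / capacity)
               for s, n in site_counts.items()}
    total = sum(weights.values())
    if total == 0.0:                      # every site saturated: pick at random
        return random.choice(list(site_counts))
    r, acc = random.random() * total, 0.0
    for site, w in weights.items():
        acc += w
        if acc >= r:
            return site
```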