
2016 – As of now, a total of 1,567 citations across 74 works (including 3 books) on *GOOGLE SCHOLAR* (https://scholar.google.com/citations?user=gSyQ-g8AAAAJ&hl=en) [with a *Hirsch* h-index of 19, and an average of 160.2 citations for each of my top five works], plus 900 citations across 57 works on the new *RESEARCH GATE* site (https://www.researchgate.net/profile/Vitorino_Ramos).

Refs.: *Science*, *Artificial Intelligence*, *Swarm Intelligence*, *Data-Mining*, *Big-Data*, *Evolutionary Computation*, *Complex Systems*, *Image Analysis*, *Pattern Recognition*, *Data Analysis*.

Figure – Neural circuit controller of the virtual ant (page 3, fig. 2). [URL: http://arxiv.org/abs/1507.08467 ]

Intelligence and decision in foraging ants: individual or collective? Internal or external? What is the right balance between the two? Can one have internal intelligence without external intelligence? Can one take examples from nature to build *in silico* artificial lives that present us with interesting patterns? We explore a model of foraging ants in this paper, to be presented in early September in Exeter, UK, at UKCI 2015 (available on arXiv [PDF] and ResearchGate).

Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos; “**A Model for Foraging Ants, Controlled by Spiking Neural Networks and Double Pheromones**”, UKCI 2015 Computational Intelligence – University of Exeter, UK, September 2015.

**Abstract**: A model of an Ant System where ants are controlled by a spiking neural circuit and a second-order pheromone mechanism in a foraging task is presented. A neural circuit is trained for individual ants, and subsequently the ants are exposed to a virtual environment where a swarm of ants performs a resource-foraging task. The model comprises an associative and unsupervised learning strategy for the neural circuit of the ant. The neural circuit adapts to the environment by means of classical conditioning. The initially unknown environment includes different types of stimuli representing food (rewarding) and obstacles (harmful) which, when they come in direct contact with the ant, elicit a reflex response in the motor neural system of the ant: moving towards or away from the source of the stimulus. The spiking neural circuit of the ant is trained to identify food and obstacles and to move towards the former while avoiding the latter. The ants are released on a landscape with multiple food sources where one ant alone would have difficulty harvesting the landscape to maximum efficiency. In this case the introduction of a double pheromone mechanism (positive and negative reinforcement feedback) yields better results than traditional ant colony optimization strategies. Traditional ant systems include mainly a positive-reinforcement pheromone. This approach uses a second pheromone that acts as a marker for forbidden paths (negative feedback). This blockade is not permanent and is controlled by the evaporation rate of the pheromones. The combined action of both pheromones acts as a collective stigmergic memory of the swarm, which reduces the search space of the problem. This paper explores how the adaptation and learning abilities observed in biologically inspired cognitive architectures are synergistically enhanced by swarm optimization strategies.
The model portrays two forms of artificial intelligent behaviour: at the individual level the spiking neural network is the main controller, and at the collective level the pheromone distribution is a map towards the solution that emerges from the colony. The presented model is an important pedagogical tool, as it is also an easy-to-use library that gives access to the spiking neural network paradigm from inside NetLogo, a language used mostly in agent-based modelling and experimentation with complex systems.
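The classical-conditioning idea in the abstract can be sketched with a toy neuron (a minimal sketch, not the paper's NetLogo spiking library; all weights, thresholds and the learning rule here are illustrative assumptions):

```python
# Minimal sketch: a single leaky integrate-and-fire-style motor neuron
# that learns, by Hebbian association, to respond to a conditioned
# stimulus (e.g. food smell) after repeated pairing with an
# unconditioned stimulus (direct contact with food).

THRESHOLD = 0.8   # membrane potential needed to fire (assumed value)
DECAY     = 0.5   # leak factor per time step (assumed value)
LEARN     = 0.2   # Hebbian learning rate (assumed value)

w_us = 1.0        # unconditioned-stimulus weight: contact always fires
w_cs = 0.0        # conditioned-stimulus weight: learned by pairing

def step(us, cs, v=0.0):
    """One simulation step; returns (fired, new membrane potential)."""
    v = DECAY * v + w_us * us + w_cs * cs
    fired = v >= THRESHOLD
    return fired, (0.0 if fired else v)

fired_before, _ = step(us=0, cs=1)   # CS alone: no response yet

# Training: pair CS with US; each time the neuron fires while the CS
# is active, strengthen the CS synapse (classical conditioning).
for _ in range(10):
    fired, _ = step(us=1, cs=1)
    if fired:
        w_cs += LEARN * (1.0 - w_cs)

fired_after, _ = step(us=0, cs=1)    # CS alone now elicits the reflex
print(fired_before, fired_after)     # → False True
```

The same association-by-pairing logic is what lets the virtual ant learn to approach food stimuli and avoid harmful ones before it is released into the swarm.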

**References**:

[**1**] C. G. Langton, “Studying artificial life with cellular automata,” Physica D: Nonlinear Phenomena, vol. 22, no. 1–3, pp. 120–149, 1986, Proceedings of the Fifth Annual International Conference. [Online]. Available: http://www.sciencedirect.com/science/article/pii/016727898690237X

[**2**] A. Abraham and V. Ramos, “Web usage mining using artificial ant colony clustering and linear genetic programming,” in Proceedings of the Congress on Evolutionary Computation. Australia: IEEE Press, 2003, pp. 1384–1391.

[**3**] V. Ramos, F. Muge, and P. Pina, “Self-organized data and image retrieval as a consequence of inter-dynamic synergistic relationships in artificial ant colonies,” Hybrid Intelligent Systems, vol. 87, 2002.

[**4**] V. Ramos and J. J. Merelo, “Self-organized stigmergic document maps: Environment as a mechanism for context learning,” in Proceedings of the AEB, Mérida, Spain, February 2002.

[**5**] D. Sousa-Rodrigues and V. Ramos, “Traversing news with ant colony optimisation and negative pheromones,” in European Conference in Complex Systems, Lucca, Italy, Sep 2014.

[**6**] E. Bonabeau, G. Theraulaz, and M. Dorigo, Swarm Intelligence: From Natural to Artificial Systems, 1st ed., ser. Santa Fe Institute Studies in the Sciences of Complexity. 198 Madison Avenue, New York: Oxford University Press, USA, Sep. 1999.

[**7**] M. Dorigo and L. M. Gambardella, “Ant colony system: A cooperative learning approach to the traveling salesman problem,” Université Libre de Bruxelles, Tech. Rep. TR/IRIDIA/1996-5, 1996.

[**8**] M. Dorigo, G. Di Caro, and L. M. Gambardella, “Ant algorithms for discrete optimization,” Artif. Life, vol. 5, no. 2, pp. 137–172, Apr. 1999. [Online]. Available: http://dx.doi.org/10.1162/106454699568728

[**9**] L. M. Gambardella and M. Dorigo, “Ant-q: A reinforcement learning approach to the travelling salesman problem,” in Proceedings of the ML-95, Twelfth Intern. Conf. on Machine Learning, M. Kaufman, Ed., 1995, pp. 252–260.

[**10**] A. Gupta, V. Nagarajan, and R. Ravi, “Approximation algorithms for optimal decision trees and adaptive tsp problems,” in Proceedings of the 37th international colloquium conference on Automata, languages and programming, ser. ICALP’10. Berlin, Heidelberg: Springer-Verlag, 2010, pp. 690–701. [Online]. Available: http://dl.acm.org/citation.cfm?id=1880918.1880993

[**11**] V. Ramos, D. Sousa-Rodrigues, and J. Louçã, “Second order swarm intelligence,” in HAIS’13, 8th International Conference on Hybrid Artificial Intelligence Systems, ser. Lecture Notes in Computer Science, J.-S. Pan, M. Polycarpou, M. Woźniak, A. Carvalho, H. Quintián, and E. Corchado, Eds. Salamanca, Spain: Springer Berlin Heidelberg, Sep 2013, vol. 8073, pp. 411–420.

[**12**] W. Maass and C. M. Bishop, Pulsed Neural Networks. Cambridge, Massachusetts: MIT Press, 1998.

[**13**] E. M. Izhikevich, “Simple model of spiking neurons,” IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569–1572, 2003. [Online]. Available: http://www.ncbi.nlm.nih.gov/pubmed/18244602

[**14**] C. Liu and J. Shapiro, “Implementing classical conditioning with spiking neurons,” in Artificial Neural Networks – ICANN 2007, ser. Lecture Notes in Computer Science, J. M. de Sá, L. Alexandre, W. Duch, and D. Mandic, Eds. Springer Berlin Heidelberg, 2007, vol. 4668, pp. 400–410. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-74690-4_41

[**15**] J. Haenicke, E. Pamir, and M. P. Nawrot, “A spiking neuronal network model of fast associative learning in the honeybee,” Frontiers in Computational Neuroscience, no. 149, 2012. [Online]. Available: http://www.frontiersin.org/computational_neuroscience/10.3389/conf.fncom.2012.55.00149/full

[**16**] L. I. Helgadottir, J. Haenicke, T. Landgraf, R. Rojas, and M. P. Nawrot, “Conditioned behavior in a robot controlled by a spiking neural network,” in International IEEE/EMBS Conference on Neural Engineering, NER, 2013, pp. 891–894.

[**17**] A. Cyr and M. Boukadoum, “Classical conditioning in different temporal constraints: an STDP learning rule for robots controlled by spiking neural networks,” pp. 257–272, 2012.

[**18**] X. Wang, Z. G. Hou, F. Lv, M. Tan, and Y. Wang, “Mobile robots’ modular navigation controller using spiking neural networks,” Neurocomputing, vol. 134, pp. 230–238, 2014.

[**19**] C. Hausler, M. P. Nawrot, and M. Schmuker, “A spiking neuron classifier network with a deep architecture inspired by the olfactory system of the honeybee,” in 2011 5th International IEEE/EMBS Conference on Neural Engineering, NER 2011, 2011, pp. 198–202.

[**20**] U. Wilensky, “Netlogo,” Evanston IL, USA, 1999. [Online]. Available: http://ccl.northwestern.edu/netlogo/

[**21**] C. Jimenez-Romero and J. Johnson, “Accepted abstract: Simulation of agents and robots controlled by spiking neural networks using netlogo,” in International Conference on Brain Engineering and Neuro-computing, Mykonos, Greece, Oct 2015.

[**22**] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge: Cambridge University Press, 2002.

[**23**] W. Gerstner, R. Kempter, J. L. van Hemmen, and H. Wagner, “A neuronal learning rule for sub-millisecond temporal coding,” Nature, vol. 386, pp. 76–78, 1996.

[**24**] I. P. Pavlov, “Conditioned reflexes: An investigation of the activity of the cerebral cortex,” New York, 1927.

[**25**] E. J. H. Robinson, D. E. Jackson, M. Holcombe, and F. L. W. Ratnieks, “Insect communication: ‘no entry’ signal in ant foraging,” Nature, vol. 438, no. 7067, pp. 442–442, 11 2005. [Online]. Available: http://dx.doi.org/10.1038/438442a

[**26**] E. J. Robinson, D. Jackson, M. Holcombe, and F. L. Ratnieks, “No entry signal in ant foraging (hymenoptera: Formicidae): new insights from an agent-based model,” Myrmecological News, vol. 10, no. 120, 2007.

[**27**] D. Sousa-Rodrigues, J. Louçã, and V. Ramos, “From standard to second-order swarm intelligence phase-space maps,” in 8th European Conference on Complex Systems, S. Thurner, Ed., Vienna, Austria, Sep 2011.

[**28**] V. Ramos, D. Sousa-Rodrigues, and J. Louçã, “Spatio-temporal dynamics on co-evolved stigmergy,” in 8th European Conference on Complex Systems, S. Thurner, Ed., Vienna, Austria, Sep 2011.

[**29**] S. Tisue and U. Wilensky, “Netlogo: A simple environment for modeling complexity,” in International conference on complex systems. Boston, MA, 2004, pp. 16–21.

Figure – Two *simplices* *a* and *b* connected by the 2-dimensional face, the triangle **{1;2;3}**. In the analysis of the time-line of *The Guardian* newspaper (link), the system used feature vectors based on frequency of words and then computed similarity between documents based on those feature vectors. This is a purely statistical approach that requires great computational power and is difficult for problems that have large feature vectors and many documents. Feature vectors with 100,000 or more items are common, and computing similarities between such documents becomes cumbersome. Instead of computing distance (or similarity) matrices between documents from feature vectors, the present approach explores the possibility of inferring the distance between documents from the *Q-analysis* description. Q-analysis gives a very natural notion of connectivity between the *simplices* of the structure; in the relation studied, documents are connected to each other through shared sets of tags entered by the journalists. In this framework, eccentricity is defined as a measure of the relatedness of one simplex in relation to another [7].
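The connectivity and eccentricity notions above can be sketched under the standard Atkin definitions (an assumption about the exact formula used in the paper; the documents and tags below are invented for illustration):

```python
# Q-analysis sketch: each document is a simplex whose vertices are its
# keywords/tags. Two documents are q-near when they share q+1 keywords.
# Atkin's eccentricity: ecc = (q_hat - q_bottom) / (q_bottom + 1),
# where q_hat is the dimension of the simplex and q_bottom its highest
# q-nearness to any other simplex.

docs = {
    "d1": {"politics", "europe", "economy"},
    "d2": {"politics", "europe", "sport"},
    "d3": {"music", "festival", "sport"},
}

def q_near(a, b):
    """Dimension of the shared face between two documents (-1 if none)."""
    return len(docs[a] & docs[b]) - 1

def eccentricity(a):
    q_hat = len(docs[a]) - 1                         # simplex dimension
    q_bot = max(q_near(a, b) for b in docs if b != a)
    return (q_hat - q_bot) / (q_bot + 1)

print(eccentricity("d1"))  # d1 shares a 1-face ({politics, europe}) with d2 → 0.5
```

A low eccentricity means a document is well integrated in the keyword complex; a high one marks a document that stands apart from the rest.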

David M.S. Rodrigues and Vitorino Ramos, “*Traversing News with Ant Colony Optimisation and Negative Pheromones*” [PDF], accepted as preprint for oral presentation at the *European Conference on Complex Systems* – **ECCS’14**, in Lucca, Italy, Sept. 22-26, 2014.

**Abstract**: The past decade has seen the rapid development of the online newsroom. News published online are now the main outlet of news, surpassing traditional printed newspapers. This poses challenges to the production and to the consumption of those news. With so many sources of information available, it is important to find ways to cluster and organise the documents if one wants to understand this new system. Traditional approaches to the problem of clustering documents usually embed the documents in a suitable similarity space. Previous studies have reported on the impact of the similarity measures used for clustering of *textual corpora* [1]. These similarity measures are usually calculated for bag-of-words representations of the documents. This makes the final document-word matrix high dimensional. Feature vectors with more than 10,000 dimensions are common, and algorithms have severe problems with the high dimensionality of the data. A novel bio-inspired approach to the problem of traversing the news is presented. It finds *Hamiltonian cycles* over documents published by the newspaper *The Guardian*. A *Second Order Swarm Intelligence algorithm* based on *Ant Colony Optimisation* was developed [2, 3] that uses a *negative pheromone* to mark unrewarding paths with a “no-entry” signal. This approach follows recent findings of *negative pheromone* usage in real ants [4].

In this case study the corpus of data is represented as a bipartite relation between documents and keywords entered by the journalists to characterise the news. A new similarity measure between documents is presented, based on the **Q-analysis** description [5, 6, 7] of the *simplicial* complex formed between documents and keywords. The eccentricity between documents (two *simplices*) is then used as a novel measure of similarity between documents. The results show that the *Second Order Swarm Intelligence algorithm* performs better in benchmark instances of the travelling salesman problem, with faster convergence and optimal results. The addition of the *negative pheromone* as a *no-entry signal* improves the quality of the results. The application of the algorithm to the corpus of news of *The Guardian* creates a coherent navigation system among the news. This allows users to navigate the news published during a certain period of time in a semantic sequence instead of a time sequence. This work has broader application, as it can be applied to many cases where the data is mapped to bipartite relations (e.g. protein expressions in cells, sentiment analysis, brand awareness in social media, routing problems), as it highlights the connectivity of the underlying complex system.

**Keywords**: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Hamiltonian cycles, Text Mining, Textual Corpora, Information Retrieval, Knowledge Discovery, Sentiment Analysis, Q-Analysis, Data Mining, Journalism, *The Guardian*.

**References**:

[**1**] Alexander Strehl, Joydeep Ghosh, and Raymond Mooney. **Impact of similarity measures on web-page clustering**. In *Workshop on Artificial Intelligence for Web Search* (AAAI 2000), pages 58-64, 2000.

[**2**] David M. S. Rodrigues, Jorge Louçã, and Vitorino Ramos. **From standard to second-order Swarm Intelligence phase-space maps**. In Stefan Thurner, editor, *8th European Conference on Complex Systems*, Vienna, Austria, Sep 2011.

[**3**] Vitorino Ramos, David M. S. Rodrigues, and Jorge Louçã. **Second order Swarm Intelligence**. In Jeng-Shyang Pan, Marios M. Polycarpou, Michał Woźniak, André C.P.L.F. Carvalho, Hector Quintián, and Emilio Corchado, editors, *HAIS’13, 8th International Conference on Hybrid Artificial Intelligence Systems*, volume 8073 of Lecture Notes in Computer Science, pages 411-420. Springer Berlin Heidelberg, Salamanca, Spain, Sep 2013.

[**4**] Elva J.H. Robinson, Duncan Jackson, Mike Holcombe, and Francis L.W. Ratnieks. **No entry signal in ant foraging (Hymenoptera: Formicidae): new insights from an agent-based model**. *Myrmecological News*, 10(120), 2007.

[**5**] Ronald Harry Atkin. **Mathematical Structure in Human Affairs**. Heinemann Educational Publishers, 48 Charles Street, London, 1st edition, 1974.

[**6**] J. H. Johnson. **A survey of Q-analysis, part 1: The past and present**. In *Proceedings of the Seminar on Q-analysis and the Social Sciences*, University of Leeds, Sep 1983.

[**7**] David M. S. Rodrigues. **Identifying news clusters using Q-analysis and modularity**. In Albert Diaz-Guilera, Alex Arenas, and Alvaro Corral, editors, *Proceedings of the European Conference on Complex Systems 2013*, Barcelona, Sep 2013.

In order to solve hard combinatorial optimization problems (e.g. optimally scheduling students and teachers over a weekly plan of several different classes and classrooms), one way is to computationally mimic how ants forage the vicinity of their habitats searching for food. Among a myriad of endless possibilities for the optimal route (minimizing the travel distance), ants collectively make the solution emerge by using stigmergic signal traces, or *pheromones*, which also change dynamically under evaporation.

Current algorithms, however, make use only of a positive-feedback type of *pheromone* along their search: if they collectively visit a good low-distance route (a minimal pseudo-solution to the problem) they tend to reinforce that signal for their colleagues. Nothing wrong with that, on the contrary; but no one knows whether a lower-distance alternative route lies just around the corner. In this global search endeavour, like a snowballing effect, positive feedback tends to credit the exploitation of solutions but not the (also useful) exploration side. The emerging potential solutions can thus crystallize and freeze, while a small change in some parts of the whole route could successfully increase the global result.

Figure – Influence of **negative pheromone** on the *kroA100.tsp* problem (fig. 1, page 6); values on lines represent 1-ALPHA. A typical standard ACS (*Ant Colony System*) is represented here by the line with value 0.0, while better results are found by our approach when using positive feedback (0.95) along with negative feedback (0.05). Not only do we obtain better results, we also find them earlier.

There is, however, an advantage when a second type of *pheromone* (a negative-feedback one) **co-evolves** with the first type, and we decided to research its impact. What we found is that by using a second type of global feedback, we can indeed achieve a faster search while reaching better results. **In a way, it’s like using two different types of evaporative traffic lights, in green and red, co-evolving together**. As a conclusion, we should indeed use a negative no-entry signal pheromone. In small amounts (0.05), but use it. Not only does this prevent the whole system from freezing on some solutions too soon, it also yields a better compromise over the search space of potential routes. The pre-print article is available here at arXiv. The abstract and keywords follow:

*Vitorino Ramos*, *David M. S. Rodrigues*, *Jorge Louçã*, “**Second Order Swarm Intelligence**” [PDF], in *Hybrid Artificial Intelligent Systems*, Lecture Notes in Computer Science, Springer-Verlag, Volume 8073, pp. 411-420, 2013.

**Abstract**: An artificial *Ant Colony System* (ACS) algorithm to solve general-purpose *Combinatorial Optimization Problems* (COP) that extends previous AC models [21] by the inclusion of a negative pheromone is here described. Several *Travelling Salesman Problem* (TSP) instances were used as benchmarks. We show that by using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedbacks achieves better results than single positive-feedback systems. The algorithm was tested against known NP-complete combinatorial optimization problems, running on symmetrical TSPs. We show that the new algorithm compares favourably against these benchmarks, in accordance with recent biological findings by *Robinson* [26,27] and *Grüter* [28], where “no entry” signals and negative feedback allow a colony to quickly reallocate the majority of its foragers to superior food patches. This is the first time an extended ACS algorithm is implemented with these successful characteristics.

**Keywords**: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Travelling Salesman Problems (TSP).
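The co-evolving positive/negative pheromone bookkeeping described above can be sketched as follows (a minimal illustration with assumed parameter names and values, not the exact ACS variant benchmarked in the paper):

```python
# Each edge carries a positive ("go") pheromone and a negative
# ("no-entry") pheromone; both evaporate, so blockades are not
# permanent, and an edge's attractiveness combines them with the
# 0.95 / 0.05 weighting discussed above.

RHO   = 0.1    # evaporation rate, both pheromone types (assumed value)
ALPHA = 0.95   # weight of positive feedback; 1-ALPHA = 0.05 negative

pos = {("A", "B"): 1.0, ("A", "C"): 1.0}   # positive pheromone per edge
neg = {("A", "B"): 0.0, ("A", "C"): 0.0}   # negative no-entry pheromone

def evaporate():
    for e in pos:
        pos[e] *= (1.0 - RHO)
        neg[e] *= (1.0 - RHO)   # the blockade decays too

def reinforce(edge, tour_len):
    pos[edge] += 1.0 / tour_len     # good edge: deposit positive signal

def forbid(edge):
    neg[edge] += 1.0                # unrewarding edge: no-entry marker

def attractiveness(edge):
    return ALPHA * pos[edge] - (1.0 - ALPHA) * neg[edge]

# One ant finds (A,B) on a short tour and (A,C) on a dead-end path:
reinforce(("A", "B"), tour_len=10.0)
forbid(("A", "C"))
evaporate()
print(attractiveness(("A", "B")) > attractiveness(("A", "C")))  # → True
```

The small negative weight is enough to steer later ants away from marked paths without permanently removing them from the search space, since evaporation eventually lifts the blockade.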

Figure – *Hybrid Artificial Intelligent Systems* new LNAI (*Lecture Notes on Artificial Intelligence*) series volume 8073, Springer-Verlag Book [original photo by my colleague David M.S. Rodrigues].

New work, new book. Last week one of our latest works came out, published by Springer. Edited by *Jeng-Shyang Pan*, *Marios M. Polycarpou*, *Emilio Corchado* et al., “**Hybrid Artificial Intelligent Systems**” comprises a full set of new papers on this hybrid area of Intelligent Computing (check the full articles list at Springer). Our new paper “**Second Order Swarm Intelligence**” (pp. 411-420, Springer books link) was published in the *Bio-inspired Models and Evolutionary Computation* section.

Interesting how this **Samuel Beckett** (1906–1989) quote about his work is so close to the research on *Artificial Life* (aLife), as well as to how *Christopher Langton* (link) approached the field in its initial stages, fighting back and forth with his *Lambda parameter* (“*Life emerges at the Edge of Chaos*”) back in the 80’s. According to *Langton*’s findings, at the edge between several ordered states and the chaotic regime (lambda = 0.273) the information passing in the system is maximal, thus ensuring life. Will not wait for *Godot*. Here:

“Beckett was intrigued by chess because of the way it combined the free play of imagination with a rigid set of rules, presenting what the editors of the *Faber Companion to Samuel Beckett* call a “**paradox of freedom and restriction**”. That is a very Beckettian notion: the idea that we are simultaneously free and unfree, capable of beauty yet doomed. Chess, especially in the endgame when the board’s opening symmetry has been wrecked and the courtiers eliminated, represents life reduced to essentials – to a struggle to survive.”(*)

(*) in *Stephen Moss*, “**Samuel Beckett’s obsession with chess: how the game influenced his work**”, *The Guardian*, 29 August 2013. [link]
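Langton’s lambda, discussed above, is simply the fraction of a cellular automaton’s rule-table entries that map to a non-quiescent state. A sketch for an elementary 1-D, 2-state, radius-1 CA (rule 110 is used only as an illustrative example, not one of Langton’s own experiments):

```python
from itertools import product

QUIESCENT = 0

def langton_lambda(rule):
    """Fraction of rule-table entries mapping to a non-quiescent state."""
    non_quiescent = sum(1 for out in rule.values() if out != QUIESCENT)
    return non_quiescent / len(rule)

# Elementary CA rule 110 written out as a rule table.
rule110 = {}
for i, nbhd in enumerate(product([1, 0], repeat=3)):
    # Wolfram numbering: neighbourhood (1,1,1) reads bit 7 of 110, etc.
    rule110[nbhd] = (110 >> (7 - i)) & 1

print(langton_lambda(rule110))  # → 0.625 (5 of 8 entries are live)
```

Langton’s observation was that sweeping lambda from 0 towards 1 moves rule spaces from frozen order through a critical region into chaos, with the most interesting (life-like) dynamics near the transition.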

Four different snapshots (click to enlarge) from one of my latest books, recently published in Japan: *Ajith Abraham*, *Crina Grosan*, *Vitorino Ramos* (Eds.), “*Swarm Intelligence in Data Mining*” (**群知能と データマイニング**), *Tokyo Denki University* press [TDU], Tokyo, Japan, July 2012.

Fig.1 – (click to enlarge) The optimal shortest path among *N*=1265 points depicting a Portuguese *Navalheira* crab, as a result of one of our latest Swarm-Intelligence-based algorithms. The problem of finding the shortest path among *N* different points in space is *NP-hard*, known as the *Travelling Salesman Problem* (*TSP*), and is one of the major and hardest benchmarks in Combinatorial Optimization (link) and Artificial Intelligence. (*V. Ramos*, *D. Rodrigues*, 2012)

This summer my kids grabbed a tiny Portuguese *Navalheira* crab on the shore. After a small photo-session and some baby-sitting with a lettuce leaf, it was time to release it again into the ocean. He not only survived my kids; he is now entitled to a new on-line life on the World Wide Web. After the *Shortest path Sardine* (link) with 1084 points, here is the *Crab* with 1265 points. The algorithm ran for as few as 110 iterations.

Fig. 2 – (click to enlarge) Our 1265 initial points depicting a TSP Portuguese *Navalheira* crab. Could you already envision a minimal tour between all these points?

As usual in the *Travelling Salesman Problem* (*TSP*) we start with a set of points, in our case 1265 points or cities (fig. 2). Given a list of cities and their pairwise distances, the task is to find the *shortest possible tour* that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods.
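The tour-construction task just described can be illustrated with the simplest possible baseline, a greedy nearest-neighbour heuristic (for illustration only; this is not the swarm algorithm that produced the crab tour):

```python
from math import dist  # Euclidean distance, Python 3.8+

def nearest_neighbour_tour(points):
    """Visit each city once, always moving to the closest unvisited one."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    # closed tour: return to the starting city at the end
    return sum(dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

square = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(nearest_neighbour_tour(square))      # → [0, 1, 2, 3]
print(tour_length(square, [0, 1, 2, 3]))   # → 4.0
```

On instances like the 1265-point crab, greedy construction gets stuck in poor local tours; that gap is exactly what stigmergic, pheromone-guided search is meant to close.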

Fig. 3 – (click to enlarge) Again the shortest path *Navalheira* crab, where the optimal contour path (in black: first fig. above) with 1265 points (or cities) was filled in dark orange.

TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept of a city represents, for example, customers, soldering points, or DNA fragments, and the concept of distance represents travelling times, cost, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder.

What follows (fig. 4) is the original crab photo after image segmentation, just before adding *Gaussian* noise in order to retrieve several data points for the initial TSP problem. The algorithm was then fed the extracted *x*,*y* coordinates of these data points (fig. 2) in order for it to discover the minimal path, in just 110 iterations. For extra details, pay a visit to the *Shortest path Sardine* (link) done earlier.

Fig. 4 – (click to enlarge) The original crab photo after some image processing as well as segmentation and just before adding *Gaussian* noise in order to retrieve several data points for the initial TSP problem.

Figure (click to enlarge) – Cover from one of my books published last month (10 July 2012), “*Swarm Intelligence in Data Mining*”, recently translated and edited in Japan (by *Tokyo Denki University press* [TDU]). Cover image from Amazon.co.jp (url). Title was translated into **群知能と データマイニング**. Funny also to see my own name for the first time translated into Japanese – wonder if it’s Kanji. A brief synopsis follows:

(…) **Swarm Intelligence** (SI) is an innovative distributed intelligent paradigm for solving optimization problems that originally took its inspiration from biological examples of swarming, flocking and herding phenomena in vertebrates. *Particle Swarm Optimization* (PSO) incorporates swarming behaviours observed in flocks of birds, schools of fish, or swarms of bees, and even human social behaviour, from which the idea emerged. *Ant Colony Optimization* (ACO) deals with artificial systems inspired by the foraging behaviour of real ants, which are used to solve discrete optimization problems. Historically the notion of finding useful patterns in data has been given a variety of names, including data mining, knowledge discovery, information extraction, etc. Data Mining is an analytic process designed to explore large amounts of data in search of consistent patterns and/or systematic relationships between variables, and then to validate the findings by applying the detected patterns to new subsets of data. In order to achieve this, data mining uses computational techniques from statistics, machine learning and pattern recognition. Data mining and swarm intelligence may seem not to have many properties in common. However, recent studies suggest that they can be used together for several real-world data mining problems, especially when other methods would be too expensive or difficult to implement. This book deals with the application of swarm intelligence methodologies in data mining. Addressing the various issues of swarm intelligence and data mining using different intelligent approaches is the novelty of this edited volume. This volume comprises 11 chapters, including an introductory chapter giving the fundamental definitions and some important research challenges. Chapters were selected on the basis of fundamental ideas/concepts rather than the thoroughness of techniques deployed. (…) (more)

*Complex adaptive systems* (*CAS*), including ecosystems, governments, biological cells, and markets, are characterized by intricate hierarchical arrangements of boundaries and signals. In ecosystems, for example, niches act as semi-permeable boundaries, and smells and visual patterns serve as signals; governments have departmental hierarchies with memoranda acting as signals; and so it is with other CAS. Despite a wealth of data and descriptions concerning different CAS, there remain many unanswered questions about “steering” these systems. In *Signals and Boundaries*, **John Holland** (*Wikipedia* entry) argues that understanding the origin of the intricate signal/border hierarchies of these systems is the key to answering such questions. He develops an overarching framework for comparing and steering CAS through the mechanisms that generate their signal/boundary hierarchies.

*Holland* lays out a path for developing the framework that emphasizes agents, niches, theory, and mathematical models. He discusses, among other topics, theory construction; signal-processing agents; networks as representations of signal/boundary interaction; adaptation; recombination and reproduction; the use of tagged urn models (adapted from elementary probability theory) to represent boundary hierarchies; finitely generated systems as a way to tie the models examined into a single framework; the framework itself, illustrated by a simple finitely generated version of the development of a multi-celled organism; and Markov processes.

in, Introduction to *John H. Holland*, “**Signals and Boundaries – Building Blocks for Complex Adaptive Systems**”, Cambridge, Mass.: MIT Press, 2012.

Video – ** Water has Memory** (from Oasis HD, Canada; link): just a liquid or much more? Many researchers are convinced that water is capable of “memory” by storing information and retrieving it. The possible applications are innumerable: limitless retention and storage capacity and the key to discovering the origins of life on our planet. Research into water is just beginning.

Water capable of processing information, as well as serving as a huge possible “*container*” for data media: that is something remarkable. This theory was first proposed by the late French immunologist Jacques Benveniste, in a controversial article published in 1988 in *Nature*, as a way of explaining how homeopathy works (link). *Benveniste*’s theory has continued to be championed by some and disputed by others. The video clip above, from the Oasis HD Channel, shows some fascinating recent experiments with water “memory” from the *Aerospace Institute of the University of Stuttgart* in Germany. The results with the different types of flowers immersed in water are particularly evocative.

This line of research also reminds me of an old and quite interesting paper by a colleague, *Chrisantha Fernando*. Together with *Sampsa Sojakka*, both proved that waves produced on the surface of water can be used as the medium for a *Wolfgang Maass* “Liquid State Machine” (link) that pre-processes inputs, allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Amazingly, water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. What follows is the abstract of their paper entitled “**Pattern Recognition in a Bucket**”, as well as a PDF link to it:

Figure – Typical wave patterns for the XOR task. Top-Left: [0 1] (right motor on), Top-Right: [1 0] (left motor on), Bottom-Left: [1 1] (both motors on), Bottom-Right: [0 0] (still water). Sobel-filtered and thresholded images on the right. (from Fig. 3 in Chrisantha Fernando and Sampsa Sojakka, “*Pattern Recognition in a Bucket*”, ECAL proc., European Conference on Artificial Life, 2003)

[…] **Abstract**. This paper demonstrates that the waves produced on the surface of water can be used as the medium for a “Liquid State Machine” that pre-processes inputs so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Interference between waves allows non-linear parallel computation upon simultaneous sensory inputs. Temporal patterns of stimulation are converted to spatial patterns of water waves upon which a linear discrimination can be made. Whereas Wolfgang Maass’ Liquid State Machine requires fine tuning of the spiking neural network parameters, water has inherent self-organising properties such as strong local interactions, time-dependent spread of activation to distant areas, inherent stability to a wide variety of inputs, and high complexity. Water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. An analogy is made between water molecules and neurons in a recurrent neural network. […] in *Chrisantha Fernando* and *Sampsa Sojakka*, “*Pattern Recognition in a Bucket*”, ECAL proc., European Conference on Artificial Life, 2003. [PDF link]
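The core idea of the abstract, a fixed non-linear “liquid” feeding a simple linear readout, can be sketched without any water. Here the tank is replaced by a hand-picked polynomial feature map (an assumption for illustration; in the paper the features are Sobel-filtered images of real waves):

```python
# A fixed, untrained non-linear expansion plays the role of the liquid:
# it lifts the two inputs into a space where XOR is linearly separable,
# so a plain perceptron readout can solve it.

def liquid(x1, x2):
    return [1.0, x1, x2, x1 * x2]   # bias + inputs + interaction term

def perceptron(w, feats):
    return 1 if sum(wi * fi for wi, fi in zip(w, feats)) > 0 else 0

# Train only the linear readout, with the classic perceptron rule.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w = [0.0, 0.0, 0.0, 0.0]
for _ in range(20):
    for (x1, x2), target in data:
        feats = liquid(x1, x2)
        err = target - perceptron(w, feats)
        w = [wi + 0.5 * err * fi for wi, fi in zip(w, feats)]

preds = [perceptron(w, liquid(x1, x2)) for (x1, x2), _ in data]
print(preds)  # → [0, 1, 1, 0]
```

The “liquid” is never trained, only the readout is, which is exactly the division of labour between the water tank and the perceptron in the paper.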

Ever tried to solve a problem whose problem statement is itself changing constantly? Have a look at our approach:

*Vitorino Ramos*, *David M.S. Rodrigues*, *Jorge Louçã*, “**Spatio-Temporal Dynamics on Co-Evolved Stigmergy**“, in *European Conference on Complex Systems*, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

**Abstract**: Research over hard **NP**-complete *Combinatorial Optimization Problems* (**COP**’s) has been focused in recent years on several robust bio-inspired meta-heuristics, like those involving *Evolutionary Computation* (**EC**) algorithmic paradigms. One particularly successful well-known meta-heuristic approach is based on *Swarm Intelligence* (**SI**), i.e., the self-organized stigmergic-based property of a complex system whereby the collective behaviors of (unsophisticated) entities interacting locally with their environment cause coherent functional global patterns to emerge. This line of research, recognized as *Ant Colony Optimization* (**ACO**), uses a set of stochastic cooperating ant-like agents to find good solutions, using self-organized stigmergy as an indirect form of communication mediated by artificial pheromone, whereby agents deposit pheromone-signs on the edges of the problem-related graph complex network, encompassing a family of successful algorithmic variations such as *Ant Systems* (**AS**), *Ant Colony Systems* (**ACS**), *Max*–*Min Ant Systems* (*Max*–*Min* **AS**) and **Ant-Q**.

Albeit extremely successful, these algorithms mostly rely on positive feedback, causing excessive algorithmic exploitation over the entire combinatorial search space. This is particularly evident over well-known benchmarks such as the symmetrical *Traveling Salesman Problem* (**TSP**). As these systems comprise a large number of frequently similar components or events, the principal challenge is to understand how the components interact to produce a complex-pattern feasible solution (in our case study, an optimal robust solution for hard **NP**-complete dynamic **TSP**-like combinatorial problems). A suitable approach is to first understand the role of two basic modes of interaction among the components of Self-Organizing (**SO**) Swarm-Intelligent-like systems: positive and negative feedback. Positive feedback promotes a snowballing auto-catalytic effect (e.g. trail pheromone upgrading over the network; exploitation of the search space), taking an initial change in a system and reinforcing that change in the same direction as the initial deviation (self-enhancement and amplification), allowing the entire colony to exploit some past and present solutions (environmental dynamic memory). Negative feedback, such as pheromone evaporation, ensures that the overall learning system does not stall or freeze itself on a particular configuration (innovation; search-space exploration). Although this kind of (global) delayed negative feedback (evaporation) is important, for the many reasons given above, there are strong indications that other negative feedbacks are present in nature, which could also play a role in increased convergence, namely implicit-like negative feedbacks. As in the case of positive feedbacks, there is no reason not to explore increasingly distributed and adaptive algorithmic variations where negative feedback is also imposed implicitly (not only explicitly) over each network edge, while the entire colony seeks better answers in due time.

In order to overcome this hard search-space exploitation-exploration compromise, our present algorithmic approach follows the route of very recent biological findings showing that forager ants lay attractive trail pheromones to guide nest mates to food, but where the effectiveness of foraging networks is improved if pheromones can also be used to repel foragers from unrewarding routes. Increasing empirical evidence for such a negative trail pheromone exists, deployed by Pharaoh’s ants (*Monomorium pharaonis*) as a ‘*no entry*‘ signal to mark unrewarding foraging paths. The new algorithm comprises a second-order approach to Swarm Intelligence, as pheromone-based ‘no entry’ signal cues were introduced, co-evolving with the standard pheromone distributions (collective cognitive maps) in the aforementioned known algorithms.
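The interplay between the two pheromone fields described above can be sketched in code. This is only an illustrative skeleton: the function names, constants, and update rules below are my own simplifications, not the paper's exact equations.

```python
# A minimal sketch of double-pheromone bookkeeping: an attractive trail
# (positive feedback) and a repellent "no entry" trail (negative feedback)
# co-evolving on the same graph edges. All names/constants are illustrative.

RHO_POS, RHO_NEG = 0.1, 0.3   # evaporation rates; the blockade fades faster

def evaporate(tau_pos, tau_neg):
    """Global delayed negative feedback: both fields decay every iteration,
    so no mark (attractive or forbidding) is permanent."""
    for e in tau_pos:
        tau_pos[e] *= 1.0 - RHO_POS
        tau_neg[e] *= 1.0 - RHO_NEG

def reinforce(tau_pos, tau_neg, tour_edges, rewarding, amount=1.0):
    """Deposit after a completed tour: attractive trail if it paid off,
    otherwise a 'no entry' mark (the implicit second-order feedback)."""
    field = tau_pos if rewarding else tau_neg
    for e in tour_edges:
        field[e] += amount

def edge_desirability(tau_pos, tau_neg, e, heuristic=1.0):
    """An agent weighs attraction against repulsion when choosing an edge."""
    return (1.0 + tau_pos[e]) * heuristic / (1.0 + tau_neg[e])
```

With both fields present, a rewarding edge becomes more desirable than a recently forbidden one, while evaporation guarantees the blockade is temporary, which is exactly the qualitative behaviour the text describes.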

To exhaustively test its adaptive response and robustness, we resorted to different dynamic optimization problems. Medium-sized and large-sized dynamic **TSP** problems were created. Settings and parameters such as environmental upgrade frequencies, landscape-changing or network topological speed severity, and types of dynamics were tested. Results prove that the present co-evolved two-type pheromone swarm intelligence algorithm is able to quickly track increasingly swift changes on the dynamic **TSP** complex network, compared to standard algorithms.

**Keywords**: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Dynamical Symmetrical Traveling Salesman Problems (TSP).

Fig. – Recovery times over several dynamical stress tests at the fl1577 TSP problem (1577 node graph) – 460 iter max – Swift changes at every 150 iterations (20% = 314 nodes, 40% = 630 nodes, 60% = 946 nodes, 80% = 1260 nodes, 100% = 1576 nodes). [click to enlarge]

*David M.S. Rodrigues*, *Jorge Louçã*, *Vitorino Ramos*, “**From Standard to Second Order Swarm Intelligence Phase-space Maps**“, in *European Conference on Complex Systems*, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

**Abstract**: Standard *Stigmergic* approaches to *Swarm Intelligence* encompass the use of a set of stochastic cooperating ant-like agents to find optimal solutions, using self-organized *Stigmergy* as an indirect form of communication mediated by a singular artificial pheromone. Agents deposit pheromone-signs on the edges of the problem-related graph, giving rise to a family of successful algorithmic approaches entitled *Ant Systems* (**AS**) and *Ant Colony Systems* (**ACS**), among others. These mainly rely on positive feedback to search for an optimal solution in a large combinatorial space. The present work shows how, using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedback achieves better results than single positive-feedback systems. This follows the route of very recent biological findings showing that forager ants, while laying attractive trail pheromones to guide nest mates to food, also gain foraging effectiveness by the use of pheromones that repel foragers from unrewarding routes. The algorithm presented here takes inspiration precisely from this biological observation.

The new algorithm was exhaustively tested on a series of well-known benchmarks over hard NP-complete *Combinatorial Optimization Problems* (**COP**’s), running on symmetrical *Traveling Salesman Problems* (**TSP**). Different network topologies and stress tests were conducted over small-sized TSP’s (eil51.tsp; eil78.tsp; kroA100.tsp), medium-sized ones (d198.tsp; lin318.tsp; pcb442.tsp; att532.tsp; rat783.tsp), as well as large-sized ones (fl1577.tsp; d2103.tsp) [numbers here referring to the number of nodes in the network]. We show that the new co-evolved stigmergic algorithm compared favorably against the benchmarks. The algorithm was able to equal or greatly improve upon every instance of those standard algorithms, not only within the Swarm-Intelligent **AS** and **ACS** approaches, but also against other computational paradigms like *Genetic Algorithms* (**GA**), *Evolutionary Programming* (**EP**), as well as **SOM** (*Self-Organizing Maps*) and **SA** (*Simulated Annealing*). In order to understand in depth how a second co-evolved pheromone was able to drive the collective system towards such results, a refined phase-space map was produced, mapping the pheromone ratio between a pure *Ant Colony System* (where no negative feedback besides pheromone evaporation is present) and the present second-order approach. The evaporation rates of the different pheromones were also studied, and their influence on the outcomes of the algorithm is shown. A final discussion on the phase map is included. This work has implications for the way large combinatorial problems are addressed, as the double-feedback mechanism shows improvements over single-positive-feedback mechanisms in convergence speed and in final results.

**Keywords**: Stigmergy, Co-Evolution, Self-Organization, Swarm Intelligence, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Traveling Salesman Problems (TSP), phase-space.

Fig. – Comparing convergence results between Standard algorithms vs. Second Order Swarm Intelligence, over TSP fl1577 (click to enlarge).

“**It is very difficult to make good mistakes**“ ~ *Tim Harford*, July 2011.

TED talk (July 2011) by *Tim Harford*, a writer on Economics who studies Complex Systems, exposing a surprising link among the successful ones: they were built through trial and error. In this sparkling talk from *TEDGlobal 2011*, he asks us to embrace our randomness and start making better mistakes [from *TED*]. Instead of the *God complex*, he proposes trial and error, or to be more precise, *Genetic Algorithms* and *Evolutionary Computation* (one of the examples in his talk is indeed the evolutionary optimal design of an airplane nozzle).

Now, we may ask: is it clear from the talk whether the nozzle was computationally designed using evolutionary search, as suggested by the imagery, or was the imagery designed to describe the process in the laboratory? … as a colleague asked me the other day over *Google Plus*. A great question, since I believe it will not be clear to everyone watching that lecture.

It was clear to me from the beginning, though, for one simple reason. That is a well-known work in the *Evolutionary Computation* area, done by one of its pioneers, Professor *Hans-Paul Schwefel* from Germany, in 1974 I believe. Unfortunately, at least to me I must say, *Tim Harford* did not mention the author, nor does he mention in his talk the entire *Evolutionary Computation* or *Genetic Algorithms* area, even if he makes a clear bridge between these concepts and the search for innovation. The optimal nozzle design was in fact produced for the first time in *Schwefel*‘s PhD thesis (“*Adaptive Mechanismen in der Biologischen Evolution und ihr Einfluß auf die Evolutionsgeschwindigkeit*“), and he arrived at these results by using a branch of *Evolutionary Computation* known as *Evolution Strategies* (ES) [here is a Wikipedia entry]. The objective was to achieve maximum thrust, and for that some parameters had to be adjusted, such as the point at which the small aperture should be placed between the two entrances. What follows is a rather old video from *YouTube* on the process:

The animation shows the evolution of a nozzle design from its initial configuration to the final one. After achieving such a design, it was a little difficult to understand why the surprising shape was good, and a team of physicists and engineers gathered to investigate, aiming at devising some explanation for the final nozzle configuration. *Schwefel* (later on with his German group) also investigated the algorithmic features of *Evolution Strategies*, which made possible different generalizations, such as a surplus of offspring created, the use of non-elitist evolution strategies (*the comma selection scheme*), and the use of recombination beyond the well-known mutation operator to generate the offspring. Here are some related links and papers (link).
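For readers curious about the mechanics behind such experiments, the simplest member of the family is the (1+1)-Evolution Strategy: mutate, keep the better of parent and offspring, and adapt the mutation step size. The sketch below uses a rough multiplicative stand-in for the classic 1/5 success rule, and a toy objective function instead of the nozzle's thrust model; the constants are illustrative, not Schwefel's.

```python
import random

def one_plus_one_es(f, x, sigma=1.0, iters=2000, seed=1):
    """Minimal (1+1)-Evolution Strategy (minimization): mutate every
    coordinate with Gaussian noise, keep the better of parent/offspring,
    and adapt the step size sigma on success or failure."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:             # offspring replaces parent (elitist)
            x, fx = child, fc
            sigma *= 1.22        # success: widen the search
        else:
            sigma *= 0.95        # failure: narrow the search
    return x, fx

# Toy objective standing in for the nozzle's thrust evaluation.
sphere = lambda v: sum(vi * vi for vi in v)
best, val = one_plus_one_es(sphere, [5.0, -3.0, 2.0])
print(val)  # far smaller than the starting value of 38.0
```

The success/failure ratio implied by the two factors keeps the acceptance rate near the classical ~1/5, so the step size tracks the shrinking scale of the problem as the search converges.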

Despite these details … I did enjoy the talk a lot, as well as his quote above. There is still a subtle difference between “*trial and error*” and “*evolutionary search*”, even if linked; but when *Tim Harford* makes a connection between *Innovation* and *Evolutionary Computation*, it reminded me of the recent (one decade now, perhaps) work of *David Goldberg* (IlliGAL – Illinois Genetic Algorithms Laboratory). Another founding father of the area, now dedicated to innovation, learning, etc. … much along these precise lines. Mostly his books: (2002) *The Design of Innovation: Lessons from and for Competent Genetic Algorithms*, Kluwer Academic Publishers, and (2006) *The Entrepreneurial Engineer*, Wiley.

Finally, let me add that there are other beautiful examples of Evolutionary Design. The one I love most, however (for several reasons, namely the powerful abstract message that it sends out into other conceptual fields), is this: a simple bridge. Enjoy, and for some seconds do think about your own area of work.

Figure – Application of Mathematical Morphology *openings* and *closing* operators of increasing size on different digital images (from Fig. 2, page 5).

Vitorino Ramos, Pedro Pina, *Exploiting and Evolving R{n} Mathematical Morphology Feature Spaces*, in Ronse Ch., Najman L., Decencière E. (Eds.), *Mathematical Morphology: 40 Years On*, pp. 465-474, **Springer Verlag**, Dordrecht, The Netherlands, 2005.

(**abstract**) A multidisciplinary methodology that goes from the extraction of features to the classification of a set of different Portuguese granites is presented in this paper. The set of tools to extract the features that characterize the polished surfaces of the granites is mainly based on mathematical morphology. The classification methodology is based on a genetic algorithm capable of searching the input feature space used by the nearest-neighbour rule classifier. Results show that it is adequate to perform feature reduction and simultaneously improve the recognition rate. Moreover, the present methodology represents a robust strategy to understand the proper nature of the textures studied and their discriminant features.

(to obtain the respective PDF file follow link above or visit chemoton.org)
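The wrapper approach described in the abstract above (a genetic algorithm searching binary feature masks scored by a nearest-neighbour classifier) can be sketched as follows. The data, operators, and parameters here are illustrative assumptions, not those of the paper.

```python
import random

def loo_1nn_accuracy(X, y, mask):
    """Leave-one-out nearest-neighbour accuracy using only masked features."""
    if not any(mask):
        return 0.0
    hits = 0
    for i in range(len(X)):
        best_d, best_j = float("inf"), -1
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][k] - X[j][k]) ** 2
                    for k in range(len(mask)) if mask[k])
            if d < best_d:
                best_d, best_j = d, j
        hits += (y[best_j] == y[i])
    return hits / len(X)

def ga_feature_select(X, y, n_feats, pop=20, gens=30, seed=3):
    """GA over binary feature masks; fitness is 1-NN leave-one-out accuracy."""
    rng = random.Random(seed)
    population = [[1] * n_feats]                # seed with the full mask
    population += [[rng.randint(0, 1) for _ in range(n_feats)]
                   for _ in range(pop - 1)]
    fit = lambda m: loo_1nn_accuracy(X, y, m)
    for _ in range(gens):
        scored = sorted(population, key=fit, reverse=True)
        parents = scored[: pop // 2]            # elitist truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_feats)
            child = a[:cut] + b[cut:]           # one-point crossover
            child[rng.randrange(n_feats)] ^= 1  # bit-flip mutation
            children.append(child)
        population = parents + children
    return max(population, key=fit)

# Hypothetical toy data: classes separated on feature 0, plus 4 noise features.
rng = random.Random(0)
X = [[i % 2 + rng.gauss(0, 0.1)] + [rng.gauss(0, 1) for _ in range(4)]
     for i in range(20)]
y = [i % 2 for i in range(20)]
best_mask = ga_feature_select(X, y, n_feats=5)
print(best_mask)
```

Because the full feature mask is in the initial population and selection is elitist, the returned mask can only equal or beat the all-features classifier, which is the "feature reduction without losing recognition rate" property the abstract claims.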

The importance of Network Topology again… at [PLoS]. **Abstract**: […] *The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems*. […] (*)

(*) in Hermundstad AM, Brown KS, Bassett DS, Carlson JM, 2011 Learning, Memory, and the Role of Neural Network Architecture. PLoS Comput Biol 7(6): e1002063. doi:10.1371/journal.pcbi.1002063

Fig.1 – (click to enlarge) The optimal shortest path among *N*=1084 points depicting a Portuguese sardine as a result of one of our latest Swarm-Intelligence based algorithms. The problem of finding the shortest path among *N* different points in space is *NP-hard*, known as the *Travelling Salesman Problem* (*TSP*), being one of the major and hardest benchmarks in Combinatorial Optimization (link) and Artificial Intelligence. (*D. Rodrigues*, *V. Ramos*, 2011)

Almost summer time in Portugal, great weather as usual, and the perfect moment to eat sardines along with friends in open-air esplanades; in fact, a lot of grilled sardines. We usually eat grilled sardines with a tomato-onion salad along with barbecued cherry peppers in salt and olive oil. That’s tasty, believe me. But not tasty enough, however, for me and one of my colleagues, *David Rodrigues* (blog link/twitter link). We decided to take this experience a little further, creating the first **shortest path sardine**.

Fig. 2 – (click to enlarge) Our 1084 initial points depicting a TSP Portuguese sardine. Could you already envision a minimal tour between all these points?

As usual in *Travelling Salesman Problems* (*TSP*) we start with a set of points, in our case 1084 points or cities (fig. 2). Given a list of cities and their pairwise distances, the task is to find the *shortest possible tour* that visits each city exactly once. The problem was first formulated as a mathematical problem in 1930 and is one of the most intensively studied problems in optimization, used as a benchmark for many optimization methods. The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept of a city represents, for example, customers, soldering points, or DNA fragments, and the concept of distance represents travelling times or costs, or a similarity measure between DNA fragments. In many applications, additional constraints such as limited resources or time windows make the problem considerably harder. (link)

Fig. 3 – (click to enlarge) A well done and quite grilled shortest path sardine, where the optimal contour path (in blue: first fig. above) with 1084 points was filled in black colour. Nice T-shirt!

Even for toy-problems like the present 1084-point *TSP sardine*, the number of possible paths is incredibly huge. And only one of those possible paths is the optimal (minimal) one. Consider for example a TSP with *N*=4 cities, *A*, *B*, *C*, and *D*. Starting in city *A*, the number of possible paths is 6: that is, 1) A-B, B-C, C-D, D-A; 2) A-B, B-D, D-C, C-A; 3) A-C, C-B, B-D, D-A; 4) A-C, C-D, D-B, B-A; 5) A-D, D-C, C-B, B-A; and finally 6) A-D, D-B, B-C, C-A. I.e., there are (*N*–*1*)! [i.e., *N*–*1 factorial*] possible *paths*. For *N*=3 cities, 2×1=2 possible paths; for *N*=4 cities, 3×2×1=6 possible paths; for *N*=5 cities, 4×3×2×1=24 possible paths; … for *N*=20 cities, 121.645.100.408.832.000 possible paths, and so on.
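The path-count arithmetic above is easy to verify with a short sketch (relying on Python's arbitrary-precision integers):

```python
import math

def tour_count(n_cities):
    """Number of distinct tours from a fixed starting city: (N-1)!."""
    return math.factorial(n_cities - 1)

print(tour_count(4))   # → 6, matching the six A/B/C/D tours enumerated above
print(tour_count(20))  # → 121645100408832000
# The 1084-point sardine: a number with over 2800 digits.
print(len(str(tour_count(1084))))
```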

The most direct solution would be to try all permutations (ordered combinations) and see which one is cheapest (using computational brute-force search). The running time for this approach, however, lies within a polynomial factor of *O*(*n*!), the factorial of the number of cities, so this solution becomes impractical even for only 20 cities. One of the earliest applications of dynamic programming is the *Held–Karp algorithm*, which solves the problem in time *O*(*n*^{2}2^{n}).
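For small instances the Held–Karp recurrence fits in a few lines; the sketch below is illustrative only (real TSP solvers add far more machinery), with a hypothetical 4-city distance matrix:

```python
import itertools

def held_karp(dist):
    """Held-Karp dynamic programme: optimal tour cost in O(n^2 * 2^n) time,
    versus the O(n!) cost of brute-force enumeration."""
    n = len(dist)
    # C[(S, j)]: cheapest cost of a path that starts at city 0, visits
    # exactly the cities in bitmask S (over cities 1..n-1), and ends at j.
    C = {(1 << (j - 1), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in itertools.combinations(range(1, n), size):
            bits = sum(1 << (j - 1) for j in S)
            for j in S:
                prev = bits ^ (1 << (j - 1))
                C[(bits, j)] = min(C[(prev, k)] + dist[k][j]
                                   for k in S if k != j)
    full = (1 << (n - 1)) - 1   # all cities 1..n-1 visited
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

# A toy symmetric 4-city instance; the optimal tour 0-1-2-3-0 costs 26.
dist = [[0, 1, 15, 6],
        [1, 0, 7, 3],
        [15, 7, 0, 12],
        [6, 3, 12, 0]]
print(held_karp(dist))  # → 26
```

The exponential 2^n term still forbids instances anywhere near *N*=1084, which is why heuristics such as the swarm algorithm described here are used instead.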

In our present case (*N*=1084) we had to deal with 1083 factorial possible paths, leading to the astronomical number of 1.19×10^{2818} possible solutions. That’s roughly 1 followed by 2818 zeroes! – better now to check this *Wikipedia* entry on very *large numbers*. Our new *Swarm-Intelligence*-based algorithm, running on a normal PC, was however able to find a minimal solution (fig. 1) within just several minutes. We will soon post more about our novel self-organized stigmergic-based algorithmic approach; meanwhile, if you enjoyed these drawings, do not hesitate to ask us for a grilled cherry pepper as well. We will be pleased to deliver you one by email.

p.s. – This is a joint twin post with *David Rodrigues*.

Fig. 4 – (click to enlarge) Zoom at the end sardine tail optimal contour path (in blue: first fig. above) filled in black, from a total set of 1084 initial points.

No, not a CA (*Cellular Automata*). Instead, this is actually the *Hadamard* code matrix which was embedded on the NASA *Mariner 9* spacecraft, launched in 1971. *Mariner 9* was a NASA space orbiter that helped in the exploration of planet Mars. It was launched toward planet Mars on May 30, 1971 from *Cape Canaveral* Air Force Station and reached the planet on November 13 of the same year, becoming the first spacecraft to orbit another planet — only narrowly beating the Soviet *Mars 2* and *Mars 3*, which both arrived within a month. After months of dust-storms it managed to send back the first clear pictures of the surface (for more do check the *Mariner 9* Wikipedia entry).

*Hadamard* codes (named after *Jacques Hadamard*) are algorithmic systems used for **signal error detection**, **fault detection** and **correction**, probably among the first of their kind to follow the idea of machine *self-regulation* in an *Artificial Intelligence* sense. *Hadamard* codes are in fact just a special case of the more universal *Reed*–*Muller* codes (here for an intro); in particular, the first-order *Reed–Muller* codes are equivalent to *Hadamard* codes. Positively, without them it would have been impossible back then for *Mariner 9* to reach Mars and take stunning images like the one below (among the first human eyes had ever seen). Beyond the geniality of the *Hadamard* code as a primeval approximation to machine self-regulation (details of which are important for my own work), along with its practical utility in different fields of our current daily life, among so many other things, what strikes me is that the *Hadamard* code picture above – in some instances – resembles one of the first photos ever taken of the red planet…
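As an illustration of how such a code works (a sketch, not Mariner 9's actual flight implementation), here is the [32, 6, 16] Hadamard code: 6 data bits select one of the 32 matrix rows plus a sign, and decoding correlates the received word against every row. Since the minimum distance is 16, up to 7 flipped bits can be corrected.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction: H(2m) = [[H, H], [H, -H]], entries +1/-1."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def encode(msg, H):
    """6 data bits -> 32-bit codeword: 5 bits pick a row, 1 bit its sign."""
    word = H[msg & 0x1F].copy()
    if msg & 0x20:                 # sixth bit: complement the row
        word = -word
    return (word < 0).astype(int)  # map +1/-1 to bits 0/1

def decode(bits, H):
    """Correlate against all rows; the largest |correlation| wins.
    Rows are mutually orthogonal, so up to 7 bit flips cannot change
    which row dominates, nor its sign."""
    x = 1 - 2 * np.asarray(bits)   # bits 0/1 back to +1/-1
    corr = H @ x
    row = int(np.argmax(np.abs(corr)))
    return row | (0x20 if corr[row] < 0 else 0)

H = hadamard(32)
word = encode(45, H)
word[:7] ^= 1                      # corrupt 7 of the 32 bits
print(decode(word, H))             # → 45, the message survives the noise
```

An uncorrupted codeword correlates at ±32 with exactly one row and at 0 with the rest; each flipped bit moves every correlation by at most 2, so 7 flips leave the true row at magnitude ≥ 18 against at most 14 for any other, which is why the decoder is exact up to that error weight.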

Image – *Mariner 9* view of the *Noctis Labyrinthus* “labyrinth” at the western end of *Valles Marineris* on Mars. Linear graben, grooves, and crater chains dominate this region, along with a number of flat-topped mesas. The image is roughly 400 km across, centered at 6 S, 105 W, at the edge of the *Tharsis* bulge. North is up. (from Wikipedia)

p.s. – (disclaimer) I did play a lot with the title of this present blog post. From *Hadamard*, to *Hada*-mars, into *Ada*, you know, *Ada Lovelace*, Augusta, that terrific lovely English girl born in the 1800’s. Not my fault. In fact, they were all there on the same space-time voyage.

Fig. – *First Difference Engine*. This impression from a woodcut was printed in 1853 showing a portion of the *Difference Engine* that was built in 1833 by Charles Babbage, an English mathematician, philosopher, inventor, and mechanical engineer who originated the concept of a programmable computer.

“**If all you have is a hammer, everything looks like a nail**“ ~ Abraham Maslow, in “*The Psychology of Science*“, 1966.

“**Propose to an Englishman any principle, or any instrument, however admirable, and you will observe that the whole effort of the English mind is directed to find a difficulty, a defect, or an impossibility in it. If you speak to him of a machine for peeling a potato, he will pronounce it impossible: if you peel a potato with it before his eyes, he will declare it useless, because it will not slice a pineapple. […] Impart the same principle or show the same machine to an American or to one of our Colonists, and you will observe that the whole effort of his mind is to find some new application of the principle, some new use for the instrument.**“ ~ Charles Babbage, quoted in *Richard H. Babbage* (1948), “*The Work of Charles Babbage*“, Annals of the Computation Laboratory of Harvard University, vol. 16.

At the beginning of the 1820’s, *Babbage* worked on a prototype of his first difference engine. Some parts of this prototype still survive in the Museum of the History of Science in Oxford. This prototype evolved into the “*first difference engine*“. It remained unfinished, and the completed fragment is located at the Science Museum in London. This first difference engine would have been composed of around 25.000 parts, weighed around fourteen tons (13.600 kg), and stood 2.4 meters tall, although it was never completed. He later designed an improved version, “Difference Engine No. 2”, which was not constructed until 1989-91, using *Babbage*‘s plans and 19th-century manufacturing tolerances. It performed its first calculation at the London Science Museum, returning results to 31 digits, far more than the average modern pocket calculator. (Check the Charles Babbage Wikipedia entry for more.)

Soon after the attempt at making the difference engine crumbled, *Babbage* started designing a different, more complex machine called **the Analytical Engine** (fig. above). The engine is not a single physical machine but a succession of designs that he tinkered with until his death in 1871. The main difference between the two engines is that the Analytical Engine could be programmed using punched cards. He realized that programs could be put on these cards, so the person had only to create the program initially, and then put the cards in the machine and let it run. It wasn’t until 100 years later that computers came into existence, with *Babbage*‘s work lying mostly ignored. In the late 1930s and 1940s, starting with *Alan Turing*‘s 1936 paper “**On Computable Numbers, with an Application to the Entscheidungsproblem**” (image below), teams in the US and UK began to build workable computers by, essentially, rediscovering what Babbage had seen a century before. Babbage had anticipated the impact of his Engine when he wrote in his memoirs: “*As soon as an Analytical Engine exists, it will necessarily guide the future course of science.*“

“**With an eye for detail and an easy style, Peter Miller explains why swarm intelligence has scientists buzzing**.” — *Steven Strogatz*, author of *Sync*, and Professor of Mathematics, Cornell University.

From the introduction of *Peter Miller*‘s “**Smart Swarms – How Understanding Flocks, Schools and Colonies Can Make Us Better at Communicating, Decision Making and Getting Things Done**“: (…) The modern world may be obsessed with speed and productivity, but twenty-first century humans actually have much to learn from the ancient instincts of swarms. A fascinating new take on the concept of collective intelligence and its colourful manifestations in some of our most complex problems, Smart Swarm introduces a compelling new understanding of the real experts on solving our own complex problems relating to such topics as business, politics, and technology. Based on extensive globe-trotting research, this lively tour from *National Geographic* reporter *Peter Miller* introduces thriving throngs of ant colonies, which have inspired computer programs for streamlining factory processes, telephone networks, and truck routes; termites, used in recent studies for climate-control solutions; schools of fish, on which the U.S. military modelled a team of robots; and many other examples of the wisdom to be gleaned about the behaviour of crowds, among critters and corporations alike. In the tradition of *James Surowiecki*‘s The Wisdom of Crowds and the innovative works of *Malcolm Gladwell*, *Smart Swarm* is an entertaining yet enlightening look at small-scale phenomena with big implications for us all. (…)

(…) What do ants, bees, and birds know that we don’t? How can that give us an advantage? Consider: • Southwest Airlines used virtual ants to determine the best way to board a plane. • The CIA was inspired by swarm behavior to invent a more effective spy network. • Filmmakers studied flocks of birds as models for armies of Orcs in Lord of the Rings battle scenes. • Defense agencies sponsored teams of robots that can sense radioactivity, heat, or a chemical device as easily as a school of fish can locate food. Find out how “smart swarms” can teach us how to make better choices, create stronger networks, and organize our businesses more effectively than we ever thought possible. (…)
