
One way to solve hard combinatorial optimization problems (e.g. optimally scheduling students and teachers across a week of different classes and classrooms) is to computationally mimic how ants forage the vicinity of their habitats searching for food. Out of a myriad of possible routes, ants collectively make the optimal one (minimizing travel distance) emerge by laying stigmergic signal traces, or *pheromones*, which also change dynamically through evaporation.

Current algorithms, however, make use only of a positive-feedback type of *pheromone* during the search: if the ants collectively visit a good low-distance route (a minimal pseudo-solution to the problem), they tend to reinforce that signal for their colleagues. Nothing wrong with that, on the contrary, but no one knows whether a lower-distance alternative route lies just around the corner. Over the global search, like a snowballing effect, positive feedback tends to credit the exploitation of known solutions at the expense of the equally useful exploration side. Potential solutions can thus crystallize and freeze, while a small change in some part of the whole route could instead improve the global result.

Figure – Influence of **negative pheromone** on the *kroA100.tsp* problem (fig. 1 – page 6); values on the lines represent 1-ALPHA. A typical standard ACS (*Ant Colony System*) is represented here by the line with value 0.0, while better results are found by our approach when using positive feedback (0.95) along with negative feedback (0.05). Not only do we obtain better results, we also find them earlier.

There is, however, an advantage when a second type of *pheromone* (a negative-feedback one) **co-evolves** with the first type, and we decided to research its impact. What we found is that by using a second type of global feedback, we can indeed obtain a faster search while achieving better results. **In a way, it’s like using two different types of evaporative traffic lights, green and red, co-evolving together**. Our conclusion: we should indeed use a negative no-entry signal pheromone. In small amounts (0.05), but use it. Not only does this prevent the whole system from freezing on some solutions too soon, it also strikes a better compromise over the search space of potential routes. The pre-print article is available here at arXiv. The abstract and keywords follow:
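The two-trail idea can be sketched in a few lines of Python: a positive trail reinforces edges of good routes, a small negative "no entry" trail marks edges of poor ones, and both evaporate each iteration. All names and parameter values here (`rho`, `deposit`, `neg_weight`) are illustrative assumptions, not the paper's exact update rule.

```python
def update_pheromones(pos, neg, good_edges, bad_edges,
                      rho=0.1, deposit=1.0, neg_weight=0.05):
    """One global update over two edge -> trail-strength dicts."""
    # Evaporation: both trails decay, so no configuration freezes for good.
    for e in pos:
        pos[e] *= (1.0 - rho)
    for e in neg:
        neg[e] *= (1.0 - rho)
    # Positive feedback: reinforce edges of the best route found so far.
    for e in good_edges:
        pos[e] = pos.get(e, 0.0) + deposit
    # Negative feedback: lightly mark edges of unrewarding routes.
    for e in bad_edges:
        neg[e] = neg.get(e, 0.0) + neg_weight * deposit

def edge_desirability(pos, neg, e):
    """Net attractiveness of an edge: reward minus the 'no entry' signal."""
    return pos.get(e, 0.0) - neg.get(e, 0.0)
```

Note the small `neg_weight` default, echoing the 0.05 proportion mentioned above: the repellent signal is a gentle nudge away from bad edges, not a hard ban.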

*Vitorino Ramos*, *David M. S. Rodrigues*, *Jorge Louçã*, “**Second Order Swarm Intelligence**” [PDF], in *Hybrid Artificial Intelligent Systems*, Lecture Notes in Computer Science, Springer-Verlag, Volume 8073, pp. 411-420, 2013.

**Abstract**: An artificial *Ant Colony System* (ACS) algorithm to solve general-purpose *Combinatorial Optimization Problems* (COP), which extends previous AC models [21] by the inclusion of a negative pheromone, is described here. Several *Travelling Salesman Problems* (TSP) were used as benchmarks. We show that by using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedbacks achieves better results than single positive-feedback systems. The algorithm was tested against known NP-complete combinatorial optimization problems, running on symmetrical TSPs. We show that the new algorithm compares favourably against these benchmarks, in accordance with recent biological findings by *Robinson* [26,27] and *Grüter* [28], where “no entry” signals and negative feedback allow a colony to quickly reallocate the majority of its foragers to superior food patches. This is the first time an extended ACS algorithm is implemented with these successful characteristics.

**Keywords**: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Travelling Salesman Problems (TSP).

Figure – *Hybrid Artificial Intelligent Systems*, the new LNAI (*Lecture Notes in Artificial Intelligence*) series volume 8073, Springer-Verlag book [original photo by my colleague David M.S. Rodrigues].

New work, new book. Last week one of our latest works came out, published by Springer. Edited by *Jeng-Shyang Pan*, *Marios M. Polycarpou*, *Emilio Corchado* et al., “**Hybrid Artificial Intelligent Systems**” comprises a full set of new papers in this hybrid area of Intelligent Computing (check the full article list at Springer). Our new paper “**Second Order Swarm Intelligence**” (pp. 411-420, Springer books link) was published in the *Bio-inspired Models and Evolutionary Computation* section.

Ever tried to solve a problem whose problem statement is itself constantly changing? Have a look at our approach:

*Vitorino Ramos*, *David M.S. Rodrigues*, *Jorge Louçã*, “**Spatio-Temporal Dynamics on Co-Evolved Stigmergy**”, in *European Conference on Complex Systems*, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

**Abstract**: Research on hard **NP**-complete *Combinatorial Optimization Problems* (**COP**s) has focused in recent years on several robust bio-inspired meta-heuristics, like those involving *Evolutionary Computation* (**EC**) algorithmic paradigms. One particularly successful, well-known meta-heuristic approach is based on *Swarm Intelligence* (**SI**), i.e., the self-organized stigmergic-based property of a complex system whereby the collective behaviours of (unsophisticated) entities interacting locally with their environment cause coherent functional global patterns to emerge. This line of research, recognized as *Ant Colony Optimization* (**ACO**), uses a set of stochastic cooperating ant-like agents to find good solutions, using self-organized stigmergy as an indirect form of communication mediated by artificial pheromone, whereby agents deposit pheromone-signs on the edges of the problem-related graph complex network, encompassing a family of successful algorithmic variations such as *Ant Systems* (**AS**), *Ant Colony Systems* (**ACS**), *Max-Min Ant Systems* (*Max*-*Min* **AS**) and **Ant-Q**.

Albeit extremely successful, these algorithms mostly rely on positive feedbacks, causing excessive algorithmic exploitation of the entire combinatorial search space. This is particularly evident on well-known benchmarks such as the symmetrical *Traveling Salesman Problem* (**TSP**). Since these systems are composed of a large number of frequently similar components or events, the principal challenge is to understand how the components interact to produce a complex-pattern feasible solution (in our case study, an optimal robust solution for hard **NP**-complete dynamic **TSP**-like combinatorial problems). A suitable approach is to first understand the role of two basic modes of interaction among the components of Self-Organizing (**SO**) Swarm-Intelligent-like systems: positive and negative feedback. While positive feedback promotes a snowballing auto-catalytic effect (e.g. trail pheromone upgrading over the network; exploitation of the search space), taking an initial change in a system and reinforcing that change in the same direction as the initial deviation (self-enhancement and amplification), allowing the entire colony to exploit some past and present solutions (environmental dynamic memory), negative feedback such as pheromone evaporation ensures that the overall learning system does not stabilize or freeze itself on a particular configuration (innovation; search space exploration). Although this kind of (global) delayed negative feedback (evaporation) is important, for the many reasons given above, there are strong indications that other negative feedbacks are present in nature, which could also play a role in increased convergence, namely implicit-like negative feedbacks. As in the case of positive feedbacks, there is no reason not to explore increasingly distributed and adaptive algorithmic variations where negative feedback is also imposed implicitly (not only explicitly) over each network edge, while the entire colony seeks better answers in due time.
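The exploitation-exploration tension described above can be illustrated with a toy edge-choice rule: an ACS-style product of pheromone strength and heuristic desirability, here divided by a repellent trail so that an implicit negative feedback steers ants away from marked edges. This is a sketch under assumed parameter names (`alpha`, `beta`, `gamma`), not the paper's exact transition rule.

```python
import random

def choose_next_city(current, unvisited, pos, neg, dist,
                     alpha=1.0, beta=2.0, gamma=1.0, rng=random):
    """Pick the next city with probability ~ tau^alpha * eta^beta / rep^gamma."""
    weights = []
    for j in unvisited:
        e = (min(current, j), max(current, j))   # undirected edge key
        tau = pos.get(e, 1e-6) ** alpha          # attractive trail
        eta = (1.0 / dist[e]) ** beta            # heuristic: shorter is better
        rep = (1.0 + neg.get(e, 0.0)) ** gamma   # repellent "no entry" trail
        weights.append(tau * eta / rep)
    r = rng.random() * sum(weights)              # roulette-wheel selection
    for j, w in zip(unvisited, weights):
        r -= w
        if r <= 0:
            return j
    return unvisited[-1]
```

With `neg` empty this reduces to a standard pheromone-plus-heuristic choice; a strong repellent mark on an edge shifts probability mass toward the alternatives without forbidding the edge outright.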

To overcome this hard exploitation-exploration compromise over the search space, our present algorithmic approach follows the route of very recent biological findings showing that forager ants lay attractive trail pheromones to guide nest mates to food, but that the effectiveness of foraging networks is improved if pheromones can also be used to repel foragers from unrewarding routes. Increasing empirical evidence exists for such a negative trail pheromone, deployed by Pharaoh’s ants (*Monomorium pharaonis*) as a ‘*no entry*‘ signal to mark unrewarding foraging paths. The new algorithm comprises a second-order approach to Swarm Intelligence, as pheromone-based no-entry signal cues were introduced, co-evolving with the standard pheromone distributions (collective cognitive maps) in the aforementioned known algorithms.

To exhaustively test its adaptive response and robustness, we resorted to different dynamic optimization problems. Medium-sized and large-sized dynamic **TSP** problems were created. Settings and parameters such as environmental update frequency, severity of landscape or network-topology change, and type of dynamics were tested. Results prove that the present co-evolved two-pheromone swarm intelligence algorithm is able to quickly track increasingly swift changes on the dynamic **TSP** complex network, compared to standard algorithms.
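As a rough illustration of such dynamic stress tests, the sketch below relocates a chosen fraction of cities, one simple way to produce the kind of landscape change described (e.g. 20% of nodes moved every 150 iterations). The function name and uniform-square relocation are assumptions; the paper's actual dynamic-TSP generator may differ.

```python
import random

def perturb_cities(coords, fraction, rng):
    """Return a copy of `coords` with `fraction` of the cities relocated."""
    coords = list(coords)
    k = int(len(coords) * fraction)
    for i in rng.sample(range(len(coords)), k):
        coords[i] = (rng.random(), rng.random())  # new random position
    return coords
```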

**Keywords**: Self-Organization, Stigmergy, Co-Evolution, Swarm Intelligence, Dynamic Optimization, Foraging, Cooperative Learning, Combinatorial Optimization problems, Dynamical Symmetrical Traveling Salesman Problems (TSP).

Fig. – Recovery times over several dynamical stress tests at the fl1577 TSP problem (1577 node graph) – 460 iter max – Swift changes at every 150 iterations (20% = 314 nodes, 40% = 630 nodes, 60% = 946 nodes, 80% = 1260 nodes, 100% = 1576 nodes). [click to enlarge]

*David M.S. Rodrigues*, *Jorge Louçã*, *Vitorino Ramos*, “*From Standard to Second Order Swarm Intelligence Phase-space Maps*“, in *European Conference on Complex Systems*, ECCS’11, Vienna, Austria, Sept. 12-16 2011.

**Abstract**: Standard *stigmergic* approaches to *Swarm Intelligence* encompass the use of a set of stochastic cooperating ant-like agents to find optimal solutions, using self-organized *stigmergy* as an indirect form of communication mediated by a single artificial pheromone. Agents deposit pheromone-signs on the edges of the problem-related graph, giving rise to a family of successful algorithmic approaches entitled *Ant Systems* (**AS**) and *Ant Colony Systems* (**ACS**), among others. These mainly rely on positive feedbacks to search for an optimal solution in a large combinatorial space. The present work shows how, using two different sets of pheromones, a second-order co-evolved compromise between positive and negative feedbacks achieves better results than single positive-feedback systems. This follows the route of very recent biological findings showing that forager ants, while laying attractive trail pheromones to guide nest mates to food, also gain foraging effectiveness from pheromones that repel foragers from unrewarding routes. The algorithm presented here takes inspiration precisely from this biological observation.

The new algorithm was exhaustively tested on a series of well-known benchmarks of hard NP-complete *Combinatorial Optimization Problems* (**COP**s), running on symmetrical *Traveling Salesman Problems* (**TSP**). Different network topologies and stress tests were conducted on small-sized TSPs (eil51.tsp; eil78.tsp; kroA100.tsp), medium-sized ones (d198.tsp; lin318.tsp; pcb442.tsp; att532.tsp; rat783.tsp), as well as large-sized ones (fl1577.tsp; d2103.tsp) [numbers here refer to the number of nodes in the network]. We show that the new co-evolved stigmergic algorithm compared favorably against these benchmarks. The algorithm was able to equal or substantially improve on every instance of those standard algorithms, not only within the Swarm Intelligence **AS** and **ACS** approaches, but also against other computational paradigms like *Genetic Algorithms* (**GA**), *Evolutionary Programming* (**EP**), as well as **SOM** (*Self-Organizing Maps*) and **SA** (*Simulated Annealing*). To understand in depth how a second co-evolved pheromone helped drive the collective system to such results, a refined phase-space map was produced, mapping the pheromone ratio between a pure *Ant Colony System* (where no negative feedback besides pheromone evaporation is present) and the present second-order approach. The evaporation rates of the different pheromones were also studied, and their influence on the outcomes of the algorithm is shown. A final discussion of the phase map is included. This work has implications for the way large combinatorial problems are addressed, as the double-feedback mechanism shows improvements over single positive-feedback mechanisms in terms of convergence speed and final results.
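A per-edge pheromone ratio of the kind that underlies such phase-space maps can be sketched minimally: divide the negative trail by the positive one on each edge to summarize where the repellent signal concentrates. The exact quantity plotted in the paper may differ; the function name and `eps` guard are assumptions.

```python
def pheromone_ratio(pos, neg, eps=1e-12):
    """Map each edge carrying positive trail to its neg/pos ratio."""
    return {e: neg.get(e, 0.0) / (p + eps)
            for e, p in pos.items() if p > 0.0}
```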

**Keywords**: Stigmergy, Co-Evolution, Self-Organization, Swarm Intelligence, Foraging, Cooperative Learning, Combinatorial Optimization problems, Symmetrical Traveling Salesman Problems (TSP), phase-space.

Fig. – Comparing convergence results between Standard algorithms vs. Second Order Swarm Intelligence, over TSP fl1577 (click to enlarge).
