
Figure – Complete circuit diagram with pheromone: neural circuit controller of the virtual ant (page 3, fig. 2). Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos. [URL: http://arxiv.org/abs/1507.08467]

Intelligence and decision-making in foraging ants: individual or collective? Internal or external? What is the right balance between the two? Can one have internal intelligence without external intelligence? Can one take examples from nature to build in silico artificial lives that present us with interesting patterns? We explore a model of foraging ants in this paper, to be presented in early September at UKCI 2015 in Exeter, UK (available on arXiv [PDF] and ResearchGate).

Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos; “A Model for Foraging Ants, Controlled by Spiking Neural Networks and Double Pheromones”, UKCI 2015 Computational Intelligence – University of Exeter, UK, September 2015.

Abstract: A model of an Ant System where ants are controlled by a spiking neural circuit and a second-order pheromone mechanism in a foraging task is presented. A neural circuit is trained for individual ants, and subsequently the ants are exposed to a virtual environment where a swarm of ants performs a resource foraging task. The model comprises an associative and unsupervised learning strategy for the neural circuit of the ant. The neural circuit adapts to the environment by means of classical conditioning. The initially unknown environment includes different types of stimuli representing food (rewarding) and obstacles (harmful) which, when they come in direct contact with the ant, elicit a reflex response in the motor neural system of the ant: moving towards or away from the source of the stimulus. The spiking neural circuit of the ant is trained to identify food and obstacles and to move towards the former while avoiding the latter. The ants are released on a landscape with multiple food sources where one ant alone would have difficulty harvesting the landscape to maximum efficiency. In this case the introduction of a double pheromone mechanism (positive and negative reinforcement feedback) yields better results than traditional ant colony optimization strategies. Traditional ant systems include mainly a positive reinforcement pheromone. This approach uses a second pheromone that acts as a marker for forbidden paths (negative feedback). This blockade is not permanent and is controlled by the evaporation rate of the pheromones. The combined action of both pheromones acts as a collective stigmergic memory of the swarm, which reduces the search space of the problem. This paper explores how the adaptation and learning abilities observed in biologically inspired cognitive architectures are synergistically enhanced by swarm optimization strategies. The model portrays two forms of artificial intelligent behaviour: at the individual level the spiking neural network is the main controller, and at the collective level the pheromone distribution is a map towards the solution that emerged from the colony. The presented model is an important pedagogical tool, as it is also an easy-to-use library that gives access to the spiking neural network paradigm from inside NetLogo, a language used mostly in agent-based modelling and experimentation with complex systems.
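
The central mechanism here, the double pheromone field, is easy to prototype outside of the paper's own NetLogo library. The short Python sketch below is only an illustration of that idea under assumed parameters (grid size, deposit amounts, evaporation rate and the greedy neighbour choice are placeholders of mine, not the authors' values); in the actual model the step decision is made by the ant's trained spiking circuit, not by an argmax.

```python
# Minimal sketch of a double-pheromone (positive + negative) stigmergic field.
# Illustrative only -- NOT the authors' NetLogo implementation. Grid size,
# deposit amounts, evaporation rate and the greedy step rule are assumptions.
import numpy as np

class DoublePheromoneGrid:
    def __init__(self, size=50, evaporation=0.05):
        self.size = size
        self.evaporation = evaporation
        self.food = np.zeros((size, size))      # positive feedback: rewarded paths
        self.no_entry = np.zeros((size, size))  # negative feedback: "forbidden" paths

    def reward(self, x, y, amount=1.0):
        """An ant found food here (or is returning with it): reinforce the cell."""
        self.food[x, y] += amount

    def forbid(self, x, y, amount=1.0):
        """Unrewarding or obstructed cell: mark it so other ants avoid it."""
        self.no_entry[x, y] += amount

    def evaporate(self):
        """Both fields decay every step, so the negative blockade is not permanent."""
        self.food *= 1.0 - self.evaporation
        self.no_entry *= 1.0 - self.evaporation

    def signal(self, x, y):
        """Combined stigmergic signal an ant reads when choosing where to step."""
        return self.food[x, y] - self.no_entry[x, y]

def choose_step(grid, x, y, rng):
    """Greedy placeholder for the ant's controller: pick the 8-neighbour cell
    with the strongest combined signal (ties broken at random)."""
    cells = [((x + dx) % grid.size, (y + dy) % grid.size)
             for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    scores = np.array([grid.signal(cx, cy) for cx, cy in cells])
    best = np.flatnonzero(scores == scores.max())
    return cells[rng.choice(best)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = DoublePheromoneGrid()
    grid.reward(10, 10)   # pretend an ant found food at (10, 10)
    grid.forbid(10, 12)   # and bumped into an obstacle at (10, 12)
    for _ in range(20):
        grid.evaporate()
    print(choose_step(grid, 10, 11, rng))  # drawn towards (10, 10), away from (10, 12)
```

The point of the two decaying fields is the one made in the abstract: together they act as a collective, temporary stigmergic memory that shrinks the colony's search space, while the individual intelligence sits in the spiking controller.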

Surfaces and Essences – Hofstadter & Sander, 2013

[…] Analogy is the core of all thinking. – This is the simple but unorthodox premise that Pulitzer Prize-winning author Douglas Hofstadter and French psychologist Emmanuel Sander defend in their new work. Hofstadter has been grappling with the mysteries of human thought for over thirty years. Now, with his trademark wit and special talent for making complex ideas vivid, he has partnered with Sander to put forth a highly novel perspective on cognition. We are constantly faced with a swirling and intermingling multitude of ill-defined situations. Our brain’s job is to try to make sense of this unpredictable, swarming chaos of stimuli. How does it do so? The ceaseless hail of input triggers analogies galore, helping us to pinpoint the essence of what is going on. Often this means the spontaneous evocation of words, sometimes idioms, sometimes the triggering of nameless, long-buried memories.

Why did two-year-old Camille proudly exclaim, “I undressed the banana!”? Why do people who hear a story often blurt out, “Exactly the same thing happened to me!” when it was a completely different event? How do we recognize an aggressive driver from a split-second glance in our rear-view mirror? What in a friend’s remark triggers the offhand reply, “That’s just sour grapes”?  What did Albert Einstein see that made him suspect that light consists of particles when a century of research had driven the final nail in the coffin of that long-dead idea? The answer to all these questions, of course, is analogy-making – the meat and potatoes, the heart and soul, the fuel and fire, the gist and the crux, the lifeblood and the wellsprings of thought. Analogy-making, far from happening at rare intervals, occurs at all moments, defining thinking from top to toe, from the tiniest and most fleeting thoughts to the most creative scientific insights.

Like Gödel, Escher, Bach before it, Surfaces and Essences will profoundly enrich our understanding of our own minds. By plunging the reader into an extraordinary variety of colorful situations involving language, thought, and memory, by revealing bit by bit the constantly churning cognitive mechanisms normally completely hidden from view, and by discovering in them one central, invariant core – the incessant, unconscious quest for strong analogical links to past experiences – this book puts forth a radical and deeply surprising new vision of the act of thinking. […] intro to “Surfaces and Essences – Analogy as the fuel and fire of thinking” by Douglas Hofstadter and Emmanuel Sander, Basic Books, NY, 2013 [link] (to be released May 1, 2013).

The importance of Network Topology again… at [PLoS]. Abstract: […] The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems. […] (*)

(*) in Hermundstad AM, Brown KS, Bassett DS, Carlson JM, 2011 Learning, Memory, and the Role of Neural Network Architecture. PLoS Comput Biol 7(6): e1002063. doi:10.1371/journal.pcbi.1002063
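
The wide-versus-deep trade-off described in the abstract can be probed with a deliberately tiny experiment. The Python sketch below is my own toy, not the Hermundstad et al. setup: it fits the same 1-D target with a wide single-hidden-layer ("parallel-like") network and a narrower two-hidden-layer ("layered-like") network of comparable parameter count, using a crude finite-difference gradient descent, and prints the approximation error after a short and a longer training budget. Architectures, target function, learning rate and step counts are all illustrative assumptions.

```python
# Toy probe of the parallel (wide/shallow) vs layered (narrow/deep) trade-off.
# NOT the Hermundstad et al. (2011) experimental setup: architectures, target
# function, optimiser and budgets below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-1.0, 1.0, 64)
Y = np.sin(np.pi * X)                      # assumed 1-D target function

def n_params(sizes):
    return sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))

def mlp(params, sizes, x):
    """Evaluate a tanh MLP with layer widths `sizes` on scalar inputs x."""
    h = x.reshape(-1, 1)
    i = 0
    for a, b in zip(sizes[:-1], sizes[1:]):
        W = params[i:i + a * b].reshape(a, b); i += a * b
        c = params[i:i + b]; i += b
        h = h @ W + c
        if b != sizes[-1]:                 # hidden layers are nonlinear
            h = np.tanh(h)
    return h.ravel()

def loss(params, sizes):
    return float(np.mean((mlp(params, sizes, X) - Y) ** 2))

def train(sizes, steps, lr=0.1, eps=1e-4):
    """Crude finite-difference gradient descent: slow, but dependency-free."""
    p = 0.5 * rng.standard_normal(n_params(sizes))
    for _ in range(steps):
        base = loss(p, sizes)
        grad = np.zeros_like(p)
        for j in range(p.size):
            q = p.copy(); q[j] += eps
            grad[j] = (loss(q, sizes) - base) / eps
        p -= lr * grad
    return loss(p, sizes)

parallel = [1, 16, 1]    # wide, single hidden layer ("parallel-like"), 49 params
layered  = [1, 5, 5, 1]  # narrower but deeper ("layered-like"), 46 params

for steps in (50, 500):
    print(f"steps={steps:4d}  parallel MSE={train(parallel, steps):.4f}  "
          f"layered MSE={train(layered, steps):.4f}")
```

In the paper's terms, the interesting comparison is not which network wins outright but how the error behaves as the training budget changes; the real study goes further and characterises local error-landscape curvature, which this sketch does not attempt.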

Animated Video – this lively RSA Animate [April 2010], adapted from Dan Pink's talk at the RSA (below), illustrates the hidden truths behind what really motivates us at home and in the workplace. [Inspired by the work of economics professor Dan Ariely at MIT, along with his colleagues.]

What drives us? Some quotes: […] Once the task called for even rudimentary COGNITIVE skills, a larger reward led to poorer performance […] Once you get above rudimentary cognitive skills, rewards do not work that way [linearly]; this defies the laws of behavioural physics! […] But when a task gets more complicated and requires some conceptual, creative thinking, these kinds of motivators do not work any more […] Higher incentives led to worse performance. […] Fact: money is a motivator, in a strange way. If you don't pay enough, people won't be motivated. But now there is another paradox. The best use of money is to pay people enough to take the issue of money off the table. […] …Socialism…??

[…] Most upper-management and sales force personnel, as well as workers in many other jobs, are paid based on performance, which is widely perceived as motivating effort and enhancing productivity relative to non-contingent pay schemes. However, psychological research suggests that excessive rewards can in some cases produce supra-optimal motivation, resulting in a decline in performance. To test whether very high monetary rewards can decrease performance, we conducted a set of experiments at MIT, the University of Chicago, and rural India. Subjects in our experiment worked on different tasks and received performance-contingent payments that varied in amount from small to large relative to their typical levels of pay. With some important exceptions, we observed that high reward levels can have detrimental effects on performance. […] abstract, Dan Ariely, Uri Gneezy, George Loewenstein, and Nina Mazar, “Large Stakes and Big Mistakes“, Federal Reserve Bank of Boston Working paper no. 05-11, Research Center for Behavioral Economics and Decision-Making, US, July 2005. [PDF available here] (improved 2009 version below)

Video lecture – On the surprising science of motivation: analyst Daniel Pink examines the puzzle of motivation [Jul. 2009], starting with a fact that social scientists know but most managers don't: traditional rewards aren't always as effective as we think. So maybe there is a different way forward. [Inspired by the work of economics professor Dan Ariely at MIT, along with his colleagues.]

[…] Performance-based payment is commonplace across many jobs in the marketplace. Many, if not most, upper-management and sales force personnel, and workers in a wide variety of other jobs, are rewarded for their effort based on observed measures of performance. The intuitive logic for performance-based compensation is to motivate individuals to increase their effort, and hence their output, and indeed there is some evidence that payment for performance can increase performance (Lazear, 2000). The expectation that increasing performance-contingent incentives will improve performance rests on two subsidiary assumptions: (1) that increasing performance-contingent incentives will lead to greater motivation and effort, and (2) that this increase in motivation and effort will result in improved performance. The first assumption, that transitory performance-based increases in pay will produce increased motivation and effort, is generally accepted, although there are some notable exceptions. Gneezy and Rustichini (2000a), for example, have documented situations, both in laboratory and field experiments, in which people who were not paid at all exerted greater effort than those who were paid a small amount (see also Gneezy and Rustichini, 2000b; Frey and Jegen, 2001; Heyman and Ariely, 2004). These results show that in some situations paying a small amount, in comparison to paying nothing, seems to change the perceived nature of the task, which, if the amount of pay is not substantial, may result in a decline of motivation and effort.

Another situation in which effort may not respond in the expected fashion to a change in transitory wages is when workers have an earnings target that they apply narrowly. For example, Camerer, Babcock, Loewenstein and Thaler (1997) found that New York City cab drivers quit early on days when their hourly earnings were high and worked longer hours when their earnings were low. The authors speculated that the cab drivers may have had a daily earnings target beyond which their motivation to continue working dropped off. Although there appear to be exceptions to the generality of the positive relationship between pay and effort, our focus in this paper is on the second assumption – that an increase in motivation and effort will result in improved performance. The experiments we report address the question of whether increased effort necessarily leads to improved performance. Providing subjects with different levels of incentives, including incentives that were very high relative to their normal income, we examine whether, across a variety of different tasks, an increase in contingent pay leads to an improvement or decline in performance. We find that in some cases, and in fact most of the cases we examined, very high incentives result in a decrease in performance. These results provide a counterexample to the assumption that an increase in motivation and effort will always result in improved performance. […] in Dan Ariely, Uri Gneezy, George Loewenstein, and Nina Mazar, “Large Stakes and Big Mistakes“, Review of Economic Studies (2009) 75, 1-19 0034-6527/09. [PDF available here]

Now, these are not stories, these are facts. These are one of the most robust findings in social science… yet one of the most ignored [sic]. And they keep coming in. Such as the fallacy of the supply and demand model (March 2008). Anyway, enough good material (a simple paper with profound implications)… for one day. But hey… if you are still wondering what other paper inspired the specific drawings from minute 7:40 onwards in the first video of this post, well, here it is: Kristina Shampan'er and Dan Ariely (2007), “How Small is Zero Price? The True Value of Free Products”, Marketing Science, Vol. 26, No. 6, pp. 742–757. [PDF available here]. Got it?!

[...] People should learn how to play Lego with their minds. Concepts are building bricks [...] V. Ramos, 2002.
