2016 – As of now, a total of 1,567 citations across 74 works (including 3 books) on GOOGLE SCHOLAR (https://scholar.google.com/citations?user=gSyQ-g8AAAAJ&hl=en) [with a Hirsch h-index of 19 and an average of 160.2 citations for each of my top five works], plus 900 citations across 57 works on the new RESEARCH GATE site (https://www.researchgate.net/profile/Vitorino_Ramos).
Refs.: Science, Artificial Intelligence, Swarm Intelligence, Data-Mining, Big-Data, Evolutionary Computation, Complex Systems, Image Analysis, Pattern Recognition, Data Analysis.
Figure – Neural circuit controller of the virtual ant (page 3, fig. 2). [URL: http://arxiv.org/abs/1507.08467 ]
Intelligence and decision-making in foraging ants: individual or collective? Internal or external? What is the right balance between the two? Can one have internal intelligence without external intelligence? Can one take examples from nature to build in silico artificial lives that present us with interesting patterns? We explore a model of foraging ants in this paper, to be presented in early September in Exeter, UK, at UKCI 2015 (available on arXiv [PDF] and ResearchGate).
Cristian Jimenez-Romero, David Sousa-Rodrigues, Jeffrey H. Johnson, Vitorino Ramos; “A Model for Foraging Ants, Controlled by Spiking Neural Networks and Double Pheromones”, UKCI 2015 Computational Intelligence – University of Exeter, UK, September 2015.
Abstract: A model of an Ant System where ants are controlled by a spiking neural circuit and a second-order pheromone mechanism in a foraging task is presented. A neural circuit is trained for individual ants and subsequently the ants are exposed to a virtual environment where a swarm of ants performs a resource foraging task. The model comprises an associative and unsupervised learning strategy for the neural circuit of the ant. The neural circuit adapts to the environment by means of classical conditioning. The initially unknown environment includes different types of stimuli representing food (rewarding) and obstacles (harmful) which, when they come in direct contact with the ant, elicit a reflex response in the motor neural system of the ant: moving towards or away from the source of the stimulus. The spiking neural circuit of the ant is trained to identify food and obstacles and to move towards the former while avoiding the latter. The ants are released on a landscape with multiple food sources where one ant alone would have difficulty harvesting the landscape to maximum efficiency. In this case the introduction of a double pheromone mechanism (positive and negative reinforcement feedback) yields better results than traditional ant colony optimization strategies. Traditional ant systems include mainly a positive reinforcement pheromone. This approach uses a second pheromone that acts as a marker for forbidden paths (negative feedback). This blockade is not permanent and is controlled by the evaporation rate of the pheromones. The combined action of both pheromones acts as a collective stigmergic memory of the swarm, which reduces the search space of the problem. This paper explores how the adaptation and learning abilities observed in biologically inspired cognitive architectures are synergistically enhanced by swarm optimization strategies. The model portrays two forms of artificially intelligent behaviour: at the individual level the spiking neural network is the main controller, and at the collective level the pheromone distribution is a map towards the solution that emerges from the colony. The presented model is an important pedagogical tool, as it is also an easy-to-use library that gives access to the spiking neural network paradigm from inside NetLogo, a language used mostly in agent-based modelling and experimentation with complex systems.
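The double-pheromone field at the heart of the model is easy to prototype outside NetLogo. The sketch below only illustrates the mechanism described in the abstract; the grid size, deposit amounts and evaporation rate are assumed values, not the parameters of the paper. Each cell carries a positive and a negative pheromone level, ants reinforce the positive one on productive trails and mark dead ends with the negative one, and both decay each tick so the “no entry” blockade is temporary.

```python
import random

# Minimal sketch of a double-pheromone grid (illustrative parameters,
# not those used in the UKCI 2015 paper).
SIZE = 50
EVAPORATION = 0.05          # fraction lost per tick (assumed value)
DEPOSIT_POSITIVE = 1.0      # laid when an ant returns with food
DEPOSIT_NEGATIVE = 0.5      # laid on paths judged to be dead ends

positive = [[0.0] * SIZE for _ in range(SIZE)]
negative = [[0.0] * SIZE for _ in range(SIZE)]

def deposit(grid, x, y, amount):
    grid[y][x] += amount

def evaporate():
    """Both pheromones decay, so negative 'no entry' marks are temporary."""
    for grid in (positive, negative):
        for row in grid:
            for x in range(SIZE):
                row[x] *= 1.0 - EVAPORATION

def attractiveness(x, y):
    """Net desirability of a cell: positive trail minus negative blockade."""
    return positive[y][x] - negative[y][x]

def choose_next(x, y):
    """Pick a neighbouring cell, biased towards higher net pheromone."""
    neighbours = [(x + dx, y + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]
    weights = [max(attractiveness(nx, ny), 0.0) + 0.01 for nx, ny in neighbours]
    return random.choices(neighbours, weights=weights, k=1)[0]
```

The combined field read by `choose_next` is what the abstract calls the collective stigmergic memory: positive deposits pull the swarm towards food, negative deposits temporarily close off unproductive regions, and evaporation keeps both maps up to date.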
References:
[1] C. G. Langton, “Studying artificial life with cellular automata,” Physica D: Nonlinear Phenomena, vol. 22, no. 1–3, pp. 120–149, 1986, Proceedings of the Fifth Annual International Conference. [Online]. Available: http://www.sciencedirect.com/science/article/pii/016727898690237X
[2] A. Abraham and V. Ramos, “Web usage mining using artificial ant colony clustering and linear genetic programming,” in Proceedings of the Congress on Evolutionary Computation. Australia: IEEE Press, 2003, pp. 1384–1391.
[3] V. Ramos, F. Muge, and P. Pina, “Self-organized data and image retrieval as a consequence of inter-dynamic synergistic relationships in artificial ant colonies,” Hybrid Intelligent Systems, vol. 87, 2002.
[4] V. Ramos and J. J. Merelo, “Self-organized stigmergic document maps: Environment as a mechanism for context learning,” in Proceedings of the AEB, Mérida, Spain, February 2002.
[5] D. Sousa-Rodrigues and V. Ramos, “Traversing news with ant colony optimisation and negative pheromones,” in European Conference in Complex Systems, Lucca, Italy, Sep 2014.
[6] E. Bonabeau, G. Theraulaz, and M. Dorigo, Swarm Intelligence: From Natural to Artificial Systems, 1st ed., ser. Santa Fe Institute Studies in the Sciences of Complexity. 198 Madison Avenue, New York: Oxford University Press, USA, Sep. 1999.
[7] M. Dorigo and L. M. Gambardella, “Ant colony system: A cooperative learning approach to the traveling salesman problem,” Université Libre de Bruxelles, Tech. Rep. TR/IRIDIA/1996-5, 1996.
[8] M. Dorigo, G. Di Caro, and L. M. Gambardella, “Ant algorithms for discrete optimization,” Artif. Life, vol. 5, no. 2, pp. 137–172, Apr. 1999. [Online]. Available: http://dx.doi.org/10.1162/106454699568728
[9] L. M. Gambardella and M. Dorigo, “Ant-q: A reinforcement learning approach to the travelling salesman problem,” in Proceedings of the ML-95, Twelfth Intern. Conf. on Machine Learning, M. Kaufman, Ed., 1995, pp. 252–260.
[10] A. Gupta, V. Nagarajan, and R. Ravi, “Approximation algorithms for optimal decision trees and adaptive tsp problems,” in Proceedings of the 37th international colloquium conference on Automata, languages and programming, ser. ICALP’10. Berlin, Heidelberg: Springer-Verlag, 2010, pp. 690–701. [Online]. Available: http://dl.acm.org/citation.cfm?id=1880918.1880993
[11] V. Ramos, D. Sousa-Rodrigues, and J. Louçã, “Second order swarm intelligence,” in HAIS’13, 8th International Conference on Hybrid Artificial Intelligence Systems, ser. Lecture Notes in Computer Science, J.-S. Pan, M. Polycarpou, M. Woźniak, A. Carvalho, H. Quintián, and E. Corchado, Eds. Salamanca, Spain: Springer Berlin Heidelberg, Sep 2013, vol. 8073, pp. 411–420.
[12] W. Maass and C. M. Bishop, Pulsed Neural Networks. Cambridge, Massachusetts: MIT Press, 1998.
[13] E. M. Izhikevich, “Simple model of spiking neurons,” IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569–1572, 2003. [Online]. Available: http://www.ncbi.nlm.nih.gov/pubmed/18244602
[14] C. Liu and J. Shapiro, “Implementing classical conditioning with spiking neurons,” in Artificial Neural Networks – ICANN 2007, ser. Lecture Notes in Computer Science, J. M. de Sá, L. A. Alexandre, W. Duch, and D. Mandic, Eds. Springer Berlin Heidelberg, 2007, vol. 4668, pp. 400–410. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-74690-4_41
[15] J. Haenicke, E. Pamir, and M. P. Nawrot, “A spiking neuronal network model of fast associative learning in the honeybee,” Frontiers in Computational Neuroscience, no. 149, 2012. [Online]. Available: http://www.frontiersin.org/computational_neuroscience/10.3389/conf.fncom.2012.55.00149/full
[16] L. I. Helgadottir, J. Haenicke, T. Landgraf, R. Rojas, and M. P. Nawrot, “Conditioned behavior in a robot controlled by a spiking neural network,” in International IEEE/EMBS Conference on Neural Engineering, NER, 2013, pp. 891–894.
[17] A. Cyr and M. Boukadoum, “Classical conditioning in different temporal constraints: an STDP learning rule for robots controlled by spiking neural networks,” pp. 257–272, 2012.
[18] X. Wang, Z. G. Hou, F. Lv, M. Tan, and Y. Wang, “Mobile robots’ modular navigation controller using spiking neural networks,” Neurocomputing, vol. 134, pp. 230–238, 2014.
[19] C. Hausler, M. P. Nawrot, and M. Schmuker, “A spiking neuron classifier network with a deep architecture inspired by the olfactory system of the honeybee,” in 2011 5th International IEEE/EMBS Conference on Neural Engineering, NER 2011, 2011, pp. 198–202.
[20] U. Wilensky, “Netlogo,” Evanston IL, USA, 1999. [Online]. Available: http://ccl.northwestern.edu/netlogo/
[21] C. Jimenez-Romero and J. Johnson, “Accepted abstract: Simulation of agents and robots controlled by spiking neural networks using netlogo,” in International Conference on Brain Engineering and Neuro-computing, Mykonos, Greece, Oct 2015.
[22] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge: Cambridge University Press, 2002.
[23] W. Gerstner, R. Kempter, J. L. van Hemmen, and H. Wagner, “A neuronal learning rule for sub-millisecond temporal coding,” Nature, vol. 386, pp. 76–78, 1996.
[24] I. P. Pavlov, “Conditioned reflexes: An investigation of the activity of the cerebral cortex,” New York, 1927.
[25] E. J. H. Robinson, D. E. Jackson, M. Holcombe, and F. L. W. Ratnieks, “Insect communication: ‘no entry’ signal in ant foraging,” Nature, vol. 438, no. 7067, p. 442, Nov. 2005. [Online]. Available: http://dx.doi.org/10.1038/438442a
[26] E. J. Robinson, D. Jackson, M. Holcombe, and F. L. Ratnieks, “No entry signal in ant foraging (hymenoptera: Formicidae): new insights from an agent-based model,” Myrmecological News, vol. 10, no. 120, 2007.
[27] D. Sousa-Rodrigues, J. Louçã, and V. Ramos, “From standard to second-order swarm intelligence phase-space maps,” in 8th European Conference on Complex Systems, S. Thurner, Ed., Vienna, Austria, Sep 2011.
[28] V. Ramos, D. Sousa-Rodrigues, and J. Louçã, “Spatio-temporal dynamics on co-evolved stigmergy,” in 8th European Conference on Complex Systems, S. Thurner, Ed., Vienna, Austria, Sep 2011.
[29] S. Tisue and U. Wilensky, “Netlogo: A simple environment for modeling complexity,” in International conference on complex systems. Boston, MA, 2004, pp. 16–21.
Video – Water has Memory (from Oasis HD, Canada; link): just a liquid or much more? Many researchers are convinced that water is capable of “memory” by storing information and retrieving it. The possible applications are innumerable: limitless retention and storage capacity and the key to discovering the origins of life on our planet. Research into water is just beginning.
Water capable of processing information, and serving as a huge potential “container” for data, would be something remarkable. This theory was first proposed by the late French immunologist Jacques Benveniste, in a controversial article published in 1988 in Nature, as a way of explaining how homeopathy works (link). Benveniste’s theory has continued to be championed by some and disputed by others. The video clip above, from the Oasis HD Channel, shows some fascinating recent experiments with water “memory” from the Aerospace Institute of the University of Stuttgart in Germany. The results with the different types of flowers immersed in water are particularly evocative.
This line of research also reminds me of an old and quite interesting paper by a colleague, Chrisantha Fernando. Together with Sampsa Sojakka, the two showed that waves produced on the surface of water can be used as the medium for Wolfgang Maass’ “Liquid State Machine” (link), pre-processing inputs so that a simple perceptron can solve the XOR problem and undertake speech recognition. Amazingly, water achieves this “for free”, without the time-consuming computation required by realistic neural models. What follows is the abstract of their paper, entitled “Pattern Recognition in a Bucket“, as well as a PDF link to it:
Figure – Typical wave patterns for the XOR task. Top-Left: [0 1] (right motor on), Top-Right: [1 0] (left motor on), Bottom-Left: [1 1] (both motors on), Bottom-Right: [0 0] (still water). Sobel-filtered and thresholded images on right. (from Fig. 3 in Chrisantha Fernando and Sampsa Sojakka, “Pattern Recognition in a Bucket“, ECAL Proc., European Conference on Artificial Life, 2003.)
[…] Abstract. This paper demonstrates that the waves produced on the surface of water can be used as the medium for a “Liquid State Machine” that pre-processes inputs so allowing a simple perceptron to solve the XOR problem and undertake speech recognition. Interference between waves allows non-linear parallel computation upon simultaneous sensory inputs. Temporal patterns of stimulation are converted to spatial patterns of water waves upon which a linear discrimination can be made. Whereas Wolfgang Maass’ Liquid State Machine requires fine tuning of the spiking neural network parameters, water has inherent self-organising properties such as strong local interactions, time-dependent spread of activation to distant areas, inherent stability to a wide variety of inputs, and high complexity. Water achieves this “for free”, and does so without the time-consuming computation required by realistic neural models. An analogy is made between water molecules and neurons in a recurrent neural network. […] in Chrisantha Fernando and Sampsa Sojakka, “Pattern Recognition in a Bucket“, ECAL proc., European Conference on Artificial Life, 2003. [PDF link]
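As a rough illustration of the reservoir idea behind the paper, the sketch below replaces the bucket of water with a fixed, untrained random nonlinear projection; the layer sizes, the tanh nonlinearity and the learning rate are my own illustrative assumptions, not anything from the paper. Only the simple perceptron readout is trained, yet it separates XOR, which no perceptron can do on the raw two-dimensional inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR task: not linearly separable in the raw input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# The "liquid": a fixed, untrained, random nonlinear expansion standing in
# for the water surface (size and nonlinearity are illustrative choices).
N_LIQUID = 50
W_in = rng.normal(size=(2, N_LIQUID))
b = rng.normal(size=N_LIQUID)
liquid_state = np.tanh(X @ W_in + b)      # high-dimensional "wave pattern"

# Simple perceptron readout, trained on the liquid state only.
w = np.zeros(N_LIQUID)
bias = 0.0
for _ in range(200):
    for s, target in zip(liquid_state, y):
        pred = 1 if s @ w + bias > 0 else 0
        w += 0.1 * (target - pred) * s
        bias += 0.1 * (target - pred)

print([1 if s @ w + bias > 0 else 0 for s in liquid_state])  # expect [0, 1, 1, 0]
```

The point of the analogy is that the “liquid” is never adjusted: like the water surface, it merely spreads the inputs into a richer spatial pattern on which a linear discrimination becomes possible.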
The importance of Network Topology again… at [PLoS]. Abstract: […] The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems. […] (*)
(*) in Hermundstad AM, Brown KS, Bassett DS, Carlson JM, 2011 Learning, Memory, and the Role of Neural Network Architecture. PLoS Comput Biol 7(6): e1002063. doi:10.1371/journal.pcbi.1002063
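Below is a minimal sketch of the kind of experiment the abstract describes, not the authors’ protocol or code: a wide single-hidden-layer (“parallel”) network is compared with a deeper (“layered”) one on sequential function approximation, measuring how much of the first task survives after training on the second. The architectures, target functions and training settings are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))

# Two target functions learned in sequence (illustrative choices).
y_task1 = np.sin(3 * X[:, 0])
y_task2 = np.cos(5 * X[:, 0])

def sequential_learning(hidden_layers):
    """Train on task 1, then task 2; report how much of task 1 is retained."""
    net = MLPRegressor(hidden_layer_sizes=hidden_layers, warm_start=True,
                       max_iter=2000, random_state=0)
    net.fit(X, y_task1)
    err_after_task1 = mean_squared_error(y_task1, net.predict(X))
    net.fit(X, y_task2)                      # continues from previous weights
    err_task1_after_task2 = mean_squared_error(y_task1, net.predict(X))
    return err_after_task1, err_task1_after_task2

# "Parallel": one wide hidden layer; "layered": several narrower layers.
print("parallel (64,):        ", sequential_learning((64,)))
print("layered  (16,16,16,16):", sequential_learning((16, 16, 16, 16)))
```

The first error measures learning accuracy, the second measures memory after interference; running both architectures side by side is the simplest way to expose the learning-versus-retention tradeoff the study analyses through error-landscape curvature.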
Figure – Understanding the Brain as a Computational Network: significant neuronal motifs of size 3. Most over-represented colored motifs of size 3 in the C. elegans complex neuronal network. Green: sensory neuron; blue: motor neuron; red: interneuron. Arrows represent direction that the signal travels between the two cells. (from Adami et al. 2011 [Ref. below])
Abstract: […] Complex networks can often be decomposed into less complex sub-networks whose structures can give hints about the functional organization of the network as a whole. However, these structural motifs can only tell one part of the functional story because in this analysis each node and edge is treated on an equal footing. In real networks, two motifs that are topologically identical but whose nodes perform very different functions will play very different roles in the network. Here, we combine structural information derived from the topology of the neuronal network of the nematode C. elegans with information about the biological function of these nodes, thus coloring nodes by function. We discover that particular colorations of motifs are significantly more abundant in the worm brain than expected by chance, and have particular computational functions that emphasize the feed-forward structure of information processing in the network, while evading feedback loops. Interneurons are strongly over-represented among the common motifs, supporting the notion that these motifs process and transduce the information from the sensor neurons towards the muscles. Some of the most common motifs identified in the search for significant colored motifs play a crucial role in the system of neurons controlling the worm’s locomotion. The analysis of complex networks in terms of colored motifs combines two independent data sets to generate insight about these networks that cannot be obtained with either data set alone. The method is general and should allow a decomposition of any complex networks into its functional (rather than topological) motifs as long as both wiring and functional information is available. […] from Qian J, Hintze A, Adami C (2011) Colored Motifs Reveal Computational Building Blocks in the C. elegans Brain, PLoS ONE 6(3): e17013. doi:10.1371/journal.pone.0017013
[…] The role of evolution in producing these patterns is clear, said Adami. “Selection favors those motifs that impart high fitness to the organism, and suppresses those that work against the task at hand.” In this way, the efficient and highly functional motifs (such as the sensory neuron-interneuron-motor neuron motif) are very common in the nervous system, while those that would waste energy and give no benefit to, or even harm, the animal are not found in the network. “Adami and his team have used evolutionary computation to develop hypotheses about the evolution of neural circuits and find, for these nematode worms, that simplicity is the rule,” says George Gilchrist, program director in NSF’s Division of Environmental Biology (in the Directorate for Biological Sciences), which funds BEACON. “By including functional information about each node in the circuit, they have begun decoding the role of natural selection in shaping the architecture of neural circuits.” […] from Danielle J. Whittaker “Understanding the Brain as a Computational Network“, NSF, April 2011.
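The colored-motif counting itself is straightforward to sketch: given a directed wiring diagram and a functional colour for every node, enumerate all connected three-node subgraphs and tally them by a canonical (colour, wiring) signature. The toy network below is invented for illustration only; it is not the C. elegans connectome used in the paper.

```python
from itertools import combinations, permutations
from collections import Counter

# Toy directed wiring with functional node colours (illustrative, not the
# real C. elegans connectome).
color = {"S1": "sensory", "S2": "sensory", "I1": "inter", "I2": "inter",
         "M1": "motor", "M2": "motor"}
edges = {("S1", "I1"), ("S2", "I1"), ("I1", "M1"), ("I1", "I2"),
         ("I2", "M2"), ("S2", "I2"), ("I2", "M1")}

def motif_signature(nodes):
    """Canonical (colour, wiring) label of the subgraph induced by three nodes."""
    best = None
    for order in permutations(nodes):          # canonicalise over relabelings
        index = {n: i for i, n in enumerate(order)}
        colors = tuple(color[n] for n in order)
        adj = tuple(sorted((index[u], index[v]) for u, v in edges
                           if u in index and v in index))
        key = (colors, adj)
        if best is None or key < best:
            best = key
    return best

counts = Counter()
for triple in combinations(sorted(color), 3):
    colors, adj = motif_signature(triple)
    touched = {i for edge in adj for i in edge}
    if len(touched) == 3:                      # keep only connected triads
        counts[(colors, adj)] += 1

for (colors, adj), n in counts.most_common():
    print(colors, adj, n)
```

In the published analysis these raw counts are then compared against colour-preserving random rewirings of the network, so that only motifs significantly more abundant than chance (such as the sensory–interneuron–motor feed-forward chain) are reported.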