The representation of space
Members of the university research centre are developing models of how the brain represents space. Certain types of neuron in the brain encode the orientation or position of an animal in its environment. Examples of such cells include head direction cells that respond when the animal's head is facing in a particular direction, and place cells that fire when the animal is in a particular location. Our computer simulations show how these cells may develop as an animal explores its environment.
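As a toy illustration of how such a representation can be sustained, a ring of model head direction cells with symmetric, distance-dependent recurrent weights will sharpen a weak directional cue into a stable packet of activity. This is a minimal sketch only: the cell count, weight profile, and threshold-linear update rule below are illustrative assumptions, not the published models.

```python
import numpy as np

# Toy ring attractor of head direction cells: a minimal sketch, not the
# published model. All parameters below are illustrative assumptions.
N = 100                                       # cells tiling 0..360 degrees
pref = np.linspace(0.0, 2 * np.pi, N, endpoint=False)  # preferred directions

def wrap(d):
    return np.angle(np.exp(1j * d))           # angular difference in (-pi, pi]

sigma = 0.3
# Symmetric recurrent weights: cells with nearby preferred directions excite
# each other, with a uniform inhibitory offset so one activity packet survives.
W = np.exp(-wrap(pref[:, None] - pref[None, :])**2 / (2 * sigma**2)) - 0.15

# Weak sensory cue at 90 degrees; recurrent dynamics sharpen it into a packet.
r = 0.1 * np.exp(-wrap(pref - np.pi / 2)**2 / (2 * sigma**2))
for _ in range(100):
    r = np.maximum(W @ r, 0.0)                # threshold-linear update
    r /= r.max() + 1e-12                      # crude normalisation of firing rates

print(np.degrees(pref[np.argmax(r)]))         # packet settles near the cued 90 degrees
```

The cells whose preferred direction matches the cue end up firing most strongly, mimicking a head direction cell population whose packet of activity encodes the current heading.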
Our models are also able to explain how the brain's representation of space may be updated by vestibular signals during movement. Thus, these models will help to improve understanding of a range of neurological disabilities, including vestibular disorders of balance and disorders of spatial processing.
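The velocity-driven updating can be sketched in the same toy setting: an idiothetic (angular velocity) signal gates asymmetric connections that rotate the activity packet around the ring at a rate set by the movement. Again, everything here (cell counts, weight profiles, the gating scheme) is an illustrative assumption rather than the published models.

```python
import numpy as np

# Toy path-integration sketch: an idiothetic velocity signal gates asymmetric
# "rotation" connections that shift the packet of activity around a ring of
# head direction cells. Parameters are illustrative assumptions only.
N = 100
pref = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

def wrap(d):
    return np.angle(np.exp(1j * d))           # angular difference in (-pi, pi]

sigma = 0.3
W_sym = np.exp(-wrap(pref[:, None] - pref[None, :])**2 / (2 * sigma**2))
W_rot = np.roll(np.eye(N), 1, axis=0)         # each cell excites its anticlockwise neighbour

r = np.exp(-wrap(pref)**2 / (2 * sigma**2))   # packet initially at 0 degrees

for _ in range(25):                           # 25 steps with the velocity (ID) cell active
    r = W_rot @ r                             # velocity-gated asymmetric input shifts the packet
    r = np.maximum(W_sym @ r - 0.15 * r.sum(), 0.0)  # symmetric recurrence re-sharpens it
    r /= r.max() + 1e-12

print(np.degrees(pref[np.argmax(r)]))         # packet has rotated to ~90 degrees
```

When the gating signal is silent the asymmetric term contributes nothing and the packet stays put, so the network integrates velocity into position, which is the essence of path integration.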
We have also developed detailed computer models of spatial processing and memory storage in the hippocampus. Our models explain how damage to the hippocampus may lead to amnesia for episodic memories.
We have recently developed computer models based on Multi-Packet Continuous Attractor Neural Networks, which may explain how the brain is able to represent the full 3D spatial structure of an animal's environment. Control systems based on these models may help robots to move more easily within cluttered real-world environments, permitting more flexible movement of manufacturing manipulators and more robust navigation for autonomous vehicles and mobile robots.
Case study: The representation of 3-dimensional space
Multi-Packet Continuous Attractor Neural Networks are able to represent the locations of multiple spatial features in an environment. These models are thus able to represent the full 3D spatial structure of the environment. In addition, the spatial representations may be updated using idiothetic (velocity) signals as the agent moves through its environment. This important capability in animals, which has been simulated in our models, is known as path integration.
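The multi-packet idea can be illustrated with a toy network whose cells are divided between two feature spaces, each sustaining its own activity packet under shared inhibition. The block-diagonal connectivity and all parameters below are illustrative assumptions, not the published model.

```python
import numpy as np

# Sketch of a multi-packet continuous attractor: one network whose cells are
# split between two feature spaces, each sustaining its own activity packet.
# Connectivity and parameters are illustrative assumptions only.
N = 100                                       # cells per feature space
pref = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

def wrap(d):
    return np.angle(np.exp(1j * d))

sigma = 0.3
W_block = np.exp(-wrap(pref[:, None] - pref[None, :])**2 / (2 * sigma**2))
W = np.zeros((2 * N, 2 * N))
W[:N, :N] = W_block                           # recurrent weights within feature space 1
W[N:, N:] = W_block                           # recurrent weights within feature space 2

# Cue one packet at 90 degrees in space 1 and another at 270 degrees in space 2.
r = np.concatenate([
    np.exp(-wrap(pref - np.pi / 2)**2 / (2 * sigma**2)),
    np.exp(-wrap(pref - 3 * np.pi / 2)**2 / (2 * sigma**2)),
])

for _ in range(100):
    r = np.maximum(W @ r - 0.15 * r.sum() / 2, 0.0)  # shared global inhibition
    r /= r.max() + 1e-12

p1 = np.degrees(pref[np.argmax(r[:N])])       # packet centre in feature space 1
p2 = np.degrees(pref[np.argmax(r[N:])])       # packet centre in feature space 2
print(p1, p2)                                 # the two packets persist side by side
```

Because each packet lives in its own subset of cells, the single network holds two feature representations simultaneously, which is the capability that lets such models encode several spatial features of an environment at once.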
Architecture of multi-packet continuous attractor network model, including idiothetic inputs. The network is composed of a set of feature (F) cells which encode the position and orientation of the features in the environment with respect to the agent, and a set of idiothetic (ID) cells which fire when the agent moves. For details, see Stringer, S.M., Rolls, E.T. and Trappenberg, T.P. (2004). Neural Networks, 17: 5-27.
Results from a computer simulation with two activity packets active in two different feature spaces in the same continuous attractor network. The left plot shows the firing rates of the subset of feature cells belonging to the first feature space, and the right plot shows the firing rates of the subset of feature cells belonging to the second feature space. Thus, the left and right plots show two activity packets moving within their respective feature spaces.
Stringer, S.M., Trappenberg, T.P., Rolls, E.T. and de Araujo, I.E.T. (2002). Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells, Network: Computation in Neural Systems, 13: 217-242.
Stringer, S.M., Rolls, E.T., Trappenberg, T.P. and de Araujo, I.E.T. (2002). Self-organizing continuous attractor networks and path integration: two-dimensional models of place cells, Network: Computation in Neural Systems, 13: 429-446.
Stringer, S.M. and Rolls, E.T. (2006). Self-organising path integration using a linked continuous attractor and competitive network: Path integration of head direction, Network: Computation in Neural Systems, 17: 419-445.
Walters, D.M. and Stringer, S.M. (2010). Path integration of head direction: updating a packet of neural activity at the correct speed using neuronal time constants, Biological Cybernetics, 103: 21-41.
Walters, D.M., Stringer, S.M. and Rolls, E.T. (2013). Path integration of head direction: updating a packet of neural activity at the correct speed using axonal conduction delays, PLoS ONE, 8(3): e58330.