The foundation is currently supporting a computer modelling centre within the Oxford University Department of Experimental Psychology.
The ability to navigate successfully through an environment is fundamental to many organisms, making spatial cognition a critical area of research. To give two examples: spatial cognitive deficits present in a variety of neurodegenerative diseases, such as Alzheimer’s disease, Parkinson’s disease, and other dementias, and characterising these deficits can help to guide differential early diagnosis and to target specific pharmacological interventions. In robotics, an understanding of biological spatial processing can guide the development of autonomous, situated, and embodied artificial intelligences.
A variety of cell types are known to underpin spatial processing in the brain. For example, place cells encode position in an environment, grid cells encode distance travelled, whilst Head Direction (HD) cells encode directional heading.
One aspect of our work focuses on generating novel hypotheses about the architecture and firing properties of both HD cells and their principal inputs, which develop through Hebbian learning during interaction with the world. We have demonstrated how theorised Continuous Attractor Neural Network (CANN) architectures may self-organise, and how this organisation allows the HD system to track head direction accurately using vestibular signals (path integration). We relate path integration accuracy directly to known architectural constraints of the HD system, and propose an update to traditional CANN models.
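To illustrate the basic mechanism in miniature, the sketch below is a toy rate-based ring attractor, not our published model: symmetric (cosine) recurrent weights sustain a bump of activity, while an angular-velocity signal fed through an asymmetric (sine) weight component rotates the bump, performing path integration. The function name and all parameter values are arbitrary illustrative choices.

```python
import numpy as np

def simulate_hd_ring(n=100, steps=600, v=0.005):
    """Toy ring attractor: a bump of activity tracks head direction
    by integrating an angular-velocity signal (path integration).
    Returns the decoded heading (population vector) at every step."""
    theta = 2 * np.pi * np.arange(n) / n            # preferred directions
    d = theta[:, None] - theta[None, :]             # pairwise angle differences
    w_sym = np.cos(d)                               # symmetric weights sustain the bump
    w_asym = np.sin(d)                              # asymmetric weights shift the bump
    r = np.exp(5 * np.cos(theta))                   # seed a bump at 0 rad
    r /= r.sum()
    decoded = []
    for t in range(steps):
        vel = v if t > 100 else 0.0                 # rotate only after the bump settles
        u = w_sym @ r + vel * (w_asym @ r)          # recurrent plus velocity-gated input
        r = np.maximum(u, 0.0)                      # rectification
        r /= r.sum()                                # normalisation keeps activity bounded
        decoded.append(np.arctan2(np.sin(theta) @ r, np.cos(theta) @ r))
    return np.array(decoded)
```

With these defaults the bump holds its heading while the velocity signal is zero, then rotates by roughly arctan(v) radians per update once the signal is applied, so the decoded heading integrates the velocity over time.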
In collaboration with UCL’s Jeffery Lab, we have provided a theoretical explanation for experimental results showing how HD cell firing is generated by integrating internally and externally derived sources of information. We have also provided an explanation for how the HD system might learn to be differentially influenced by distal visual cues.
We have also developed detailed computer models of spatial processing and memory storage in the hippocampus. Our models explain how damage to the hippocampus may lead to amnesia for episodic memories.
We have recently developed computer models based on Multi-Packet CANNs, which may explain how the brain is able to represent the full 3D spatial structure of an animal's environment. Control systems based on these models may help robots to move more easily within cluttered real-world environments. Such models may permit more flexible movement of manufacturing manipulators, and provide more robust navigation for autonomous vehicles and mobile robots.
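As a hypothetical illustration of the multi-packet idea, again a toy sketch rather than our published model, the code below shows how narrow local excitation combined with uniform global inhibition lets two activity packets coexist on a single ring, whereas broad cosine connectivity would merge them into one bump. The function name and parameters are illustrative assumptions.

```python
import numpy as np

def multi_packet_ring(n=100, kappa=8.0, inh=0.1, steps=50):
    """Toy multi-packet CANN on a ring: narrow local excitation plus
    global inhibition sustains two separate activity packets."""
    theta = 2 * np.pi * np.arange(n) / n
    d = theta[:, None] - theta[None, :]
    w = np.exp(kappa * (np.cos(d) - 1.0))      # narrow excitatory kernel
    # seed two packets on opposite sides of the ring
    r = np.exp(5 * np.cos(theta)) + np.exp(5 * np.cos(theta - np.pi))
    r /= r.sum()
    for _ in range(steps):
        u = w @ r - inh                        # local excitation minus global inhibition
        r = np.maximum(u, 0.0)                 # rectification trims the packet edges
        r /= r.sum()                           # normalisation keeps activity bounded
    return theta, r
```

After the network settles, activity remains concentrated in two packets (here near 0 and π) with silent neurons between them; representing full 3D structure would require higher-dimensional attractors, which this one-ring sketch only gestures at.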
Our research works towards a complete self-organising model of the rodent HD cell system. Current research questions include:
- What is the role of vestibular input in proximal-distal visual landmark distinctions?
- How might CANN-like architectures be self-organised in the absence of visual information?
- Why are HD cells spread across multiple brain areas?