The behaviour section of our lab focuses on motor control: how complex movements are planned, carried out, simplified into habits, and disrupted by various motor diseases.

To do this, we have developed a biologically plausible neural network model that builds a causal understanding of its environment from whatever sensory information is available, and then uses this knowledge to accomplish a variety of tasks. It does so in a completely unsupervised fashion, without any human intervention.


1. First, the robot explores its environment using random motor actions. As its position changes, it receives new sensory stimuli. The robot then uses this sensory information to deduce how each action changes its position and environment. In this case, the robot is exploring a simple room using a GPS-like position signal.
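The exploration phase can be sketched in a few lines of Python. This is a hypothetical toy version, not our actual model: a simulated robot takes random actions in a small grid "room" and records every (state, action, next state) transition it observes.

```python
import random

# Toy sketch of the exploration phase (illustrative only). The "GPS-like"
# signal is just the robot's (x, y) grid position; the room's names,
# sizes, and actions below are assumptions made for the example.

ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
ROOM = 5  # a 5x5 grid room

def step(state, action):
    """Move one cell in the chosen direction; walls block movement."""
    dx, dy = ACTIONS[action]
    x, y = state[0] + dx, state[1] + dy
    if 0 <= x < ROOM and 0 <= y < ROOM:
        return (x, y)
    return state  # bumped into a wall: position unchanged

def explore(n_steps=1000, seed=0):
    """Take random actions, recording each observed causal transition."""
    rng = random.Random(seed)
    state, transitions = (0, 0), set()
    for _ in range(n_steps):
        action = rng.choice(list(ACTIONS))
        nxt = step(state, action)
        transitions.add((state, action, nxt))
        state = nxt
    return transitions
```

Each recorded triple is one causal fact about the world: taking this action in this state produces that state. The later phases work entirely from this store.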

[Animation: Goal.gif]

2. A network of artificial neurons represents this cause-and-effect information through cells that denote combinations of sensory states and motor actions. These state-action cells use a variation on Hebbian synaptic learning to encode causal relationships between states by altering the synapses that connect the cells. With a particular state designated as the robot’s intended goal, the robot reaches that state using the states and actions it learned about during the exploration phase. Here, activation spreads through the neural network to select the best known action to take in each known state.
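As a rough illustration of the idea (with simple binary synapses standing in for the learned Hebbian weights, and decay and sweep-count parameters that are assumptions of this sketch), spreading activation over a set of learned (state, action, next state) transitions can be written as:

```python
# Hypothetical sketch, not the lab's actual network. Activation is
# injected at the goal state and propagates backwards along the learned
# synapses; each state-action cell then votes for the action whose
# predicted outcome carries the most activation.

DECAY = 0.9  # activation weakens at each synaptic hop

def spread_activation(transitions, goal, n_sweeps=50):
    """Propagate activation backwards from the goal through learned links."""
    activation = {goal: 1.0}
    for _ in range(n_sweeps):
        for (s, a, nxt) in transitions:
            incoming = DECAY * activation.get(nxt, 0.0)
            if incoming > activation.get(s, 0.0):
                activation[s] = incoming
    return activation

def best_action(transitions, activation, state):
    """Pick the known action whose predicted outcome is most active."""
    options = [(activation.get(nxt, 0.0), a)
               for (s, a, nxt) in transitions if s == state and nxt != s]
    return max(options)[1] if options else None
```

Because activation decays with every hop, the most strongly activated successor at any state is always the one on the shortest known path to the goal, so following `best_action` greedily traces an efficient route.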


3. Even after the exploration phase is over, the robot continues to learn. Here, the robot is initially able to reach its goal, then finds that a wall has blocked its original route. It tries various ways to get past the wall, learns that they do not work, and eventually finds one that does. Thanks to this new knowledge, it takes the successful route directly the next time.
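A minimal sketch of this continual updating, again hypothetical rather than our actual learning rule: whenever an action fails to produce the outcome the network predicts (for example, because a new wall is in the way), the stale causal link is dropped and the newly observed one is learned in its place.

```python
# Hypothetical continual-learning step. `transitions` is the set of
# learned (state, action, next_state) triples; pruning a triple stands
# in for Hebbian weakening of the corresponding synapse.

def act_and_update(transitions, state, action, observed_next, predicted_next):
    """Execute one action; if the prediction fails, relearn the link."""
    if observed_next != predicted_next:
        transitions.discard((state, action, predicted_next))  # unlearn
        transitions.add((state, action, observed_next))       # relearn
    return observed_next
```

After the update, re-running the spreading-activation planning over the corrected transitions routes the robot around the obstacle without any further trial and error.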

[Animations: Route_Maze.gif and Arm.gif]

4. The robot described here places no constraints on the type of sensory information it receives, nor on the motor commands it outputs. It can therefore perform many different tasks without modification: the navigational tasks shown above, and also control of a 2-joint arm with 4 degrees of freedom, moving the hand at the end of the arm to any arbitrary position along the most efficient path.
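To illustrate why no modification is needed, here is a simplified planar arm (one degree of freedom per joint, rather than the four our model handles; the step size and link lengths are assumptions of this sketch). Only the state and action encodings change: states become discretized joint angles and actions become small joint increments, while the learner itself is untouched.

```python
import itertools
import math

# Illustrative re-encoding of the arm task for the same learner.
STEP = math.pi / 8  # angular resolution of the discretization (assumed)

def arm_step(state, action):
    """Apply a joint-increment action (d1, d2) to joint angles (t1, t2)."""
    return tuple((t + d * STEP) % (2 * math.pi)
                 for t, d in zip(state, action))

def hand_position(state, l1=1.0, l2=1.0):
    """Forward kinematics: hand (x, y) from shoulder and elbow angles."""
    t1, t2 = state
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return (x, y)

# Actions: each joint steps by -1, 0, or +1 increment (excluding no-op)
ARM_ACTIONS = [a for a in itertools.product((-1, 0, 1), repeat=2) if any(a)]
```

Random exploration over `arm_step` yields (state, action, next state) transitions of exactly the form the navigation example produced, so the same spreading-activation machinery can then drive the hand to any reachable target.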

Work is now being undertaken to demonstrate these abilities in the real world, using our handmade CALVIN-bot and a robotic arm to investigate navigation, reaching and grasping.