our computer modelling centre

OFTNAI is currently supporting a computer modelling centre within the Oxford University Department of Experimental Psychology. Members of the university research centre are currently establishing a new Cognitive Robotics Laboratory.

The aim of the Cognitive Robotics Laboratory is to apply computer models of brain function to both simulated and real robots exploring a physical sensory training environment. Our team is currently investigating reaching and grasping by fixed-base robots, as well as spatial processing and navigation in mobile robots.

the impact of our research on robotics

The researchers are interested in how different kinds of visual and spatial representation develop within the robot’s neural network brain, how the robot might learn to represent the spatial structure of its environment for planning movements and navigation, and more generally how the robot may learn about causal relations in the world and use this world knowledge to generate intelligent, flexible behaviour.

It is hoped that understanding how the brain solves these kinds of problem in vision, spatial processing, motor function and behaviour will inform the design of future robot control systems.

Case study:
current challenges in robotics and automated manufacturing

Let us consider fixed-base robots in manufacturing. The robotics community has identified a number of key technical challenges which, if met, are expected to dramatically enhance the capabilities of these machines. These challenges all require advances in the intelligence of the software that controls the robot.

Firstly, we need to develop robot vision systems that can perceive and interpret complex visual scenes. One common class of task for fixed-base manufacturing robots is ‘pick & place’, where the robot has to pick up an object and place it down at another location. Because today’s computer vision systems are not able to interpret complex visual scenes as well as the human brain, today’s robots typically require the objects to be in pre-specified locations.

This task becomes very difficult if the robot is faced with a jumbled mixture of objects such as different kinds of tools. Today’s computer vision systems are still relatively poor at interpreting such a complex visual scene and segmenting individual objects reliably, yet the human visual system can do this easily. The next generation of robots needs computer vision software that can match the performance of the human visual system in analysing and interpreting complex visual scenes like this.
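To make the segmentation step concrete, here is a deliberately simplified sketch of what 'segmenting individual objects' means computationally. The scene is reduced to a binary occupancy grid (1 = object pixel, 0 = background) and objects are found by flood-fill labelling of connected regions; real vision systems work on camera images and are far more sophisticated, but the idea of grouping pixels into objects and proposing a pick location per object is the same. All names here are illustrative, not part of any real robot API.

```python
# Toy sketch of the scene-segmentation step of a vision-guided
# 'pick & place' pipeline: label connected regions of a binary
# grid and return each region's centroid as a candidate pick point.

def segment_objects(grid):
    """Label 4-connected regions of 1s; return centroids (row, col)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                # Flood-fill one object, collecting its pixels.
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cy, cx))
    return centroids

# Two separate objects in a 5x6 'image'.
scene = [
    [1, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(segment_objects(scene))  # [(0.5, 0.5), (1.5, 4.5)]
```

The hard part in practice is, of course, producing a reliable object/background separation from messy real-world images in the first place; that is exactly where today's systems fall short of the human visual system.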

Secondly, we need robots that are simpler to program. Manufacturing robots typically require significant technical expertise to program, which presents a further barrier to their use by small to medium-sized businesses.

The uptake of robots by industry could be broadened through the development of robot control systems that are clever enough to be trained more intuitively by, for example, visual mimicry. This is how humans frequently learn from each other.
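The simplest form of this kind of intuitive training is 'programming by demonstration': the robot records a trajectory while a human shows it the task, then replays it, with no code written by the operator. The sketch below illustrates only that record-and-replay idea; the `Robot` class and its methods are entirely hypothetical stand-ins for a real controller's vendor API, and genuine visual mimicry (learning by watching) is a much harder research problem.

```python
# Minimal sketch of programming by demonstration: record the
# waypoints a human demonstrator shows the robot, then replay
# them. A real controller would drive motors; here replay just
# logs the poses it would visit.

class Robot:
    def __init__(self):
        self.demonstrated_path = []   # recorded (x, y, z) waypoints
        self.executed_path = []       # waypoints visited on replay

    def record_waypoint(self, pose):
        """Store one pose shown by the human demonstrator."""
        self.demonstrated_path.append(pose)

    def replay(self):
        """Re-execute the demonstrated trajectory."""
        for pose in self.demonstrated_path:
            self.executed_path.append(pose)  # stand-in for a move command

robot = Robot()
for pose in [(0.0, 0.2, 0.5), (0.1, 0.2, 0.1), (0.4, 0.3, 0.1)]:
    robot.record_waypoint(pose)
robot.replay()
print(robot.executed_path == robot.demonstrated_path)  # True
```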

Thirdly, we need robots that can interact directly with humans. Another limitation of today’s fixed-base manufacturing robots is that they usually need to be kept physically separate from human workers, fenced off in ‘cells’ to protect people from injury. However, this again limits the robot’s flexibility and the range of tasks that it can undertake.

The application of robotics in manufacturing could be extended to tasks involving direct interaction with human workers if the robot could actually perceive and understand its spatial environment, including the locations and behaviour of human beings. This could increase the speed and efficiency of human-robot interaction, leading to significant productivity gains.
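One established way to let a robot share a workspace safely once it can track nearby people is 'speed and separation monitoring': the closer a human gets, the slower the robot is permitted to move, down to a full stop inside a protective zone. The sketch below shows the idea with illustrative, made-up distance thresholds; real deployments derive these values from safety standards and the robot's stopping performance.

```python
# Sketch of speed-and-separation monitoring: scale the robot's
# permitted speed by the distance to the nearest detected human.
# All thresholds are illustrative assumptions, not safety-rated values.

STOP_DISTANCE = 0.5   # metres: halt completely inside this radius
SLOW_DISTANCE = 2.0   # metres: begin slowing inside this radius
MAX_SPEED = 1.0       # m/s: full speed when no human is nearby

def allowed_speed(human_distance_m):
    """Return the robot's permitted speed (m/s) for a given human distance."""
    if human_distance_m <= STOP_DISTANCE:
        return 0.0
    if human_distance_m >= SLOW_DISTANCE:
        return MAX_SPEED
    # Linear ramp between the stop radius and the slow-down radius.
    frac = (human_distance_m - STOP_DISTANCE) / (SLOW_DISTANCE - STOP_DISTANCE)
    return MAX_SPEED * frac

for d in (0.3, 1.25, 3.0):
    print(d, allowed_speed(d))  # 0.0, 0.5 and 1.0 m/s respectively
```

Perceiving *where* people are and predicting what they will do next, so that the robot can keep working rather than simply stopping, is the harder problem that brain-inspired spatial processing aims to address.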

Why not find out more about our research on Motor Function?