Dunking a cookie into a cup of coffee

Experiments have identified a dedicated information highway that combines visual cues with body motion. This mechanism triggers responses to cues before the conscious brain has become aware of them.           


We talk about being ‘on automatic’ when we’re describing carrying out a familiar series of actions without being aware of what we’re doing.

Now researchers have for the first time found evidence that a dedicated information highway, or ‘visuomotor binding’ mechanism, connects what we see with what we do. This mechanism helps us to coordinate our movements in order to carry out all kinds of tasks, from dunking a biscuit in a cup of coffee while maintaining eye contact with someone else, to playing basketball on a crowded court.

The UCL-led research (published yesterday in the journal Current Biology) was a collaboration between Dr Alexandra Reichenbach, of the UCL Institute of Cognitive Neuroscience, and Dr David Franklin, of the Computational and Biological Learning Lab at Cambridge’s Department of Engineering.

Their research suggests that a specialised mechanism for spatial self-awareness links visual cues with body motion. The finding could help us understand the feeling of disconnection reported by schizophrenia patients and could also explain why people with even the most advanced prosthetic limbs find it hard to coordinate their movements.

Standard visual processing relies on our ability to pay attention to objects of interest while filtering out distractions. “The study shows that our brains also have separate hard-wired systems to track our own bodies visually even when we are not paying attention to them,” explained Franklin. “This allows visual attention to focus on objects in the world around us rather than on our own movements.”

The newly-discovered mechanism was identified when three experiments were carried out on 52 healthy adults. In all three experiments, participants used robotic interfaces to control cursors on two-dimensional displays, where cursor motion was directly linked to hand movement. They were asked to keep their eyes fixed on the centre of the screen, a requirement checked by eye tracking. “The robotic virtual reality system allowed us to instantaneously manipulate visual feedback independently of the physical movement of the body,” said Franklin.

In the first experiment, participants controlled two separate cursors – equally close to the centre of the screen – with their right and left hands. Their goal was to guide each cursor to a corresponding target at the top of the screen. Occasionally the cursor or target on each side would jump left or right, requiring participants to take corrective action. Each jump was ‘cued’ by a flash on one side, but this was random and did not always correspond to the side about to change.

Not surprisingly, people responded faster to cursor jumps when their attention was drawn to the ‘correct’ side by the cue. However, reactions to cursor jumps were fast regardless of cueing, suggesting that a separate mechanism, independent of attention, is responsible for tracking our own movements.
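The comparison at the heart of this finding can be illustrated with a toy analysis. The sketch below is not the study’s data or code; the reaction times are invented numbers, used only to show how corrective-movement latencies on validly cued and uncued trials might be averaged and compared.

```python
# Toy illustration of a cued vs uncued reaction-time comparison.
# All values are hypothetical, invented for illustration only.
from statistics import mean

# Each trial records whether the attentional cue flashed on the side that
# actually changed, and the latency of the corrective hand movement (ms).
trials = [
    {"cue_valid": True,  "rt_ms": 145},
    {"cue_valid": True,  "rt_ms": 150},
    {"cue_valid": False, "rt_ms": 160},
    {"cue_valid": False, "rt_ms": 165},
]

def mean_rt(trials, cue_valid):
    """Mean reaction time (ms) over trials matching the cue condition."""
    return mean(t["rt_ms"] for t in trials if t["cue_valid"] == cue_valid)

cued = mean_rt(trials, True)     # mean over validly cued trials
uncued = mean_rt(trials, False)  # mean over uncued trials
print(f"cued: {cued} ms, uncued: {uncued} ms")
```

In the actual experiments the key observation was that uncued corrections, while somewhat slower, remained fast; a large cued/uncued gap would instead point to purely attention-driven processing.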

“The first experiment showed us that we react very quickly to changes relating to objects directly under our own control, even when we are not paying attention to them,” explained Reichenbach. “This provides strong evidence for a dedicated neural pathway linking motor control to visual information, independently of the standard visual systems that are dependent on attention.”

The second experiment was similar to the first but introduced changes in brightness to demonstrate the effect of attention on the visual perception system. In the third experiment, participants were asked to guide one cursor to its target in the presence of up to four dummy targets or cursors acting as ‘distractors’ alongside the real ones. In this experiment, responses to cursor jumps were less affected by distractors than responses to target jumps. Reactions to cursor jumps remained strong with one or two distractors but decreased significantly in the presence of four.

“These results provide further evidence of a dedicated visuomotor binding mechanism that is less prone to distractions than standard visual processing,” said Reichenbach. “It looks like the specialised system has a higher tolerance for distractions but in the end it is affected by them. Exactly why we evolved a separate mechanism remains to be seen, but the need to react rapidly to different visual cues about ourselves and the environment may have been enough to necessitate a separate pathway.”

For more information about this story contact Alexandra Buxton, Office of Communications, University of Cambridge, amb206@admin.cam.ac.uk, 01223 761673

 


This work is licensed under a Creative Commons Licence. If you use this content on your site please link back to this page.