Listening in on the mind
Imagine a world in which the comatose could speak: stroke victims, or those suffering from aphasia or locked-in syndrome, could communicate with those around them. A world in which the police and the courts could read your mind, and your partner was privy to your innermost thoughts.
Science fiction? Well, maybe, but research published today in the journal PLoS Biology takes the first tentative steps down that road.
You can read more about the experiment here. But the team - led by Dr Brian Pasley at the University of California, Berkeley - began by attaching sensors directly to an exposed region of the brain associated with processing sound, the superior temporal gyrus (STG), while the subjects were undergoing brain surgery.
They then played the patients recordings of a series of words and monitored their neural activity as the STG processed this information into electrical signals that could be interpreted by the brain.
Incredibly, they were able to see not just the specific patterns of neural activity corresponding to individual words but also - starting from that signalling activity alone - to use a computer algorithm to reconstruct the actual sounds, effectively "hearing" the words the brain was processing.
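In spirit, this kind of reconstruction can be framed as a decoding problem: learn a mapping from the recorded neural signals back to a spectrogram of the sound, then apply that mapping to neural activity alone. The toy sketch below uses simulated data and a simple least-squares linear decoder; it is an illustration of the general idea, not the team's actual pipeline, and all the sizes and signals are invented.

```python
# A minimal sketch of linear stimulus reconstruction on simulated data.
# The real study recorded from electrodes on the superior temporal gyrus;
# here we fake both the "heard" spectrogram and the neural responses.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_electrodes, n_freq_bands = 500, 16, 8

# Simulated spectrogram of the heard words (time x frequency band).
spectrogram = rng.random((n_samples, n_freq_bands))

# Simulated neural recordings: each electrode responds to a mixture of
# frequency bands, plus noise - a crude stand-in for STG activity.
mixing = rng.random((n_freq_bands, n_electrodes))
neural = spectrogram @ mixing + 0.1 * rng.standard_normal((n_samples, n_electrodes))

# Fit a linear decoder by least squares: neural activity -> spectrogram.
weights, *_ = np.linalg.lstsq(neural, spectrogram, rcond=None)

# Reconstruct the spectrogram from the neural activity alone.
reconstructed = neural @ weights

# How well does the reconstruction match the original sound?
r = np.corrcoef(spectrogram.ravel(), reconstructed.ravel())[0, 1]
print(f"reconstruction correlation: {r:.2f}")
```

A reconstructed spectrogram can then be resynthesised into audible (if muffled) sound, which is roughly what the computer-generated words released with the paper represent.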
Speaking on the programme this morning Professor Robert Knight, a co-author on the paper, said the hope was to communicate with patients who couldn't speak for themselves.
"If we can get to that stage then of course you could have a prosthesis, an implantable device, for someone who has a devastating neurological disorder and can't speak," he explained
The obvious example, Professor Knight added, was the celebrated physicist Stephen Hawking. "His brain is clearly brilliant, but he has no way to produce the sound. If you could read his ideas you could put it into a speech synthesiser and he could speak."
The key to stimulus reconstruction is the way that the brain processes both actual sounds and imagined sounds or thoughts about sound.
If you think of a phrase like "the cat sat on the mat" it's almost as if you can hear the words sounding out in your head. That's because the STG translates both real and imagined sounds into neural activity that is processed by the brain in much the same way, creating an electrical template for linguistic thoughts.
Monitoring that template - eavesdropping on the brain - allows you to reconstruct the words. Or at least it will.
The science of stimulus reconstruction hasn't got as far as whole phrases yet, and the computer-generated words released today do sound as if they were spoken under water.
Refining the technique will be a huge challenge, but the experiment is an exciting proof of principle. Then again, it may not be long before Big Brother is doing more than just watching you.