Researchers in the US tracked neural data from people while they were speaking
Reading minds has just come a step closer to reality: scientists have developed artificial intelligence that can turn brain activity into text.
While the system currently works on neural patterns detected while someone is speaking aloud, experts say it could eventually aid communication for patients who are unable to speak or type, such as those with locked-in syndrome.
“We are not there yet but we think this could be the basis of a speech prosthesis,” said Dr Joseph Makin, co-author of the research from the University of California, San Francisco.
Writing in the journal Nature Neuroscience, Makin and colleagues reveal how they developed their system by recruiting four participants who had electrode arrays implanted in their brains to monitor epileptic seizures.
These participants were asked to read aloud from 50 set sentences multiple times, including “Tina Turner is a pop singer”, and “Those thieves stole 30 jewels”. The team tracked their neural activity while they were speaking.
This data was then fed into a machine-learning algorithm, a type of artificial intelligence system that converted the brain activity data for each spoken sentence into a string of numbers.
To make sure the numbers related only to aspects of speech, the system compared sounds predicted from small chunks of the brain activity data with actual recorded audio. The string of numbers was then fed into a second part of the system which converted it into a sequence of words.
At first the system spat out nonsense sentences. But as it compared each sequence of words with the sentences that were actually read aloud, it improved, learning how the string of numbers related to words and which words tend to follow each other.
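The two-stage pipeline described above can be sketched in miniature. This is a toy illustration only, not the researchers' actual method: the real system uses recurrent neural networks trained on cortical recordings, whereas here the "brain activity", the encoder, and the decoder are all simplified stand-ins invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy sentences standing in for the 50 set sentences read by participants.
sentences = [["tina", "turner", "is", "a", "pop", "singer"],
             ["those", "thieves", "stole", "thirty", "jewels"]]
vocab = sorted({w for s in sentences for w in s})
word_to_id = {w: i for i, w in enumerate(vocab)}

def fake_neural_activity(sentence, noise=0.05):
    """Hypothetical stand-in for electrode recordings: one noisy vector
    per spoken word. Real recordings are raw cortical signals, not
    word-aligned vectors."""
    base = np.eye(len(vocab))
    return np.stack([base[word_to_id[w]] + noise * rng.standard_normal(len(vocab))
                     for w in sentence])

def encode(activity):
    """Stage 1: convert chunks of brain activity into a string of numbers.
    Here it is the identity; in the study this is a learned network."""
    return activity

def decode(features):
    """Stage 2: convert the string of numbers into a sequence of words,
    here by picking the most likely word for each feature vector."""
    return [vocab[int(np.argmax(f))] for f in features]

decoded = decode(encode(fake_neural_activity(sentences[0])))
print(" ".join(decoded))
```

Training in the study amounts to adjusting both stages until the decoded word sequence matches the sentence that was actually read aloud; in this toy version the mapping is fixed, so the comparison step is omitted.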
The team then tested the system, generating written text just from brain activity during speech.