
Artificial intelligence decodes speech from brain activity: Study

The Scientist Apr 03, 2020

The idea of communicating through a brain-machine interface has morphed from science fiction to proof of concept in under two decades. A new study published in Nature Neuroscience claims to have taken the next small step by using artificial intelligence to translate into text the brain activity of individuals as they read sentences aloud.

“We are not there yet but we think this could be the basis of a speech prosthesis,” coauthor Joseph Makin of the University of California, San Francisco tells The Guardian.

Each of the four participants in the study had a history of epileptic seizures and already had electrodes implanted in their brains to monitor that activity. The researchers used those probes to record brain activity while the participants read 50 predetermined sentences aloud, providing the data for a neural network to decode. The sentences varied widely in context and construction, including “Tina Turner is a pop singer,” “the woman is holding a broom,” and “a little bird is watching the commotion.”

The readout of brain activity and the audio of the spoken sentences were fed into an algorithm, which learned to recognize how the parts of speech were formed. The initial results were highly inaccurate, for instance decoding the brain activity from the sentence “she wore warm fleecy woolen overalls” as “the oasis was a mirage.” As the program trained, it came to make translations with only limited errors, such as decoding the activity for “the ladder was used to rescue the cat and the man” as “which ladder will be used to rescue the cat and the man.”
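As its title suggests, the paper frames this as a machine-translation problem handled with an encoder-decoder network: one network summarizes the multi-electrode recording, and a second unrolls that summary into a sequence of words. The PyTorch sketch below shows only that general shape; the electrode count, layer sizes, vocabulary, and training setup are illustrative assumptions, not the study's actual configuration (which also used temporal convolutions and the speech audio as an auxiliary training target).

```python
# Minimal encoder-decoder sketch: neural recordings in, word sequences out.
# All shapes, sizes, and names are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn

class NeuralSpeechDecoder(nn.Module):
    def __init__(self, n_electrodes=256, hidden=400, vocab_size=250):
        super().__init__()
        # Encoder: compress the multi-electrode time series into one state.
        self.encoder = nn.GRU(n_electrodes, hidden, batch_first=True)
        # Decoder: unroll that state into a word sequence, one token a step.
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ecog, prev_words):
        # ecog: (batch, time, electrodes); prev_words: (batch, seq) token ids
        _, state = self.encoder(ecog)          # summarize the recording
        emb = self.embed(prev_words)           # teacher-forced word inputs
        dec_out, _ = self.decoder(emb, state)  # condition on the recording
        return self.out(dec_out)               # logits over the vocabulary

# Toy usage: a 2-sentence batch, 3 seconds at 200 Hz, 5-word targets.
model = NeuralSpeechDecoder()
ecog = torch.randn(2, 600, 256)
words = torch.randint(0, 250, (2, 5))
logits = model(ecog, words)                    # shape (2, 5, 250)
loss = nn.functional.cross_entropy(logits.transpose(1, 2), words)
```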

“If you try to go outside the [50 sentences used] the decoding gets much worse,” Makin explains to The Guardian.

The BBC describes the program as learning to decode individual words, not just full sentences, which makes it more likely to accurately decode speech in novel phrases going forward. The program also grew more accurate on each new participant when it had already been trained on another, suggesting that what it learns can transfer from person to person.
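In code, that cross-participant result amounts to ordinary transfer learning: instead of training a fresh network for each person, the weights learned on one participant become the starting point for the next. The loop below is a hypothetical sketch of that idea, reusing the toy model defined in the earlier sketch; the data tensors and hyperparameters are stand-ins, not the study's pipeline.

```python
# Hypothetical transfer loop: pretrain the decoder on one person's
# recordings, then continue training the same weights on the next person's,
# rather than starting from scratch. The data here are stand-in tensors.
import torch
import torch.nn.functional as F

def train(model, batches, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for ecog, words in batches:
        logits = model(ecog, words)
        loss = F.cross_entropy(logits.transpose(1, 2), words)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Stand-in "recordings" for two participants (shapes as in the sketch above).
fake = lambda: [(torch.randn(2, 600, 256), torch.randint(0, 250, (2, 5)))]
model = NeuralSpeechDecoder()          # defined in the earlier sketch
model = train(model, fake())           # pretrain on participant A
model = train(model, fake(), lr=1e-4)  # fine-tune on participant B
```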

While being able to interpret a limited set of sentences is a step forward, it is still a far cry from mastering English as a whole, the authors admit.

“Although we should like the decoder to learn and exploit the regularities of the language,” the researchers write in their paper, “it remains to show how many data would be required to expand from our tiny languages to a more general form of English.”

A possible use of this technology would be communicating with someone experiencing locked-in syndrome, a person who is able to hear and understand their surroundings but has no way of communicating. However, the program relies on brain activity recorded while a person speaks aloud, not on thought alone, Christian Herff, a brain-machine interface expert at Maastricht University who was not involved in the study, tells The Guardian.

Still, he notes, it is a significant accomplishment that the machine learned to interpret speech this well with less than an hour of recordings from each participant, rather than the several hours required in previous studies. “By doing so they achieve levels of accuracy that haven’t been achieved so far,” he says.

—Lisa Winter
