Sunday, March 7, 2021

Scientists train artificial intelligence to turn brain signals into speech




The researchers worked with epilepsy patients undergoing brain surgery.

Paseika/Science Photo Library/Getty Images

Neuro-engineers have created an innovative system that uses machine learning and neural networks to read brain activity and translate it into speech.

An article in the journal Scientific Reports details how a team at Columbia University's Zuckerman Mind Brain Behavior Institute used deep-learning algorithms and the same kind of speech-synthesis technology that drives devices like Apple's Siri and the Amazon Echo to turn thought into "precise and intelligible reconstructed speech." The research was reported earlier this month, but the journal article goes into greater depth.

The brain-computer interface could eventually give patients who have lost the ability to speak a chance to use their thoughts to communicate verbally through a synthesized robotic voice.

"We showed that with the right technology, the thoughts of these people could be decoded and understood by any listener," said Nima Mesgarani, the project's principal investigator, in a statement.

When we speak, our brains light up, sending electrical signals whizzing around inside our skulls. If scientists can decode those signals and understand how they relate to forming or hearing words, we come a step closer to translating them into speech. With enough understanding, and ample processing power, that could lead to a device that directly translates thought into speech.

And that's what the team has been able to do, creating a "vocoder" that uses algorithms and neural networks to turn signals into speech.

To do this, the research team asked five epilepsy patients who were already undergoing brain surgery to help. They attached electrodes to various exposed surfaces of each patient's brain, then had the patients listen to 40 seconds of spoken sentences, randomly repeated six times. Listening to the sentences helped train the vocoder.

Then the patients listened to speakers counting from zero to nine while their brain signals were fed into the vocoder. The vocoder algorithm, known as WORLD, spat out its own sounds, which were cleaned up by a neural network, resulting in robotic speech that mimicked the counting. You can hear what it sounds like here. It's not perfect, but it's certainly understandable.
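The idea of training a decoder on paired brain signals and audio, then reconstructing speech features from brain activity alone, can be illustrated with a toy sketch. Everything below is an illustrative assumption: the dimensions are made up, the decoder is a simple linear least-squares fit rather than the deep neural networks the team actually used, and no real vocoder such as WORLD is involved.

```python
import numpy as np

# Toy sketch: learn a mapping from simulated "electrode" readings to
# simulated speech spectrogram features, then reconstruct the features.
# Dimensions and the linear model are illustrative, not the study's setup.

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_spec_bins = 200, 16, 8

# Simulated brain activity: one electrode-reading vector per time step.
neural = rng.normal(size=(n_samples, n_electrodes))

# Hypothetical ground-truth relationship between neural activity and
# speech spectrogram features, plus measurement noise.
true_map = rng.normal(size=(n_electrodes, n_spec_bins))
spectrogram = neural @ true_map + 0.1 * rng.normal(size=(n_samples, n_spec_bins))

# "Training": fit a linear decoder by least squares on the paired data.
decoder, *_ = np.linalg.lstsq(neural, spectrogram, rcond=None)

# "Reconstruction": predict spectrogram features from neural activity alone.
reconstruction = neural @ decoder
error = np.mean((reconstruction - spectrogram) ** 2)
print(f"mean squared reconstruction error: {error:.4f}")
```

In the real system, the reconstructed spectrogram-like features would then be handed to a vocoder to synthesize audible speech; here the sketch stops at the feature-reconstruction step.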

"We found that people can understand and repeat sounds in about 75 percent of the time, which is well above and beyond any previous attempt," Mesgarani said.

The researchers found that the accuracy of the reconstruction depended on how many electrodes were placed on the patient's brain and how long the vocoder was trained. As expected, more electrodes and longer training let the algorithm gather more data, resulting in better reconstruction.

Looking ahead, the team wants to test what kind of signals are emitted when a person merely imagines speaking, rather than listening to speech. The researchers also hope to test a more complex set of words and sentences. Improving the algorithms with more data could eventually lead to a brain implant that bypasses speech entirely, translating a person's thoughts into words.

That would be a monumental step for many.

"This would give someone who has lost the ability to speak, whether from injury or illness, the renewed chance to connect to the world around them," Mesgarani said.


