San Francisco – How can a computer translate thoughts into spoken words? A team of scientists at the University of California, San Francisco has discovered a promising new piece of the puzzle, and the result is surprisingly convincing synthetic speech.
The scientists created a system that translates brainwaves into words by focusing on the physical movements involved in speech, rather than the sounds of the words being communicated. They found that decoding the intended movements of the tongue, the larynx, and the other mechanisms of speech allowed them to reproduce a person's voice more reliably than, say, trying to match brainwaves directly to the predicted sounds of speech.
Using this information, the team created a computer program that simulates the movements of a virtual vocal tract, driven by activity in the speech centers of the brain.
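The two-stage idea described above, first decoding brain activity into vocal-tract movements and only then turning those movements into sound, can be sketched in miniature. The sketch below is purely illustrative: the dimensions, the synthetic data, and the simple least-squares maps are assumptions for demonstration, standing in for the recurrent neural networks and real recordings a system like this would actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the study.
N_NEURAL = 64    # neural features (e.g., channels of cortical activity)
N_ARTIC = 12     # articulatory kinematics (tongue, lips, jaw, larynx)
N_ACOUSTIC = 32  # acoustic features (e.g., spectrogram bins)
T = 500          # time steps of training data

def fit_linear(X, Y):
    """Least-squares map W such that X @ W approximates Y."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

# Synthetic training data with a hidden articulatory "ground truth",
# so the two stages have something consistent to learn.
neural = rng.standard_normal((T, N_NEURAL))
true_n2a = rng.standard_normal((N_NEURAL, N_ARTIC)) / np.sqrt(N_NEURAL)
artic = neural @ true_n2a
true_a2s = rng.standard_normal((N_ARTIC, N_ACOUSTIC)) / np.sqrt(N_ARTIC)
acoustic = artic @ true_a2s

# Stage 1: brain activity -> intended vocal-tract movements.
W1 = fit_linear(neural, artic)
# Stage 2: vocal-tract movements -> acoustic features.
W2 = fit_linear(artic, acoustic)

def decode(neural_window):
    """Two-stage decode: neural activity -> kinematics -> acoustics."""
    return (neural_window @ W1) @ W2

err = np.abs(decode(neural) - acoustic).max()
print(f"max reconstruction error: {err:.2e}")
```

The point of the intermediate articulatory stage, as the article describes it, is that it is an easier and more reliable target to decode from brain activity than raw sound.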
An example of this kind of speech modeling shows the connection between the intended spoken words and the way the different parts of the vocal tract form them.
The team's findings were recently published in the journal Nature. The paper concludes that this new method could form the basis of life-changing technology for people with severe speech disorders, physical trauma, or other conditions that limit their ability to communicate.
"It has long been our laboratory's goal to create technologies to restore communication for patients with severe speech disabilities," Edward Chang, one of the co-authors of the project, said at a news conference. "We want to create technologies that can reproduce speech directly from human brain activity. This study provides proof of principle that this is possible."
That's not the only exciting thing about the team's research. According to Chang, the model of the mechanical speech process can be transferred from one person to another.
"The neural code for vocal movements is partially shared across individuals, and an artificial vocal tract modeled on one person's voice can be adapted to synthesize speech from another person's brain activity," Chang explains. "This means that a speech decoder trained on a person with intact speech could someday serve as a starting point for someone with a speech disability, who could learn to control the simulated vocal tract using their own brain activity."
According to another recent study cited by the UC scientists, communication technologies for people with speech and motor limitations are improving, but they can still be frustrating and inaccurate. If this latest advance can be applied to individual patients, it could open up a new world of understanding and being understood.