A neural brain implant provides near-instantaneous speech

Delays and dictionaries
One year after the Stanford work, in 2024, Stavisky’s team published its own research on a brain-to-text system that bumped the accuracy to 97.5 percent. “Almost every word was correct, but communicating over text can be limiting, right?” Stavisky said. “Sometimes you want to use your voice. It allows you to make interjections, it makes it less likely other people interrupt you—you can sing, you can use words that aren’t in the dictionary.” But the most common approach to generating speech relied on synthesizing it from text, which led straight into another problem with BCI systems: very high latency.
In nearly all BCI speech aids, sentences appeared on a screen after a significant delay, long after the patient had finished forming the words in their mind. Speech synthesis, where available, ran only after the text was ready, adding even more delay. Brain-to-text systems also suffered from a limited vocabulary: the most recent one supported a dictionary of roughly 1,300 words. Try to speak a different language, reach for more elaborate vocabulary, or say the unusual name of a café just around the corner, and the system failed.
So, Wairagkar designed her prosthesis to translate brain signals into sounds, not words, and to do it in real time.
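To make the architectural difference concrete, here is a minimal Python sketch, not the study's actual code: two_stage_pipeline mimics the brain-to-text approach, where no audio can be produced until the whole utterance has been decoded into dictionary words and then synthesized, while streaming_pipeline mimics a direct brain-to-sound decoder that emits an audio chunk for each short window of neural activity. Every function name, frame size, and channel count here is an illustrative assumption.

```python
# Conceptual sketch only; all decoders below are hypothetical stand-ins.
import numpy as np

FRAME_MS = 10            # assumed decoding interval: 10 ms of neural data per step
SAMPLES_PER_FRAME = 160  # 10 ms of audio at 16 kHz

def decode_text_from_brain(recording: np.ndarray) -> str:
    """Stand-in for a brain-to-text decoder; real systems map neural
    activity to a fixed dictionary, so out-of-vocabulary words fail."""
    return "hello world"

def synthesize_speech(sentence: str) -> np.ndarray:
    """Stand-in for a text-to-speech engine run after decoding finishes."""
    return np.zeros(len(sentence) * SAMPLES_PER_FRAME)

def decode_audio_frame(window: np.ndarray) -> np.ndarray:
    """Stand-in for a decoder that maps one short neural window directly
    to a chunk of sound, with no intermediate text."""
    return np.zeros(SAMPLES_PER_FRAME)

def two_stage_pipeline(recording: np.ndarray) -> np.ndarray:
    # Brain-to-text: nothing can be heard until the entire utterance is
    # recorded, decoded into words, and then synthesized, so the delay
    # grows with sentence length.
    return synthesize_speech(decode_text_from_brain(recording))

def streaming_pipeline(neural_stream):
    # Brain-to-sound: each short window of activity yields an audio chunk
    # immediately, so output can begin within tens of milliseconds and no
    # dictionary constrains what can be said.
    for window in neural_stream:
        yield decode_audio_frame(window)

if __name__ == "__main__":
    # Fake stream: 100 windows from a hypothetical 96-channel electrode array.
    fake_stream = (np.random.randn(96) for _ in range(100))
    for chunk in streaming_pipeline(fake_stream):
        pass  # in a real prosthesis, each chunk would be played as it arrives
```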
Extracting sound
The patient who agreed to participate in Wairagkar’s study, codenamed T15, was a 46-year-old man with ALS. “He is severely paralyzed and when he tries to speak, he is very difficult to understand. I’ve known him for several years, and when he speaks, I understand maybe 5 percent of what he’s saying,” said David M. Brandman, a neurosurgeon and co-author of the study. Before working with the UC Davis team, T15 communicated using a gyroscopic head mouse to control a cursor on a computer screen.