UC Berkeley researchers help restore speech using AI-powered brain-computer interface

Saturday, October 25, 2025
Gopala Anumanchipalli, assistant professor at UC Berkeley | UC Berkeley

After suffering a brainstem stroke in 2005, Ann Johnson lost her ability to speak and move, developing locked-in syndrome. For nearly two decades, she communicated mainly through an eye-tracking system that allowed her to spell out words on a computer screen at a slow pace.

Johnson’s situation changed when she joined a clinical trial led by researchers from the University of California, Berkeley and the University of California, San Francisco. The team aimed to use a brain-computer interface to restore communication for people who have lost the ability to speak. The technology reads signals from the part of the brain responsible for speech and translates them into spoken words or text.

Gopala Anumanchipalli, now an assistant professor at UC Berkeley, began this research with neurosurgeon Edward Chang at UCSF in 2015. "We were able to get a good sense of the part of the brain that is actually responsible for speech production," said Anumanchipalli.

The researchers developed a neuroprosthesis that detects when someone is making an effort to speak and then uses artificial intelligence models to translate those brain signals into speech or facial animation. Kaylo Littlejohn, a Ph.D. student at UC Berkeley and co-lead on the study, trained these AI models as part of his work with the Berkeley Speech Group in the Berkeley AI Research Lab.

"She can’t, because she has paralysis, but those signals are still being invoked from her brain, and the neural recording device is sensing those signals," said Littlejohn. "Just like how Siri translates your voice to text, this AI model translates the brain activity into the text or the audio or the facial animation."

The system does not read random thoughts; it only works when users intentionally try to say something. "We didn’t want to read her mind," said Anumanchipalli. "We really wanted to give her the agency to do this."

Initially, there was an eight-second delay between Johnson's attempt to communicate and the synthesized speech, because the system had to process a full sentence before generating any output. In research published in March, however, the team cut that delay sharply by switching from a sequence-to-sequence architecture to a streaming architecture. Translation now happens in near real time, with a delay of about one second.
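A toy contrast in Python shows where that latency difference comes from; the function names decode_sentence and decode_window are hypothetical stand-ins, not the team's code. A sequence-to-sequence decoder cannot emit anything until the whole utterance has been captured, while a streaming decoder emits output window by window.

```python
# Assume one window of neural features arrives roughly every second.

def decode_sentence(windows):
    """Sequence-to-sequence style: needs the full utterance before output."""
    return " ".join(f"word{i}" for i, _ in enumerate(windows))

def decode_window(window, i):
    """Streaming style: produces output for each window as it arrives."""
    return f"word{i}"

windows = [f"neural-window-{i}" for i in range(8)]  # an ~8-second attempt

# Offline decoding: the listener hears nothing for ~8 s, then a full sentence.
print("seq2seq:  ", decode_sentence(windows))

# Streaming decoding: the first word is heard roughly one window (~1 s) in.
for i, w in enumerate(windows):
    print("streaming:", decode_window(w, i))
```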

Anumanchipalli sees further development ahead: "It’s not something that we have off-the-shelf models that we can use now," he said. "So development must happen in the science, in the technology, in the clinical translation, as well — all of them together to make this happen."

Johnson has since had her implant removed for reasons unrelated to the study, but she continues to provide feedback by email using her current assistive technology. She expressed satisfaction at hearing her own voice again through the streaming synthesis and hopes that future devices will be wireless, a feature the researchers are pursuing.

Looking ahead, Anumanchipalli envisions broader accessibility: "We need to be able to have neuroprostheses be plug-and-play so that it becomes a standard of care … That’s where we need to be."

Johnson hopes eventually to work as a counselor using this kind of technology. “I want patients there to see me and know their lives are not over now,” she wrote to UCSF reporters. “I want to show them that disabilities don’t need to stop us or slow us down.”
