Speech Restored for Paralyzed Woman Through AI

Researchers at the University of California, San Francisco (UCSF) and UC Berkeley have developed a groundbreaking brain-computer interface that has given a paralyzed woman the ability to communicate again. The woman lost her capacity for speech after suffering a brainstem stroke.

Using advanced technology that translates brain signals into audible speech, the researchers enabled the woman to speak through a digital avatar. This is the first time that both speech and facial expressions have been synthesized directly from brain signals.

Dr. Edward Chang, chair of neurological surgery at UCSF, explains that the technology allows for full embodiment of human communication, beyond words alone. Chang has worked on this interface for more than a decade.

According to Chang, the new work moves the technology beyond proof of concept and toward becoming a practical option for paralyzed people to communicate. The interface marks an exciting milestone in restoring speech to people who have lost the ability through paralysis.

Previously, Chang’s team showed it was possible to decode the brain signals of a paralyzed person attempting to speak and translate them into text on a screen. The new research demonstrates translating those signals into audible speech and realistic facial expressions on an avatar.

Text decoding in the new study reached roughly 78 words per minute. But translating the same signals into audible speech, with matching avatar facial movements, represents a major leap forward.

The team implanted electrodes on the surface of the woman’s brain, over regions critical for speech. The electrodes intercept signals that would normally travel to her mouth and face muscles, and those signals are carried to computers through a port and cable attached to her head.
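
As an illustration of that data path, here is a minimal sketch, assuming a wired acquisition link, of how multichannel samples might be buffered into fixed-length windows before decoding. The channel count, sampling rate, and window size are assumptions for illustration, not figures from the study.

```python
# Hypothetical acquisition-side sketch: electrode samples arrive over a wired
# link and are buffered into fixed-length windows for a downstream decoder.
from collections import deque

import numpy as np

NUM_CHANNELS = 253     # assumed electrode count, for illustration only
SAMPLE_RATE_HZ = 1000  # assumed acquisition rate
WINDOW_MS = 80         # assumed decoder window size

samples_per_window = SAMPLE_RATE_HZ * WINDOW_MS // 1000
buffer = deque(maxlen=samples_per_window)

def on_new_sample(sample: np.ndarray) -> np.ndarray | None:
    """Collect one multichannel sample; return a full window when one is ready."""
    assert sample.shape == (NUM_CHANNELS,)
    buffer.append(sample)
    if len(buffer) == samples_per_window:
        window = np.stack(buffer)  # shape: (samples_per_window, NUM_CHANNELS)
        buffer.clear()
        return window
    return None
```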

Through repeated speech practice, deep-learning models were trained to recognize the signal patterns corresponding to different words. In essence, the device reads the instructions the brain intends to send to the muscles of the vocal tract.
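
To make that training step concrete, here is a minimal PyTorch sketch of the kind of sequence model such a system could use: a recurrent network that maps frames of neural features to speech-unit probabilities, trained with CTC loss so the output need not be frame-aligned with the attempted speech. The layer sizes, unit inventory, and architecture are illustrative assumptions, not the study's actual model.

```python
# A minimal sketch of a neural-features-to-speech-units decoder (assumed design).
import torch
import torch.nn as nn

NUM_FEATURES = 253  # assumed: one feature per electrode channel
NUM_UNITS = 40      # assumed speech-unit inventory size

class NeuralSpeechDecoder(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(NUM_FEATURES, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, NUM_UNITS + 1)  # +1 for the CTC blank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, NUM_FEATURES) -> log-probs (batch, time, NUM_UNITS + 1)
        out, _ = self.rnn(x)
        return self.head(out).log_softmax(dim=-1)

# One training step against CTC loss, using fake data in place of real signals.
model = NeuralSpeechDecoder()
ctc = nn.CTCLoss(blank=NUM_UNITS)
x = torch.randn(2, 100, NUM_FEATURES)           # stand-in neural feature windows
targets = torch.randint(0, NUM_UNITS, (2, 12))  # stand-in speech-unit sequences
log_probs = model(x).permute(1, 0, 2)           # CTC expects (time, batch, units)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 100),
           target_lengths=torch.full((2,), 12))
loss.backward()
```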

To make the avatar sound like the woman, the researchers developed a speech-synthesis algorithm personalized with recordings of her voice made before her injury. For the facial expressions, they created customized machine-learning processes that translate her brain signals into the avatar’s lip, jaw, and tongue movements.
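
For the animation step, the following is a hypothetical sketch of one simple way to map decoded neural features onto articulator controls such as jaw and lip movement: a linear mapping fit by least squares on paired examples. The control names, feature dimensions, and the linear model itself are assumptions for illustration; the team's customized processes are not reproduced here.

```python
# Hypothetical sketch: learn a linear map from decoded neural feature frames
# to avatar articulator controls, fit by least squares on paired training data.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in paired training data (real data would come from speech practice).
neural_frames = rng.standard_normal((500, 253))  # assumed 253 features per frame
controls = rng.standard_normal((500, 3))         # [jaw, lip, tongue] targets

# Fit controls ~= neural_frames @ W via least squares.
W, *_ = np.linalg.lstsq(neural_frames, controls, rcond=None)

def animate(frame: np.ndarray) -> dict[str, float]:
    """Map one decoded neural frame to assumed avatar articulator weights."""
    jaw, lip, tongue = frame @ W
    return {"jaw_open": float(jaw), "lip_round": float(lip),
            "tongue_raise": float(tongue)}
```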

This multimodal approach shows promise for restoring full communication to severely paralyzed individuals. A key next step is a wireless version that does not physically tether the user to the computers. Giving paralyzed users untethered control of their devices could profoundly improve their independence and social connections.

#ArtificialIntelligence #SpeechAI #Innovation #Neuroscience #BCIresearch
