
United States: Researchers in California have developed an AI system that restores real-time speech, in the patient's own natural voice, to people with paralysis. The team tested the system with a person whose severe paralysis prevents speech.
More about the news
A system developed at UC Berkeley and UC San Francisco combines brain-computer interfaces (BCIs) with advanced artificial intelligence (AI) to generate speech by decoding neural activity.
The new method outperforms earlier approaches to generating speech from brain signals. It uses high-density electrode arrays placed on the brain's surface to measure neural activity.
The team also tested microelectrodes that penetrate the brain's surface, as well as non-invasive surface electromyography (sEMG) sensors attached to the face to measure muscle activity, Fox News reported.
How does the device work?
The devices monitoring brain activity transmit neural signals to the AI, which is trained to convert them into the speech sounds the patient intends to produce.
The neuroprosthesis samples neural signals from the motor cortex, the brain region that controls speech production, and the AI then translates those signals into audible speech.
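The pipeline described above, in which neural activity is decoded into speech sounds as it arrives rather than after a full sentence, can be illustrated with a toy sketch. Everything here is a simplifying assumption for illustration: the speech units, the feature vectors, and the nearest-centroid "model" are hypothetical stand-ins, not the study's actual architecture.

```python
# Hypothetical sketch of a streaming neural-to-speech decoder. A real system
# would use a trained neural network over high-density electrode recordings;
# here a toy nearest-centroid lookup stands in for the AI model.

# Toy "trained model": one centroid neural-feature vector per speech unit.
CENTROIDS = {
    "ah": [0.9, 0.1, 0.2],
    "ee": [0.1, 0.8, 0.3],
    "sh": [0.2, 0.3, 0.9],
}

def decode_window(features):
    """Map one window of neural features to the closest speech unit."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda unit: sq_dist(features, CENTROIDS[unit]))

def stream_decode(windows):
    """Decode windows as they arrive, yielding speech units in real time."""
    for w in windows:
        yield decode_window(w)

# Simulated stream of 3-channel neural feature windows.
stream = [[0.85, 0.15, 0.25], [0.15, 0.75, 0.35], [0.25, 0.35, 0.85]]
print(list(stream_decode(stream)))  # → ['ah', 'ee', 'sh']
```

The key design point the sketch mirrors is streaming: each window is decoded the moment it arrives, which is what makes the real system's output feel like real-time speech rather than delayed playback.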
Study co-lead author Cheol Jun Cho explained that the neuroprosthesis intercepts the neural signals at the point where thought is translated into articulation, in the middle of the motor control process, as Fox News reported.