Two research teams in California have developed brain implants that can restore a voice to people who are unable to speak. The results of their work were published in the journal Nature, the Financial Times reported.

The groups are based at the University of California, San Francisco (UCSF) and Stanford University. Both used new electrode arrays and artificial-intelligence software to convert brain activity into text and speech. The UCSF researchers also created a realistic avatar that speaks the decoded words.

“Both studies represent a major leap forward toward decoding brain activity with the speed and accuracy needed to restore fluent speech to people with paralysis,” said Frank Willett, a member of the Stanford team.

The designs differed significantly. The UCSF team placed a paper-thin rectangle containing 253 electrodes on the surface of the cerebral cortex to record activity in an area critical for speech. The Stanford team inserted two smaller arrays, with a total of 128 microelectrodes, deeper into the brain.

Each team worked with a single volunteer: Stanford with a 68-year-old patient with amyotrophic lateral sclerosis, and UCSF with a 47-year-old patient who had suffered a stroke.

Both projects used an artificial-intelligence algorithm to decode electrical signals from the patients’ brains, learning to recognize the distinct neural patterns that correspond to intended words. Both systems required extensive training.

Much work remains to turn these laboratory proofs of concept into simple, safe devices that patients and their caregivers can use at home. An important next step, according to the researchers, will be a wireless version that does not require the user to be physically connected to the implant.

As previously reported, an AI-enabled brain implant has restored sensation and movement to another patient. That technology was developed at the Feinstein Institute for Bioelectronic Medicine, part of the US healthcare network Northwell Health.