Brain-computer interfaces can give people with severe paralysis, up to and including locked-in syndrome, a way to communicate with their surroundings again: a computer translates characteristic patterns of brain activity into speech. Previous devices have mostly relied on imagined movements. Scientists have now tested a system that skips this detour: it decodes silently spelled letters directly from brain activity. Combined with a large built-in dictionary, this should make operation more intuitive and faster.
Severe neurological damage, for example from a stroke or the progressive disease amyotrophic lateral sclerosis (ALS), can leave people without any control over their muscles. People with so-called locked-in syndrome retain full command of their mental faculties, but they can no longer communicate because they can neither speak nor move. Using brain-computer interfaces, researchers are trying to reconnect them with the outside world. Existing systems, however, have the disadvantage that control is usually not very intuitive and each individual input takes a long time.
Enabling natural communication
A team led by Sean Metzger of the University of California, San Francisco has now developed a system that is said to be faster and more intuitive to operate than previous models, with a low error rate. “Existing brain-computer interfaces for communication typically rely on decoding imagined arm and hand movements into letters to spell out intended sentences,” the researchers explain. “Although this approach has already shown promising results, directly decoding attempted speech could be more natural and faster.”
To achieve this, Metzger and his colleagues trained a system to recognize which letter a person was thinking of. The test subject was a 36-year-old man who has been paralyzed with spasticity since a stroke and can no longer speak. Because he can still move his head, he communicates in everyday life using a speech computer controlled by head movements. For the brain-computer-interface experiments, the scientists implanted electrodes, with his consent, in areas of his brain associated with speech. In an earlier study he had already used these electrodes to test a system in which a computer could decode up to 50 words when he tried to say them out loud. However, because of his paralysis this required considerable effort, and his vocabulary remained limited.
The new system, in contrast, recognizes silently spelled letters. Metzger and his colleagues had the subject use the NATO phonetic alphabet, such as “Alpha” for A, “Charlie” for C, and “November” for N. They recorded his brain activity while he silently attempted to say these code words and used the recordings to train a self-learning artificial intelligence. In the actual experiment, they presented the test subject with 75 different sentences, which he had to spell out letter by letter. They also asked him several questions, which he answered via the brain-computer interface.
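The code-word scheme itself is a simple one-to-one mapping. As a minimal sketch (purely illustrative; the study's actual decoder is a neural-network classifier working on brain signals, not on text):

```python
# Hypothetical sketch of the NATO code-word-to-letter mapping.
# The real system classifies neural activity into one of these
# code words; here we only show how code words spell a string.
NATO_ALPHABET = {
    "alpha": "A", "bravo": "B", "charlie": "C", "delta": "D",
    "echo": "E", "foxtrot": "F", "golf": "G", "hotel": "H",
    "india": "I", "juliett": "J", "kilo": "K", "lima": "L",
    "mike": "M", "november": "N", "oscar": "O", "papa": "P",
    "quebec": "Q", "romeo": "R", "sierra": "S", "tango": "T",
    "uniform": "U", "victor": "V", "whiskey": "W", "xray": "X",
    "yankee": "Y", "zulu": "Z",
}

def decode_code_words(code_words):
    """Turn a sequence of recognized code words into spelled letters."""
    return "".join(NATO_ALPHABET[w.lower()] for w in code_words)

print(decode_code_words(["November", "Alpha", "Tango", "Oscar"]))  # NATO
```

Multi-syllable code words like these are easier to tell apart in neural recordings than single letters, which is one reason spelling alphabets are used.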
The software evaluated his brain signals in real time and compared them against an integrated dictionary of 1,152 words to determine which letter, and which word, was most likely. The system achieved a relatively low character error rate of 6.13 percent. It was also significantly faster than the speech computer the subject uses in everyday life, with which he can type around 17 characters per minute: with the new device he managed 29.4 characters per minute on average. To start spelling, the subject only had to attempt to speak; he could end the program with an imagined wave of his hand.
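How a fixed dictionary can resolve uncertain letter classifications may be sketched roughly as follows. This is a hypothetical simplification (toy probabilities and a toy vocabulary standing in for the 1,152-word dictionary; the study combines a neural classifier with a language model):

```python
import math

# Hypothetical per-position letter probabilities from a classifier
# (only a few candidate letters per position, for brevity).
letter_probs = [
    {"c": 0.6, "k": 0.3, "g": 0.1},   # position 1
    {"a": 0.7, "e": 0.2, "o": 0.1},   # position 2
    {"t": 0.8, "d": 0.1, "p": 0.1},   # position 3
]

# Toy stand-in for the system's built-in word dictionary.
vocabulary = ["cat", "kit", "cap", "cod"]

def word_log_prob(word, probs, floor=1e-6):
    """Sum of per-letter log-probabilities; unseen letters get a small floor."""
    if len(word) != len(probs):
        return float("-inf")
    return sum(math.log(p.get(ch, floor)) for ch, p in zip(word, probs))

# Restricting the search to dictionary words corrects noisy letter guesses.
best = max(vocabulary, key=lambda w: word_log_prob(w, letter_probs))
print(best)  # cat
```

Even if an individual letter is misclassified, the most probable dictionary word usually remains correct, which is how the vocabulary constraint keeps the error rate low.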
In further experiments, in which the researchers tested the software's recognition performance in simulations without the subject, they expanded the integrated dictionary to more than 9,000 words. The character error rate rose only slightly, to 8.23 percent. “These results demonstrate the clinical utility of a speech prosthesis for generating sentences from a large vocabulary through an orthography-based approach and complement previous demonstrations of direct decoding of whole words,” the authors summarize. In future studies, they want to validate this approach with other subjects.
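The character error rate quoted above is conventionally computed as the edit (Levenshtein) distance between the decoded text and the intended text, divided by the length of the intended text. A minimal sketch of that standard metric (not code from the study):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance: minimum insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_error_rate(decoded: str, reference: str) -> float:
    """Edit distance normalized by the length of the intended text."""
    return levenshtein(decoded, reference) / len(reference)

# One wrong character in eleven: about 9 percent.
print(round(character_error_rate("hella world", "hello world"), 3))  # 0.091
```

A rate of 6.13 percent thus means roughly one character-level error per 16 intended characters.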
Source: Sean Metzger (University of California, San Francisco) et al., Nature Communications, doi: 10.1038/s41467-022-33611-3