Neuroprostheses translate nerve signals directly from the brain, for example to control hand prostheses. The power of thought can also be used to move a cursor across a screen or steer a wheelchair through space. In the future, such interfaces should also help paralyzed people communicate directly with their surroundings again: in many of those affected, such as ALS or locked-in patients, the capacity for language remains fully intact even though they can no longer speak.
For years, therefore, there has been intensive research into interfaces that are meant to “understand” imagined speech. Among other things, it has already proved possible to translate simple conversations between two people, at least in many cases. Such experiments are usually done with epilepsy patients who already have electrodes implanted in their brains because of their condition; the activity of specific nerve cells can be read out there directly. Non-invasive methods such as electroencephalography (EEG) are now also being tested.
Electrodes in the brain
Other studies work directly with affected patients, such as the work by the team of Edward Chang from the University of California, San Francisco, just published in the journal “Nature Communications”: the subject was a 36-year-old patient who was almost completely paralyzed after a severe stroke and could no longer speak. He controls the computer-assisted interface using only small head movements.
For the experiments, the cognitively intact man had been fitted with a credit-card-sized implant carrying 128 electrodes. The researchers reported their first success last year in the New England Journal of Medicine. The program used at the time was trained with deep learning over almost 50 sessions, in which the patient attempted to articulate one of 50 given words.
In addition to machine-learning methods from artificial intelligence, classic language models were also used, which indicate, for example, how likely a word is to occur in a specific context. In this way, the program actually learned to translate brain signals into language. In the end, it was even possible to recognize whole sentences from a limited vocabulary. However, the error rate was about 25 percent.
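The combination described here can be sketched in a few lines: a decoder assigns probabilities to candidate words from the brain signal, and a language model reweights them by how plausible each word is after the previous one. This is only an illustrative toy (not the authors' code), with made-up probabilities and a tiny bigram table:

```python
import math

# Hypothetical decoder output: P(word | brain signal) at each time step.
decoder_probs = [
    {"i": 0.5, "hello": 0.3, "am": 0.2},
    {"am": 0.6, "water": 0.4},
    {"thirsty": 0.55, "water": 0.45},
]

# Toy bigram language model: P(word | previous word), 0.1 fallback.
bigram = {("i", "am"): 0.8, ("am", "thirsty"): 0.7, ("am", "water"): 0.2}

def decode(decoder_probs):
    """Viterbi search for the jointly most likely word sequence."""
    # scores maps each candidate last word to (log-prob, sequence so far)
    scores = {w: (math.log(p), [w]) for w, p in decoder_probs[0].items()}
    for step in decoder_probs[1:]:
        new_scores = {}
        for w, p in step.items():
            # best continuation: decoder evidence + language-model context
            best = max(
                (lp + math.log(p) + math.log(bigram.get((prev, w), 0.1)), seq)
                for prev, (lp, seq) in scores.items()
            )
            new_scores[w] = (best[0], best[1] + [w])
        scores = new_scores
    return max(scores.values())[1]

print(decode(decoder_probs))  # → ['i', 'am', 'thirsty']
```

Note how the language model overrides the raw decoder at the last step: “water” scores slightly lower than “thirsty” on the signal alone, and the bigram P(thirsty | am) settles it.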
The approach tested in the current study is meant to be not only more precise and comprehensive but also more direct: the subject no longer had to mentally attempt to speak in a physical sense, but could simply think the word or letter. That makes the process less unnatural and thus perhaps somewhat more suitable for everyday use by disabled people, write Chang and colleagues.
Incidentally, other research groups are also trying to make the brain more readable via a detour through the motor system, for example in the form of “mindwriting”, purely imagined handwriting, as one team described it last year. As the new work now suggests, it may not matter much to a computer whether the brain produces words motorically or purely mentally.
Few letters, many words
Chang’s team also found a fairly simple trick to expand the vocabulary of a neuroprosthesis with little effort. Instead of thinking of whole words, this time the subject had to spell them. As the authors of the study point out, just 26 letters can represent a far larger number of words. To help the machine “understand” the letters better, the international (NATO) spelling alphabet was used (“Alpha, Bravo, Charlie, …”). Artificial intelligence and classic language-processing models were again combined for training and testing.
In fact, in the vast majority of cases, the computer was able to recognize mentally spelled sentences drawn from a basic vocabulary of 1,152 words. The average error rate per character was just over six percent, and almost 30 letters could be processed per minute. A vocabulary of this size would already cover much everyday communication. Simulations by the researchers suggest that the vocabulary could even be expanded to more than 9,000 words without major losses; even then, the average error rate would be just over eight percent.
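A bit of back-of-envelope arithmetic puts these figures in perspective (illustrative only: it assumes independent per-character errors and a five-letter average word, neither of which the article states):

```python
# Figures from the article, rounded.
char_error_rate = 0.06    # "just over six percent" per character
chars_per_minute = 30     # "almost 30 letters per minute"
avg_word_length = 5       # common English estimate (assumption)

# Spelling throughput in whole words.
words_per_minute = chars_per_minute / avg_word_length

# Chance a 5-letter word comes out with every character correct,
# assuming errors strike characters independently.
p_word_all_correct = (1 - char_error_rate) ** avg_word_length

print(f"~{words_per_minute:.0f} words per minute")
print(f"~{p_word_all_correct:.0%} of 5-letter words fully correct")
```

Roughly six words per minute is slow compared with speech, but the language-model correction described above means word-level accuracy in practice is better than this naive independent-error estimate suggests.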