
Researchers develop deep-learning method for translating speech signals from the brain into text


Researchers at the University of California San Francisco have developed a way for people with speech loss to communicate using their brain signals. The technology uses neural networks to translate brainwaves into words and phrases. It is a breakthrough because, until now, the best neuroprosthetic technology could offer was letter-by-letter spelling, which is very slow.

In developing the system, UCSF researchers recorded the brain signals of volunteer subjects with unimpaired speech. The scientists fed these recordings to neural networks, which learned to decode them into text in real time. They also applied a statistical language model to improve the algorithm's accuracy.
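To make the described pipeline more concrete, here is a minimal sketch of a sequence-to-sequence decoder in PyTorch: an encoder compresses a window of recorded neural features into a hidden state, and a decoder emits word tokens one at a time. This is not the UCSF system; the channel counts, vocabulary size, and data below are all hypothetical placeholders chosen only to illustrate the idea.

```python
# Hypothetical brain-signal-to-text sketch (not the UCSF implementation).
import torch
import torch.nn as nn

N_CHANNELS = 128   # assumed number of recorded electrode channels
N_TIMESTEPS = 200  # assumed samples per spoken sentence
VOCAB_SIZE = 250   # assumed closed vocabulary of words
MAX_WORDS = 10     # assumed maximum sentence length

class BrainToText(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: reads the multichannel neural time series.
        self.encoder = nn.GRU(N_CHANNELS, 256, batch_first=True)
        # Decoder: unrolls word by word from the encoder's final state.
        self.decoder = nn.GRU(VOCAB_SIZE, 256, batch_first=True)
        self.to_vocab = nn.Linear(256, VOCAB_SIZE)

    def forward(self, signals, prev_words_onehot):
        # signals: (batch, N_TIMESTEPS, N_CHANNELS)
        # prev_words_onehot: (batch, MAX_WORDS, VOCAB_SIZE), teacher forcing
        _, hidden = self.encoder(signals)
        out, _ = self.decoder(prev_words_onehot, hidden)
        return self.to_vocab(out)  # (batch, MAX_WORDS, VOCAB_SIZE) logits

# Toy training step on random data, just to show the interface.
model = BrainToText()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
signals = torch.randn(4, N_TIMESTEPS, N_CHANNELS)
targets = torch.randint(0, VOCAB_SIZE, (4, MAX_WORDS))
prev = nn.functional.one_hot(targets, VOCAB_SIZE).float()
# Shift right so the decoder sees the previous word, not the current one.
prev = torch.cat([torch.zeros(4, 1, VOCAB_SIZE), prev[:, :-1]], dim=1)

logits = model(signals, prev)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))
loss.backward()
optimizer.step()
# In practice, the predicted word probabilities could then be rescored
# with a statistical language model to favor more plausible sentences.
```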

Editorial note: there are many details to take into account here, and the claims may be exaggerated. Even so, conveying simple messages like 'I'm hungry' or 'I'm uncomfortable' would already be an enormous help for people with loss of speech. Are we really heading toward a future where people won't need speech to communicate?

Source