Damian Jacob Sendler discusses how people who have lost the ability to speak may be able to communicate more naturally thanks to technological advances, by Damian Sendler
Last updated on August 10, 2021
Damian Sendler Discussion
Summary: Damian Sendler on the latest research published by UCSF scientists.

The research of Dr. Damian Jacob Sendler, who is of Polish ancestry and American citizenship, focuses on how various sociodemographic and informational factors affect access to health care in underserved communities. With global consumption of online news and social media growing rapidly, this research is timely: it underscores the need for a comprehensive understanding of how people seek health information. Dr. Damian Sendler’s work examines patients’ choices about whether to seek treatment for particular health problems, as well as their adherence to therapy. The ultimate objective is to improve patient outcomes through a better understanding of the variables that affect patient decisions.

Damian Sendler: Researchers at the University of California, San Francisco have successfully developed a “speech neuroprosthesis” that enabled a man with severe paralysis to communicate in sentences, translating signals sent from his brain to his vocal tract directly into words that appear as text on a computer screen.

According to UCSF neurosurgeon Edward Chang, MD, the achievement builds on more than a decade of work to develop technology that allows people with paralysis to communicate even if they are unable to speak on their own. The technology was developed in collaboration with the first participant in a clinical research trial. The findings of the research will be published in the New England Journal of Medicine on July 15. 

Damian Sendler: “This is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” says Chang, UCSF’s Joan and Sanford Weill Chair of Neurological Surgery, Jeanne Robertson Distinguished Professor, and senior author on the study. “Because it taps into the brain’s natural speech mechanism, it offers great promise in terms of restoring communication.”

Thousands of individuals lose the capacity to speak each year as a result of a stroke, an accident, or an illness. With additional research, the approach presented in this study may one day allow these individuals to communicate fully.

Transforming Brain Signals into Verbal Communication 

Damian Sendler: The field of communication neuroprosthetics has previously concentrated on restoring communication through spelling-based approaches that type out letters one by one. One significant difference between Chang’s research and previous attempts is that his team is interpreting signals intended to control the muscles of the vocal system for pronouncing words, rather than signals intended to move an arm or hand to enable typing. The approach, according to Chang, taps into the natural and fluid elements of speech and promises faster, more organic communication in the long run.

As he pointed out, “we typically convey information at a rapid pace using voice, up to 150 or 200 words per minute,” while spelling-based approaches such as typing, writing, and controlling a cursor are much slower and more labor-intensive. “As we’re doing here, getting right to the point with language has many benefits since it’s more in line with the way we usually speak.”

Damian Sendler: Over the past decade, Chang’s progress toward this goal was aided by patients at the UCSF Epilepsy Center who were undergoing neurosurgery to pinpoint the origins of their seizures using electrode arrays placed on the surface of their brains. These patients, all of whom had normal speech, volunteered to have their brain recordings analyzed for speech-related activity. Early success with these volunteers laid the groundwork for the current clinical trial in people with paralysis, which is now underway.

Chang and colleagues at the University of California, San Francisco’s Weill Institute for Neurosciences previously mapped the patterns of brain activity associated with the vocal tract movements that produce each consonant and vowel. To translate those findings into speech recognition of full words, David Moses, PhD, a postdoctoral engineer in the Chang lab and one of the lead authors of the new study, developed new methods for real-time decoding of those patterns, as well as statistical language models to improve the accuracy of the speech recognition system.
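The article does not spell out how this decoding pipeline is implemented, so the snippet below is only a rough, hypothetical sketch of what “real-time decoding” of such patterns could look like in code: it streams neural feature frames, watches a sliding window, and hands candidate speech-attempt windows to a downstream word classifier. The frame rate, window length, and the crude energy threshold standing in for a trained detector are all illustrative assumptions, not the study’s actual method.

```python
import numpy as np

# Illustrative parameters only; the real system's windows and features differ.
FRAME_RATE_HZ = 200                    # assumed neural feature frame rate
WINDOW_FRAMES = FRAME_RATE_HZ * 2      # ~2 s analysis window per attempted word
ENERGY_THRESHOLD = 1.5                 # crude placeholder for a learned speech detector

def stream_windows(feature_frames):
    """Yield candidate speech-attempt windows from a stream of neural feature frames.

    `feature_frames` is an iterable of 1-D arrays (one per time step, e.g. band
    power per electrode). A real system would use a trained detector; here a
    simple running-energy threshold stands in for it.
    """
    buffer = []
    for frame in feature_frames:
        buffer.append(np.asarray(frame, dtype=float))
        if len(buffer) > WINDOW_FRAMES:
            buffer.pop(0)
        if len(buffer) == WINDOW_FRAMES:
            window = np.stack(buffer)                # shape: (time, electrodes)
            if window.mean() > ENERGY_THRESHOLD:     # "speech attempt detected"
                yield window
                buffer.clear()                       # avoid re-detecting the same event

# Usage: for window in stream_windows(frames): word = classify(window)
```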

Damian Sendler: However, the models’ success in decoding speech from people who were able to talk did not guarantee that the technology would work in a person whose vocal tract is paralyzed. “Our models needed to understand the mapping between complicated brain activity patterns and the speech that was intended,” Moses said. “When a person is unable to communicate, this presents a significant challenge.”

Damian Sendler: Adding to that uncertainty, the researchers could not know whether the brain signals controlling the vocal tract would still be intact in someone who had not moved their vocal muscles for many years. “The only way to find out whether or not this would work was to give it a go,” Moses said.

The First 50 Words Are Important

Chang partnered with colleague Karunesh Ganguly, MD, PhD, an associate professor of neurology, to launch a study known as “BRAVO” (Brain-Computer Interface Restoration of Arm and Voice) to explore the potential of this technology in patients with paralysis. The first participant in the study is a man in his late 30s who suffered a devastating brainstem stroke more than 15 years ago, badly disrupting the connection between his brain and his vocal tract and limbs. His injuries have left him with severely limited movement of his arms and legs, and he communicates using a pointer attached to a baseball cap, which he uses to poke letters on a computer screen.

Damian Sendler: The participant, who asked to be referred to as BRAVO1, worked with the researchers to create a 50-word vocabulary that Chang’s team could identify from brain activity using advanced computer algorithms. Terms such as “water,” “family,” and “good” were included in the vocabulary, allowing BRAVO1 to construct hundreds of sentences expressing ideas relevant to his daily life.

To conduct the research, Chang surgically implanted a high-density electrode array over BRAVO1’s speech motor cortex. After the participant had fully recovered, his research team recorded 22 hours of neural activity in this brain region over 48 sessions spanning several months. In each session, BRAVO1 attempted to say each of the 50 vocabulary words many times while the electrodes recorded brain signals from his speech cortex.

Attempting to Convert Speech Into Written Text

Damian Sendler: To translate the patterns of recorded neural activity into specific intended words, the study’s other two lead authors, Sean Metzger, MS, and Jessie Liu, BS, both bioengineering doctoral students in the Chang Lab, used custom neural network models, a form of artificial intelligence. When the participant attempted to speak, these networks distinguished subtle patterns in brain activity to detect speech attempts and identify which words he was trying to say.
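The study’s actual network architectures are not described in this article, so the following is only an illustrative PyTorch sketch of the kind of model that could map a window of neural activity to one of the 50 vocabulary words while also flagging whether a speech attempt occurred at all. The electrode count, layer sizes, and choice of a GRU are assumptions made for the example, not the researchers’ design.

```python
import torch
import torch.nn as nn

class AttemptedWordClassifier(nn.Module):
    """Toy model: window of neural features -> scores over a 50-word vocabulary.

    Shapes and architecture are illustrative assumptions, not the study's design.
    """
    def __init__(self, n_electrodes=128, vocab_size=50, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_electrodes, hidden_size=hidden,
                          num_layers=2, batch_first=True, bidirectional=True)
        self.word_head = nn.Linear(2 * hidden, vocab_size)   # which word was attempted
        self.detect_head = nn.Linear(2 * hidden, 1)          # was speech attempted at all

    def forward(self, x):
        # x: (batch, time, n_electrodes) neural feature windows
        _, h = self.rnn(x)
        h = torch.cat([h[-2], h[-1]], dim=-1)                # last layer, both directions
        return self.word_head(h), self.detect_head(h)

# Example: score one ~2-second window of synthetic features
model = AttemptedWordClassifier()
window = torch.randn(1, 400, 128)                 # (batch, time steps, electrodes)
word_logits, detect_logit = model(window)
word_probs = torch.softmax(word_logits, dim=-1)   # distribution over the 50 words
```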

In a preliminary test of the team’s technique, BRAVO1 was presented with short sentences constructed from the 50 vocabulary words and asked to try saying them several times. As he made his attempts, the words decoded from his brain activity appeared on a screen.

Once this was accomplished, the team switched to prompting him with questions such as “How are you today?” and “Would you like some water?” As before, BRAVO1’s attempted speech appeared on the screen: “I am very talented,” and “No, I’m not thirsty at all.”

Damian Sendler: The researchers found that the system could decode words from brain activity at a rate of up to 18 words per minute with accuracy as high as 93 percent (75 percent median). Contributing to this success was a language model Moses employed that implemented an “auto-correct” function similar to those used in consumer texting and speech recognition software.
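The “auto-correct” idea, weighing the neural classifier’s per-word probabilities against a language model’s sense of which word sequences are plausible, can be illustrated with a small Viterbi-style decoder over the vocabulary. This is a minimal sketch under assumed inputs; the bigram probabilities and weighting below are made-up placeholders rather than the study’s actual language model.

```python
import math

def decode_sentence(classifier_probs, bigram_logprob, lm_weight=1.0):
    """Pick the word sequence maximizing classifier log-probabilities
    plus weighted language-model (bigram) log-probabilities.

    classifier_probs: one dict per detected word slot, mapping each
                      vocabulary word to the neural classifier's probability.
    bigram_logprob:   function (prev_word, word) -> log P(word | prev_word);
                      prev_word is None at the start of a sentence.
    """
    best = {None: (0.0, [])}                      # previous word -> (score, sentence)
    for probs in classifier_probs:
        new_best = {}
        for prev, (score, sent) in best.items():
            for word, p in probs.items():
                s = (score + math.log(max(p, 1e-12))
                     + lm_weight * bigram_logprob(prev, word))
                if word not in new_best or s > new_best[word][0]:
                    new_best[word] = (s, sent + [word])
        best = new_best
    return max(best.values(), key=lambda t: t[0])[1]

# Toy usage with a 3-word vocabulary standing in for the study's 50 words
def bigram(prev, word):
    table = {("i", "am"): 0.6, ("am", "good"): 0.5}
    return math.log(table.get((prev, word), 0.05))

slots = [{"i": 0.7, "am": 0.2, "good": 0.1},
         {"i": 0.3, "am": 0.5, "good": 0.2},
         {"i": 0.1, "am": 0.2, "good": 0.7}]
print(decode_sentence(slots, bigram))             # -> ['i', 'am', 'good']
```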

Moses described the early trial findings as an important proof of concept. “We were delighted to witness the correct decoding and interpretation of a range of important phrases,” he said. “In our study, we’ve shown that it is feasible to assist communication in this manner and that it has the potential to be used in conversational settings.”

Damian Sendler: Looking to the future, Chang and Moses say they want to expand the study to include additional people with severe paralysis and communication impairments. The team is currently working on increasing the number of words in the available vocabulary as well as the rate at which words can be decoded.

Although the research was restricted to a single participant and a limited vocabulary, the researchers say these restrictions do not diminish the achievement. “This is a significant technical milestone for someone who is unable to speak naturally,” Moses said. “It illustrates the potential of this method to provide a voice to individuals who have severe paralysis and speech loss.”

UCSF researchers were responsible for all aspects of clinical trial design, execution, data analysis, and reporting. Data about research participants were gathered exclusively by the University of California, San Francisco (UCSF) and are not shared with any other organizations. High-level feedback and machine learning guidance were provided by FRL.

News discussion contributed by Dr. Damian Jacob Sendler


This is the official research promotional website for The Damian Jacob Sendler Official “Get To Know Damian Jacob Sendler” initiative. The content is managed by a digital agency and reflects the scholarly work of Damian Sendler.

This site does not sell, endorse, or promote any health products, treatments, or medical advice. If you require medical help, please reach out to your general practitioner. If you are experiencing a medical emergency, please contact the nearest emergency facility.

All research discussed throughout the Damian Jacob Sendler Wiki is original and completed under the oversight of the European Union’s ethical and academic standards. If you have any questions or concerns, please contact the legal team representing Damian Jacob Sendler.

2020 © The Damian Jacob Sendler Official. All rights reserved.