A voice prosthetic built by a combined team of Duke neuroscientists, neurosurgeons, and engineers can convert a person’s brain signals into what they’re trying to say.
Appearing Nov. 6 in the journal Nature Communications, the new technique might one day allow people who are unable to talk due to neurological disorders to regain the ability to communicate through a brain-computer interface.
“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine and one of the lead researchers involved in the project. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”
Imagine listening to an audiobook at half speed. That's about the pace of the best speech decoding currently available, which clocks in at roughly 78 words per minute. People, however, speak at roughly 150 words per minute.
This lag between spoken and decoded speech rates is partly due to the relatively few brain-activity sensors that can be fitted onto the paper-thin material that rests on the surface of the brain. Fewer sensors provide less decipherable information for the decoder to work with.
To overcome those constraints, Cogan collaborated with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in manufacturing high-density, ultra-thin, and flexible brain sensors.
For this project, Viventi and his team packed 256 microscopic brain sensors onto a postage-stamp-sized piece of flexible, medical-grade plastic. When coordinating speech, neurons just a grain of sand apart can have dramatically different activity patterns, so distinguishing signals from neighboring brain cells is vital for making accurate predictions about intended speech.
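To get a feel for what "high-density" means here, a back-of-the-envelope calculation helps. The grid layout and patch size below are illustrative assumptions, not specifications from the paper:

```python
# Back-of-the-envelope electrode spacing. The 16 x 16 grid and the
# 20 mm x 20 mm patch are illustrative assumptions, not specs from
# the Duke device.
n_side = 16                          # 16 x 16 = 256 contacts
patch_mm = 20.0                      # roughly postage-stamp width
pitch_mm = patch_mm / (n_side - 1)   # center-to-center spacing
print(f"approximate electrode pitch: {pitch_mm:.2f} mm")  # ~1.33 mm
```

Under those assumptions, neighboring contacts sit a millimeter or so apart, close to the "grain of sand" scale at which neural activity patterns diverge.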
Cogan and Viventi collaborated with numerous Duke University Hospital neurosurgeons, including Derek Southwell, M.D., Ph.D., Nandan Lad, M.D., Ph.D., and Allan Friedman, M.D., to recruit four patients to test the implants. The trial required the researchers to temporarily implant the device in patients undergoing brain surgery for another reason, such as treating Parkinson’s disease or removing a tumor. Cogan and his team had a limited amount of time to test their device in the OR.
"I like to compare it to a NASCAR pit crew," said Cogan. "We didn't want to disrupt the operating procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and the medical team said 'Go!' we sprang into action and the patient performed the task."
The task was as simple as listening and repeating. Participants listened to a series of nonsense words, such as "ava," "kug," or "vip," and then spoke them aloud. The device recorded activity from each patient's speech motor cortex as it coordinated the more than 100 muscles that move the lips, tongue, jaw, and larynx.
Afterward, Suseendrakumar Duraivel, the new report's first author and a biomedical engineering graduate student at Duke, fed the neural and speech data from the operating room into a machine learning algorithm to see how accurately it could predict what sound was being made based solely on the brain activity recordings.
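The article doesn't spell out the decoding pipeline, but the core idea can be sketched: treat each spoken sound as a labeled example and train a classifier to predict the label from the pattern of activity across electrodes. The sketch below is a minimal illustration using synthetic data and an off-the-shelf scikit-learn model; it is not the team's actual method:

```python
# Minimal sketch of sound-from-brain-activity decoding. All data is
# synthetic and the model choice (multinomial logistic regression) is
# an assumption for illustration, not the Duke team's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_CHANNELS = 256                      # electrodes on the flexible array
PHONEMES = ["g", "a", "k", "p", "b"]  # toy label set

# One feature per electrode (think: a power estimate for the window
# around each spoken sound), with a weak label-dependent offset
# injected so the classifier has something to learn.
n_trials = 500
y = rng.integers(len(PHONEMES), size=n_trials)
X = rng.normal(size=(n_trials, N_CHANNELS)) + 0.15 * y[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = accuracy_score(y_test, clf.predict(X_test))
print(f"decoding accuracy: {accuracy:.0%}")
print(f"chance level: {1 / len(PHONEMES):.0%}")
```

On real recordings, the features would come from preprocessing each electrode's signal rather than raw synthetic numbers, and performance would be broken down per sound and per position within the word, as the study did.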
For some sounds and participants, such as /g/ in the word "gak," the decoder was accurate 84% of the time when the sound came first in the string of three that made up a given nonsense word.
Accuracy dropped, however, as the decoder parsed sounds in the middle or at the end of a nonsense word. It also struggled with similar sounds, such as /p/ and /b/.
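Confusions between similar sounds like /p/ and /b/ are typically read off a confusion matrix, which counts how often each true sound is decoded as each candidate sound. A small standalone illustration with made-up labels, not the study's data:

```python
# Toy confusion matrix: /p/ and /b/ are deliberately mixed up to show
# how similar sounds appear as off-diagonal counts. Labels are made up.
from sklearn.metrics import confusion_matrix

true = ["p", "b", "p", "b", "g", "g", "p", "b"]
pred = ["b", "p", "p", "b", "g", "g", "b", "p"]
labels = ["b", "g", "p"]

cm = confusion_matrix(true, pred, labels=labels)
for label, row in zip(labels, cm):
    print(label, row)   # rows: true sound; columns: decoded sound
```

A decoder that reliably separates /g/ from everything else but swaps /p/ and /b/ shows up exactly this way: clean diagonal counts for /g/, large off-diagonal counts between the two confusable sounds.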
Overall, the decoder was accurate 40% of the time. That may seem like a low score, but it is quite impressive given that similar brain-to-speech systems require hours or days' worth of data to draw on. Duraivel's speech decoder, by contrast, was working with only 90 seconds of spoken data from the 15-minute test.
With a new $2.4 million grant from the National Institutes of Health, Duraivel and his mentors are optimistic about developing a cordless version of the device.
"We're now developing the same kind of recording devices, but without any wires," Cogan said. "You'd be able to move around, and you wouldn't have to be tied to an electrical outlet, which is really exciting."
While their work is encouraging, Viventi and Cogan’s speech prosthetic will not be available for purchase anytime soon.
“We’re still at a point where it’s much slower than natural speech,” Viventi said in a recent Duke Magazine article about the technology, “but you can see the trajectory where you might be able to get there.”