Respiratory Disease Identified with Deep Learning


A novel AI algorithm developed at EPFL and University Hospital Geneva (HUG) will power Pneumoscope, an intelligent stethoscope with the potential to revolutionize respiratory disease management in low-resource and distant settings.

Air generates a characteristic whooshing sound as it flows through the maze of microscopic channels in our lungs. When these pathways are narrowed by asthmatic inflammation or clogged with the secretions of infectious bronchitis, the sound changes in distinct ways. Listening for these diagnostic signals with a stethoscope placed on the chest, a practice known as auscultation, has become an essential part of nearly every health check-up.

Yet despite two centuries of experience with stethoscopes, auscultation interpretation remains highly subjective, with one practitioner hearing something different from the next. Indeed, depending on where you are, the same sound may be described as sizzling, exploding candies, Velcro, or fried rice, among other things. Accuracy is further influenced by the health worker’s level of experience and expertise.

These complexities make auscultation an ideal challenge for deep learning, which has the potential to classify audio patterns objectively. Deep learning has already been shown to improve on human interpretation in a variety of sophisticated medical imaging exams, such as X-rays and MRI scans.

A new study published in npj Digital Medicine by EPFL’s intelligent Global Health research group (iGH), based in the Machine Learning and Optimization Laboratory, a hub of interdisciplinary AI specialists in the School of Computer and Communication Sciences, describes DeepBreath, an AI algorithm that demonstrates the potential of automated interpretation in the diagnosis of respiratory disease.

“What makes this study particularly unique is the diversity and rigorous collection of the auscultation sound bank,” said the senior author of the study, Dr. Mary-Anne Hartley, a medical doctor and biomedical data scientist who heads iGH. Almost 600 pediatric outpatients were recruited across five countries: Switzerland, Brazil, Senegal, Cameroon, and Morocco. Breath sounds were recorded from patients under the age of 15 presenting with the three most common types of respiratory disease: radiographically confirmed pneumonia, and clinically diagnosed bronchiolitis and asthma.

“Respiratory disease is the number one cause of preventable death in this age group,” explained Professor Alain Gervaix, Head of the Department of Pediatric Medicine at HUG and founder of Onescope, the startup that will bring this intelligent stethoscope, which integrates the DeepBreath algorithm, to market. “This work is a perfect example of a successful collaboration between HUG and EPFL, between clinical studies and basic science. The DeepBreath-powered Pneumoscope is a breakthrough innovation for the diagnosis and management of respiratory diseases,” he continued.

Dr. Hartley’s team is leading the AI development for Onescope and she is particularly excited by the potential of the tool in low-resource and remote settings. “Reusable, consumable-free diagnostic tools like this intelligent stethoscope have the unique advantage of guaranteed sustainability,” she explained, adding “AI tools also have the potential to continually improve themselves and I am hopeful that we could expand the algorithm to other respiratory diseases and populations with further data.”

DeepBreath was trained on patients from Switzerland and Brazil and then validated on recordings from Senegal, Cameroon, and Morocco, giving insight into the geographic generalizability of the tool. “You can imagine that there are many differences between emergency rooms in Switzerland, Cameroon, and Senegal,” said Dr. Hartley. She listed examples, including “the soundscape of background noise, the way the clinician holds the stethoscope that is recording the sound, the epidemiology, and the local protocols for diagnosis.”

With adequate data, an algorithm should be able to detect the signal among the noise and be robust to these variations. Despite the modest number of patients, DeepBreath maintained a remarkable performance across multiple sites, indicating the possibility for even greater improvement with more data.
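The site-held-out evaluation described above can be sketched in a few lines of code. This is purely an illustration, not the authors’ actual pipeline: the recording data here are synthetic placeholders, and only the idea of partitioning by recruitment site (train on Switzerland and Brazil, validate on the remaining countries) reflects the study.

```python
# Illustrative sketch (not the study's code): testing geographic
# generalizability by holding entire recruitment sites out of training.
TRAIN_SITES = {"Switzerland", "Brazil"}
VALIDATION_SITES = {"Senegal", "Cameroon", "Morocco"}

def split_by_site(recordings):
    """Partition recordings into train/validation sets by recruitment site,
    so the model is never evaluated on a site it has seen during training."""
    train = [r for r in recordings if r["site"] in TRAIN_SITES]
    validation = [r for r in recordings if r["site"] in VALIDATION_SITES]
    return train, validation

# Synthetic example: one placeholder recording per country.
recordings = [
    {"site": s, "label": "pneumonia"}
    for s in ("Switzerland", "Brazil", "Senegal", "Cameroon", "Morocco")
]
train, validation = split_by_site(recordings)
```

Because the validation sites differ in background soundscape, clinician technique, and local epidemiology, strong performance on them is a more honest estimate of how the tool would behave when deployed somewhere new.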

The addition of strategies aimed at demystifying the inner workings of the algorithm’s “black box” was a particularly distinctive contribution of the study. The authors were able to show that the model did, in fact, use the breath cycle to make its predictions, and which sections of it were most important. Proving that an algorithm uses breath sounds rather than “cheating” by exploiting biased signatures in the background noise addresses a crucial gap in the current literature, one that can undermine trust in such systems.

The multidisciplinary team is working to prepare the algorithm for real-world use in their intelligent stethoscope, Pneumoscope. A major next task is to repeat the study on more patients using recordings from this newly developed digital stethoscope, which also records temperature and blood oxygenation. “Combining these signals together will likely improve the predictions even further,” said Dr. Hartley.


Driven by a deep passion for healthcare, Haritha is a dedicated medical content writer with a knack for transforming complex concepts into accessible, engaging narratives. With extensive writing experience, she brings a unique blend of expertise and creativity to every piece, empowering readers with valuable insights into the world of medicine.
