How your voice could reveal hidden disease

Mood and psychiatric disorders (depression, schizophrenia, bipolar disorder)

No established biomarkers exist for diagnosing depression. Yet if you’re feeling down, there’s a good chance your friends can tell – even over the phone.

“We carry a lot of our mood in our voice,” says Dr. Powell. Bipolar disorder can also alter voice, making it louder and faster during manic periods, then slower and quieter during depressive bouts. The catatonic stage of schizophrenia often comes with “a very monotone, robotic voice,” says Dr. Anderson. “These are all something an algorithm can measure.”

Apps are already being used – often in research settings – to monitor voices during phone calls, analyzing rate, rhythm, volume, and pitch to predict mood changes. For example, the PRIORI project at the University of Michigan is working on a smartphone app to identify mood changes in people with bipolar disorder, especially shifts that could increase suicide risk.

The content of speech may also offer clues. In a University of California, Los Angeles, study published in the journal PLOS ONE, people with mental illnesses answered computer-programmed questions (like “How have you been over the past few days?”) over the phone. An app analyzed their word choices, paying attention to how they changed over time. The researchers found that AI analysis of mood aligned well with doctors’ assessments and that some people in the study actually felt more comfortable talking to a computer.

Respiratory disorders (pneumonia, COPD)

Beyond talking, respiratory sounds like gasping or coughing may point to specific conditions. “Emphysema cough is different, COPD cough is different,” says Dr. Bensoussan. Researchers are trying to find out if COVID-19 has a distinct cough.

Breathing sounds can also serve as signposts. “There are different sounds when we can’t breathe,” says Dr. Bensoussan. One is called stridor, a high-pitched, wheeze-like sound that usually results from a narrowed or blocked upper airway. “I see tons of people [with stridor] misdiagnosed for years – they’ve been told they have asthma, but they don’t,” says Dr. Bensoussan. AI analysis of these sounds could help doctors more quickly identify respiratory disorders.

Pediatric voice and speech disorders (speech and language delays, autism)

Babies who are later diagnosed with autism cry differently as early as 6 months of age, which means an app like ChatterBaby could help flag children for early intervention, says Dr. Anderson. Autism is linked to several other diagnoses, such as epilepsy and sleep disorders, so analyzing an infant’s cry could prompt pediatricians to screen for a range of conditions.

ChatterBaby has been “incredibly accurate” in identifying when babies are in pain, says Dr. Anderson, because pain increases muscle tension, resulting in a louder, more energetic cry. The next goal: “We’re collecting voices from babies around the world,” she says. The researchers will then track those children for 7 years to see whether early vocal signs can predict developmental disorders. Vocal samples from young children could serve a similar purpose.

And that’s only the beginning

Eventually, AI technology may pick up disease-related voice changes that we can’t even hear. In a new Mayo Clinic study, certain vocal features detectable by AI – but not by the human ear – were linked to a three-fold increase in the likelihood of having plaque buildup in the arteries.

“Voice is a huge spectrum of vibrations,” explains study author Amir Lerman, MD. “We hear a very narrow range.”

The researchers aren’t sure why heart disease alters voice, but the autonomic nervous system may play a role, because it regulates the voice box as well as blood pressure and heart rate. Dr. Lerman says other conditions, like diseases of the nerves and gut, may similarly alter the voice. Beyond patient screening, this discovery could help doctors adjust medication doses remotely, in line with these inaudible vocal signals.

“Hopefully, in the next few years, this is going to come to practice,” says Dr. Lerman.

Still, for all that promise, privacy concerns remain. Voice is an identifier protected by the federal Health Insurance Portability and Accountability Act, which safeguards the privacy of personal health information. That is a major reason why no large voice databases exist yet, says Dr. Bensoussan. (This makes collecting samples from children especially challenging.) Perhaps more concerning is the potential for diagnosing disease based on voice alone. “You could use that tool on anyone, including officials like the president,” says Dr. Rameau.

But the primary hurdle is the ethical sourcing of data to ensure a diversity of vocal samples. For the Voice as a Biomarker project, the researchers will establish voice quotas for different races and ethnicities, ensuring algorithms can accurately analyze a range of accents. Data from people with speech impediments will also be gathered.

Despite these challenges, researchers are optimistic. “Vocal analysis is going to be a great equalizer and improve health outcomes,” predicts Dr. Anderson. “I’m really happy that we are beginning to understand the strength of the voice.”

A version of this article first appeared on WebMD.com.
