ML Used to Decode How Brain Interprets Different Sounds

K.C. Sabreena Basheer | Last Updated: 04 May, 2023 | 3 min read

In a groundbreaking study published in Communications Biology, neuroscientists at the University of Pittsburgh have developed a machine-learning model that sheds light on how brains recognize and categorize different sounds. The insights from this study are expected to pave the way for a better understanding of speech recognition disorders and improve hearing aids.

Also Read: Use of ML in HealthCare: Predictive Analytics and Diagnosis

Sound Recognition vs Facial Recognition

The researchers have drawn parallels between sound recognition and facial recognition. In facial recognition, our brain picks out specific features rather than matching the whole face against a perfect template. Similarly, when recognizing sounds, the brain latches onto the informative features that define a particular sound. This ML model stands to significantly enhance our understanding of the neuronal processing that underlies sound recognition.
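To make the feature-based idea concrete, here is a minimal illustrative sketch in Python: it summarizes sound clips as spectro-temporal features (MFCCs via librosa, a common stand-in for the kinds of features a brain might use) and trains a simple classifier on them. The file names, feature choice, and classifier are all assumptions for illustration; this is not the study's actual model.

```python
# Illustrative sketch of feature-based sound categorization (assumed setup,
# not the study's model): classify calls by summary features, not templates.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def extract_features(path, sr=22050):
    """Summarize a clip as a fixed-length vector of mean MFCC features."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled clips from two call categories.
paths = ["call_a_01.wav", "call_a_02.wav", "call_b_01.wav", "call_b_02.wav"]
labels = np.array([0, 0, 1, 1])

X = np.stack([extract_features(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# The classifier weighs informative features rather than matching a stored template.
print(clf.predict(X))
```

The point of the sketch is the design choice, not the accuracy: a feature-weighting classifier, like the brain as described here, can generalize across speakers because it never requires an exact match to a remembered example.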

Also Read: AI Model That Can Translate Brain Activity into Text

Significance of the Study

The study is a crucial step toward understanding the biology of sound recognition and finding ways to improve it. Since many people experience some degree of hearing loss during their lives, the study carries immense significance for treating speech recognition disorders and improving hearing aids. Moreover, vocal communication is fascinating in itself: it involves brains exchanging ideas through sound.


Humans and animals encounter a wide variety of sounds every day, yet they manage to communicate and understand one another despite differences in accent and pitch. For instance, when we hear “hello,” we recognize its meaning regardless of the speaker’s accent or gender.

Also Read: ML Model Predicts Insomnia With Considerable Accuracy

Guinea Pig Experiment

To study how social animals recognize one another’s sounds, the team built a machine-learning model of sound processing. They recorded brain activity from guinea pigs while the animals listened to their kin’s communication calls, testing whether the brain responses corresponded with the model’s predictions. Neurons responsible for processing sounds lit up with electrical activity when the animals heard a noise carrying the features of a specific call type.
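As a rough sketch of what “corresponded with the model” could mean in practice, one simple and common test is to correlate a model’s response to each sound with a neuron’s measured firing rate. Everything below is placeholder data under that assumption; the study’s actual recordings and analysis are not reproduced here.

```python
# Placeholder sketch: correlate a model's per-sound responses with a neuron's
# firing rates as one simple test of model-brain correspondence.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_sounds = 50

model_response = rng.random(n_sounds)  # assumed model output, one value per sound
# Simulated neuron whose firing rate loosely tracks the model (for illustration).
firing_rate = model_response + 0.3 * rng.standard_normal(n_sounds)

r, p = pearsonr(model_response, firing_rate)
print(f"model-neuron correlation: r = {r:.2f}, p = {p:.3g}")
```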

To check the model’s performance against real-life behavior, the researchers exposed guinea pigs to distinct sound signals, training them to walk to different corners of the enclosure and receive fruit rewards depending on which category of sound was played.

The researchers took it a step further by mimicking how humans recognize words spoken with different accents. They ran guinea pig calls through sound-altering software, speeding them up or slowing them down, raising or lowering their pitch, and adding noise and echoes. The animals performed the task consistently, even when the calls were distorted with artificial echoes or noise.
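For readers curious what such sound-altering software does, here is a hedged sketch of the manipulations described: time-stretching, pitch-shifting, and adding noise or a simple echo to a recorded call. The file name and parameter values are illustrative assumptions, not the tools or settings the researchers used.

```python
# Hedged sketch of the described manipulations; file name and parameters are
# assumptions, not the researchers' actual software or settings.
import numpy as np
import librosa

y, sr = librosa.load("guinea_pig_call.wav", sr=None)  # hypothetical recording

faster = librosa.effects.time_stretch(y, rate=1.5)         # speed up
slower = librosa.effects.time_stretch(y, rate=0.75)        # slow down
higher = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)  # raise pitch 4 semitones
noisy = y + 0.01 * np.random.randn(len(y))                 # add background noise

# Simple echo: mix in a delayed, attenuated copy of the call.
delay = int(0.15 * sr)  # 150 ms
echo = y.copy()
echo[delay:] += 0.5 * y[:-delay]
```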

Future Applications

According to lead author Satyabrata Parida, Ph.D., these insights could help people with neurodevelopmental conditions and guide the engineering of better hearing aids. Although more accurate speech recognition models exist, this one corresponds more closely to behavior and brain activity, offering greater insight into the underlying biology.


Our Say

This study is a significant milestone in understanding the neuronal processing that underlies sound recognition. The machine-learning model can deepen our understanding of speech recognition disorders, improve the design of hearing aids, and benefit those with neurodevelopmental conditions. Its findings will have far-reaching implications for neuroscience, opening new avenues for research and, ultimately, better treatments and outcomes.

Also Read: ChatGPT Outperforms Doctors in Providing Quality Medical Advice

Sabreena Basheer is an architect-turned-writer who's passionate about documenting anything that interests her. She's currently exploring the world of AI and Data Science as a Content Manager at Analytics Vidhya.
