You don’t notice it happening, but in a crowded, noisy room, your brain has the remarkable ability to tune out all but the people you’re talking to. Replicating this behaviour in gadgets like hearing aids has proven very difficult, but researchers might have finally found a solution by listening not only to sounds but also to the wearer’s brain waves.
Like high-end headphones, the most advanced hearing aids on the market can suppress steady background noise, such as nearby traffic, so it’s easier to hear the person the wearer is speaking to. But in a situation where several people are all talking at the same time, the technology inside a hearing aid lacks the ability to boost the sound of one voice over another. It’s a longstanding challenge known as the cocktail party problem, which you might have experienced when trying (unsuccessfully) to talk to your Amazon Echo or Google Home during a party – it affects them as well.
But researchers at Columbia University had the opportunity to work with epilepsy patients undergoing brain surgery to test a new approach to improving how hearing aids work. Using data gathered from electrodes implanted directly into the volunteers’ brains, they found that brain wave activity naturally mirrored the speech patterns of the specific person a listener was focusing on, even when other voices were competing for attention. It’s this behaviour of the brain that the researchers believe could be the key to radically improving the effectiveness of hearing aids.
Nima Mesgarani, from Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute, is working on designing a new type of microphone that uses a combination of modern technologies to boost the sound of just a single voice. With the help of neural networks, speech processing algorithms first take all the din a microphone is picking up and separate it into streams of individual voices. Those streams are then compared to the brain waves of the listener, and the voice that most closely matches the brain’s activity is automatically amplified so it’s easier to discern.
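The separate-compare-amplify pipeline described above can be sketched in code. This is a toy illustration only, using synthetic signals, a crude envelope-correlation matcher, and made-up helper names; it is not the team’s actual algorithm, which relies on neural networks for the voice-separation step:

```python
import numpy as np

def envelope(signal, win=64):
    """Crude amplitude envelope: moving average of the rectified signal."""
    return np.convolve(np.abs(signal), np.ones(win) / win, mode="same")

def pick_attended(streams, neural, gain=4.0):
    """Correlate each separated voice's envelope with the listener's
    neural signal, then amplify the best-matching stream in the mix."""
    scores = [np.corrcoef(envelope(s), neural)[0, 1] for s in streams]
    attended = int(np.argmax(scores))
    mix = sum(gain * s if i == attended else s for i, s in enumerate(streams))
    return attended, mix

# Toy demo: two "voices" whose envelopes fluctuate at different rates;
# the simulated brain signal tracks voice 0's envelope, as the Columbia
# recordings suggest real brain waves track an attended speaker.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
voice0 = np.sin(2 * np.pi * 3 * t) * rng.standard_normal(t.size)
voice1 = np.sin(2 * np.pi * 11 * t) * rng.standard_normal(t.size)
neural = envelope(voice0) + 0.1 * rng.standard_normal(t.size)

attended, mix = pick_attended([voice0, voice1], neural)
print(attended)
```

In this sketch, `pick_attended` plays the role of the comparison step: whichever separated stream’s envelope correlates best with the neural signal is treated as the attended speaker and boosted before remixing.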
The breakthrough is a big improvement on this team’s previous approaches, which required the voice of a preferred speaker to pre-train the algorithm. That worked great for someone wanting to always be able to hear a spouse or a specific friend, but it was impractical for random encounters and nearly impossible to adapt quickly to new people. The updated approach still has its challenges, however. The Columbia University researchers will need to find a way to accurately monitor brain waves without burying an electrode deep in the hearing aid wearer’s brain – a hassle that would outweigh the technology’s benefits.
Beyond making the technology non-invasive, there’s the fact that it has only been tested in relatively quiet indoor environments. The next step will be to try it outdoors, where ambient noise increases dramatically, as does the number of distractions drawing the brain’s attention away from the person the wearer is talking to. It will be a few years before we even see functional prototypes of a wearable hearing aid packing this technology, but in addition to helping the hearing impaired, it could also go a long way towards improving the attentiveness of smart assistants like Alexa and Siri, which are often equally confused by multiple voices.