The image of a lab rat is an iconic symbol of scientific research, and for good reason: These rodents are remarkably good stand-ins for human subjects because of how closely their physiology and genetic make-up resemble ours. Because of this, mice and rats are used to study everything from cancer to diabetes to Alzheimer’s disease.
However, despite their tried-and-true use as animal models, there’s something about these rodents that puzzles scientists: What is all the squeaking about?
Until now, researchers have relied heavily on ambiguous physical cues (such as rats pressing a lever to receive a dose of an addictive substance) or on time-consuming manual analysis of rodent chatter to try to understand what drives their behaviour during trials. Both methods are vulnerable to human error and misinterpretation. But a new project from the University of Washington aims to better decipher the squeaks and chirps of these rodents by using deep learning to analyse their chatter more quickly and reliably, helping researchers understand what the animals are really saying.
The work is called DeepSqueak, and it uses deep learning and machine vision approaches to categorise the enigmatic chirps of mice and rats. Similar to how a self-driving car might take in and evaluate visual data from the road in front of it, DeepSqueak transforms audio recordings of rodent calls into sonogram images and then uses machine vision to analyse them. A paper describing the project was published in January in the journal Neuropsychopharmacology.
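The audio-to-image step can be sketched in a few lines. This is not the authors’ actual pipeline, just a minimal illustration using SciPy’s standard spectrogram routine; the synthetic frequency-sweeping tone standing in for a rat call is an assumption for demonstration purposes.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250_000  # ultrasonic microphones sample far above audible rates
t = np.linspace(0, 0.05, int(fs * 0.05), endpoint=False)
# A rising tone standing in for an ultrasonic rodent call (illustrative only)
call = np.sin(2 * np.pi * (45_000 + 200_000 * t) * t)

freqs, times, power = spectrogram(call, fs=fs, nperseg=512, noverlap=256)
# `power` is a 2-D array: rows are frequency bins, columns are time slices.
# In other words, it is an image, which is what lets machine-vision
# techniques built for pictures be applied to sound.
print(power.shape)
```

Once the recording is in this image form, detecting a call becomes a visual problem, akin to spotting an object in a road scene.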
“We can train the software to analyse these calls in a way that is much more similar to how humans learn,” said Kevin Coffey, a lead author of the report and co-creator of the software. “Rather than mathematically describing what a vocalisation is, we just show it pictures and examples.”
After transforming the audio, DeepSqueak works to categorise the hills and valleys of the waveforms into different sound groups, such as distinct syllables or patterns of background noise, which Coffey and his co-creator Russell Marx taught the program to recognise by first feeding it manually labelled calls. The ability to minutely detect and filter out background noise is particularly important when working with rodents, said Coffey.
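The “show it labelled examples” idea can be illustrated with a deliberately simplified stand-in. DeepSqueak itself trains a deep network on spectrogram images; the sketch below instead reduces each call to two hypothetical features (duration and mean frequency, both invented for this example) and assigns the label of the nearest manually labelled call. The feature values and category names are assumptions, not data from the paper.

```python
import numpy as np

# Hand-labelled training calls: (duration in s, mean frequency in Hz) -> label
labelled = {
    (0.03, 55_000.0): "flat",
    (0.08, 60_000.0): "trill",
    (0.02, 22_000.0): "alarm",
}
features = np.array(list(labelled.keys()))
labels = list(labelled.values())

def classify(duration, mean_freq):
    # Normalise so duration and frequency contribute on comparable scales,
    # then return the label of the closest labelled example.
    scale = features.max(axis=0)
    dists = np.linalg.norm(
        features / scale - np.array([duration, mean_freq]) / scale, axis=1
    )
    return labels[int(np.argmin(dists))]

print(classify(0.025, 23_000.0))  # prints "alarm": nearest labelled call
```

The point of the sketch is the workflow, not the method: categories are defined by labelled examples rather than by hand-written mathematical rules, which is what Coffey means by training the software “in a way that is much more similar to how humans learn.”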
Graphic: Russell Marx and Kevin Coffey
“Even when humans would do this by hand, the calls are hard to pick out of an audio signal when they’re embedded in a lot of background noise,” said Coffey. “[Because] the animals are running around and banging into things.”
Rodents are naturally vocal animals, and previous research has tried to associate certain vocalisations with corresponding emotional states. For example, higher-pitched calls in rats are generally connected with a positive response (e.g. receiving a reward), while lower-pitched calls are considered a sign of a negative response. But this is an inexact science. The researchers behind DeepSqueak hope their tool will contribute to a more nuanced understanding of these sounds.
Researchers can manually analyse the syntax of these calls to better understand behaviour, but this process is not only tedious but also prone to error when dealing with the more complex syllable structures of mid-frequency calls.
The authors of the DeepSqueak paper write that their software not only reduces the number of misidentifications in manual analysis but can analyse calls up to 40 times faster.
In addition to automatically filtering out recognised background noise, DeepSqueak allows users to manually review the identified syllables and to adjust parameters for their specific experiment, such as the rodent species or the classification of syllables. While DeepSqueak can convert, analyse, and output call data on its own, Coffey and Marx agreed it was important to design software that could adapt to a researcher’s needs. For example, researchers experienced in manual vocalisation analysis can use it as a tool to refine their work, while those newer to the space can treat it as an easy entry point into vocalisation research. To serve both groups, the software is free to download and modify from Coffey’s GitHub account.
While the use of deep learning to decode rodent vocalisations is novel, analytical software designed to interpret rodent calls is not. In their report, the researchers specifically draw comparisons with MUPET (Mouse Ultrasonic Profile ExTraction), software released in 2017, and a commercial product called UltraVox. Like DeepSqueak, these tools allow researchers to perform syllable analysis and classify vocalisations by transforming an audio file into images; however, DeepSqueak’s deep learning approach sets it apart from its predecessors.
The new paper found that while DeepSqueak did not always outshine the other software, it did show improvement in filtering out background noise and in detecting calls whose frequencies vary.
Allison Knoll, co-author of the MUPET paper and an assistant professor of research paediatrics at the Keck School of Medicine of the University of Southern California, said that DeepSqueak is a great complementary addition to the advances already being made in the pursuit of this question.
“There remains much mystery about the biological meaning of specific syllable shapes as they relate to ongoing behaviour,” said Knoll, “and increasing the number of tools that labs can use to investigate these differences is a plus!”
While there are no plans to feed human chatter into the DeepSqueak software, the researchers say they hope the better understanding of rodent behaviour and motivation enabled by DeepSqueak will help researchers fine-tune their treatments for humans as well.
“[For example] in drug addiction we need to know not just if the animal is taking drugs but why are they taking the drugs,” said Coffey. “Are they taking the drugs because they like it or because they’re escaping the negative feelings associated with withdrawal?”
With a better understanding of a rodent’s motivational state in a drug addiction trial, researchers might create more effective treatments for people. Additionally, Coffey and Marx say that DeepSqueak can also be used in researching animal models of depression, anxiety, and even Parkinson’s disease.
“The animals can just tell us how they’re feeling with these vocalisations,” said Coffey.
Rodent sounds courtesy of Kevin Coffey.
Sarah Wells is a freelance writer based in Boston writing about the intersection of technology, science, and society. Follow her on Twitter: @saraheswells.
Featured image: Getty Images