Fear Trumps Happiness in Vocal Cues

Given only vocal cues, humans identify fear in other people’s voices faster than they identify happiness, researchers say.

The cause lies in biological-survival imperatives. And the implications may prevent your next computer technical-support call from ending in a one-sided screaming match with a voice recording.

Understanding the time it takes to identify an emotion can help engineers develop better automated call centers, aid psychologists in teaching people with autism to recognize subtle social cues, and help public speakers analyze the effectiveness of their speeches.

Researchers at McGill University in Canada and the Max Planck Institute for Human Cognitive and Brain Sciences in Germany have measured the time it takes for people to correctly identify certain emotions (anger, disgust, fear, sadness, and happiness). Experimenters speak a neutral, meaningless phrase (e.g., “The rivix jolled the silling”), which is then broken into seven syllable-based segments, and the time each participant takes to react to each segment is recorded.
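
As a rough sketch of the gated-listening setup described above, the Python snippet below simulates a single trial: a listener hears successively longer segments, and we record the first gate at which they name the intended emotion. The seven-segment split comes from the study; the function name and trial data are hypothetical.

```python
# Hypothetical sketch of the syllable-gating setup described above.
# The seven-gate split matches the article; the trial data and
# function name are invented for illustration.

SEGMENTS = 7  # each utterance is divided into seven syllable-based gates

def identification_point(responses, target_emotion):
    """Return the first gate (1..SEGMENTS) at which the listener's
    response matches the intended emotion, or None if never matched."""
    for gate, response in enumerate(responses, start=1):
        if response == target_emotion:
            return gate
    return None

# One simulated trial: the listener's guess after each successive gate
trial = ["neutral", "neutral", "fear", "fear", "fear", "fear", "fear"]
assert len(trial) == SEGMENTS
print(identification_point(trial, "fear"))  # -> 3
```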

Marc Pell, from McGill University’s School of Communication Sciences and Disorders, says vocal emotion recognition is important, though it is rarely studied compared with facial expression.

“When you look at a face, all of the information that will allow you to recognize the emotion is available instantaneously if you’re focusing on it,” says Pell. “What is different about the voice is that emotions have to evolve over time.”

According to Pell, certain emotions require less acoustic information before participants can recognize them. Fear, anger, and sadness were among the easiest emotions to identify (about 500 milliseconds), while happiness and disgust took two to three times as long (1,000 to 1,500 milliseconds).

Researchers think there’s a biological reason for this. The faster an emotion can be identified by sound alone, the more important it is as an evolutionary survival mechanism. Emotions that take longer to identify “might have a more social function, such as happiness,” says Pell.

Strangely, this isn’t true for all of our senses.

“If you look at facial recognition, [identifying] happy is extremely fast,” says Pell. “So there’s what we call a happy face advantage because one can very quickly determine that a face conveys happiness and yet in the voice one sees that happiness actually takes quite some time to figure out.”

And what about differences between men and women? The study found that while men and women clearly express emotions differently, male and female listeners were equally good at recognizing them. Women may pay more attention to emotions than men, according to Pell, but their ability to recognize emotions by ear appears to be the same.

In interactions with machines, automated call centers that handle customer support or emergencies are one common area that could benefit from emotion recognition research, according to M. Ehsan Hoque, a doctoral student in the Affective Computing Group at MIT’s Media Lab.

“Assume that an automated system can recognize the frustration in your voice,” says Hoque. “For example, someone on the road gets in an accident, calls in and gets an automated system. Assume the automated agent can sense the fear or sense of urgency in the caller’s voice and then delegate the call right away to a human responder.”
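
A minimal sketch of the routing idea Hoque describes, assuming a hypothetical urgency score produced by a speech-emotion classifier (the threshold and names here are illustrative, not a real system’s API):

```python
# Illustrative escalation rule: route urgent-sounding callers to a human.
# The score would come from a speech-emotion classifier; here it is
# just a number between 0 and 1, and the threshold is an assumption.

def route_call(urgency_score: float, threshold: float = 0.8) -> str:
    """Pick a destination for the call based on detected urgency."""
    return "human_responder" if urgency_score >= threshold else "automated_menu"

print(route_call(0.93))  # classifier hears fear/urgency -> human_responder
print(route_call(0.12))  # calm caller -> automated_menu
```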

People with autism could benefit from this research as well. Because autism hinders a person’s ability to recognize subtle changes in vocal expression, such as intonation and emotional context, understanding how the brain connects auditory signals with their intended meanings could help in creating educational programs that teach those subtle cues.

“We’re trying to understand the prosody patterns of emotion,” says Hoque. “Especially with autism, it’s not about what you say, but how you say it.”

Hoque says other applications for emotion recognition research include systems that help people analyze their own speech, either in everyday conversation or in public speaking, so they can fine-tune details like voice inflection, pauses, and average volume to hold their audience’s interest.
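
As a sketch of the kind of self-analysis tool Hoque mentions, the snippet below computes two simple prosodic measures from a mono audio signal: average volume (RMS energy) and the fraction of time spent in pauses. The frame size and silence threshold are illustrative assumptions, not values from the research.

```python
import numpy as np

def prosody_summary(signal: np.ndarray, sample_rate: int,
                    frame_ms: int = 25, silence_rms: float = 0.01) -> dict:
    """Summarize average volume and pause proportion over fixed frames.
    Frame length and silence threshold are illustrative, not tuned."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))  # per-frame loudness
    return {
        "average_rms": float(rms.mean()),
        "pause_fraction": float((rms < silence_rms).mean()),
    }

# Example: one second of a 220 Hz tone followed by one second of silence
sr = 16000
t = np.arange(sr) / sr
tone = 0.1 * np.sin(2 * np.pi * 220 * t)
signal = np.concatenate([tone, np.zeros(sr)])
print(prosody_summary(signal, sr))  # pause_fraction is about 0.5
```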

Considering the increasing automation of the man-made world, researchers like Pell see emotion research as an important next step in blurring the line between our human and artificial environments.

“I think the goal of those systems in robotics is to have naturalistic, human-like interactions and emotion is the huge thing they need to add to these systems,” says Pell. “My motivation is to understand natural human communication in its full complexity.”

Garret Fitzpatrick