Cry Me A Driver: Why Computers Fail At Detecting Emotions

By Rich Firth-Godbehere

Do you remember the episode of Star Trek: The Next Generation in which Data, the emotionless android, concludes that Chief O’Brien’s fiancée, Keiko, will be happier after cancelling their wedding? Data also reasons that, as Chief O’Brien is happy when Keiko is happy, he should deliver the news that Keiko has cancelled the wedding as positively as he can. The result, as you might expect, is Chief O’Brien storming off in anger, and Data concluding that ‘it would appear that my program designed to predict emotional responses needs... adjustment’.

According to proponents of Affective Computing, the days when machines have emotions are much closer than the twenty-fourth century. There are dozens of apps and APIs that claim to be able to read emotions. These include Affectiva, EmoVu, Nviso, Kairos Emotion Analysis, Microsoft’s Cognitive Services, Sightcorp’s Insight SDK, SkyBiometry, Face++, and Imotions’s Emotient. There are also gadgets that read emotions, such as the tiny Bluetooth and Wi-Fi enabled EmoSPARK.

These gadgets and platforms all claim to be able to recognise emotions through facial recognition and speech analysis. The idea is that they can uncover how we feel at any given moment and then provide some information for, say, market analysis, or even save us from ourselves. For example, a self-driving car might be able to sense when its driver is driving angrily or feeling depressed, and so take over and deliver their human safely home.

The other side of this is that these and similar technologies could learn to feel these emotions themselves, allowing intelligent assistants – such as Apple’s Siri and Amazon’s Alexa, found in its ‘Echo’ devices – to chat more naturally with us, sharing our feelings and really joining in with our conversations.

Other researchers, such as David Hanson of Hanson Robotics, have been creating terrifying disembodied human heads, such as the lovely Sophia, that can emote right back at you, making any human-machine chinwag even more lifelike.

All this might conjure up visions of a terrible dystopian future, in which our slave-like vacuum cleaners buzz around the floor sucking up the stale crumbs from last night’s Quattro Formaggi pizza with more enthusiasm than is comfortable. A world in which Siri sulks, refusing to talk to us for days every time we suggest that our iPhone might work better as a projectile, and our car autopilots wake up in a particularly bad mood. Thankfully, Affective Computing is a long way from creating machines that feel the way humans do, because of three main problems.

The Psychopath Test

The first problem – and quite an alarming one – is that just because a machine can recognise emotions and act as if it has feelings of its own, that doesn’t necessarily mean it has any emotions. Just because it appears to be sad or happy, or says that it thinks we ought to know that it’s feeling very depressed, doesn’t mean that it is. There is a group of humans able to recognise emotions in others and then display those emotions themselves while, in fact, lacking any genuine empathy. This group is also excellent at using the disconnect to manipulate others. We call them psychopaths. I don’t know about you, but the thought of a bunch of psychopathic machines sitting in our pockets and driving our cars, simulating and manipulating emotional responses while their cold, dark, processor hearts plan our downfall, makes me a tad nervous.

The second problem is that even if we can make machines that have human-like feelings, the idea that we can understand emotions from faces or voices is a little bit out of date. The man to blame is Charles Darwin, who in The Expression of the Emotions in Man and Animals, claimed that “the force of language is much aided by the expressive movements of the face and body”.

Ekman and Friesen's 6 Basic Emotions. Can you guess which is which?

In the late 1960s and early 1970s, two psychologists – Paul Ekman and Wallace Friesen – claimed to have shown that there were certain emotions that all humans shared. This was done by asking people to point to a picture of the face someone might pull in a given scenario: smelling a bad smell, for example, or looking up to see the ominous presence of a wild pig in the doorway (that is one of the actual stories used). By doing this research in both literate Eastern and Western cultures, and with the cannibalistic, brain-eating Fore tribesmen of Papua New Guinea, Ekman and Friesen came to the conclusion that there are six ‘Basic Emotions’: anger, disgust, fear, happiness, sadness, and surprise.

Most forays into getting computers to read emotions from faces utilise a methodology, also developed by Ekman and Friesen, called the Facial Action Coding System (FACS). FACS breaks the face down into Action Units (AUs); computer processing then works out how someone feels based on how each AU combines with the rest. However, FACS has caused problems. For example, in 2015 a group at Glasgow University led by Dr Rachael Jack used FACS and awkwardly concluded that there were only four basic emotions, not six.

This was because fear and surprise shared an expression, as did disgust and anger. The latter came as little surprise (or is that fear?) to many researchers, as they had long noticed that neither young children nor people with Huntington’s disease could tell the difference between those two expressions.
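To give a flavour of how a FACS-style system works, here’s a minimal sketch in Python that scores a set of detected Action Units against prototype AU combinations for the six Basic Emotions. The prototypes and the scoring rule are simplified illustrations for this article – not Ekman’s actual coding rules, and not any particular product’s algorithm.

```python
# Sketch of a FACS-style classifier: map detected Action Units (AUs) to one of
# Ekman and Friesen's six Basic Emotions. The AU combinations below are
# simplified, illustrative "prototypes", not the full FACS coding rules.

EMOTION_PROTOTYPES = {
    "happiness": {6, 12},          # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},    # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20}, # overlaps heavily with surprise
    "anger":     {4, 5, 7, 23},    # brow lowerer + lid/lip tighteners
    "disgust":   {9, 15, 16},      # nose wrinkler + lip movements; overlaps with anger
}

def classify(detected_aus: set[int]) -> str:
    """Return the emotion whose prototype best overlaps the detected AUs."""
    def score(emotion: str) -> float:
        prototype = EMOTION_PROTOTYPES[emotion]
        return len(prototype & detected_aus) / len(prototype)
    return max(EMOTION_PROTOTYPES, key=score)

# A face showing AU1, AU2 and AU5 scores highly for both 'surprise' and 'fear' --
# the same ambiguity that led Jack's group to collapse six emotions into four.
print(classify({1, 2, 5}))
```

Even in this toy version, the overlap between the fear and surprise prototypes means the system is effectively guessing between them – which is exactly the kind of muddle the Glasgow study pointed at.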

The notion of six Basic Emotions has been challenged many times, but many Affective Computing systems still rely on the four-decade-old science. Even Ekman has since revised his list of Basic Emotions, adding Amusement, Contempt, Contentment, Embarrassment, Excitement, Guilt, Pride in achievement, Relief, Satisfaction, Sensory pleasure, and Shame.

Noted scholars Disney, meanwhile, opted for only five of the six Basic Emotions in last year's Inside Out.

Analysing faces, though, isn't the most difficult part of detecting emotions. Nothing compares to attempting to analyse the varied voices, accents, and languages found across human cultures. Imagine, if you will, a dry, sarcastic individual jokingly suggesting (he thinks) that his psychopathic, loyal, and chatty autonomous car should drive through a busy crossing because the people on it don’t really matter. Would you trust the car to know that it shouldn't obey?

A New Model?

Ultimately, the very idea of Basic Emotions is getting a bit out of date. There are other models that fit more recent observations better.

The "Appraisal" Model suggests that we base how we feel as much on the circumstances at the time as the feelings themselves. The "Affective Trajectory" Model takes Appraisal and adds memories of the way we experienced the same feelings in the past, and predictions about how we might feel in the future.

Some believe that emotions are entirely social constructions, and that the words we use for them and the place we grow up in mould how we perceive them. The "Psychological Construction" Model adds Valence – how pleasant or unpleasant a feeling, or Core Affect, is – and Arousal – how strongly that Core Affect is felt – to the Appraisal and Affective Trajectory Models. It then adds how our society has constructed our reactions to, and understanding of, those feelings. Psychological Construction claims that the brain uses all of this data in parallel to ‘construct’ emotional experiences.
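As a purely illustrative sketch (the labels and thresholds below are invented for this article), the Core Affect part of that story can be pictured as placing a feeling on a two-dimensional grid of Valence and Arousal and only then reaching for a word:

```python
# Purely illustrative: place a feeling on a valence/arousal grid (the "Core
# Affect" part of the Psychological Construction story) and pick a rough
# emotion word for it. The labels and cut-offs are invented; the real model
# also folds in appraisal, memory, and cultural construction.

def label_core_affect(valence: float, arousal: float) -> str:
    """valence and arousal both run from -1.0 (low) to +1.0 (high)."""
    if arousal >= 0:
        return "excited / elated" if valence >= 0 else "angry / afraid"
    return "calm / content" if valence >= 0 else "sad / depressed"

print(label_core_affect(valence=-0.7, arousal=0.8))   # angry / afraid
print(label_core_affect(valence=0.4, arousal=-0.5))   # calm / content
```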

Oh, and then there are the "lost emotions". While all these models are being debated, historians keep digging up feelings that people seem to have felt in the past but don’t appear to experience anymore.

With this fog of conflicting theories, what hope is there of Androids or iPhones of the future dreaming wistfully of electric sheep?

All is not entirely lost: there have been some more recent attempts to bring other theories of emotion into the world of Affective Computing. The idea of using fuzzy logic systems – in which a computer reasons with degrees of truth between 0 and 1, rather than just 1 or 0 – in Affective Computing is at least ten years old. More recently, this form of computing has become tied to some of the theories of emotion that don’t put all human feelings into six boxes. For example, 2015 saw the publication of a journal article exploring how adaptive fuzzy systems might use affective trajectories to monitor the emotional state of students.
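As a toy illustration of the fuzzy idea (the membership functions and category names below are invented for this article), a driver’s ‘anger’ score doesn’t have to land in a single box – it can belong partly to several overlapping categories:

```python
# Tiny sketch of fuzzy logic applied to emotion: instead of deciding a driver
# simply "is" or "is not" angry (1 or 0), assign degrees of membership between
# 0 and 1 to overlapping categories. Triangular membership functions and the
# category boundaries here are invented purely for illustration.

def triangular(x: float, left: float, peak: float, right: float) -> float:
    """Degree (0..1) to which x belongs to a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzify_anger(score: float) -> dict[str, float]:
    """Map a raw 0..1 'anger' score into overlapping fuzzy categories."""
    return {
        "calm":    triangular(score, -0.5, 0.0, 0.5),
        "annoyed": triangular(score,  0.2, 0.5, 0.8),
        "furious": triangular(score,  0.5, 1.0, 1.5),
    }

# A score of 0.6 is partly 'annoyed' and partly 'furious' -- no single box.
print(fuzzify_anger(0.6))
```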

Earlier this year, another article examined how fuzzy systems might combine with the Psychological Construction Model to allow computers to work out the appropriate labels, or words, for emotions. This is all good, because emotions do seem to be pretty fuzzy, and any attempt to make machines feel like us should be fuzzy too. So at least the theory behind the next generation of emotional computers is being worked on.

Rutger Hauer as the newly emoting, yet still psychotic, Replicant Roy Batty in Blade Runner.

Unfortunately though, this fuzzy version of Affective Computing doesn’t seem to have any practical applications as yet. There appear to be no apps, programs, or gadgets using other emotion models in development, and those that are being developed still use the slightly dodgy six Basic Emotions and FACS models.

But even if we did manage to crack computing emotions, and gadgets and apps were to use these other approaches, there would still be no genuine android tears in the rain. It’s hard to see how Affective Computing can do more than ask machines to simulate, rather than experience, emotions. The unfeeling programs that try to predict our emotions still have some way to go.

Read More: Rutger Hauer on Blade Runner, Its Sequel and His Return to Sci-Fi with Alien: Out of Shadows