How AI is getting emotionally intelligent

Tech that reacts to feelings and facial expressions is about to transform our lives


Rana el Kaliouby has spent her career tackling an increasingly important challenge: computers don’t understand humans. First as an academic at Cambridge University and the Massachusetts Institute of Technology, and now as co-founder and chief executive of Affectiva, a Boston-based AI start-up, Ms el Kaliouby has been working in the fast-evolving field of human-robot interaction (HRI) for more than 20 years.

“Technology today has a lot of cognitive intelligence, or IQ, but no emotional intelligence, or EQ,” she says in a telephone interview. “We are facing an empathy crisis. We need to redesign technology in a more human-centric way.”

That was not much of an issue when computers only performed “back office” functions, such as data processing. But it has become a bigger concern as computers are deployed in more “front office” roles, such as digital assistants and robot drivers. Increasingly, computers are interacting directly with people in many different environments.

This demand has led to the rapid emergence of emotional AI, which aims to build trust in how computers work by improving how computers interact with humans. However, some researchers have already raised concerns that emotional AI might have the opposite effect and further erode trust in technology, if it is misused to manipulate consumers.

In essence, emotional AI attempts to classify and respond to human emotions by reading facial expressions, scanning eye movements, analysing voice levels and scouring sentiments expressed in emails. It is already being used across many industries, ranging from gaming to advertising to call centres to insurance.
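To make the text-sentiment strand of this concrete, here is a deliberately minimal sketch of emotion classification from written text. Real systems use trained models over faces, voice and language; the word lists and labels below are invented purely for illustration and are not how Affectiva or any named product works.

```python
# Toy sentiment classifier: score a message against small,
# hand-written word lists (illustrative only; production
# emotional AI relies on trained statistical models).

POSITIVE = {"happy", "great", "love", "thanks", "pleased"}
NEGATIVE = {"angry", "terrible", "hate", "frustrated", "sad"}

def sentiment(text: str) -> str:
    # Normalise case and strip trailing punctuation from each word.
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I am frustrated and angry about this delay"))    # negative
print(sentiment("Thanks, I am really pleased with the service"))  # positive
```

Even this toy version shows why the technology is contested: the mapping from surface signals to inner emotional states is a modelling assumption, not a measurement.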

Gartner, the technology consultancy, forecasts that 10 per cent of all personal devices will include some form of emotion recognition technology by 2022.

Amazon, which operates the Alexa digital assistant in millions of people’s homes, has filed patents for emotion-detecting technology that would recognise whether a user is happy, angry, sad, fearful or stressed. That could, say, help Alexa select what mood music to play or how to personalise a shopping offer.

Affectiva has developed an in-vehicle emotion recognition system, using cameras and microphones, that senses whether a driver is drowsy, distracted or angry and can respond by tugging the seatbelt or lowering the temperature.

And Fujitsu, the Japanese IT conglomerate, is incorporating “line of sight” sensors in shop-floor mannequins and sending push notifications to nearby sales staff suggesting how they can best personalise their service to customers.

A recent report from Accenture on such uses of emotional AI suggested that the technology could help companies deepen their engagement with consumers. But it warned that the use of emotion data was inherently risky because it involved an extreme level of intimacy, felt intangible to many consumers, could be ambiguous and might lead to mistakes that were hard to rectify.

The AI Now Institute, a research centre based at New York University, has also highlighted the imperfections of much emotional AI (or affect-recognition technology, as it calls it), warning that it should not be used for decisions involving a high degree of human judgment, such as hiring, insurance pricing, school performance or pain assessment. “There remains little or no evidence that these new affect-recognition products have any scientific validity,” its report concluded.

In her recently published book, Girl Decoded, Ms el Kaliouby makes a powerful case that emotional AI can be an important tool for humanising technology. Her own academic research focused on how facial recognition technology could help autistic children interpret feelings.

But she insists that the technology should only ever be used with the full knowledge and consent of the user, who should always retain the right to opt out. “That is why it is so essential for the public to be aware of what this technology is, how and where data is being collected, and to have a say in how it is to be used,” she writes.

The main dangers of emotional ai are perhaps twofold: either it works badly, leading to harmful outcomes, or it works too well, opening the way for abuse. All those who deploy the technology, and those who regulate it, will have to ensure that it works just right for the user.
