Can AI really be emotionally intelligent?

Is emotional AI really the next frontier for artificial intelligence?

In Do Androids Dream of Electric Sheep, author Philip K. Dick questions whether an artificial brain could ever be capable of understanding emotion. Can a machine ever have a genuine emotional connection with a human if it can’t empathise or express emotion itself?

What may well have been science fiction in 1968 is now a topic of debate within the artificial intelligence industry. As AI advancements march forward, many believe emotional intelligence is what’s next for the field – the stepping stone that takes a virtual assistant from a simple call-and-response helper to something greater: a companion, an assistant, something more than simply a robot.

But is there really such a thing as emotional AI, and could it really be used to give emotion to artificial minds? This was the topic under discussion at Photobox Group’s Tech Week panel discussion, hosted by Alphr’s own editorial director Victoria Woollaston.

“The notion of machines actually having emotions seems in the realm of science fiction,” argued voice technology consultant Hugh Williams. “If you think about the inputs that lead to human emotions, it’s pretty complex… These emotions are generated by processing hundreds of millions of sensors in your body. It’s a product of being human.

“It seems unlikely we could come close to reproducing the same results in a machine – if it even makes sense to reproduce it in a machine in the first place.”

So, if emotional AI isn’t about giving machines the power to use emotions, just what is it?

Understanding emotional AI

One notion of emotional AI is the simple act of making it seem as if robots and AI actually have human emotions, when they don’t. It’s an act of trickery; canned responses to fixed situations that give us humans a sense that the thing we’re interacting with is alive or capable of understanding us.

As Google Assistant’s technical lead Behshad Behzadi explains, you can see this approach in Google Assistant already. “If Google Assistant answers something wrong people express themselves by calling it stupid, and it apologises. Similarly, we get lots of ‘thank yous’ when it does what’s expected of it and it responds accordingly.”

Giving Google Assistant some level of personality – the ability to apologise for getting things wrong, or to show gratitude when praised – makes people feel it’s really listening. It has to show it isn’t simply a robotic device but something people want to come back to and use time and time again, even if that polite response isn’t much more advanced than pressing a doorbell and hearing a chime.

“Instead of humans trying to understand how to work with machines, we have to make machines understand humans”, Behzadi continues. “It’s here that emotional AI can bridge that gap. When a machine can understand a user’s emotional state, it leads to more natural responses and useful interactions.”

Emotional AI has another use, though. Rather than giving machines simulated emotion as a tool to convince users they understand the human condition, emotional understanding could provide an AI with more context for a user’s actions. If it knows you’re angry, it can work out why from the situation you’re in. Similarly, if the same bad situation arises but the user appears cheerful, the problem may need to be approached in a different manner.

“There’s a lot more we can do to recognise the different emotional states [of our customers] and put those rules into our products,” admitted Clare Gilmartin, CEO of The Trainline. “Emotional states can make a difference to how you react when your train is delayed. Either you’ll end up home late or you might just be a bit behind on your beautiful train journey from Lake Como to Milan.”
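As a rough, hypothetical sketch of that idea – not anything the panel described, and not how Google Assistant or The Trainline actually implement it – a rule-based system might pair a detected sentiment with situational context to decide how to respond:

```python
# Hypothetical sketch: emotion used as context for a response, not felt by the machine.
# The situation labels and sentiment values here are illustrative assumptions.

def choose_response(situation: str, sentiment: str) -> str:
    """Pick a response style based on what happened and how the user seems to feel."""
    if situation == "train_delayed":
        if sentiment == "angry":
            # A frustrated commuter wants a fix, not small talk.
            return "Apologise, explain the delay and offer the fastest alternative route."
        if sentiment == "cheerful":
            # A relaxed traveller may only need a gentle heads-up.
            return "Mention the delay briefly and suggest something to do while waiting."
    if sentiment == "angry":
        return "Apologise and ask what went wrong."
    return "Answer normally, and say 'you're welcome' if the user says thanks."


if __name__ == "__main__":
    print(choose_response("train_delayed", "angry"))
    print(choose_response("train_delayed", "cheerful"))
```

A real service would infer the sentiment from text or voice rather than take it as an input, but the principle the panel described is the same: emotion as context for a decision, not emotion that the machine itself feels.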

There’s a need for the services people use to be able to register and respond to what they’re thinking and feeling. Building emotional AI into a service means you’ll get the responses and results you want at the time, instead of having to filter through the noise yourself.

Emotionally disturbing?

If the concept of an emotionally aware AI sends a few shivers down your spine, it’s simply because we’re living in a world where virtual assistants still aren’t really the norm. As with anything, however, this will change over time.

“When a generation grows up with these [virtual assistant] devices as a part of their daily lives, interactions with them will be far more natural,” argues Behzadi. Within the next ten or so years, he envisions a situation where “you’ll always have an assistant that you’ll be able to talk to. Something you can confide in but also something that can help you.

“Today people confide in their devices only occasionally but, in the future, that’ll be something that’s far more common.” The Amazon Echo for Kids, for example, advises children to talk to grown-ups about problems beyond its remit, like bullying.

As Behzadi sees it, the rise of emotional AI means that these natural discussions can happen. While slightly frightening, the world of Spike Jonze’s Her could be plausible – a place where an emotional connection can be formed between a human and a machine.

We see AI as a troubling area partly because it’s going to be responsible for so many moral choices in the future. Williams, however, believes that as the younger generation gains a greater understanding of how AI works and the role it plays in our daily lives, we’ll see a wider pool of creative influences come into play.

Currently, an AI’s critical decision-making is shaped by a relatively small group of programmers and their moral compasses. While the field is beginning to diversify, it remains a largely white, male-dominated space, hardly representative of all the viewpoints an AI needs if it is to navigate the world without bias. In time, Williams believes, this will change.

“In a couple of years, we’ll see people come out of university as sociologists and psychologists, but with an understanding of what technology can do,” he explains. “It’ll enable them to come up with a deeper level of insights than those who came before them.

“We’re going to find these people sitting alongside computer scientists in the future, helping us tackle the issues around emotionally aware AI. We’re going to need other people from diverse backgrounds and skill sets inside organisations to help us build AIs for humanity as a whole.”

Emotionally dangerous

Diversifying the cultural and societal understanding an AI has is, obviously, fantastic. But there’s another side to emotional AI that has me slightly concerned – and it raises a question that can’t yet be satisfactorily answered, despite the panel’s best intentions: what happens when you arm an AI with the ability to understand emotion but, crucially, not to feel it? Creating a sociopathic AI could be a disaster waiting to happen.

“We need to really look at the impact of what we’re doing with technology,” states Richard Orme, Photobox Group CTO. “The real question we need to be asking isn’t whether or not we can do something, but if we even should be doing something.”

Admittedly, Photobox Group’s take on emotionally intelligent AI isn’t quite Skynet levels of creepy; it simply wants to help customers build memorable, bespoke photo albums quickly. But the panel believes that understanding its impact on customers, along with the wider implications of emotional AI’s applications, is crucial no matter what you’re working on.

“It starts with understanding the impact [of emotional AI on humans] and how you measure that impact,” Orme continues. “The reality is that we might not generate genuinely emotionally intelligent machines, but they can be designed to respond in a certain way and be designed to have a particular type of impact on the human beings they interact with.”

Orme’s views on the responsible deployment of emotional AI are echoed by the other panellists. Gilmartin, for instance, believes technology companies with access to this technology – The Trainline, Google and Photobox Group among them – have an ethical and moral duty to understand just what it is they’re building with AI.

Whether it’s Google using AI to organise the world’s information or The Trainline tapping into the technology to keep commuters happy, every company looking to build emotional AI systems needs to understand just what it really wants to achieve.

“Ultimately,” Gilmartin explained, “there needs to be some sort of purpose behind what we do with emotional AI”.
