

April 23, 2017
How does AI respond to emotion?
By: Niamh McKenna

Your AI has IQ, but does it have EQ?

I have done the tech nerd thing and bought an Amazon Echo and a Google Home … it’s been a lot of fun (and yes, I have done the thing where you get them to speak to each other)! I have been (over) analysing how we use them, and I have noticed something interesting. When I speak to my Google Home or Amazon Echo, I automatically use a clear, concise voice, and the devices have no trouble understanding me (within the limits of their capabilities). When my kids give the same instructions, however, they speak in a more haphazard, less distinct way, and the devices don’t always understand what they are being asked to do. Devices like these don’t cope with nuances of emotion or colloquial, casual language, which exposes the limits of the Natural Language Processing (NLP) underneath.

It got me thinking about the sorts of users we may find amongst the clients I typically work with in the health and public sector. End users of some of our most essential public and health services may be in a vulnerable or emotional state of mind when using them, and this is likely to be reflected in their tone of voice. When we issue instructions like “please turn up the heating” or “play the radio,” our tone is neutral and mundane, but this isn’t the case with many of our public services. If someone is reporting a crime or applying for probate following the death of a close relative, their tone of voice (and language) will echo the emotional state they are likely to be in.

As many organisations look to implement AI, we will need to consider the emotional tones within a voice when designing technology for our public services and, ultimately, their end users. Today, front line staff who speak to citizens at emotional or difficult times can instinctively pick up on those emotions and respond sensitively: “Please don’t worry, Madam, I’m here to help; I know this is a difficult thing to talk about.” So how can we get AIs to do the same?
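To make the idea concrete, here is a minimal sketch of how a service might check an utterance for signs of distress and switch to a more empathetic response template. The word list, function names, and reply wording are all illustrative placeholders, not a real emotion-detection system; production systems would use trained models over tone of voice as well as text.

```python
# Minimal sketch: lexicon-based distress check used to pick a response
# template. DISTRESS_WORDS and the templates are hypothetical examples.

DISTRESS_WORDS = {"died", "death", "crime", "stolen", "attacked", "grieving", "worried"}

def detect_distress(utterance: str) -> bool:
    """Return True if the utterance contains any distress-signalling word."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return not words.isdisjoint(DISTRESS_WORDS)

def respond(utterance: str) -> str:
    """Open with an empathetic line when distress is detected, else stay neutral."""
    if detect_distress(utterance):
        return "I'm sorry, this must be a difficult time. I'm here to help."
    return "Certainly, let me help you with that."

print(respond("I need to report a crime"))
print(respond("Please turn up the heating"))
```

Even this crude keyword approach shows the design point: the system's behaviour branches on the user's emotional state, not just on the task being requested.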

To create true “citizen-centric design,” particularly for services like those I’ve mentioned, developers and designers must think about the needs of the end user, i.e. “emotion-centred design.” As AI technology matures in its capacity to manage any number of tasks, we need to ask ourselves, how can we maximise what this technology can do for us?

If we are entering a world where technology can interact with us and address our need for services, then we should expect it to communicate with the person in need as well as another thinking, feeling human being can. With increasingly stretched budgets and growing demand on public services, governments are trying to maximise both the use of technology and the efficiency of the services that public and health agencies provide. AI can’t replace everything we do, but its capabilities are expanding beyond anything we thought possible just a decade ago, so can we harness it to better reflect what we need? And, as I reflect, are there limits to how much of this we want?
