RESEARCH REPORT

In brief

  • Conversational AI is advancing fast, able to sustain ever-more human-like relationships with end users.
  • But new ethical and trust considerations are emerging alongside these advances, creating additional AI design challenges for companies.
  • It’s why Accenture has developed a new approach for thinking through the ethical implications of building conversational AI tools.
  • It helps AI designers approach highly complex, nuanced and far-reaching ethical questions in a more systematic and comprehensive way.


How can companies ensure conversational AI is trusted?

Conversational AI has the potential to transform customer and employee experiences, supporting new levels of business growth. But as the technology advances, companies need to be ever-more aware of the ethical implications.

Technologies like natural language processing and generation (e.g. GPT-3 and BERT), affective computing, and facial recognition are becoming increasingly powerful. As they do, conversational AI is evolving to support more human-like relationships, better able to build rapport, show empathy and drive collaboration in mutually beneficial experiences for companies and consumers.

Businesses in a wide variety of industries are looking to capitalize on these advances. They’re spotting opportunities to use conversational AI solutions to become more attentive and responsive to customers, enhance operational efficiency, and support future growth.

The catch? Advancing conversational AI opens a new set of ethical concerns and challenges in maintaining customer trust and complying with emerging regulations. And the sheer number of considerations and tension points makes these questions highly nuanced and context specific.

As such, companies can often struggle to recognize all the ethical implications and consequences of their design decisions. In particular, AI designers have lacked a systematic way of identifying and addressing the unique ethical risks associated with advanced conversational AI.



A new approach from Accenture

Accenture has developed an approach for thinking through the ethical implications of building conversational AI tools. This considers the intricacies of technology development and human rights in tandem and is built around three high-level components:

Does the conversational AI have a human-looking avatar that might embed stereotypes?

Does it set out to understand the human user?

Does it behave like a human in a way that changes the relationship with the end user?

  1. Looking human. The mimicry of human features and characteristics can be a useful way to engage end users. But designers must carefully and deliberately consider how stereotypes and discrimination can manifest in the choice of gender, body shape, skin color, and so on.
  2. Understanding the human. Newer conversational AI systems attempt to infer a huge amount about a person from their real-time behavior, tone of voice, and historical transactional data. It’s vital to consider whether the technology is scientifically valid and whether it is appropriate to the use case. How much of this data is being collected, and why? Are stringent data security and privacy policies in place? Steps must also be taken to consider and mitigate bias in the data used to train AI systems.
  3. Behaving like a human. The better the technology gets, the more conversational AI solutions are able to seem like a human companion or a colleague. That fundamentally changes the relationship with the end user. Companies should therefore ensure their use of AI is transparent and that users are able to make free choices. They should also be prepared for user relationships to evolve in unexpected ways (such as people revealing personal secrets or becoming emotionally dependent on their AI assistant).

How can we capture the opportunities of advancing conversational AI, while simultaneously addressing the risks?

As conversational AI promises a fundamental change in the way machines support and improve human lives, this approach can help companies heed the call for greater ethical oversight, and deliver AI solutions that are both trustworthy and trusted by the people who use them.

Laetitia Cailleteau

Lead – Data & AI, Europe


Patrick Connolly

Research Manager – The Dock, Accenture Research

