How can companies ensure conversational AI is trusted?
Conversational AI has the potential to transform customer and employee experiences, supporting new levels of business growth. But as the technology advances, companies need to be increasingly aware of the ethical implications.
Technologies like natural language processing and generation (e.g. GPT-3 and BERT), affective computing and facial recognition are becoming increasingly powerful. As they do, conversational AI is evolving to support more human-like relationships: better able to build rapport, show empathy and drive collaboration in mutually beneficial experiences for companies and consumers.
Businesses in a wide variety of industries are looking to capitalize on these advances. They’re spotting opportunities to use conversational AI solutions to become more attentive and responsive to customers, enhance operational efficiency, and support future growth.
The catch? Advancing conversational AI opens a new set of ethical concerns and challenges in maintaining customer trust and complying with emerging regulations. And the sheer number of considerations and tension points makes these questions highly nuanced and context-specific.
As such, companies often struggle to recognize all the ethical implications and consequences of their design decisions. In particular, AI designers have lacked a systematic way of identifying and addressing the unique ethical risks associated with advanced conversational AI.
A new approach from Accenture
Accenture has developed an approach for thinking through the ethical implications of building conversational AI tools. It considers the intricacies of technology development and human rights in tandem, and is built around three high-level components.
As conversational AI promises a fundamental change in the way machines support and improve human lives, this approach can help companies heed the call for greater ethical oversight and deliver AI solutions that are both trustworthy and trusted by the people who use them.