Conversational AI has the potential to transform customer and employee experiences, supporting new levels of business growth. But as the technology advances, companies need to be ever-more aware of the ethical implications.
Technologies like natural language processing and generation (e.g., GPT-3 and BERT), affective computing, and facial recognition are becoming increasingly powerful. As they do, conversational AI is evolving to support more human-like relationships: better able to build rapport, show empathy, and drive collaboration in mutually beneficial experiences for companies and consumers.
Businesses in a wide variety of industries are looking to capitalize on these advances. They’re spotting opportunities to use conversational AI solutions to become more attentive and responsive to customers, enhance operational efficiency, and support future growth.
Accenture has developed an approach for thinking through the ethical implications of building conversational AI tools. This considers the intricacies of technology development and human rights in tandem and is built around three high-level components:
Does the conversational AI have a human-looking avatar that might embed stereotypes?
Does it set out to understand the human user?
Does it behave like a human in a way that changes the relationship with the end user?
Looking human. The mimicry of human features and characteristics can be a useful way to engage end users. But designers must carefully and deliberately consider how stereotypes and discrimination can manifest in the choice of gender, body shape, skin color, and so on.
Understanding the human. Newer conversational AI systems attempt to infer a huge amount about a person from their real-time behavior, tone of voice, and historical transactional data. It's vital to consider whether the technology is scientifically valid and appropriate to the use case, how much of this data is being collected and why, and whether stringent data security and privacy policies are in place. Steps must also be taken to consider and mitigate bias in the data used to train AI systems.
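One concrete way to put the bias-mitigation step into practice is to audit a system's outcomes across demographic groups before deployment. The sketch below is illustrative only (the group labels, decisions, and the 0.8 review threshold are assumptions, not part of any specific product): it computes per-group selection rates and a disparate impact ratio, a common first-pass fairness check.

```python
# Illustrative sketch: checking a conversational AI's decisions for group
# disparity before deployment. All data and thresholds here are hypothetical.

def selection_rates(outcomes):
    """Compute the positive-outcome rate for each group.

    outcomes: list of (group, decision) pairs, where decision is 0 or 1.
    """
    totals, positives = {}, {}
    for group, decision in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical data: an assistant routing users to a premium support tier.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
             ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0)]

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)   # per-group selection rates
print(ratio)   # values well below 1.0 (e.g. under 0.8) warrant review
```

A check like this does not prove a system is fair, but a low ratio is a useful signal that the training data or inference logic deserves closer scrutiny.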
Behaving like a human. The better the technology gets, the more conversational AI solutions are able to seem like a human companion or a colleague. That fundamentally changes the relationship with the end user. Companies should therefore ensure their use of AI is transparent and that users are able to make free choices. They should also be prepared for user relationships to evolve in unexpected ways (such as people revealing personal secrets or becoming emotionally dependent on their AI assistant).
How can we capture the opportunities of advancing conversational AI, while simultaneously addressing the risks?
As conversational AI promises a fundamental change in the way machines support and improve human lives, this approach can help companies heed the call for greater ethical oversight, delivering AI solutions that are both trustworthy and trusted by the people who use them.