It sometimes feels like we’re seeing AI breakthroughs on an almost daily basis, whether that’s near-perfect video manipulation or Google Duplex conducting natural-sounding phone calls with real people. The implications are massive. The line between truth and fiction–and our ability to tell the difference–is blurring.
What does that mean for brands–both in how they use AI and in how they might be affected by it? This was the subject of a fascinating panel discussion we held at CogX with Dr Kate Devlin, Senior Computing Lecturer at Goldsmiths, Ravi Naik, Partner at ITN Solicitors, and my colleague Russell Marsh, Managing Director for Accenture Digital.
In an era when AI can be used to alter content with alarming ease, for instance by manipulating a video to change what someone appears to be saying, the technology brings obvious risks for brand reputation. And, as Ravi pointed out, the damage can be done long before any kind of legal remedy can be sought. There’s a need to think much more dynamically about these issues.
I asked the panel whether technical solutions can help. Russell’s view was that, while we might eventually develop AIs to help us spot the use of other AIs, in truth there’s probably only so much technology can do. As Kate pointed out, measures like digital signatures aren’t much good when the fake content is already out there.
For Russell, the question of brand trust becomes paramount. Those with a brand heritage, and a history of living up to their values, will ultimately be trusted. And those who don’t have that heritage will struggle. Ravi made the point that transparency will be a key part of acquiring that trust.
Should that transparency be mandatory? In other words, should brands be required to reveal when they’re using an AI to interact with people? Kate was strongly in favour, noting that technologies like Google Duplex are now so convincing there needs to be some way of identifying when they’re being used.
Russell agreed. For him, it again comes down to trust. None of us likes being fooled. The feeling of having been duped by, say, a spam phone call is huge–and terrible for brand trust. So brands will ultimately have to be clear up front when they’re using this technology.
Ravi drew an interesting comparison with the GDPR, wondering whether we could think about an "AI minimisation" principle that restricts the use of AI to when it’s needed and transparent. For Ravi, GDPR is a good starting point because it targets the data–which is ultimately what AI relies on.
For Kate, it’s important not to underestimate the work involved in developing an effective regulatory environment for AI, especially when we can’t even agree on what the ethical principles should be. But she believes corporations must be held to account–and that means transparency. It also needs global jurisdiction: it’s simply too big a question for individual countries to answer on their own.
Russell’s view was that GDPR is important, but it has to be remembered that it’s only a framework to drive behaviour. How many of us have simply clicked "OK" to get rid of a GDPR-related pop-up without really thinking about it? He raised the intriguing idea that we may ultimately need something like a Hippocratic Oath for data scientists and developers to sign up to before using our data. Again, it comes back to transparency and living up to brand values. There’s no room today for saying one thing and doing another.
In the end, as Kate noted, we may see a pushback to the human side. Just as artisan foods and microbreweries have proved incredibly popular, perhaps the same will happen with human-computer relationships. People will want a human face in their interactions with brands, even if AI is driving things behind the scenes.
While we covered many concerns, and possible approaches to alleviating them, two overarching themes stand out for me: transparency and “human at the centre”. Transparency will be critical because people are most willing to trust what they understand, and earning, or keeping, trust will remain an essential brand value. “Human at the centre” should be our guiding principle in every implementation of AI: after all, AI should exist to make our collective lives better, and if we uphold that principle, people will come to trust AI rather than fear it.