In the 1982 film Blade Runner, Harrison Ford’s character Rick Deckard comments that the android "Replicants" he "retires" are "like any other machine… either a benefit or a hazard."
Forgive me for opening with a film quote, but with the Blade Runner sequel out in cinemas it feels both topical and apt. Deckard was describing the fine line we tread today: the line where technological innovation, represented by the Replicants, could so easily deliver either a benefit or a hazard to how we live our lives and evolve.

It's a fine line that marketers are debating now as they weigh the best use of new technologies for communicating with their customers. How do we deliver personalised experiences without becoming too personal? When do we know too much about our customers? How does programmatic deliver relevance without becoming a spam-filled black box? And if the machine is learning, who is teaching it?
The risk of AI bias
Where AI gets its intelligence from is an interesting question. There has always been the potential for marketers, and indeed brands, to show their bias, unconscious or otherwise, in what they create and how they go to market. But when we program computers to make decisions and act on our behalf, that risk is amplified.
In many cases marketers and digital innovators are fundamentally disconnected from their target market. Take average UK income: in July 2017, the Office for National Statistics predicted that average UK household income would rise marginally to £27,200. Compare this with the average salary in a UK marketing agency, which is likely to far exceed that figure after just a few years. The diversity of many marketing departments is also unrepresentative of the UK as a whole. So are we teaching machines biases and assumptions that simply aren't true of the humans we market to?
Humanising AI
The scale, frequency and impact that AI-led technologies allow marketers to have on their audiences are of a different order, and getting it wrong can do far more far-reaching damage to a brand. Marketers therefore need to understand that their reality isn't necessarily their customers' reality. They need to build a true and deep understanding of the humans who buy their products and services. This is where the world of analytics blurs with the creative world: we can use the power of data to gain deeper understanding and act accordingly. Analytics can map preferences in fine detail, but it doesn't reveal the human element or provide empathy.
We recently delivered a project for West Midlands Police to bring digital technology into its contact channels. The brief was clear: only deploy digital platforms if they'll enrich the relationship with the people who use the centres, including citizens, partners, officers and the contact staff. We spent hundreds of hours in police stations, response cars and call centres getting to know the real people we were designing for. The result is an innovation that works: faster responses to people's queries and reduced pressure on contact centre staff, while protecting citizens and preventing harm.
Some tips for human-focused design
The project left us with a few clear lessons.
AI promises to be a huge benefit to society. Equally, if not deployed correctly, it could cause real harm. To avoid the latter, AI design must focus on outcomes, and above all else ensure that every innovation benefits people.