How ethics, experience and excellence make a difference to how you run IT
December 13, 2021
As my recent blogs have highlighted, there’s some tricky territory to navigate when you’re involved in IT and data science matters. On a daily basis, global IT teams face dilemmas around the three “e’s”—ethics, experience and excellence—raising some questions that I’ve tried to answer.
Ethics are important from a couple of perspectives. First, there’s a question around whether digital ethics stand alone or whether they’re actually a lens through which all ethical issues should be viewed. We all like to think societies have accepted ethical standards that govern shared experiences—don’t steal, don’t skip the queue—but do those standards also cover digital ethics? We don’t so much go online as be online—which includes making friends with people we might never meet or expecting our news feed to be fast and free. Digital is quickly becoming less a component of our lives and more a redefinition of how we live them. Take a look at my argument in “When will Digital Ethics become simple ethics?” for more.
Second, there’s a perspective around the ethics of Artificial Intelligence (AI). We know how fond the tech industry is of unifying multiple standards into one comprehensive, overarching standard. But when it comes to AI ethics we’re actually talking about multiple different ethical domains, such as law, medicine, agriculture and so on. Each domain has its own ethical considerations, and standardizing across these domains could potentially do more harm than good. In some ways it might be better to let AI practitioners choose which guidelines best suit them—so that AI can work as intended, for the many not the few. Read “In harmonising ethical AI standards, who calls the tune?” to see if you agree.
There is a school of thought that says AI can fix bias—for example, by detecting gender disparities in health care. But although AI can certainly perform certain diagnostic tests faster and more reliably than humans, it’s unlikely to reduce human bias. AI essentially attempts to automate a human decision-making process, so any bias present in that process will end up present in the AI. Even worse, by training AI on the outputs of broken systems we may amplify those systems’ bias. Access the blog “AI is not an antidote to bias” to discover how experience is shaped by the bias of the human performing the task and the bias of the human who built the AI.
Another aspect that can influence experience is customer behavior. Rather than asking users to tell you if your product or service is working, look to their behaviors to find out if they are happy. Customer responses to surveys don’t tell the whole story—but you can tell a lot from how customers behave. See whether you agree with the examples in my blog: “Stop asking users to tell you if your product or service is working.”
At Accenture, we talk a lot about the benefits of combining technology with human ingenuity—and we rely on it being a symbiotic relationship. Neither one should act as a substitute for the other.
Machines excel at retaining and manipulating information, but humans excel at relating that information to human experiences. The case of personal information is a good example. People aren’t keen on the idea of robots handling their personal information—they want a personal interaction with a human being.
When you swap contact details with someone, it’s the first stage in a unique connection with them; a machine, by contrast, relates information unemotionally to a goal. Put the two together, though, and you have the potential for AI to handle complex data processing tasks and free up humans to spend more time on value-add work—which is why I’m suggesting you “Don’t send an AI to do a human’s job.”
Head over to my LinkedIn for more—and let me know how the three “e’s” are working in your own organization.