Digital Perspectives
New views. Applied now.

June 07, 2019
Organisations need to start putting their AI principles into practice
By: Caryn Tan

This week at CogX in London, some of the biggest thinkers in artificial intelligence will be gathering to discuss the latest innovations in the field. We’re set to see a whole host of new disruptive ideas, but there will be one elephant in the room that will be impossible to hide from: ethics.

There has been a huge amount of recent work on ethical AI principles, guidelines and standards across different organisations, including IEEE, ISO and the Partnership on AI. However, we haven’t yet made much headway in finding practical ways for organisations to tackle real-life ethical dilemmas and make decisions when faced with them.

The use cases for AI are getting increasingly varied and complex. AI will undoubtedly bring huge benefits in terms of speed and efficiency, for example in predictive maintenance. But it may also force organisational leaders to make difficult trade-offs, especially where AI is used to make decisions that will impact human lives. Given that fairness does not have a universal definition, and that organisations must decide which AI outcomes they are comfortable with, there is a huge grey area that organisations have yet to navigate.
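To make that grey area concrete, here is a minimal, purely illustrative sketch in Python. The data, group labels and function names are invented for this example rather than drawn from any real system. It shows how two widely used notions of fairness, demographic parity and equal opportunity, can give different verdicts on the same set of model decisions, which is why organisations have to decide for themselves which outcomes they are comfortable with.

# Hypothetical illustration: two common fairness metrics applied to the same
# model outcomes can point in different directions.

def demographic_parity_gap(preds, groups):
    """Difference in positive-prediction rates between two groups."""
    rate = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rate[g] = sum(preds[i] for i in idx) / len(idx)
    a, b = sorted(rate)
    return abs(rate[a] - rate[b])

def equal_opportunity_gap(preds, labels, groups):
    """Difference in true-positive rates (among actual positives) between two groups."""
    tpr = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g and labels[i] == 1]
        tpr[g] = sum(preds[i] for i in idx) / len(idx)
    a, b = sorted(tpr)
    return abs(tpr[a] - tpr[b])

# Toy, made-up data: applicants from groups "A" and "B", and a model's approval decisions.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [1, 1, 0, 0, 1, 0, 0, 0]   # 1 = would actually repay
preds  = [1, 1, 0, 0, 1, 0, 0, 0]   # 1 = model approves

print(demographic_parity_gap(preds, groups))         # 0.25: approval rates differ by group
print(equal_opportunity_gap(preds, labels, groups))   # 0.0: true positives treated equally

On this toy data the model looks unfair by one definition and perfectly fair by the other; choosing which metric matters is an organisational judgement, not a purely technical one.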

So how do organisations enable people trained in different fields, with different metrics of success, to think through practical solutions and move forward?

The grey area, where there is often no single right answer, is the first place to start. Organisations need to bear in mind that ethical issues rarely present themselves in black and white. Many different responses might be appropriate, and each has to take into account the organisation's wider purpose and mission. And because the world doesn't stand still, effective governance has to be a continual process that is open to constant review and reflection. That process needs to consider both the organisation's vision and values and changes in the external context.

Combining the skills of people from different disciplines, such as data scientists, lawyers and business people, will help to make sure that ethical dimensions are explored and analysed from multiple perspectives. Interdisciplinary teams are likely to have the breadth of vision needed to clarify the range of ethical standards to which a company must hold itself accountable. For example, legal teams may see data protection and liability issues, whereas business people may see ethical problems as risks to brand and reputational trust that could drive customers away. Data scientists, in turn, may see all of these as potentially stifling their work. But by working together and sharing their perspectives, team members can identify a balanced response and make decisions that keep the business within its ethical guardrails.

So how can organisations organise these interdisciplinary teams in practice?

An analogy our Responsible AI team often uses is that these teams should act not as a police patrol passing down orders, but as fire wardens. That means they are responsible for spotting and escalating issues that need attention, raising the alarm if something looks like it could cause a problem.

Top of the list is executive-level buy-in to a new approach to decision making about how AI is built. This means organisations should be training and supporting team members to work comfortably with employees from other functions, with the objective of making effective decisions collaboratively. Key individuals should be chosen from within development teams so that they can escalate issues as they arise.

There also need to be strong links between data scientists and legal and compliance specialists. People who can operate comfortably in both worlds are increasingly valuable, so training should focus on building these ways of thinking and working. While data scientists wield the power of the code, and ultimately shape the outcome of the AI, others within the organisation have critical input in guiding how that code is written. Effective communication between data scientists and other team members is key to enabling interdisciplinary decision making.

It is also important to encourage teams to raise potential risks, even if they turn out to be false alarms. To understand what's involved in developing responsible AI, teams will need to develop their instincts, and that takes time. So they should be encouraged not to fear the consequences of pressing the alarm button. After all, dealing with a false alarm is infinitely preferable to trying to limit the damage from one that has gone unnoticed and is spiralling out of control.

Responsible, sustainable success with AI is ultimately all about people. They have to understand the ethical approach that best embodies their organisation’s purpose. They need to be empowered to take responsibility and act accordingly. That approach is most likely to support effective, agile governance resulting in responsibly designed, built and maintained AI.

For AI to fulfil its potential, organisations must seek the opinions of those whose lives it will affect, whether that’s consumers, employees or citizens more broadly. That way, they’ll ensure that the AI they are developing remains responsible and ethical in a fast-changing world.
