April 11, 2018
Performance reviews for AI?
By: Brad Ryan

Why organisations need to raise their artificial intelligence (AI) solutions to be responsible, productive members of society.

Like people, AI solutions are intelligent: they can sense, comprehend, learn and act autonomously. Already, intelligent solutions are managing our supply chains and approving our bank loans. Soon, they will be determining our salaries and driving our cars.

According to a global survey carried out as part of the Accenture Technology Vision 2018 report, 82% of Australian executives believe that within the next two years, AI will work alongside humans in their organisations as a co-worker, collaborator and trusted advisor.

If AI solutions are going to have as much influence on organisations as human workers do, businesses and governments need to stop treating AI like a software program and start treating it like an employee: one that complies with organisational policies, is bound by behavioural standards and is measured against Key Performance Indicators (KPIs).

In other words, organisations should expect AI solutions to model the values of the business, portray its brand image and align with societal norms of responsibility, fairness and transparency.
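To make that concrete, here is one way an AI "performance review" could be scored in practice. This is a minimal sketch, not something drawn from the Technology Vision research: the function name, KPIs and example data are all illustrative.

```python
import numpy as np

def ai_kpis(y_true, y_pred, group):
    """Score a model the way you might score an employee:
    one KPI for task performance, one for behavioural standards."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    # Performance KPI: how often the model's decisions were correct.
    accuracy = (y_true == y_pred).mean()

    # Behavioural KPI: gap in approval rates between groups
    # (a simple demographic-parity check; 0.0 means perfectly even).
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    fairness_gap = max(rates) - min(rates)

    return {"accuracy": accuracy, "fairness_gap": fairness_gap}

# Example: quarterly review of a hypothetical loan-approval model.
print(ai_kpis(y_true=[1, 0, 1, 1, 0, 1],
              y_pred=[1, 0, 0, 1, 1, 1],
              group=["a", "a", "b", "b", "a", "b"]))
```

As with an employee's review, the point is not these particular metrics but that some agreed set of measures is defined, tracked and acted on regularly.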

Human face, human values
This is especially important where an AI is the face of an entity that supports vulnerable members of society, as is increasingly common. Today, governments and organisations are expected to play a more active role in improving society through their products and services, not simply to drive profits for shareholders.

For example, Melbourne company Good Hood is about to launch a chatbot to support people affected by family and sexual violence. Because support organisations receive so many calls from people in need, those who phone these help lines often face long wait times. Good Hood’s chatbot, Hello Cass, gives callers basic support and advice without the wait.

In the UK, where demand for aged care facilities outstrips supply, Accenture is piloting an AI solution, based around Amazon Alexa, to allow elderly people to live at home longer. The intelligent solution integrates reminders for medicine, exercise and activity tracking, basic shopping and home services, video calls and other functions to help elderly people stay safe, connected and independent.

Garbage in, garbage out
If we want AIs to uphold our values, we must give them high-quality information. The fuel for an AI is data, and lots of it. Just like cars, AIs run properly only on fuel that is clean and fit for purpose.

AI solutions trained on data that is incomplete, misrepresentative or biased will make decisions that would be obviously incorrect to a human. For example, researchers at the University of Virginia recently trained an AI solution on a widely available photo data set. Studying the results, they found that the model exhibited the gender biases present in the photos, including categorising a man standing next to an oven as a woman.
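A first line of defence is auditing the training data before a model ever sees it. The sketch below is illustrative only: the annotations and the two-thirds skew threshold are made up, but a simple co-occurrence count like this is the kind of check that would have flagged the oven example early.

```python
from collections import Counter

# Hypothetical image-metadata records: each photo is annotated
# with an activity and the gender of the person shown.
annotations = [
    {"activity": "cooking", "gender": "woman"},
    {"activity": "cooking", "gender": "woman"},
    {"activity": "cooking", "gender": "man"},
    {"activity": "driving", "gender": "man"},
    {"activity": "driving", "gender": "man"},
]

# Count how often each activity co-occurs with each gender label.
pairs = Counter((a["activity"], a["gender"]) for a in annotations)

# Flag activities whose examples skew heavily toward one gender:
# a model trained on this data is likely to inherit the skew.
for activity in {a["activity"] for a in annotations}:
    total = sum(n for (act, _), n in pairs.items() if act == activity)
    for (act, gender), n in pairs.items():
        if act == activity and n / total >= 0.66:
            print(f"'{activity}' is {n / total:.0%} '{gender}' in the training set")
```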

Governments and organisations need to invest in cleansing and curating their data to the quality needed to train their AI solutions. They must also ensure data is continually updated and models are retrained as products, markets and services change, so that trained models keep reflecting current circumstances, just as people continually learn from new experiences.
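In practice, "continually updated" usually means monitoring for drift between the data a model was trained on and the data it now sees, and retraining when the two diverge. A minimal sketch, assuming a two-sample Kolmogorov-Smirnov test as the drift check (one of several reasonable choices) and made-up income figures:

```python
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(train_feature, live_feature, alpha=0.01):
    """Compare a feature's training distribution with what the model
    sees in production; significant drift suggests the model no longer
    reflects current circumstances and should be retrained."""
    stat, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha

rng = np.random.default_rng(0)
training_incomes = rng.normal(60_000, 12_000, 5_000)  # data the model learned from
current_incomes = rng.normal(68_000, 12_000, 5_000)   # the market has since shifted

if needs_retraining(training_incomes, current_incomes):
    print("Input distribution has drifted: schedule retraining.")
```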

Transparency = trust
For consumers to trust AIs, both the decisions an AI solution makes and the process by which it makes them need to be explainable. Almost 9 in 10 (88 percent) of the executives in our Technology Vision survey say it’s important for employees and customers to understand the general principles their organisations use to make AI-based decisions. Nearly three quarters (72 percent) report their organisations seek to gain customer trust and confidence by being transparent in their AI-based decisions and actions.

However, this is easier said than done. Many AI solutions arrive at their results through complex processes involving many millions of calculations, which makes developing explainable AI a significant challenge.

Organisations need to make sure they can not only provide the logic behind AI-based decisions but also interrogate the rationale for them. This may require recording input data, like voice and video, to know what the AI was "thinking" when it made its decision. Was it listening to the right input, or did it pick up unintended noise?
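There is no single recipe for this, but model-agnostic techniques such as permutation importance offer one way to interrogate which inputs a model actually relies on: shuffle each input in turn and see how much performance degrades. The sketch below uses synthetic data and a stand-in model; it is an illustration, not a prescription.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical stand-in for a deployed decision model.
X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in score: features the
# model genuinely depends on cause the largest degradation.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

A report like this does not expose every one of the model's millions of calculations, but it gives an organisation something concrete to interrogate and to show customers.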

The buck stops here
Just as organisations are accountable for the decisions and actions of their employees, they are accountable for the implications of the decisions made by their AI solutions. To this point, Audi recently announced it will take accountability for accidents involving its "Traffic Jam Pilot" automated system in its new 2019 A8 model.

As the sophistication of AI usage grows rapidly this year, governments and organisations must ensure the members of this new virtual workforce are as ethical and trustworthy as their human colleagues. That starts with reviewing AI performance using the same measures as for an employee.
