Artificial intelligence (AI) now reaches deeper into healthcare than ever. It is much more than a technological tool; it is part of the workforce. AI algorithms are diagnosing patient wounds via smartphone.1 AI is letting caretakers and doctors remotely monitor the elderly so that they may safely live independently.2 And AI is helping health systems digitally verify patients’ insurance information.3

More and more, AI touches the end-to-end care experience—and it will only grow in scale. A majority of health executives (85 percent) surveyed agree that every human will be directly impacted on a daily basis by an AI-based decision within the next three years.

Meanwhile, AI is growing up. Just as a child begins to learn to be responsible on his or her own, AI is gaining knowledge without simply being programmed. Learning-based AIs build models based on vast amounts of training data. They spot patterns and self-check against test data for success across a variety of factors. And the more data an AI is given, the better its predictions become.
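
The learning loop described above can be sketched in a few lines of code. The example below is purely illustrative, using synthetic data and the open-source scikit-learn library rather than any clinical system: a model is fit on training data, then self-checks its predictions against held-out test data it has never seen.

```python
# Minimal sketch of the train-then-self-check loop described above.
# All data here is synthetic; real clinical models require far more
# rigorous validation than a single held-out test set.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a large training set of patient records.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

# Hold out 20% as test data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```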

The power of AI in healthcare is both awesome and alarming, considering the potential implications of one bad decision. Healthcare organisations must recognise this impact and build AI with responsibility, fairness and transparency—but many aren’t there yet. Eighty-one percent of health executives agree that organisations are not prepared to face the societal and liability issues that will require them to explain their AI-based actions and decisions, should issues arise.

Raising it right

When AI is “raised” successfully, it can begin to do more to help the healthcare enterprise. For instance, AI might support human decision making in a wider variety of areas, such as recommending a treatment plan or accepting or rejecting medical claims. In healthcare, these are life-or-death decisions.

Healthcare leaders must ensure that the data used to train AI solutions is free of embedded bias. Bias can harm people. For instance, if a healthcare organisation builds a heart disease detection model trained mostly on data from older white males, the model may yield incorrect results, such as a wrongly denied claim, or cause harm, such as an inaccurate diagnosis, when applied to a young African American patient.

For this reason, companies using AI must be very mindful of what is in the data used to train their AI. They must continuously test for and mitigate biases to minimise risk, error and potential harm.
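
One concrete form that continuous testing can take is a routine subgroup audit. The sketch below is a minimal, hypothetical example on synthetic data: it compares a model’s error rate across demographic groups, the kind of check that could flag a heart disease model that performs well for older white males but poorly for younger patients.

```python
# Hypothetical bias check: compare error rates across demographic
# subgroups. Labels, predictions and group names are synthetic.
import numpy as np

def subgroup_error_rates(y_true, y_pred, groups):
    """Return the prediction error rate for each subgroup."""
    return {
        g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # actual outcomes
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])   # model predictions
groups = np.array(["65+", "65+", "<40", "65+", "<40", "<40", "<40", "65+"])

print(subgroup_error_rates(y_true, y_pred, groups))
# {'65+': 0.0, '<40': 0.75} -- a gap this large between subgroups
# would trigger review and rebalancing of the training data.
```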

Healthcare organisations must also raise AI to be accountable for the actions it takes. Companies using the technology must think carefully about the responsibility and liability of the actions their AI takes on their behalf.
Organisations might consider being abundantly clear about how they will use patient data by sharing a public statement around responsible and explainable AI and AI governance.

Lack of trust. Lack of adoption.

Raising explainable and responsible AI is essential to earning the trust of those interacting with the technology. And trust is the only way to foster adoption. For healthcare to realise AI’s full potential, each organisation must be transparent about its motives for using data and ensure those motives are consistent with consumers’ expectations. That means understanding what the consumer believes the business has the right to do with their information.

Health plans likely have more trust-related issues to overcome than providers. For instance, consumers perceive medical information shared by health plans to be biased. An Accenture survey found that when researching treatments, 69 percent of consumers believe health plan sources are steering them in a preferred direction.4

As health plans start to use AI to augment employees in core transaction processing, such as claims processing, the insight and the path to each decision must be transparent and explainable to consumers, not hidden in a black box. What’s more, regulators are often involved in reviewing a consumer appeal or grievance on a claim denial, and they expect a clear and explainable rationale for a determination.

In addition to earning consumers’ trust, organisations must earn the trust of clinicians, who will only adopt AI they believe is trustworthy and dependable. But how dependable must the technology be before making it available to clinicians? As reliable as a medical student? A nurse? A physician’s assistant?

Proving outcomes is one way to earn clinicians’ trust in AI. Jvion is using an Eigen-based cognitive computing engine to help healthcare organisations identify patient-specific risk of adverse events and recommend individualised actions to help mitigate those risks. By using Jvion’s solution to reduce readmissions, Health First has avoided more than 800 unnecessary inpatient days and realised about $2 million in material savings.

New solutions will allow healthcare organisations to be more transparent about how AI models make decisions, providing the evidence that analytical thinkers, such as doctors and regulators, seek to validate decisions and eliminate bias. For example, Pegasystems created a customer decision hub with a “T-Switch” feature that allows an organisation to set transparency thresholds for AI. This gives customers a choice: some models are opaque, and some are more transparent and, therefore, explainable. Such solutions allow users, whether clinicians, administrators or consumers, to see the inner workings of an AI model, including the factors that contributed to a decision. With the T-Switch, AI becomes a consultative companion for decision making. A feature like this is especially important for compliant AI in light of the European Union’s General Data Protection Regulation, which includes a “right to explanation” of decisions made by automated systems.
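
Pegasystems’ actual T-Switch is proprietary, but the underlying idea of surfacing contributing factors can be illustrated with an inherently transparent model class. The toy sketch below is an assumption, not the vendor’s API: for a linear model fit on synthetic data, each feature’s contribution to a single decision is its learned coefficient multiplied by the patient’s feature value, which is the kind of evidence a clinician or regulator could inspect.

```python
# Illustrative only: not Pegasystems' T-Switch. For a transparent
# (linear) model, per-decision contributions are readable directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature names standing in for real clinical variables.
feature_names = ["age", "bmi", "systolic_bp", "prior_admissions"]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # synthetic patients
y = (X[:, 3] + 0.5 * X[:, 0] > 0).astype(int)  # synthetic outcome

model = LogisticRegression().fit(X, y)

# Explain one decision: which factors pushed this patient's score?
patient = X[0]
contributions = model.coef_[0] * patient
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {c:+.3f}")
```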

73% of health executives are planning to develop internal ethical standards related to the use of AI to ensure their AI systems are designed to act responsibly.

Keeping up with AI

AI is changing every day—and health executives (80 percent) believe it is advancing faster than their organisation’s pace of adoption. As AI capabilities in healthcare continue to evolve, leaders must quickly take on the challenge of raising AI in a way that acknowledges its new roles and impact in healthcare.

Eighty percent of health executives agree that within the next two years, AI will work next to humans in their organisation, as a coworker, collaborator and trusted advisor.

Deeper insight into disease predictions

Researchers at New York’s Icahn School of Medicine at Mount Sinai have a unique collaborator in the hospital: their in-house AI system, affectionately known as Deep Patient. Armed with an analysis of electronic health records from 700,000 patients, Deep Patient taught itself to predict risk factors for 78 different diseases, and doctors now turn to the system to aid in diagnoses.

Deep Patient may not be a person, but it’s more than just a program. Artificially intelligent systems learn, make autonomous decisions, and have grown from technological tools into partners among people, coordinating and collaborating with humans in the workforce and society. With increasing autonomy and sophisticated capabilities, AI now often has as much influence as the people putting it to use.

FOOTNOTES

1 “AI startup KroniKare diagnoses wounds with smartphones,” The Edge Markets, May 31, 2017.

2 “Here’s looking at you, gran: the rise of elder tech,” Financial Times, February 21, 2018.

3 “Zocdoc appointment booking app now verifies insurance with AI,” Healthcare IT News, October 25, 2017.

4 Accenture, “Building Trust and Using the Wisdom of the Crowd,” 2014.

Kaveh Safavi

Senior Managing Director – Global Health Industry


Brian Kalis

Managing Director – Digital Health
