


September 15, 2017
LET’S GET A GRIP ON ARTIFICIAL INTELLIGENCE IN HEALTHCARE
By: Fauzia Zamin-Malik

Examining digital health questions surrounding the ethical, responsible use of AI by doctors

In the last few years, artificial intelligence (AI) technologies have had a growing presence in healthcare. Though AI will not displace human relationships in the delivery of care, it will drive an uncharted transformation of the healthcare industry. As highlighted in Trend 5: The Uncharted from the Accenture 2017 Digital Health Tech Vision, such a potentially massive change will demand a dramatically different governance structure from what we see today.

The recent announcement of the Software Pre-Certification Pilot Program by the Food and Drug Administration (FDA) is one step in that direction. The FDA’s stated purpose for the program is to balance the reduced time and cost of market entry for digital health against patient safety and quality of care. The FDA openly acknowledges that a new paradigm is needed to regulate digital health technology while fostering innovation. More importantly, the program, and hopefully the new governance structure, will include input both from those who develop and offer AI and from those who use AI in the delivery of care. These two groups working together can help properly balance technological opportunity with patient access to, and the safety of, digital health technologies.

A key factor in the success of the program and the resulting governance will be ensuring that the AI used in digital health technologies is explainable and ethical. As Accenture has previously highlighted, this type of governance will help AI meet the responsibility imperative. And in healthcare, Responsible AI means appropriately reviewing the overall safety, quality and effectiveness of the care that is delivered by AI. This means homing in on questions around:

  • Data Integrity. Are the source and input data used to establish the AI solution’s underlying algorithms drawn from a meaningfully sized and diverse population? (A minimal illustration of this kind of check follows this list.)

  • Architecture. Are the algorithms based on ethical and accurate assumptions? Is the underlying AI technology designed to learn from its mistakes and successes? Can the AI solution adapt to changes in the environment, such as new technology, medical findings, population shifts and treatments?

  • Consistency. Is the AI solution being rolled out, implemented and updated in a manner consistent with other AI in healthcare? Will the AI solution act consistently with common medical and healthcare standards?

  • Monitoring. How often should the AI be checked for quality, safety and effectiveness? Who should do this? How should updates be reviewed given the swift changes inherent in AI?
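
To make the data integrity question concrete, here is a minimal sketch, in Python, of the kind of automated check a review team might run on a candidate training dataset before algorithms are built on it. Every threshold, field name and function in it (check_data_integrity, MIN_RECORDS, MAX_GROUP_SHARE, the demographic fields) is a hypothetical illustration, not an FDA requirement or an Accenture specification.

```python
from collections import Counter

# Minimal, illustrative sketch of a pre-training data integrity check.
# Every threshold and field name here (MIN_RECORDS, MAX_GROUP_SHARE,
# "sex", "age_band", "ethnicity") is a hypothetical assumption, not an
# FDA requirement or a published standard.

MIN_RECORDS = 10_000      # hypothetical floor for "meaningfully sized"
MAX_GROUP_SHARE = 0.60    # hypothetical cap so no one group dominates


def check_data_integrity(records, demographic_fields=("sex", "age_band", "ethnicity")):
    """Return a list of issues suggesting the dataset is too small
    or too homogeneous to responsibly train a clinical AI model on."""
    issues = []
    if not records:
        return ["dataset is empty"]
    if len(records) < MIN_RECORDS:
        issues.append(f"only {len(records)} records; need >= {MIN_RECORDS}")
    for field in demographic_fields:
        counts = Counter(r.get(field, "unknown") for r in records)
        top_group, top_count = counts.most_common(1)[0]
        share = top_count / len(records)
        if share > MAX_GROUP_SHARE:
            issues.append(f"{field}: '{top_group}' makes up {share:.0%} of records")
    return issues


# Example with toy records: too few of them, and a single demographic group.
sample = [{"sex": "F", "age_band": "40-60", "ethnicity": "A"}] * 9_000
for issue in check_data_integrity(sample):
    print(issue)
```

In practice, the thresholds themselves would be among the things a governance body debates, documents and revisits over time.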

Appropriate governance will help find a path to meaningful answers to these and related questions. It will also help manage overall consistency in the development of AI in healthcare and provide a forum to confirm that AI is updated appropriately based on disparate or diverse outcomes, as well as on changes in technology, diagnosis and treatment options.

Perhaps most importantly, a governance structure specific to healthcare will ensure an ethical review of the underlying health data used in setting up AI, as well as of the data input during clinical practice and other environmental changes. Its mission here: guarantee that the data is complete, accurate and has integrity. We are hopeful that the FDA will include these factors as it rolls out the pilot program and considers the mechanism of governance for digital health technologies.

There are two other key factors to consider in the ethical deployment of AI in healthcare: transparency and risk exposure. I will discuss them in my next blog post.
