Developing ethical artificial intelligence in healthcare requires transparency
In my last post, I began my discussion on how clear oversight can help ensure inclusion of ethical considerations when deploying artificial intelligence technologies, or AI, in healthcare. Another key area—transparency—will also need to be addressed.
Today, transparency in medicine is comparatively simple. Clinicians know their limitations and can seek the help of their peers through processes such as peer review. Likewise, patients can question a recommendation or provide additional information based on their trust in the clinician.
But with AI, there is little guidance around when and how to provide transparency. Some key questions to ask when using or benefiting from AI in healthcare are:
Lack of clarity on the above dilutes trust, and without trust from clinicians and patients alike, AI adoption, and its subsequent success, will be severely hampered.
One way to establish clinician trust would be to ensure clinicians are aware of and understand the factors, objectives, parameters and limitations of the AI. This could come through education similar to professional credential training or what clinicians receive as part of a drug or medical-device rollout. Such education would help clinicians understand if, when and how to provide input when the AI seems inapplicable, incomplete, inaccurate or inconsistent. Further, an ongoing governance board or committee, both within the clinician's organization and within the oversight organization, should be established to ensure the AI remains explainable.
Patients should also be informed and educated on the overall use of AI in the course of their diagnosis or treatment, as well as the key objectives and limitations of the underlying AI. This could be similar to the educational materials used in the marketing or sale of drugs or medical devices.
Two other key aspects of transparency should also be considered:
Right now, a clinician's responsibility can be determined by assessing compliance with professional standards, laws or regulations. This gets murkier with AI. How accountable are developers and implementers if the AI produces a faulty outcome or fails to learn? Who is responsible for monitoring this liability?
Getting clarity on who can be held responsible, in an equitable manner, will benefit everyone. We need to start this journey now, as it will help clarify whether the risk is worth the reward for everyone, from the developers of the algorithms to the patients benefiting (or not) from the AI. Whether it is done through statutory, regulatory, contract or industry standards, the process for determining liability needs to balance the criticality of meeting baseline standards with support for innovation and the ability to fully leverage AI in healthcare.
This underscores that the universe of stakeholders collaborating to establish an ethical role for AI in healthcare must include both the traditional providers of healthcare and pharmaceuticals and the IT companies and medical consultants developing and marketing AI. One example of such collaboration is Xcertia, which aims to provide governing principles that promote the safety and effectiveness of mobile health applications. Hopefully, organizations such as Xcertia will help incorporate feedback on these areas from the different players to advance the quality, usability and safety of AI for both consumers and healthcare professionals.