Lack of trust. Lack of adoption.
Building explainable and responsible AI is essential to earning the trust of those who interact with the technology, and trust is the only way to foster adoption. For healthcare to realise AI’s full potential, each organisation must be transparent about its motives for using data and ensure those motives align with consumers’ expectations. That means understanding what the consumer believes the business has the right to do with their information.
Health plans likely have more trust-related issues to overcome than providers. For instance, consumers perceive medical information shared by health plans to be biased. An Accenture survey found that when researching treatments, 69 percent of consumers believe health plan sources are steering them in a preferred direction.4
As health plans start to use AI to augment employees in core transaction processing, such as claims processing, the path to each decision must be transparent and explainable to consumers rather than hidden in a black box. What’s more, regulators often review consumer appeals and grievances over claim denials, and they expect a clear, explainable rationale for each determination.
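One way to keep a determination out of the black box is to record the reason for every check a claim passes or fails, so the same audit trail that drove the decision can answer an appeal. The sketch below is a toy illustration of that idea; the rule set, field names and `Determination` type are all hypothetical, not any health plan’s actual adjudication logic.

```python
from dataclasses import dataclass, field

@dataclass
class Determination:
    approved: bool
    reasons: list = field(default_factory=list)  # audit trail for appeals and regulators

def adjudicate_claim(claim):
    """Toy rule-based adjudication that records why each decision was made."""
    reasons = []
    if not claim["member_active"]:
        reasons.append("Member coverage was not active on the date of service")
    if claim["procedure_code"] not in claim["covered_codes"]:
        reasons.append(f"Procedure {claim['procedure_code']} is not a covered benefit")
    if claim["billed_amount"] > claim["allowed_amount"]:
        reasons.append("Billed amount exceeds the plan's allowed amount")
    approved = len(reasons) == 0
    if approved:
        reasons.append("All coverage and benefit checks passed")
    return Determination(approved=approved, reasons=reasons)

# Hypothetical claim: the procedure code is not in the covered set,
# so the determination is a denial with an explicit, explainable reason.
claim = {
    "member_active": True,
    "procedure_code": "99213",
    "covered_codes": {"99212", "99214"},
    "billed_amount": 180.0,
    "allowed_amount": 210.0,
}
result = adjudicate_claim(claim)
```

Because every denial carries its reasons, a consumer-facing letter or a regulator’s review can cite the exact checks that failed instead of an opaque score.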
Organisations might consider being abundantly clear about how they will use patient data by publishing a public statement on responsible and explainable AI and on AI governance.
In addition to earning consumers’ trust, organisations must earn clinicians’ trust in AI before clinicians will adopt it. When clinicians use AI to make decisions, they need to believe the technology is trustworthy and dependable. But how dependable must the technology be before making it available to clinicians? As reliable as a medical student? A nurse? A physician’s assistant?
Proving outcomes is one way to earn clinicians’ trust in AI. Jvion is using an Eigen-based cognitive computing engine to help healthcare organisations identify patient-specific risk of adverse events and recommend individualised actions to help mitigate those risks. By using Jvion’s solution to reduce readmissions, Health First has avoided more than 800 unnecessary inpatient days and realised about $2 million in material savings.
New solutions will allow healthcare organisations to be more transparent about how AI models make decisions, providing the evidence that analytical thinkers, such as doctors and regulators, need to validate decisions and root out bias. For example, Pegasystems’ Customer Decision Hub has a “T-Switch” feature that lets an organisation set transparency thresholds for AI. This gives customers a choice: some models are opaque, while others are more transparent and, therefore, explainable. Such solutions allow users—whether clinicians, administrators or consumers—to see the inner workings of an AI model, including the factors that contributed to a decision. With the T-Switch, AI becomes a consultative companion for decision making. A feature like this is especially important for compliant AI in light of the European Union’s General Data Protection Regulation, which includes the “right to explanation of decisions made by automated systems.”