
February 22, 2019
Great expectations of AI perfection
By: Niamh McKenna

According to Accenture’s Executive Survey on AI in Healthcare, the extent of the UK's trust in healthcare AI is likely to be put to the test quite soon. The survey indicates that the UK is a leader in the adoption of AI in healthcare. Health executives in the UK were well ahead of the field when it came to integrating AI into their organisations.

Source: Accenture Executive Survey on AI in Healthcare, multi-country results (n=180)


When asked to what extent they expected to realise value from AI over the next three years, 90 percent of the UK health executives polled said they expected better clinical outcomes, and 87 percent said they expected improvements in patient satisfaction.

This raises the question: what exactly will satisfy patients? After all, robots have a bit of a reputation. They may be a bit short on emotional awareness, but the associations we make with them are usually of precision, exactitude, even perfection. Robots are perfectly tuned machines, after all. So when it comes to human lives, we should expect nothing short of perfection, non? We might be willing to tolerate the odd machine error in administrative matters like billing, bed allocation or operating theatre scheduling, but when it comes to matters of life and death we expect things to work perfectly. Look at the transport sector: that's why a human death involving an AI-driven Uber vehicle deserves all the media attention it gets, right? Robots shouldn't be making mistakes like that!

Not so fast.

When we talk of improved clinical outcomes, the benchmark we're referencing (whether we realise it or not) is the quality of current clinical outcomes. To return to the vehicular analogy: the question is not whether AI-driven cars cause or are involved in fatal accidents; the question we need to ask ourselves is whether they are statistically more or less likely to cause or be involved in fatal accidents than cars driven by humans.

“Improved” doesn’t have to mean “perfect”

The uncomfortable truth is that humans aren't perfect. The Lancet reports that the NHS "spent £1.8 billion on negligence claims in the 2017–18 financial year" (twice what it spent in 2010). "Estimated total liabilities for 2017–18—the cost facing the NHS if all claims against it were successful—is £65.1 billion, up from the 2015–16 estimate of £56.4 billion," says the journal. Yet the NHS ranks very well among global health systems. A 2017 Commonwealth Fund report places it at the top of a ranking of health systems that included Australia, Canada, much of Europe and the US. In other words, the facts tell us that despite all the negligence claims against it, the NHS represents a more-than-reasonable benchmark of what a public health system should be.

In this light, we should probably reframe the issue of "better clinical outcomes". In the end, whether we're talking about a human healthcare team, a human-AI hybrid team or purely AI-driven care, the only rational approach (while always striving for perfection) must be that as long as we're improving on current standards, we're doing the right thing. So while we should continue to ask questions such as those raised last year about whether Babylon Health can miss signs of serious illness ('High-profile health app under scrutiny after doctors' complaints', FT.com, 13th July 2018), we should also start comparing those results against human caregivers, using benchmarking research and proper control groups.
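To make that benchmarking idea concrete, here is a minimal sketch, assuming Python with the statsmodels library, of how one might compare adverse-outcome rates between an AI-assisted pathway and a human-only control group. The arm sizes and outcome counts below are entirely hypothetical placeholders, not figures from the survey or any study.

```python
# Illustrative sketch only: comparing adverse-outcome rates between an
# AI-assisted care arm and a human-only control arm.
# All counts are hypothetical placeholders for the sake of the example.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical trial counts: (adverse outcomes, patients) per arm
ai_adverse, ai_patients = 42, 10_000        # AI-assisted care arm
human_adverse, human_patients = 58, 10_000  # human-only control arm

# Two-proportion z-test: is the AI arm's adverse-outcome rate lower
# than the human benchmark's?
stat, p_value = proportions_ztest(
    count=[ai_adverse, human_adverse],
    nobs=[ai_patients, human_patients],
    alternative="smaller",
)

print(f"AI arm rate:      {ai_adverse / ai_patients:.4%}")
print(f"Control arm rate: {human_adverse / human_patients:.4%}")
print(f"p-value (AI rate lower): {p_value:.3f}")
```

The point of the sketch is simply that "better" is a statistical comparison against the current human benchmark, not a claim of zero errors.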

Pushing boundaries responsibly

This is critical when deciding how to regulate ourselves in the world of healthcare AI. When it comes to health technology, there is always likely to be tension between pushing the boundaries and being responsible. Yet it's difficult to make any progress without pushing boundaries. That's why the focus on operational uses of AI that we see in the results of our Executive Survey on AI in Healthcare is a sensible approach. Having said that, dipping a toe into the clinical care pond thoughtfully and cautiously is also wise, and will help make sure the technology does indeed improve patient outcomes. Got any thoughts? Please get in touch; I'm happy to chat through these issues as they pertain to your organisation.
