Jeremy Howard was pioneering ways for deep learning to help physicians interpret medical data better when the challenge he was tackling suddenly hit close to home.
When Jeremy’s wife, Rachel, was diagnosed with a brain cyst while she was pregnant with their first child three years ago, the couple did what comes naturally to data scientists: they created a spreadsheet. It contained possible treatments, their known likelihoods of success and failure, the value the couple assigned to different outcomes, and the potential problems if things went wrong.
When most of us fall ill, we find ourselves thrust into a world of frantic Googling, confusing choices, and fear of the unknown. We place trust in our doctors to know what’s best. Jeremy, though, trusts data. When it comes to making decisions – from investing money, to raising children, to taking medicine – he uses probabilities, priors and statistics. At his home just outside San Francisco, he works at a personalised, home-built workstation, where he usually paces on a treadmill at exactly 0.8 mph. Through experiments he’d calculated that this increases his productivity by around 50 percent.
Running some rudimentary statistics on his spreadsheet about Rachel’s condition, Jeremy and Rachel calculated that one course of action – an immediate operation – was significantly better than the alternatives. When they reported this to Rachel’s doctors, however, it was disregarded. “We are dealing with six different departments and we’re in the middle of it, being told opposite things,” he recalls. For a man who’s a pioneer in using big data to diagnose health conditions, this was a sudden collision of the professional and the personal.
“Today we’ve got this perfect storm of the unprecedented computing power combined with the largest-ever pool of human talent who know how to drive it.”
Five years earlier, Jeremy had been trying to figure out how best to apply his knowledge of data. He’d spent the previous 25 years in machine learning – a branch of artificial intelligence – including a stint running Kaggle, a watering hole for the world’s top machine learning specialists and the home of data science. He’d already made enough money to relax a little, and had won several international data science competitions. But he was looking for a new challenge: how to apply machine learning to a pressing social problem. He cuts an uncommon figure in the world of Silicon Valley, like an 18th-century scientist – intellectually curious, ruffled hair, excitable, friendly. Maybe it’s because he sees himself as an outsider – an Australian in California, and one who studied philosophy to boot.
Few technologies have leapt from science fiction to fact as quickly as artificial intelligence. But forget cold killer robots à la Terminator, or sentient, cunning ones like Ex Machina’s. The real action is task-specific machine learning, which allows machines to mimic a particular human behaviour by feeding them lots of examples. “Today we’ve got this perfect storm,” Fernando Lucini, European Artificial Intelligence Lead at Accenture Digital, says of this unprecedented computing power combined with the largest-ever pool of human talent who know how to drive it. AI relies on data to learn, which creates a powerful feedback loop: more data fed in makes it smarter, which allows it to make more sense of any new data, which makes it smarter still, and so on. Machines don’t need to be conscious to undertake complicated tasks; they just need to be well trained. One approach in particular, called “deep learning,” is driving stunning advances. Rather than hand-coding rules, engineers set an objective and let the machine work out for itself, from vast numbers of examples, how to achieve it. This has had particularly promising results when training “neural networks” (networks of artificial neurons that behave a little like real ones).
Machine learning is developing faster than even its proponents expected. From driverless vehicles to clerical work, from burger flipping to voice recognition to beating the world’s best Go players – machines are able to undertake more and more tasks, including those long thought to be uniquely human. The magic of AI is that it can draw on millions of examples of anything — from Go games to commuter traffic routes — compared to human experience, which, even for the dedicated, is likely to be in the thousands. From banking to mining, AI can now be found absorbing data and examples in every sector imaginable. “It has a massively broad knowledge of cases,” Fernando explains, “that you could never experience in your life.”
Jeremy, who has no qualifications in medicine at all, attended a talk about worldwide doctor shortages at Davos in 2014. It was his eureka moment. “I was like: ‘holy…,” he recalls. “There’s a huge gap that can’t be filled, in an area that’s basically data analysis.” He decided diagnosticians could use machine learning, and that he'd build it for them.
It might sound odd to describe doctors as data analysts, because it is such a human job. When you turn up at the hospital and present with some ailment – let’s say undiagnosed lung cancer – the doctor has to make a judgement. She will have all sorts of experience and past cases to draw on, not to mention various rules and guidelines. She might have a CT scan to look at, and will think back to the many previous examples she’s seen. It’s a very artisanal trade, but at its core, a fair chunk of a doctor’s work is diagnosing an illness and proposing a course of action – and that’s a data-driven decision. “I always thought of medicine as very human – human judgements, human decisions,” Jeremy says. “But it’s also a data problem. And I understand data.”
Jeremy was especially interested in those CT scans. Every day around the world doctors look at various images – X-rays, MRIs, ultrasounds and so on – and try to work out what is wrong. If those scans were digitised and analysed by algorithms, thought Jeremy, suddenly the doctor could have more than just a few dozen cases to draw on. She would have millions.
In mid-2014, Jeremy founded Enlitic with the aim of revolutionising medical diagnostics. It takes pathology images, lab results such as blood tests, genomic data and patient histories, and uses machine learning to spot patterns. Trained on thousands of scans and their known results, an algorithm can detect patterns invisible to even the sharpest human eye. Within a couple of months, Jeremy and his team had built algorithms that could spot lung cancer more accurately than a panel of the world’s best radiologists.
We’ve become immune to tech miracles these days. But consider that last sentence for a moment. Jeremy and his team of data analysts, with no medical training, were able to build a machine that gave doctors a way of dramatically increasing their ability to care for patients. In the test, the humans spotted 93 percent of the cancers; Enlitic spotted 100 percent. (The average point at which a doctor spots a tumour in your lungs is when it is 4 cm in radius – at which point it’s odds-on you’ll not survive. But a machine with the right data can identify minuscule abnormalities nearly invisible to human eyes.) Neither is perfect. The humans incorrectly diagnosed cancer in 66 percent of the cases; Enlitic’s false positive rate was less than half that. And then add speed. One human would take over a thousand years to examine 30 million scans; one Enlitic programme could do it in just over a week.
These numbers obscure the fact that each misdiagnosis is an expensive human tragedy. In the U.S. alone, the Institute of Medicine estimates that diagnostic errors affect 12 million Americans every year, and some studies of radiology have found that false positive rates can be up to 2 percent, while false negative rates can exceed one in four. Such costly error is one reason why machine learning in health care has become big business. It is also a field replete with enormous volumes of valuable data. Accenture is working on AI projects with innovative healthcare partners around the globe; Google is testing technology to spot signs of diabetic blindness with Moorfields Eye Hospital in London; and earlier this year Jeremy’s old company Kaggle announced the winners of a $1 million contest in which more than 10,000 researchers competed to build machine learning models that could detect lung cancer from CT scans. The AI health market is expected to reach $6.6 billion by 2021, according to Accenture research. And while the highest-profile medical AI applications are currently diagnostic, the most exciting future applications could be preventative technology for use at home, according to Fernando Lucini.
Shortly after Enlitic was founded, Jeremy attended the world’s biggest radiology conference in Chicago, speaking to as many people as he could with the message that machine learning could transform the whole craft. “No-one had any idea what I was talking about,” he recalled. Hardly anyone followed up. He left feeling like someone from the future who was talking in a language no-one really understood. But later that year IBM, to considerable media hype, started buying huge volumes of medical data (it has now spent over $4 billion acquiring scans, journals, and the like) and announced it would turn its Jeopardy-winning Watson machine into a medical student.
Andrew McAfee, a renowned academic, wrote at the time that a supercomputer doctor would be a game-changing diagnostician – since it could hold all available medical knowledge, never forget any of it, be extremely accurate, consistent and cheap. “Suddenly my phone wouldn’t stop ringing,” Jeremy says. It was curious health professionals, having remembered some slightly eccentric data guy with a mild Australian accent from a few months earlier who wouldn’t stop talking about machine learning.
Just as Enlitic was starting to take off, Rachel fell ill. She was given an MRI scan, and, in a dark irony, her cyst was missed in the initial scan: the very problem that Jeremy had built a system to solve.
His experience with Rachel’s doctors gave Jeremy a personal perspective on how data might improve patient care, in addition to any aggregate benefits. Because of the complexity of Rachel’s case, several doctors were involved, including a neurologist, neuro-endocrinologist, neuro-surgeon, obstetrician, and infectious disease specialist. They would frequently disagree about the best course of action. On one occasion, one doctor blocked a certain medication, claiming the pregnancy wouldn’t permit it – even though the obstetrician said it would be fine. These communication and co-ordination challenges caused delays, sometimes while Rachel was in unbearable pain. “I couldn’t believe it,” he said, with audible frustration. “I was in the middle of the very thing I have built the technology to solve. When you’re looking after a family member, your first step is to show confidence, but behind the scenes I felt helpless.”
This too, reckons Jeremy, is where machine learning might help. Rather than several doctors, each with their own area of specialism, machine learning could integrate multiple disciplines. After all, data isn’t divided into disciplines, each with its own rules and bureaucracies. “The goal,” Jeremy said, “is a totally personalised way to know what’s going on with your health right now and …then we can look at all the possible interventions, and the probabilities of what will work. My wife is not broken into various disciplines. She’s one person.” In the end, Rachel did have the surgery – and recovered with only a minor complication. “We were effective patient advocates. They realised we weren’t going to shut up. But what about the people that don’t have that? I don’t want it to happen to anyone else.”
This is disruptive for doctors too. Machine learning approaches are still new, and doctors cannot simply discard their training and rule-based systems in favour of this new field of data science until it is well established. And there will undoubtedly be problems along the way. According to the author and tech critic Nicholas Carr, professionals lose something important the more they rely on data and tech. Computer-assisted detection has been around in medicine for many years, and has never really delivered the outcomes it promised. One recent study of doctors who adopted electronic records found decreased clinical knowledge and increased stereotyping of patients. Carr fears that the more doctors rely on data outputs, the less they will exercise their own judgement (this is sometimes called “automation bias”). Other studies have found that doctors already spend between 25 and 55 percent of their time looking at computer screens – which limits their time with patients, and with it the hands-on approach to patient care that is so important.
The hope is that, over time, AI can eliminate back-office work to free up doctors to focus more on patients, says Niamh McKenna who leads Technology Consulting in Accenture’s UK Health practice. “AI might bring completely different dimensions to jobs that we’ve not thought about,” she says. “If, as a doctor, I can move past the problem-solving diagnosis point because I’ve got something to help me do that, then it becomes much more of a human service industry. Machines do the diagnosis and what I’m doing is helping that person through difficult times — and of course that’s what doctors do today, but maybe in the future that will be their only focus.”
Then there are the privacy issues. Medical data is the fuel for machine learning, but it is also highly personal, and in the wrong hands can be misused. According to Sam Smith of the charity MedConfidential, straightforward image-based diagnosis doesn’t need much personally identifiable data, because historical scans can be anonymised. But when it comes to more predictive work – such as trying to assess the likelihood that someone will develop a certain illness in the future – the more data, and the more personal the data, the better the AI. That’s when it gets more controversial, explains Sam, partly because personal health data might be handed over to a private company driven by the profit motive rather than patient welfare. Earlier this year, for example, the Information Commissioner’s Office found that London’s Royal Free hospital had failed to comply with the Data Protection Act when it handed over the personal data of 1.6 million patients to DeepMind, a Google subsidiary, which was looking to use the data to build a detection system for kidney injury.
But if the technology is rushing ahead, public opinion appears to be heading in the opposite direction. Rather than excitement about the wonders of smart machines, many people are worried. According to a recent survey, significant proportions of Americans report feeling nervous about AI – whether over job losses or even an existential threat to humanity. The whole subject seems to elicit a very emotional response: it’s a canvas onto which we paint all our worries about the future.
There are reasons to be cautious. But it’s important not to lose sight of the possible benefits on offer. Technological advances often promise to free professionals up, and to give us all more time, but they rarely do. There are reasons to think this time might be different. “A doctor has three main roles: collecting data, making a diagnosis and then performing the actual intervention,” Jeremy says. “This technology is designed to help with the middle bit.” That leaves more time for everything else, including patient care and communication. “This will help doctors be human doctors.” And we ought not to romanticise how much patient time doctors have now: repeated studies show they feel overworked, and say themselves that they don’t spend enough time with the people in their care.
“If you can combine AI with mobile technology” says Niamh McKenna, “you can potentially diagnose an infinite number of people effectively. That will be truly transformative.”
Machine learning approaches hold out the prospect of a leap forward in health outcomes not witnessed since penicillin or the arrival of epidemiology. While the tech is still in a development phase – much of Enlitic’s work is currently in China, for example – Jeremy hopes it will expand quickly across the world, especially into the developing world, where some countries have barely any trained radiologists at all. “Replacing doctors is not the plan,” he said. “I hope that we can get this software to places where there are currently no radiologists, and a community health worker with two months of highly specific training could deliver world-class assessments.”
Some start-ups are already exploring how to pair medical AI with mobile technology to help diagnose patients at scale.
With healthcare funding and outcomes in crisis in both the developed and developing worlds, machine learning will continue to advance at speed.
And so it should – provided it can be designed in a way that improves doctors' ability to do their work, and privacy or profit worries can be ironed out. In the end, perhaps the question is a personal one: would you trust Jeremy’s algorithm to diagnose you? Or is there something unique about a doctor making a decision that should never be ceded to a machine? Who do you most trust, in a matter of life and death – man or machine? In the next few years, this is a decision you will probably have to take.