We’ve talked a lot about the human+AI approach, and for good reason. Humans and artificial intelligence working together can accomplish so much more than either can do apart. But it’s not always clear how to begin harnessing the potential of these powerful collaborations.

That’s where experimentation comes in. Purposeful experiments on the use of human+AI systems in your workplace can help you uncover the path to new business value. We’ve done experiments with the human+AI approach, and the lessons we learned can help you to craft your own experiment and use it to ensure that when there is change, it’s for the better.

And it’s more important than ever to ensure that we’re implementing AI thoughtfully. In 2019, the business world’s attention turned to the needs of stakeholders beyond the shareholder, including greater security for employees and greater equality of opportunity in communities. The pandemic and recession of 2020 deepened those needs. Then, at the end of 2020, research revealed that machines were taking over tasks from humans faster than ever, driven by those same pandemic-era shifts. And it’s well understood that people with lower levels of skills and education are most at risk, since automation and AI take on lower-complexity tasks most readily.

As an AI researcher who builds and promotes AI systems, I might have wondered whether we’re only making matters worse, had I not learned otherwise from a prior experiment. It was run at our Lab and at The Dock, Accenture’s innovation hub in Dublin. My colleagues and I had co-created and tested a new role for workers who had an AI teammate. You can read more about the experiment in Sloan Management Review, but in short: bringing AI into the workforce doesn’t have to be a zero-sum game. It’s not human versus AI.

In fact, our experiment showed that people gained valuable skills and knowledge while training an AI system designed to help them in their roles. The workers learned to teach their AI colleague, improving the accuracy of the process for customers. We then saw how the up-skilled workers could support the development of new products and services. Employee up-skilling, customer service improvement and enterprise value – all as a result of how we had experimented with AI.

For me personally, this was a uniquely interesting project. People often cite the lesson from history that technology innovations create more jobs than they eliminate – but that lesson had never fully reassured me. What if this time it’s different? It certainly feels different from previous waves of automation. Machines are getting ever better at mimicking activities we consider especially human, like talking and reasoning. Can we make sure that new jobs are created, and created in time, so that people aren’t left behind?

We ran our project as an experiment precisely because we had more questions (these and many others) than answers. As you’ll see below, the experiment showed that the outcomes we valued were possible and illuminated practical steps to make them real. You can use a similar approach to map the steps from your organization’s shared values to new human+AI value in your business. Where do you start? Here are guidelines based on our experience, along with examples of what they helped us to learn.

  • Don’t limit your learning. Careful experiments tend to yield much more than a simple proof or disproof of their hypotheses. They often reveal “unknown unknowns” along the way. For example, we learned that the workers in our case loved sharing their hard-earned expertise, both with each other and with their new AI colleague. That personal motivation was a source of win-win value that we might otherwise never have discovered. It had the power to improve expertise across the human+AI team and make the team capable of tackling more and different tasks.
  • Agree with your project sponsors on what an “experiment” means. We agreed at the outset that we ought to learn first and industrialize any changes later. This reduced risk and it meant we could be nimble by working with just nine users of the system we studied, a small subset of the workforce in that business area.

    For many teams, experiments are unfamiliar. Projects with more certain outcomes are the norm. Some may use “A/B testing” of alternatives, but that essentially precludes learning about “unknown unknowns.” Others might use the word experiment to mean “let’s try X, and who knows what will happen!” But if you haven’t done the homework to shape a crisp hypothesis, you might not be asking the questions that will result in the greatest learning.

  • Don’t write requirements. Write questions and hypotheses. At the core of an experiment is its hypothesis. Ours was that people working symbiotically with AI could be a sustainable and beneficial arrangement for both the people and the AI, and a source of business value. For us “symbiotically” implied a mutually beneficial exchange, as exists between some pairs of species in nature. You can see that things we valued were baked into the hypothesis, including ensuring people’s work was sustainable and beneficial for them. Then, by following the experimental process, we found the practical steps to realize the outcomes we valued.
  • Understand your people and their work. Most of our learning came from paying close attention from the very start to those people whose work would change – learning about their roles, goals, motivations, fears and aspirations. The nine experiment participants from that group were our key partners in co-creating the new role. The intervention in our experiment would impact the processes and tools they used, so those were important to understand too.

    We planned to test our hypothesis by enhancing an AI system that was already in place. Registered nurses working as medical coders had been using the existing AI system to help them annotate medical records. The records, produced as patients interact with care providers, are often unstructured. Annotating them with standardized codes helps with payment processing and data analysis that can improve patient care. (The first sketch after this list illustrates that suggest-and-confirm annotation step in simplified form.)

  • Engage a team with broad skills to co-create, evaluate and iterate. Understanding the status quo and thoughtfully changing it required melding a range of skills on the experiment team. Designers mapped as-is and prospective to-be states with extensive user input and feedback. AI developers advised on the capabilities of the technology. Together, both groups brought possibilities to life in a series of prototypes, iterating them based on feedback from users. A journalist, originally engaged to document our project, became our key interpreter of the relationships within the coding team, both amongst the humans and with their evolving AI colleague. That understanding helped refine what success could look like and how to reach it while avoiding pitfalls.

    We found that the existing AI was not a medical know-it-all. Rather, it learned from the medical coders, who had extensive medical knowledge and clinical experience and used it to help improve the system’s outputs. That learning was a clunky process, though. It required coders to correct the AI several times before a data scientist was alerted to update the AI’s knowledge base. So, the relationship between the old system and the medical coders did not fit our vision of being “symbiotic.” To create symbiosis, we needed to tighten the loop between the coders and the AI. The AI would continue to help the coders, but the coders would take on a direct role in teaching the AI. (The second sketch after this list contrasts the old and new feedback loops.)

  • Choose the right AI. Not all “AI” is created equal when it comes to how well it can play on a human+AI team. “Black box” varieties, known for high accuracy when data abound, cannot offer explanations to human colleagues, nor can they easily take direction from them. Our system relied on a “knowledge graph” – a succinct way of representing facts in a manner that is intelligible to humans but also amenable to machine reasoning. In fact, such human+AI systems can make AI processing possible even when data are limited. Moreover, using this system fostered collaboration and knowledge sharing amongst the human colleagues as the knowledge graph socialized individuals’ expertise. (The third sketch after this list shows a toy knowledge graph of this kind.)
  • Measure your success. A well-defined experiment needs a clear definition of success and a way to measure it. As we co-created with the medical coders, we listened constantly through interviews and surveys. We learned in real time from analytics built into the prototypes – including, on one occasion, that an experimental feature was overly taxing to use. All this helped us to know whether we were moving in the right direction.

    Ultimately, the new role was a success. Medical coders who had no previous data science training learned to take on the role of training the AI. The accuracy of AI training by the coders was high, and they strongly agreed that they were acquiring new knowledge and skills; meanwhile, assessments also showed that coders could apply the new concepts to other scenarios. Eight out of nine coders were more positive about working with AI than they had been prior to taking part in the experiment.
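
For readers who want to picture the annotation step mentioned above, here is a minimal sketch (the first of three referenced in the list) of the “AI suggests, coder decides” pattern. The keyword rules, the ICD-10-style codes and the function names are illustrative assumptions for this post, not the knowledge or interfaces of the system we actually built.

```python
# Illustrative only: a toy "suggest, then confirm" coding assistant.
# The rules and codes below are examples, not the real system's knowledge.

SUGGESTION_RULES = {
    "type 2 diabetes": "E11.9",   # example ICD-10-style codes, for illustration
    "hypertension": "I10",
    "asthma": "J45.909",
}

def suggest_codes(note_text: str) -> list[str]:
    """Return candidate codes for an unstructured clinical note."""
    text = note_text.lower()
    return [code for phrase, code in SUGGESTION_RULES.items() if phrase in text]

def annotate(note_text: str, coder_review) -> list[str]:
    """The AI proposes codes; the human coder accepts or rejects each one."""
    return [code for code in suggest_codes(note_text) if coder_review(code)]

note = "Patient with long-standing hypertension presents for asthma review."
accepted = annotate(note, coder_review=lambda code: True)  # coder accepts all here
print(accepted)  # ['I10', 'J45.909']
```

The real system was far richer, but the division of labour is the same: the machine narrows the options, and the person with clinical expertise makes the call.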
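
The second sketch contrasts the two feedback loops described above: in the original arrangement, corrections accumulated until a data scientist was alerted to update the knowledge base; in the arrangement we co-created, a coder’s correction teaches the system directly. The class and method names here are hypothetical, not our actual interfaces.

```python
# Hypothetical sketch of the two feedback loops; names and threshold are made up.

class BatchFeedbackLoop:
    """Old arrangement: corrections pile up until a data scientist steps in."""
    def __init__(self, alert_threshold: int = 5):
        self.pending = []
        self.alert_threshold = alert_threshold

    def record_correction(self, phrase: str, right_code: str):
        self.pending.append((phrase, right_code))
        if len(self.pending) >= self.alert_threshold:
            print("Alert a data scientist to review and update the knowledge base.")

class SymbioticFeedbackLoop:
    """New arrangement: the coder teaches the AI directly, one correction at a time."""
    def __init__(self, knowledge_base: dict):
        self.knowledge_base = knowledge_base

    def record_correction(self, phrase: str, right_code: str):
        # The correction becomes new knowledge immediately, with no hand-off.
        self.knowledge_base[phrase] = right_code

kb = {}
SymbioticFeedbackLoop(kb).record_correction("unstable angina", "I20.0")  # example code
print(kb)  # {'unstable angina': 'I20.0'}
```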
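
Finally, a toy illustration of why a knowledge graph suits this kind of teamwork: facts live as plain, human-readable triples that a coder can add or amend, and the machine can still reason over them, here by following “is a” links. The facts and the single code shown are made up for illustration, not drawn from our production graph.

```python
# A toy knowledge graph: human-readable triples plus a very simple reasoner.

triples = {
    ("angina", "is_a", "chest pain"),
    ("chest pain", "is_a", "symptom"),
    ("angina", "suggests_code", "I20.9"),   # example code, for illustration
}

def broader_terms(term: str) -> set[str]:
    """Follow 'is_a' links transitively, a small piece of machine reasoning."""
    found, frontier = set(), {term}
    while frontier:
        step = {o for (s, p, o) in triples if p == "is_a" and s in frontier}
        frontier = step - found
        found |= step
    return found

# A coder can teach the graph by stating a new fact in plain terms:
triples.add(("unstable angina", "is_a", "angina"))

print(broader_terms("unstable angina"))  # {'angina', 'chest pain', 'symptom'}
```

Because the whole team reads and writes the same graph, one coder’s hard-earned expertise becomes visible to colleagues as well as to the machine.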

It’s going to be quite some time before AI surpasses human abilities in many areas. Meanwhile, there’s much value to be had by teaming humans and AI. The best way to spot that value in your context, and to learn how to tap it sustainably and responsibly, may be an experiment like ours.

This research was a collaboration between colleagues at Accenture Labs, The Dock, Accenture Research, Accenture Insights Driven Health and Dr. Claire O’Connell, Irish Science Writer of the Year 2016.

To learn more about our ongoing work in the human+AI space or about how an experimental approach can help you unlock its value for your business, contact Diarmuid Cahalane.

Diarmuid Cahalane

Research Principal – Accenture Labs
