RESEARCH REPORT

In brief

  • Most business investments are approved based on predicted returns; however, estimating future returns for AI projects can be challenging.
  • We explore the issues involved and how you can get started with AI when determining future risks and returns is difficult.


Accenture’s research shows that two-thirds of the organizations surveyed are planning investments in AI over the next year. That’s not surprising when you consider that, in some industries, these investments are expected to boost revenue by more than 30 percent over the next four years.

Some AI applications link neatly to projected returns, making ROI calculations straightforward. An energy producer, for example, could tie its investment in an AI-powered predictive maintenance tool directly to increases in equipment uptime or reductions in maintenance costs.
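
As a simple illustration (the figures and variable names below are hypothetical, not drawn from any client case), the calculation in a case like this reduces to a few lines of arithmetic:

    # Hypothetical, illustrative figures only.
    tool_cost = 1_500_000           # annualized cost of the predictive maintenance tool
    extra_uptime_hours = 200        # additional equipment uptime attributed to the tool
    margin_per_uptime_hour = 8_000  # contribution margin earned per hour of uptime
    maintenance_savings = 250_000   # reduction in annual maintenance spend

    annual_benefit = extra_uptime_hours * margin_per_uptime_hour + maintenance_savings
    roi = (annual_benefit - tool_cost) / tool_cost
    print(f"Estimated annual ROI: {roi:.0%}")  # about 23% with the figures above

What makes such cases straightforward is that each input can be measured and attributed directly to the tool.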

Other applications are more complex and unpredictable, making it challenging to apply typical ROI approaches. To what extent, for instance, can reductions in crime be attributed to an AI project when many other factors may also be having an impact? Yet in any scenario, we need to make a solid business case for AI investment.

Where it is difficult to make such a business case, whether because of inherent complexity or gaps in available capabilities, organizations risk either losing competitive advantage by delaying investment or sinking money into the wrong AI initiatives.

So how can organizations get started on AI projects where future risks and returns are hard to quantify?

A staged approach to assessing future value

When returns are difficult (or impossible) to calculate upfront, or where risks need to be minimized systematically, we need to take a staged approach to AI projects. This lets us build a credible estimate of the value or risk of future, larger-scale or live implementations. The staged approach can take three main forms:

  • Scout the area. Revealing useful patterns in large datasets is one of AI’s main strengths. But not every large dataset will contain useful patterns, let alone patterns worth more than the cost of finding them. So before moving at pace into a new AI project, a limited pilot is often used to scout the area, test hypotheses and gain the knowledge needed to calculate the value of scaling up.

    For example, a forestry company asked us to apply AI to drone footage of a portion of its land to look for useful patterns. The analysis of the images revealed patches with low or zero tree density. We determined that, by planting new trees in those spaces, the company could significantly improve yield. This allowed us to calculate the potential yield gains from using AI to identify such spaces. The company could see the impact this would have on its revenue per hectare of forest, which justified applying the model to larger tracts of land (a simple version of this calculation is sketched after this list).
  • Establish a control. A/B testing compares the impact of two approaches head-to-head. Using it to assess an AI application against the status quo can help organizations quantify the difference in value.

    For example, we worked with a policing authority to improve the effectiveness of its reoffending prevention program. For the test group, our AI model flagged risk factors for potential reoffending, and the authority made targeted interventions in response.

    The control group went through the existing program, which applied a high-touch protocol to every participant, irrespective of risk factors. This was time-consuming and expensive, but it was also effective. We showed, in less than two years, that the AI model could match this overall effectiveness while reducing time and costs, and in the process we collected enough quantitative evidence to support the business case for a wider roll-out (the shape of this comparison is sketched after this list).
  • Simulate the model. There are times when the strength of an AI model needs to be proven, both in principle and in practice, before implementation. This is often the case when a potential failure could harm employees, customers, equipment or the environment. Here, you may need to demonstrate not only that a model can make reliable predictions, but also how people will use those predictions and the wider impact they will have on the organization.

    We worked with an industrial company on a predictive maintenance model for a critical piece of equipment in their plant. The asset was an important part of the plant’s operations, affecting both the throughput and the quality of the final product. The model indicated that the machines could safely be serviced less often, which would result in significant savings for the business. However, if those predictions were wrong, the result could be a lengthy shutdown or damage to key assets. So before implementing the model, we ran it for six months, which enabled us to simulate the roll-out and its impact on the organization (a simplified version of this kind of trial is sketched after this list). This period also gave employees time to build trust in the model, which in our experience is crucial to successful implementation.
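
To make the “scout the area” arithmetic concrete, here is a minimal sketch of how pilot findings on low-density patches translate into a revenue-per-hectare estimate. The figures and variable names are hypothetical and do not come from the forestry engagement itself:

    # Hypothetical figures: translating pilot findings into a value-of-scaling estimate.
    surveyed_hectares = 500          # area covered by the drone pilot
    low_density_hectares = 60        # patches the model flagged as under-planted
    yield_uplift_per_hectare = 4.5   # extra tonnes per year if those patches are replanted
    price_per_tonne = 90             # revenue per tonne of timber

    extra_revenue = low_density_hectares * yield_uplift_per_hectare * price_per_tonne
    uplift_per_surveyed_hectare = extra_revenue / surveyed_hectares
    print(f"Estimated uplift: {uplift_per_surveyed_hectare:.0f} per surveyed hectare per year")

    # Scaling decision: multiply by the total landholding and compare with the cost of
    # flying drones and running the model across larger tracts.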
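
For “establish a control”, a head-to-head test boils down to measuring the same outcomes in both groups. This sketch uses invented numbers and deliberately simplified metrics; a real evaluation would also need significance testing and careful matching of the groups:

    # Hypothetical outcome data for a test (AI-guided) and control (existing program) group.
    groups = {
        "control": {"participants": 400, "reoffended": 48, "staff_hours": 12_000},
        "test":    {"participants": 400, "reoffended": 50, "staff_hours": 7_500},
    }

    for name, g in groups.items():
        rate = g["reoffended"] / g["participants"]
        hours_each = g["staff_hours"] / g["participants"]
        print(f"{name}: reoffending rate {rate:.1%}, staff hours per participant {hours_each:.1f}")

    # A business case emerges if the test group matches the control group's effectiveness
    # (a similar reoffending rate) at a materially lower cost (fewer staff hours).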
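
And for “simulate the model”, one common way to run such a trial (an assumption here, not a detail from the case) is to let the model make its recommendations without acting on them, then score those recommendations against what actually happened. A minimal sketch, with a hypothetical log format:

    # Hypothetical trial-run log: what the model recommended vs. what actually happened.
    # Each record: (model_said_service_can_wait, failure_actually_occurred)
    shadow_log = [
        (True, False), (True, False), (False, False), (True, False),
        (False, True), (True, False), (True, False), (False, False),
    ]

    deferrals = [failed for deferred, failed in shadow_log if deferred]
    unsafe_deferrals = sum(deferrals)                  # failures behind a "wait" recommendation
    safe_deferrals = len(deferrals) - unsafe_deferrals

    print(f"Recommended deferrals: {len(deferrals)}, safe: {safe_deferrals}, unsafe: {unsafe_deferrals}")
    # Implementation is only justified if unsafe deferrals stay at or near zero over the
    # full trial period, since a single wrong call could mean a lengthy shutdown.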

A common feature of these approaches to assessing ROI is that they are usually heavily customized to each organization’s data and circumstances. This is necessary because, with AI projects, it is most often your organization’s context and data (used to train, test and refine the model) that shape the ROI equation. This differs markedly from traditional hardware or software investments, where costs and impacts are more neatly defined and predictable, to the extent that standardized ROI templates are commonly available for several types of investment.

Accounting for unexpected value

One final, but important, point is that AI can deliver unexpected, additional insights. In other words, AI can deliver valuable results in addition to achieving (or not achieving) the primary objective of the project. When that happens, those insights should be included in the assessment of the project’s value. Planned or not, they are part of the project’s outputs. For this reason, you should be clear with your stakeholders that the exploratory nature of AI means you may be left with, or led to, additional or alternative results that may be no less valuable than your primary targets.

Kamal Mannar PhD

Managing Director – Applied Intelligence Center of Excellence Singapore
