ACCENTURE FEDERAL SERVICES


November 01, 2017
What makes AI “responsible”?
By: Dominic Delmolino

Artificial intelligence (AI) is already at work in the private and public sectors, taking on time-intensive, repetitive, low-value manual activities such as data entry and routine content tagging. Even so, many people remain understandably skeptical about just how much to trust AI with decision-making, especially within government.

As just one example, if AI is used to evaluate applications for benefits, how can applicants be assured of a fair and impartial review? How can potential beneficiaries be confident that the AI isn’t prone to errors or inherently biased based on how it was trained? And, is the AI a “black box,” or can agencies provide evidence of how and why the tool approves or denies benefits?

These valid concerns shouldn’t become barriers. They can be addressed through the proper implementation of responsible AI.

What, exactly, makes AI “responsible”? When you think about responsible human decision makers, you likely think of individuals who are reliable, well-trained, and well-grounded, who deliver high-quality work, and who strive to make fair and unbiased decisions. They may not be perfect, but they can explain and respond to criticisms of their decisions.

When it comes to AI, those criteria don’t fundamentally change. Responsible AI requires the same qualities, and federal agencies can apply proven techniques to deliver them:

  • Transparency. Start with a commitment to transparency and governance. Ensure that your AI can explain itself. It needs its own memory function so that it can document the decision pathway, or thinking process, used to make a determination (a minimal audit-trail sketch follows this list). You also need a formal governance process for overseeing the AI and for analyzing and improving decision pathways as needed.

  • Training. Implementing AI is, in many ways, akin to hiring a new person to do a job. As with humans, you must train AI. Be sure that any training data represents an unbiased and comprehensive view that doesn’t systematically disadvantage any group of people (a simple check is sketched below). Like human workers, AI must also be trained to recognize personal or otherwise sensitive data and treat it accordingly. Further, responsible AI demands the ability to demonstrate how you trained your AI: sharing the examples you provided and highlighting their currency and relevance.

  • Tuning, and never assuming. Although people like to think of computers as infallible, AI decisions come with degrees of confidence rather than certainty. If a tool is only 51 percent sure of something, we don’t necessarily want it to proceed. We want the AI trained to “raise its hand,” asking for intelligent and experienced human support to help make the ultimate decision while providing new information for future decisions (see the routing sketch below). And, finally, just because we can use AI to support a particular process doesn’t mean we should. Service design can help in determining the best ways to orchestrate human-AI interaction (more on that in a future blog).
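
To make the transparency point concrete, here is a minimal sketch of what a decision audit trail could look like. It is an illustration, not any particular product’s API: the record structure, field names, and the AuditLog class are all assumptions. The idea is simply that every determination carries its inputs, the model version, and the ordered reasoning steps that produced it, so a governance process can later reconstruct how and why the tool decided.

```python
# Minimal sketch of a decision audit trail (illustrative names, not a real product API).
# Each determination records its inputs, model version, and reasoning steps so a
# governance process can later reconstruct how and why a decision was made.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    application_id: str
    model_version: str
    inputs: dict                                   # the features the model actually saw
    pathway: list = field(default_factory=list)    # ordered reasoning steps
    outcome: str = "pending"
    confidence: float = 0.0
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def add_step(self, description: str) -> None:
        """Append one step of the decision pathway, in the order it occurred."""
        self.pathway.append(description)

class AuditLog:
    """Append-only store; in practice this would be durable, access-controlled storage."""
    def __init__(self, path: str):
        self.path = path

    def write(self, record: DecisionRecord) -> None:
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")
```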
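
For the training concern, a simple pre-training check can surface obvious imbalance before a model ever sees the data. This sketch assumes a tabular dataset with hypothetical demographic_group and approved columns; it flags any group whose approval rate diverges sharply from the overall rate so a person can investigate before training proceeds.

```python
# Sketch of a pre-training bias check (column names are hypothetical).
# Flags any group whose approval rate in the training data diverges sharply
# from the overall rate, so a human can investigate before training proceeds.
from collections import defaultdict

def approval_rate_by_group(rows, group_key="demographic_group", label_key="approved"):
    counts = defaultdict(lambda: [0, 0])   # group -> [approvals, total]
    for row in rows:
        counts[row[group_key]][0] += int(row[label_key])
        counts[row[group_key]][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def flag_imbalance(rows, tolerance=0.10):
    """Return groups whose approval rate differs from the overall rate by more
    than the tolerance; an empty result means this simple check found nothing."""
    rates = approval_rate_by_group(rows)
    overall = sum(int(r["approved"]) for r in rows) / len(rows)
    return {g: rate for g, rate in rates.items() if abs(rate - overall) > tolerance}
```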
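
And for tuning, “raising its hand” can be as simple as a confidence threshold that routes uncertain determinations to a human reviewer. The threshold value and the Determination type below are illustrative assumptions; the policy itself would be set and revisited through the same governance process.

```python
# Sketch of "raising its hand": route low-confidence determinations to a human
# reviewer instead of acting on them. The 0.95 threshold is an illustrative
# policy choice and would itself be set and tuned through governance.
from typing import NamedTuple

class Determination(NamedTuple):
    outcome: str        # e.g. "approve" or "deny"
    confidence: float   # model's probability for that outcome

def route(determination: Determination, threshold: float = 0.95):
    if determination.confidence >= threshold:
        return ("auto", determination.outcome)
    # Below the threshold (the "51 percent sure" case), escalate to a person;
    # the human's decision can later be fed back in as new training signal.
    return ("human_review", determination.outcome)
```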

While AI can add value to virtually any organization’s operations, the opportunities are particularly striking for federal agencies. With a responsible approach to transparency, training, and ongoing supervised tuning, AI can support higher efficiency and capability—and free up more resources to focus on mission outcomes.
