Responsible AI means creating governance frameworks to evaluate, deploy, and monitor AI in ways that open new opportunities for better citizen and mission services. It means architecting and implementing solutions that put people at the center. By using design-led thinking, organizations examine core ethical questions in context, evaluate the adequacy of policies and programs, and create a set of value-driven requirements governing AI solutions. The four foundational elements of Responsible AI are:

Govern – You must create the right framework to enable AI to flourish—one that is anchored to your organization’s core values, ethical guardrails, and regulatory constraints. Standards bodies such as IEEE are providing guidance for global organizations to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations.

Design – Any new solution should be architected and deployed with trust built into the design. This means that requirements for privacy, transparency, and security carry equal weight with new product features. The resulting systems should include AI solutions that can explain the rationale behind their decisions. Capital One is researching ways to make AI more explainable, hoping to use it to review credit card applications, since banking regulations require that financial companies furnish an explanation to customers when their applications are denied.
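Capital One has not published the details of its approach, but a minimal sketch can illustrate one common technique for explainable credit decisions: deriving per-applicant "reason codes" from a linear model's feature contributions. The feature names, synthetic data, and model below are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature names; real credit models use far richer data.
FEATURES = ["credit_utilization", "late_payments", "income_to_debt", "account_age_years"]

# Synthetic training data standing in for historical applications.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X @ np.array([-1.5, -2.0, 1.0, 0.5]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)  # 1 = approved, 0 = denied

def reason_codes(applicant, top_n=2):
    """Return the features pushing hardest toward denial for one applicant.

    In a linear model, coefficient * value is each feature's additive
    contribution to the log-odds of approval, so the most negative
    contributions are the strongest reasons behind a denial.
    """
    contributions = model.coef_[0] * applicant
    order = np.argsort(contributions)  # most negative first
    return [FEATURES[i] for i in order[:top_n]]

applicant = X[0]
if model.predict(applicant.reshape(1, -1))[0] == 0:
    print("Denied. Principal reasons:", reason_codes(applicant))
else:
    print("Approved.")
```

The same contribution-ranking idea extends to nonlinear models through attribution methods such as SHAP, at the cost of more computation per decision.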

Monitor – AI needs close supervision: ongoing human monitoring and auditing of algorithm performance against key value-driven metrics such as accountability, bias, and cybersecurity. Automakers Volvo and Audi are addressing accountability with announcements that they will assume liability for any accidents that happen when automated driving technology is in use.
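In practice, this kind of monitoring often takes the form of a recurring audit job that recomputes key metrics on recent predictions and escalates anomalies to human reviewers. The sketch below is illustrative rather than any vendor's actual tooling; the metrics, thresholds, and data are assumptions.

```python
import numpy as np

# Illustrative thresholds; in practice these come from the governance framework.
MAX_ACCURACY_DROP = 0.05   # tolerated drop vs. accuracy measured at deployment
MAX_GROUP_GAP = 0.10       # tolerated gap in positive-outcome rates across groups

def audit(y_true, y_pred, groups, baseline_accuracy):
    """Return alerts for human review if performance drifts or outcomes
    diverge across demographic groups."""
    alerts = []

    accuracy = float(np.mean(y_true == y_pred))
    if baseline_accuracy - accuracy > MAX_ACCURACY_DROP:
        alerts.append(f"accuracy drift: {accuracy:.3f} vs baseline {baseline_accuracy:.3f}")

    # Demographic parity gap: spread in positive-prediction rates between groups.
    rates = [np.mean(y_pred[groups == g]) for g in np.unique(groups)]
    gap = max(rates) - min(rates)
    if gap > MAX_GROUP_GAP:
        alerts.append(f"group outcome gap {gap:.3f} exceeds {MAX_GROUP_GAP}")

    return alerts

# Synthetic recent predictions standing in for a production log.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
groups = rng.choice(["A", "B"], 1000)

for alert in audit(y_true, y_pred, groups, baseline_accuracy=0.80):
    print("REVIEW NEEDED:", alert)
```

The important design choice is that the job only flags; the decision about whether a flagged algorithm keeps running rests with a human reviewer, which is what keeps accountability with the organization.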

Cases of bias may be subtle and more difficult to catch and thus require close attention. Accenture is developing a tool to help businesses detect gender, racial and ethnic bias in artificial intelligence software. It lets users define the data fields they consider sensitive—such as race, gender or age—and then see the extent to which these factors are correlated with other data fields. Most importantly, it creates the feedback needed so that teams can govern AI and make adjustments to address bias.
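Accenture has not released that tool's internals, but the core check it describes, measuring how strongly user-designated sensitive fields correlate with other fields, is straightforward to sketch. Everything below (field names, data, the 0.3 flagging threshold) is hypothetical; Cramér's V is used here as one standard measure of association between categorical columns.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(a, b):
    """Association strength between two categorical columns (0 = none, 1 = total)."""
    table = pd.crosstab(a, b)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    return float(np.sqrt(chi2 / (n * (min(table.shape) - 1))))

# Hypothetical records; zip_code is deliberately made to track race
# to simulate the kind of proxy variable a bias audit should surface.
rng = np.random.default_rng(2)
rows = 2000
df = pd.DataFrame({
    "race": rng.choice(["group1", "group2"], rows),
    "gender": rng.choice(["F", "M"], rows),
    "zip_code": rng.choice(["10001", "10002", "10003"], rows),
    "product_tier": rng.choice(["basic", "premium"], rows),
})
mask = df["race"] == "group1"
df.loc[mask, "zip_code"] = rng.choice(["10001", "10002"], mask.sum(), p=[0.8, 0.2])

sensitive = ["race", "gender"]  # user-designated sensitive fields
others = [c for c in df.columns if c not in sensitive]
for s in sensitive:
    for o in others:
        v = cramers_v(df[s], df[o])
        flag = "  <-- possible proxy" if v > 0.3 else ""
        print(f"{s} vs {o}: V = {v:.2f}{flag}")
```

A flagged pair does not prove bias; it tells the team that dropping the sensitive column alone will not remove its influence, because the correlated field can stand in for it.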

Train – According to the forthcoming Accenture report "AI and the Federal Workforce," 74 percent of federal workers believe that it will be somewhat, very, or extremely important for them to develop skills to work with AI. Therefore, it is incumbent on agencies to prepare and equip them to take full advantage of AI and the new work styles that it fosters.

This requires near-term training so employees better understand how AI systems operate, and potentially longer-term upskilling. To prepare their workforces to work with AI, federal agencies need an integrated approach that includes the following:

Educate employees on how AI will be integrated into operations, and why

Ask employees where and how AI might improve their day-to-day roles

Engage employees in co-creation to determine how people, processes, and AI technology come together to create a more intelligent agency

Develop the skills employees need to take advantage of the insights offered by AI and achieve better, more consistent outcomes

Dominic Delmolino, Chief Technology Officer
Mimi Whitehouse, Manager – AI Strategic Growth Initiative
