As you evaluate new AI solutions, there are two critical areas of impact to consider: designing solutions imbued with core values and transforming your workforce to ease the transition to new ways of working. Trust and transparency are the core principles and self-reinforcing mechanisms that ensure AI innovation occurs within the context of accountability, security, and fairness.

Designing solutions with AI

Designing solutions using responsible AI frameworks covers four interrelated areas: human-centered design, governance, training data, and monitoring performance.

Human-centered design – Great design begins with empathy: a deep understanding of user needs. Human-centered design is ideally suited to uncovering latent, unexpressed needs. The design process in this context needs to draw out user concerns about privacy, security, and trust to inform the design of the solution. This is critical given the complexity of AI solutions and the frequent lack of transparency around how they work.

A recent MIT study on genetic privacy policies points to potential best practices for handling privacy issues in the future. Patients asked to participate in a study were assigned one of two consent models: (1) provide access to their genetic data and retain control over that data, or (2) be warned about privacy risks and asked to consent to those risks with no further control over their data. Among the first group, the incidence of testing increased by 83 percent; among the second group, it decreased by 69 percent.

Governance – Implementing governance for responsible AI requires addressing organizational leadership, project teams, and external stakeholders. Leadership needs to develop communication channels, cultivate a culture of responsibility, and create internal governance processes aligned with regulations and industry best practices. In parallel, project teams need to embed transparency into their daily practices and processes while implementing the governance mandates developed by the leadership team. The focus for external stakeholders should be engagement and education, providing a consistent level of transparency based on the principles of explainable AI.

Training data – AI uses machine learning algorithms and real-time data to improve continuously. However, the self-correcting aspect of this process should not be left solely to the machines. Self-improving, data-driven processes need human workers who can act rapidly on the opportunities that machines discover in real time. They also require people to continuously assess the need for improvements to safety, fairness, and auditability by training the algorithms over time. Accenture believes this will create a new category of jobs in the “missing middle,” where humans help machines and machines help humans to achieve better outcomes collectively.
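
As a rough illustration of this human-in-the-loop pattern, low-confidence model outputs can be routed to human reviewers before they are folded back into the training data. The function names, threshold, and records below are hypothetical assumptions, not a prescribed implementation:

```python
# Hypothetical sketch: low-confidence predictions go to human review
# before they can influence the next training run.

CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff for automatic acceptance

def triage_prediction(example, confidence, training_data, review_queue):
    """Accept high-confidence predictions; queue the rest for human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        # Machine helps human: the model labels routine cases automatically.
        training_data.append(example)
    else:
        # Human helps machine: ambiguous cases get expert review before
        # being added to the training set.
        review_queue.append(example)

# Example usage with made-up records
training_data, review_queue = [], []
triage_prediction({"id": 1, "label": "approve"}, 0.95, training_data, review_queue)
triage_prediction({"id": 2, "label": "deny"}, 0.55, training_data, review_queue)
```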

Monitoring performance – Model monitoring ensures AI models continue to function as intended after deployment into production.

Model monitoring should include the following four processes:

  1. Active monitoring
  2. Performance drift
  3. Operational bias review
  4. Model retraining

Active monitoring tracks user behavior to identify irregular patterns that may indicate unintended consequences. Performance drift monitoring tracks model KPIs to measure performance over time and triggers a retraining process when performance degrades. Operational bias review tracks model inputs and outputs to identify irregularities that may indicate bias. Model retraining uses new data to account for changes in user behavior. By combining these methods, you can more readily spot and correct irregularities or bias in your results.
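
To make these checks concrete, here is a minimal sketch of how performance drift and operational bias review might be automated. The baseline value, tolerance, group names, and thresholds are illustrative assumptions rather than a prescribed implementation:

```python
# Illustrative sketch of performance-drift and bias checks; all
# constants and the retraining trigger are assumptions.
from statistics import mean

BASELINE_ACCURACY = 0.92  # KPI measured at deployment (assumed)
DRIFT_TOLERANCE = 0.05    # maximum acceptable drop before retraining

def performance_has_drifted(recent_accuracy_scores):
    """Performance drift: compare the recent KPI average against the baseline."""
    return (BASELINE_ACCURACY - mean(recent_accuracy_scores)) > DRIFT_TOLERANCE

def outcome_rate_gap(outcomes_by_group):
    """Operational bias review: largest gap in positive-outcome rates
    across groups, given 0/1 outcomes per group."""
    rates = [mean(outcomes) for outcomes in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Example monitoring pass with made-up numbers
recent_scores = [0.84, 0.86, 0.85]
if performance_has_drifted(recent_scores):
    print("Drift detected: trigger model retraining with fresh data")

gap = outcome_rate_gap({"group_a": [1, 1, 0, 1], "group_b": [0, 1, 0, 0]})
if gap > 0.20:  # assumed review threshold
    print(f"Outcome gap of {gap:.2f} flagged for operational bias review")
```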

Training the workforce

Agencies need to build workforce trust in the smart machines that employees will increasingly rely upon. They can do so by teaching employees how to interact with, train, and augment these systems. Organizations that fail to take these steps will find many of the benefits of AI elusive and may encounter a talent crunch within the next few years. Over the long term, active upskilling may be required for many roles. Through education and training, valued employees can move from executing rote tasks to providing more in-depth analysis requiring greater judgment. Beyond the benefits for the government workforce, federal agencies can help establish the best practices needed for a transitioning economy.

As Reps. Will Hurd (R-TX) and Robin Kelly (D-IL), then Chair and Ranking Member of the Oversight & Government Reform Subcommittee on Information Technology, wrote in a recent congressional whitepaper, Rise of the Machines, “[t]he federal government should also lead by example by investing more in education and training programs that would allow for its current and future workforce to gain the necessary AI skills.”

Getting started

Simply put, AI represents a new way of working. It will bring about profound changes within organizations and society that we can’t fully understand or predict today. In this context, responsible AI is a critical component of an organizational change model that focuses on rapid learning and adapting. Accenture’s AI-driven organizational change model provides a framework for how agencies can begin this process. It starts with defining your AI journey and strategy, followed by building a talent strategy and learning architecture. Finally, organizations execute rapid upskilling programs and closely measure progress.

Your AI strategy should include an AI operating model with identified stakeholders and objectives, a defined AI talent strategy, required ecosystem partners, and initial prototype initiatives. To build a talent strategy and learning architecture, you need to design AI-specific talent profiles and segments with competency frameworks, develop the required competencies, and design curriculum and learning paths. For rapid upskilling, agencies need to build and launch targeted learning initiatives and monitor progress against established goals.

Organizations that want to maintain momentum for AI-driven change should consider creating dedicated AI Centers of Excellence (CoEs) that provide advisory support, engineering and development, research and learning, communities of interest across the enterprise, and governance and change management. This is a model the U.S. Department of Homeland Security recently implemented. CoEs are an essential mechanism for ensuring that responsible AI practices are followed and solutions are developed accordingly. By embedding responsible AI into your approach to organizational change, you ensure that the critical element of trust is cultivated and maintained among key stakeholders, the most important of which are employees and customers.

Dominic Delmolino

Chief Technology Officer


Mimi Whitehouse

Manager – AI Strategic Growth Initiative
