A sustainable “think big, start small” approach
More so than any preceding technology, AI creates living systems that continue to advance, evolve and mature over time. This is inherent in the self-learning, self-improving nature of many AI technologies. Successful adoption requires both clear guidelines and guardrails for use and a commitment to cultivating the technology so it can be adapted and trained for specific use cases.
This means that enterprises, including federal agencies, need to clearly define upfront how AI should and should not be used. They should also embrace user-driven design and iterative, agile development to best leverage available data and configure technologies for the roles required. And recognizing AI’s pervasive and transformative effect, they must plan to make it a core competency for their organization, both within IT and in the mission, business and operational groups that AI will support.
Defining your trusted AI framework
Federal executives already recognize that their ability to take direct action is based on specific authority granted to them. As agencies expand their reliance on technologies to make recommendations and decisions, they need to ensure that they are delegating appropriate authority. An important safeguard within AI systems is building in provisions to document decision-making, demonstrating adherence to established business rules and policies. In other words, these systems must be able to explain how they arrived at specific conclusions.
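As a minimal illustration of that kind of provision, the sketch below records every automated decision alongside the business rule that produced it, so each conclusion can later be traced and explained. The claim-review scenario, field names and threshold are hypothetical examples, not drawn from any specific agency system.

```python
import json
from datetime import datetime, timezone

# Hypothetical business rule: benefit claims above a dollar threshold
# are routed to human review. The threshold is illustrative only.
REVIEW_THRESHOLD = 10_000

def decide_claim(claim_id: str, amount: float, audit_log: list) -> str:
    """Return a decision and append an explainable audit record."""
    decision = "human_review" if amount > REVIEW_THRESHOLD else "auto_approve"
    # Each record captures the inputs, the rule applied and the outcome,
    # so the system can show how it arrived at this conclusion.
    audit_log.append({
        "claim_id": claim_id,
        "decision": decision,
        "rule": f"amount > {REVIEW_THRESHOLD} requires human review",
        "inputs": {"amount": amount},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return decision

log: list = []
decide_claim("C-001", 2_500, log)
decide_claim("C-002", 15_000, log)
print(json.dumps(log, indent=2))
```

In a production system the same pattern would wrap a learned model rather than a fixed rule, with the logged record including the model version and the features that drove the prediction.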
More broadly, agencies need to champion responsible AI to maintain trust in our civic institutions. This means using AI in an ethical, transparent manner that protects privacy and other rights, and actively monitoring for unintended consequences. For example, agencies should design systems to avoid the biases too often prevalent in everyday life, an effort supported by using training data that is fair and representative. Already, 70 percent of federal executives believe that ensuring transparency around AI is critical to maintaining citizen trust, according to the 2018 Accenture Federal Services Technology Vision.
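One simple, concrete check in that spirit is comparing each group’s share of the training data against a reference distribution for the population being served, and flagging gaps before a model is trained. The sketch below assumes illustrative group names, shares and a tolerance; real reference distributions would come from the agency’s own population data.

```python
from collections import Counter

def representation_gaps(records, group_key, reference_shares, tolerance=0.05):
    """Flag groups whose share of the training data deviates from a
    reference distribution (e.g., the served population) beyond tolerance.
    Returns {group: actual_share - expected_share} for flagged groups."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = round(actual - expected, 3)
    return gaps

# Illustrative training set: urban records over-represented.
training = [{"region": "urban"}] * 80 + [{"region": "rural"}] * 20
print(representation_gaps(training, "region", {"urban": 0.6, "rural": 0.4}))
# → {'urban': 0.2, 'rural': -0.2}
```

A check like this catches only representation imbalance, one of several bias sources; outcome-level fairness monitoring of the deployed system is a separate, ongoing activity.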
Identifying and capturing opportunities to automate and augment
As discussed earlier, AI technologies can both automate and augment human tasks that map to four different process models: Efficiency, Effectiveness, Expert and Innovation. When identifying opportunities, agencies need to:
- Identify the Value Case – Articulate how AI can enhance and improve the service delivery value chain, from both a user and operational perspective, for example, by remediating current bottlenecks and delivering added value.
- Use Human-Centered Design – Employ user-centered and service design to optimize both the end-to-end user journey and human/system interactions.
- Define Data Requirements – Determine what data is available for use, how it can be augmented and how it can be safeguarded.
- Prototype and Test – Create the technology (preferably, cloud-based) environment and build a minimum viable product to pilot, test, iterate, refine and scale the proposed solution.
- Integrate into Operations – Redefine business processes and train users to take full advantage of AI’s contributions; building trust in AI systems is an important change management focus.
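For the safeguarding portion of the “Define Data Requirements” step above, a common early task is masking personally identifiable information before records enter a training corpus. The sketch below is a simplified illustration of that idea, not a complete PII policy; the patterns and replacement tokens are assumptions.

```python
import re

# Simplified PII patterns for illustration; a real pipeline would use a
# vetted PII-detection capability and an agency-approved data policy.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US SSN format
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
]

def mask_pii(text: str) -> str:
    """Replace recognizable PII patterns with placeholder tokens."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

print(mask_pii("Contact jane.doe@agency.gov or 555-123-4567, SSN 123-45-6789."))
```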
As a point of reference, a value-targeting exercise may take two weeks to three months to execute; development of a program pilot often requires similar timeframes.
Growing enterprise AI competency
AI is unique from an enterprise perspective for a number of reasons, including the projected pervasiveness and dynamic nature of the technology; the depth and intimacy of its interactions with users; and its long-term transformative impact. As noted previously, 82 percent of federal executives believe that AI will emerge in the next two years “…as a co-worker, collaborator, and trusted advisor.” Ultimately, AI is likely to be infused in nearly all facets of operations, requiring that we collaborate with systems in entirely new ways.
In establishing AI as a core enterprise competency, agencies need to consider a number of issues, including:
- Maintaining the Right Data – Establish the complete AI landscape necessary to meet agency objectives, including developing a Big Data strategy and architecture and defining and maintaining appropriate data governance.
- Investing in Capabilities – Create an AI development road map that encompasses both scalable platforms and modular AI capabilities or services that can be leveraged across multiple requirements; determine sourcing criteria (e.g., insource, co-invest).
- Developing Skills and Culture – Align agency leadership with the AI imperative, build a culture that understands and embraces AI’s potential and responsibilities, and develop a “Liquid Workforce” of multidisciplinary teams with the skills (e.g., business analysis, process design, analytics and AI) needed to provide direct support.
An important element of change management for AI is engaging staff at all levels in designing the tools and training the solution. By doing so, they can identify the lower-value tasks to automate, freeing up their time and ultimately augmenting their knowledge and capabilities so they can handle more complex situations.
Think big, start small
Given the breadth of use cases that AI can address, it can be a bit overwhelming to think about where to begin. With AI, agencies can now start to think big and not shy away from tackling the biggest obstacles to mission success that have been seemingly unsolvable in the past.
By starting with a well-defined area of opportunity, iterating, testing and piloting AI solutions mapped to specific processes and tasks, agencies will begin to see rapid changes in service delivery capabilities and staff productivity. By combining humans and AI, agencies can achieve breakthroughs in mission capabilities that were unthinkable in the past.