Generative AI in federal agencies: Five focus areas
April 12, 2023
Generative AI capabilities made available by OpenAI and Microsoft are disrupting the tech ecosystem in a startling fashion. Every day brings a new open-source large language model (LLM), a new partnership that extends generative AI into new spaces, and an unforeseen impact that can give pause.
Generative AI capabilities are not just news headlines – they are real releases that millions of people are using, creating with, and being impacted by. It’s a lot to keep up with.
The power of these new capabilities may cause us to feel like we’re headed into uncharted territories – and to a large extent, we are.
But their rapid evolution does remind me of a previous era of AI-based disruption. Several years ago, unconstrained face recognition became exceptionally accurate within a short timeframe. My teams partnered with federal agencies during this period to understand impacts, mitigate risks, and build high-performing solutions for essential missions.
During that time – as well as in the other years my teams and I have worked on piloting and scaling cutting-edge technology for federal agencies – we learned key lessons that apply today and can help agencies navigate this rapidly shifting space.
Based on this experience, Accenture Federal Services sees five focus areas for federal agencies in this new era of generative AI:
Generative AI can be integrated within a holistic enterprise strategy to make data – regardless of type, structure, or scale – a federal agency’s most valuable asset. When combined with modern approaches, such as data mesh architectures, foundation models will be capable of adapting on relatively small amounts of data to synthesize and illuminate diverse enterprise data holdings. Generative AI models trained for specific use cases will be transferable and adaptable to new use cases, both within and across federal agencies.
However, approaches to training models on federal agencies’ data will need to balance innovation and affordability. We are exploring ways to best implement open-source and commercial models for federal use cases, recognizing that bleeding-edge models are not yet (and may never be) suitable for use in sensitive applications.
While most AI practitioners were familiar with OpenAI and had likely worked with GPT-2 in some way, the speed with which the stunning capabilities of GPT-3 and GPT-4 were developed caught many in the industry off-guard. This underscores that generative AI breakthroughs applicable to federal agencies can come from early-stage startups, the largest technology companies, and the open-source community.
The diversity of federal missions means agencies will need a diversity of partners to meet their needs. Furthermore, the volatility of the generative AI domain is such that federal agencies will need to be agile, hedging their bets to avoid being locked into faulty or risky technologies.
The availability of stateful generative AI chat interfaces, where models retain and build on context to tailor responses, suggests a future where models are custom-tuned to each individual and updated over time. This manifestation of generative AI will enable federal agencies to reimagine customer services to become more intuitive, personalized, and responsive.
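The "stateful" behavior described above ultimately comes down to carrying conversation history into each new model call: the model itself is stateless, and the application resends accumulated context on every turn. A minimal sketch of that pattern, where `generate` is a hypothetical stand-in for any LLM backend:

```python
# Minimal sketch of a stateful chat session. The "memory" is simply the
# accumulated message history, which is resent to the model on every turn.
# `generate` is a hypothetical placeholder for any LLM backend call.

def generate(messages):
    # Placeholder: report how much context the model would see.
    return f"(response informed by {len(messages)} messages of context)"

class ChatSession:
    def __init__(self, system_prompt):
        self.history = [{"role": "system", "content": system_prompt}]

    def send(self, user_text):
        self.history.append({"role": "user", "content": user_text})
        reply = generate(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession("You are a benefits-eligibility assistant.")
session.send("What documents do I need?")
reply = session.send("And where do I mail them?")
# The second turn carries the full history, so the model can resolve "them".
```

Custom-tuning to an individual over time is a richer version of the same idea: persisting and summarizing that history rather than discarding it between sessions.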
For federal employees, the transformation will be equally profound. To take one example among dozens, the ability to interact with data in a conversational way will fundamentally change analytic activities. Interactive ideation, generation of opinionated hypotheses and points of view, and assistive prompt engineering will empower analysts to be more efficient and insightful.
These developments do not replace analytic activities – instead they focus, simplify, and strengthen. Agencies can reimagine their workforce with solutions that empower and strengthen employees, regardless of background or skill level.
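One way to picture the conversational-analytics pattern described above: a model translates an analyst's question into a structured query, the query runs against the agency's actual data, and the analyst sees both the query and the answer. A toy sketch, with the model call replaced by a hard-coded mapping so the example is self-contained (all names here are hypothetical):

```python
# Toy sketch of conversational analytics: a question is translated into a
# structured filter, the filter runs against real records, and the analyst
# stays in the loop. The LLM translation step is stubbed out here.

CASES = [
    {"state": "VA", "status": "open", "days_pending": 42},
    {"state": "VA", "status": "closed", "days_pending": 7},
    {"state": "MD", "status": "open", "days_pending": 90},
]

def question_to_filter(question):
    # Stand-in for an LLM call that would emit a structured query; one
    # mapping is hard-coded to keep the sketch runnable on its own.
    if "open" in question and "VA" in question:
        return lambda c: c["status"] == "open" and c["state"] == "VA"
    return lambda c: True

def ask(question):
    matches = [c for c in CASES if question_to_filter(question)(c)]
    return f"{len(matches)} case(s) match."

answer = ask("How many open cases in VA?")
# answer == "1 case(s) match."
```

The key design point is that the model proposes the query but does not silently compute the answer itself, which keeps results auditable against the underlying data.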
Generative AI developers do not fully understand how their models operate, but this is a manageable problem that comes with the territory of modern AI. However, it is alarming that generative AI behaviors are not completely under the control of their developers and that guardrails are being built after the fact. We don’t yet have a complete understanding of all the harms that can come from generative AI applications, even when put to innocuous uses.
Federal agencies will need to prioritize the trustworthy and responsible use of generative AI wherever and whenever possible. The imperative for trustworthy, responsible use may slow the rate of generative AI adoption in federal agencies relative to industry. Federal agencies can turn to labs or sequestered prototyping environments to do hands-on work while assessing and mitigating potential risks.
Fortunately, federal agencies are not at square one. While targeted more at developers than implementers, the NIST AI Risk Management Framework is an ideal starting point from which agencies can build approaches to responsible, trustworthy generative AI.
A combination of federal focus backed by responsible commercial practices will be necessary to meet the challenge of this moment. This is a balance Accenture Federal Services is familiar with, as we build on Accenture’s leadership in responsible AI for the commercial space.
While federal agencies will range from early generative AI adopters to cautious skeptics, every federal agency urgently needs capabilities to defend against malicious, inappropriate, and hostile uses of generative AI. The generative AI genie is out of the bottle, such that bad actors – from individuals to state actors – can now create convincing, harmful narratives out of thin air. Further, the rush to release new technologies means that generative AI is effectively being red-teamed in a crowd-sourced fashion, and it’s difficult for federal agencies to anticipate how new threats will emerge.
We’ll be sharing further thoughts on generative AI in the federal space in the coming weeks. To learn more, I’m happy to share Accenture’s perspective and insights from its recent report, A new era of generative AI for everyone.