Federal leaders see themselves walking a fine line when it comes to initial AI efforts. They want to start small – with manageable projects that deliver quantifiable wins. At the same time, they want to think big, putting in place processes and systems that will support expanded AI efforts over the long haul.

Early efforts may focus on mundane tasks and routine work. Agencies may deploy a limited-scope AI application as a test case. That makes sense, but it makes even better sense to design that initial project in a way that allows for future expansion. Agencies want to take the lessons from early AI endeavors – what worked, what didn't, and where the sticking points were – and then apply them in a broader context.

"You need to start small. I think you have to," says BFS's Craig Fischer. "But then I think you need to think big. What happens if this does start to pick up steam and go to scale? If you haven't thought through that, you might get tripped up."

This forward-thinking view speaks to the fundamental nature of AI as a "living" system rather than a fixed endpoint: AI is ever learning and constantly evolving. As Winship explains about USCIS's pioneering chatbot Emma, "we've had a lot of success being able to be as agile as possible . . . we are doing continuous integration and deployments, and that allows us to be responsive to whatever the public need may be at the moment."

In reality, these systems use ongoing feedback to continually refresh their capabilities. That means humans must constantly review and refine them, ensuring the emerging AI-driven approach continues to align with agency goals and policies. Just as the human workforce undergoes annual training to stay fresh, AI systems need ongoing attention to stay current.

Good governance

Thanks to perilous sci-fi films, there is some public wariness around the ethics of artificial intelligence. Will the machines behave the way we want them to? Even strong supporters of AI in government say these questions will have to be tackled head on.

Agencies need to ensure that the data feeding their AI systems does not contain inherent bias or prejudice. This requires a deep and nuanced understanding of the data used to drive AI. Here again, wider data literacy across the board becomes a key driver of success.

Solid governance is likewise a significant factor here. "Often just implementing good data management practices in government departments will help reduce some of that [bias]," Karlin says. "Having good data management practices, having some contextual information about how the data was collected, having a data-quality framework in place is all super important."

At a fundamental level, it will be important to have humans serving as overseers to the AI. "My best solution is that we have a whole bunch of circuit breakers. Humans! Human groups, human connectors into these technologies that can present our values or double check our values," Daimler says. "We don't want to just be taking the output from many of these algorithms as somehow infallible, just because they came from a machine."

Ultimately, federal leaders predict that citizens and government workers will develop a new comfort level around machine-driven outputs. For example, Aronson reflects back on public skepticism around the coming of the automatic teller machine. "The bank tellers were nervous because…they really believed no one would use them. People were absolutely afraid of the ATM. They thought it would destroy their accounting because the passbook was gone," she says. "Does anyone know what a passbook is today?"

If the process is managed thoughtfully, and implemented collaboratively rather than top-down, the coming of AI may be no more disruptive than other technological upheavals that once seemed daunting and now are seen as commonplace. Ultimately, success requires balancing the ability to experiment and foster innovation, on the one hand, with the guardrails and supervision needed to avoid or minimize unintended consequences, on the other. A national strategy on AI should "make sure that we focus on a regulatory environment that doesn't stifle innovation, that we are creating a skilled workforce prepared to thrive in an AI economy," Congressman Hurd concludes. Such a policy would "foster norms and best practices to protect American national and economic security."
