RESEARCH REPORT

In brief

  • Concerns about AI, including loss of privacy, potential biases in decision-making and lack of control over automated systems, are addressable.
  • Responsible AI is a framework for the ethical, transparent and accountable use of AI technologies through planning, oversight and governance.
  • As these technologies are deployed, responsible AI becomes critical to cultivating and maintaining trust with workers, citizens and stakeholders.


Artificial intelligence (AI) is maturing rapidly as an incredibly powerful technology with seemingly limitless application. It has demonstrated its ability to automate routine tasks—such as our daily commute—while also augmenting human capacity with new insight. Combining human creativity and ingenuity with the scalability of machine learning is advancing our knowledge base and understanding at a remarkable pace.

However, with great power comes great responsibility. Specifically, AI raises concerns on many fronts due to its potentially disruptive impact. These fears include workforce displacement, loss of privacy, potential biases in decision-making and lack of control over automated systems and robots. While these issues are significant, they are also addressable with the right planning, oversight and governance.

This is the crux: Federal agency leaders are increasingly being asked to answer three critical questions regarding their planned use of AI:

  1. How do I ensure that AI solutions are designed to operate responsibly?
  2. How do I employ AI to act in a manner that is compliant with stakeholder expectations and applicable laws?
  3. How do I use AI to unlock the full potential of my workforce?

Responsible AI is a framework for bringing many of these critical practices together. It focuses on ensuring the ethical, transparent and accountable use of AI technologies in a manner consistent with user expectations, organizational values and societal laws and norms.

Responsible AI can guard against the use of biased data or algorithms, ensure that automated decisions are justified and explainable, and help maintain user trust and individual privacy. By providing clear rules of engagement, responsible AI allows organizations under public and congressional scrutiny to innovate and to realize AI's transformative potential in a way that is both compelling and accountable.

Given the significance of their decisions and the scrutiny under which they operate, federal agencies should make responsible AI a priority. Furthermore, close adherence to responsible AI principles can allow federal agencies to set broader expectations for the positive use of AI to benefit society.

Promise and peril of AI

As AI technologies become more pervasive, deeply embedded in existing solutions and responsible for an increasing number of decisions like benefit payments, mortgage approvals, and medical diagnoses, they become less visible and transparent. Unlike the autonomous car or the warehouse robot, algorithms are not visible. And organizations face ethical as well as legal and regulatory risks if they use a “black box” approach to AI.

Amplifying bias

One of the real risks with AI is amplifying and reinforcing existing human biases. Some of these biases are unintended and arise from a lack of diverse perspectives when the system is developed and trained. In other cases, decision-making can be skewed by reliance on incomplete data that omits other relevant factors. Finally, the historical data on which the system trains may be incompatible with modern attitudes, especially around individuals' roles in society as they relate to gender, race and similar attributes.
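To make this concrete, the short Python sketch below compares favorable-outcome rates across groups in a toy slice of historical decision data, one simple way skew can be surfaced before a model learns it. The column names and figures are hypothetical, and a check like this is a starting point for a broader bias review rather than a complete safeguard.

import pandas as pd

# A hypothetical slice of historical decisions; "group" and "approved"
# are placeholder column names, not fields from any real system.
history = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

# Favorable-outcome rate per group in the historical data.
selection_rates = history.groupby("group")["approved"].mean()
print(selection_rates)

# Ratio of the lowest to the highest rate: 1.0 means parity, while values
# well below 1.0 flag data that could teach a model a skewed pattern.
print("disparate impact ratio:", round(selection_rates.min() / selection_rates.max(), 2))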

Making AI explainable

In addition to being free of bias, AI-based decisions should be understandable to those they affect and should adhere to existing rules and regulations. For example, the Equal Credit Opportunity Act of 1974 has long required that applicants denied credit be advised of the reasons behind that decision. More recently, the Federal Reserve's 2011 supervisory letter SR 11-7, Guidance on Model Risk Management, advised banks to employ active risk management to guard against faulty assessment models. And the New York City Council recently created a task force charged with determining which “automated decision systems” used by the city should be subject to further governance procedures that include:

  1. Allowing citizens to request an explanation as to how decisions were derived using these systems (a simple illustration follows this list)
  2. Assessing whether those decisions disproportionately impact people on the basis of age, race, creed, color, religion, national origin, gender, disability, marital status, partnership status, caregiver status, sexual orientation, alienage or citizenship status.
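As a simple illustration of the first requirement, the hypothetical Python sketch below pairs a small, transparent scoring model with plain-language reason codes for a denial. The features, weights and threshold are invented for illustration and do not represent any agency's or lender's actual model; production systems would need validated and far more rigorous explanation methods.

# Hypothetical, transparent scoring model; features, weights and the
# threshold are illustrative assumptions, not a real decision model.
FEATURE_WEIGHTS = {
    "years_of_credit_history": 0.6,
    "on_time_payment_rate":    2.0,
    "debt_to_income_ratio":   -1.5,
    "recent_delinquencies":   -0.8,
}
APPROVAL_THRESHOLD = 1.0

def decide_and_explain(applicant, top_n=2):
    # Each feature's contribution to the overall score.
    contributions = {f: w * applicant[f] for f, w in FEATURE_WEIGHTS.items()}
    score = sum(contributions.values())
    decision = "approved" if score >= APPROVAL_THRESHOLD else "denied"
    # The factors that pulled the score down the most become the
    # plain-language reasons reported back to the applicant.
    reasons = [f"{name} lowered the score by {abs(value):.2f}"
               for name, value in sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]
               if value < 0]
    return decision, reasons

applicant = {
    "years_of_credit_history": 1.0,
    "on_time_payment_rate":    0.4,
    "debt_to_income_ratio":    0.9,
    "recent_delinquencies":    2.0,
}
print(decide_and_explain(applicant))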

Factoring in privacy

In a similar vein, steps should also be taken to ensure that AI doesn’t inadvertently uncover Personally Identifiable Information (PII) and similar details in supposedly anonymized datasets. This places greater pressure on data supply chains and on the need to ensure that data is used consistently with the expectations under which it was originally collected. Beyond traditional data sources, the more widespread use of computer vision for observational monitoring and analysis opens new questions for society at large.
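One established check in this area, sketched below in Python under invented assumptions, is k-anonymity: the size of the smallest group of records in a supposedly anonymized release that share the same combination of quasi-identifiers, such as ZIP code and birth year. The dataset and column names are hypothetical; a value of 1 would mean at least one person is unique on those fields and could potentially be re-identified by joining the release with other data.

import pandas as pd

# A hypothetical "anonymized" release: names are removed, but the
# quasi-identifier columns below are placeholder assumptions.
released = pd.DataFrame({
    "zip_code":   ["20001", "20001", "20002", "20002", "20002"],
    "birth_year": [1980,     1980,    1975,    1975,    1992],
    "diagnosis":  ["flu",    "flu",   "asthma", "flu",   "flu"],
})

# k-anonymity on the chosen quasi-identifiers; k == 1 flags a record
# that is unique on those fields and therefore at re-identification risk.
k = int(released.groupby(["zip_code", "birth_year"]).size().min())
print("k-anonymity on (zip_code, birth_year):", k)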

Developing trust

According to Accenture Research, 82 percent of federal executives believe that “AI will work next to humans as a co-worker, collaborator and trusted advisor” within two years. A high degree of trust will be required for the workforce to increase its reliance on automated systems for often life-impacting decisions. That trust can develop from a widespread understanding of how these decisions are made, the ability to guide the machine as it learns, and knowledge of how humans and machines augment each other for improved outcomes.

Dominic Delmolino

Managing Director – Chief Technology Officer


Mimi Whitehouse

Manager – AI Strategic Growth Initiative
