RESEARCH REPORT

In brief

  • The proliferation of artificial intelligence (AI) in government for mission-critical operations will have a significant impact on citizens' lives.
  • Agencies need to ensure that their AI platforms act as responsible members of society, adhering to all applicable rules and regulations.


IDC’s Adelaide O’Brien on ethical AI in agencies.

How does IDC define responsible or ethical AI, and what should government agencies know about this?

Adelaide O’Brien: As AI proliferates in government agencies and is deployed for mission-critical operations, it will make, or assist in, decisions that have a significant impact on almost every aspect of individuals' lives. As data-enabled automated decision making becomes more pervasive, transformative applications are demonstrating the potential of pairing humans and machines. At the same time, ethical challenges are being raised about the potential for errors due to inadvertent algorithmic or data bias in sensitive areas, particularly those involving gender, race, class or age.

Responsible or ethical AI encompasses the practices that government agencies can and should adopt to manage, monitor and mitigate these risks. It includes protecting individuals from harm caused by algorithmic or data bias, or by the unintended correlation of personally identifiable information (PII) even when using anonymized data. Agencies need to ensure that their AI platforms act as responsible members of society, adhering to the same rules and regulations that all employees are required to follow.
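The PII point is worth making concrete. Even with names and identifiers removed, combinations of ordinary attributes (so-called quasi-identifiers) can single individuals out. Below is a minimal Python sketch of a k-anonymity check; the column names and the threshold of 5 are illustrative assumptions, not a standard, and real deployments would tailor both to the data and to agency policy.

```python
# Minimal sketch: checking k-anonymity over quasi-identifiers in a dataset
# that has had direct PII removed. Column names ("zip", "age", "gender")
# are hypothetical; substitute the quasi-identifiers in your own data.
import pandas as pd

def min_group_size(df: pd.DataFrame, quasi_identifiers: list) -> int:
    """Size of the smallest group sharing the same quasi-identifier
    values; a dataset is k-anonymous iff this value is >= k."""
    return int(df.groupby(quasi_identifiers).size().min())

df = pd.DataFrame({
    "zip":    ["20001", "20001", "20002", "20002", "20002"],
    "age":    [34, 34, 51, 51, 51],
    "gender": ["F", "F", "M", "M", "M"],
})

k = min_group_size(df, ["zip", "age", "gender"])
print(f"smallest quasi-identifier group: {k}")  # here: 2, so only 2-anonymous
if k < 5:  # the threshold is a policy choice, not a fixed rule
    print("re-identification risk: consider generalizing or suppressing values")
```

Records that fall in very small groups are the ones most exposed to unintended correlation, which is why this check belongs in the data pipeline before any "anonymous" dataset is used for training.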


How can government agencies mitigate the potential for bias in their data?

AO: Humans bring some level of bias to everyday decisions, and because machine learning systems are trained by humans, these biases can be inadvertently built into AI systems. We make decisions based on the information we know, often without full comprehension of, or access to, other information that should be taken into consideration. Similarly, machines can entrench bias if learning is based on limited demographic information or on a preponderance of historical data that doesn't reflect today's reality. Understanding and addressing the ethical, legal and societal implications of AI is identified as a priority in The National Artificial Intelligence Research and Development Strategic Plan.

To mitigate bias, agencies should ensure diversity in their data and follow basic data management and governance practices: put an information access and analysis strategy in place built on a robust data foundation, data governance and analytics; document sources; label and organize files; and address the integration of data from different sources to build a complete and true picture of constituents. Data scientists should verify the veracity and lineage of the data before training a model. Historical databases can also be weighted so that training emphasizes more current data, reflective of current demographic realities.
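To make that last recommendation concrete, here is a minimal Python sketch of recency weighting: newer records get more influence during training so that stale historical patterns don't dominate. The half-life decay, the synthetic data and the use of scikit-learn's sample_weight are all illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of recency weighting: give newer records more influence
# during training so historical patterns don't dominate. The decay rate
# and the "year" field are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def recency_weights(years: np.ndarray, current_year: int,
                    half_life: float = 5.0) -> np.ndarray:
    """Exponential-decay weights: a record half_life years old counts
    half as much as a record from current_year."""
    age = current_year - years
    return 0.5 ** (age / half_life)

# Toy data: X features, y labels, and the year each record was collected.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
years = rng.integers(2005, 2024, size=200)

weights = recency_weights(years, current_year=2023)
# sample_weight scales each record's contribution to the training loss.
model = LogisticRegression().fit(X, y, sample_weight=weights)
```

The half-life is the tunable policy lever here: a short one sharply discounts old records, a long one approaches unweighted training.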

Agencies must be committed to training their AI platforms and to using training data that is as comprehensive and unbiased as possible. Bias can also be mitigated through human-centered design practices that create personas inclusive of various populations, and through testing to ensure that AI decisions do not harm the populations the agency's mission is to serve. It's important to realize that, just as humans learn and adjust their thinking over time, mitigating bias is not a "once and done" effort for AI but a continuous learning process that involves quality data, rigorous data management practices, and active testing for algorithmic bias. The bottom line is that agencies are accountable for AI decisions and must ensure that their algorithms follow the law, do not treat demographic groups unfairly, and protect groups and individuals from algorithmic bias. Biases are not a new phenomenon, and properly trained AI systems will offer a long-term ability to better address them.
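Because that testing is continuous rather than one-time, it helps to automate a recurring check. The sketch below, with illustrative data and group labels, compares a model's positive-decision rate across demographic groups and flags the result if the ratio of the lowest to the highest rate falls below 0.8, a threshold borrowed from the "four-fifths rule" in U.S. employment-discrimination guidance; any real cutoff would be an agency policy decision.

```python
# Minimal sketch of a recurring bias test: compare a model's positive-
# decision rate across demographic groups on held-out data. The data,
# group labels and 0.8 tolerance are illustrative assumptions.
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Positive-decision rate per demographic group."""
    return {g: float(predictions[groups == g].mean())
            for g in np.unique(groups)}

def disparate_impact_ratio(rates: dict) -> float:
    """Min/max ratio of group selection rates (1.0 = perfect parity)."""
    vals = list(rates.values())
    return min(vals) / max(vals)

preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # model decisions
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = selection_rates(preds, groups)
ratio = disparate_impact_ratio(rates)
print(rates, f"ratio={ratio:.2f}")
assert ratio >= 0.8, "group selection rates diverge; investigate before deploying"
```

Run as part of a scheduled evaluation pipeline, a check like this turns "continuous learning process" from a principle into an enforced gate on each model release.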

Learn More: Read the entire IDC Analyst Connection: Responsible, Ethical Artificial Intelligence in Government Agencies report for additional insight. Other topics include maintaining citizen trust when using AI to deliver citizen services and integrating AI into the workforce.

Adelaide O'Brien

Research Director – IDC Government Insights
