January 23, 2018
AI opens up ways to “see the unseeable” in public safety–provided it’s used responsibly
By: James Slessor

These days, there’s a lot of discussion about how artificial intelligence (AI) might be applied in public safety. But what images does that discussion trigger in your mind? A dystopian future like Robocop or Terminator, where machines have seized control and human liberty is threatened? If so, think again and take a fresh look at what AI can deliver.

Why? Because there are many ways in which AI can help public safety agencies deliver their mission and enhance our safety and security as citizens. To explore and seize these opportunities, the public safety community needs to engage positively with AI–but also be conscious of the need to ensure it’s used in a responsible and legitimate way.

So, what are the areas where AI can make the biggest contribution to public safety? For me, three spring immediately to mind:

First, managing the mundane. This means reducing the repetitive and procedural activities that officers and staff must undertake. AI can be applied to many administrative tasks, as we’re already seeing in areas such as HR, finance and rostering. In other industries, AI-enabled chatbots and virtual assistants are handling repetitive tasks like processing insurance claims. In policing, it’s possible to see AI extending into operational areas such as intelligence management, investigation management and case file preparation, helping to free up valuable time for officers and staff to focus on higher-value activities and interacting with communities.

Second, seeing the unseeable. AI can process huge volumes of data at a pace that a human simply can’t. As a result, it can extract insights and identify patterns and actions that would otherwise be missed. This can be especially valuable with vast volumes of “unstructured” data captured from video and voice recordings, meaning humans no longer have to sit through and interpret numerous video feeds. AI also has the ability to automatically identify faces, objects, movements and other features, and then create alerts or other responses. This has implications for public order, including securing major events, policing roads and monitoring hotspots. These benefits will grow as our society and cities become “smarter” through rising use of sensor technologies.

Third, policing the virtual world. Today, public safety agencies must keep not only our physical spaces and communities safe, but also our virtual ones. The scale and speed of cyber-attacks mean responding to them is beyond human capability, making AI imperative. And because AI is highly effective for activities that are time-sensitive or require hyper-precision, it’s well suited to addressing not only cyber threats, but also the wider and growing world of online crime. Examples include using AI-enabled “webcrawlers” to track criminals across both the open and “dark” net. In some areas, such as online child exploitation, technology has amplified existing threats. Again, AI can help, for example by patrolling chatrooms and automatically flagging or taking down harmful content.

However, while all these areas offer exciting opportunities, it’s vital that any use of AI in public safety takes into account the trust and legitimacy that underpin its public service mandate. This is why we need to make AI “responsible.”  

Interest in what this means is growing by the day. I recently participated in a fascinating roundtable on responsible AI in policing, which we hosted jointly with the think tank Reform. The session, part of Accenture’s Engage Digital programme, explored how to harness AI in policing in a sustainable and responsible way. And the debate confirmed that this requires public safety leaders to consider four factors:

  • Governance – implementing accountability frameworks and codes of ethics to manage how AI is used, and compensate for the way innovation outpaces regulatory and legislative cycles.
  • Design – creating AI capabilities that are transparent, so public safety agencies can explain why a particular decision was made—with core values of equality, diversity and lack of bias built in.
  • Monitor – recognising that AI will learn and evolve, and therefore ensuring its performance can be checked continually against a set of values and accountabilities—linking back to eliminating bias.
  • Re-skill – thinking about the workforce impacts and how best to combine the complementary strengths of AI and humans—potentially freeing people up to focus on activities requiring empathy, human judgement and interaction. 

As these factors underline, adopting AI in public safety raises some challenges. But I believe these are far outweighed by the benefits, not just for public safety agencies but also, more importantly, for the public and communities they serve. Realising these benefits comes down to how smartly agencies can leverage new technologies and innovate to help them “pivot and scale in the New,” moving towards a more preventative model.

With AI, it’s vital to think carefully about how to make the pivot, understand the implications for the organisation and workforce, and stay laser-focused on legitimacy and public trust. Given the right strategy and controls, and a readiness to learn from other sectors, “responsible AI” offers great benefits for public safety. We mustn’t let a few Hollywood-inspired dystopian visions hamper our ability to realise this potential.
