Advances in deepfake technology have made it harder to distinguish between real and fabricated media, posing a very real threat to organizations. However, with the right training, detection, and response strategy, you can tackle fraudulent media head-on.  

Deepfake, a blend of the words 'deep learning' and 'fake', refers to AI-based technology used to create or alter images, audio, and video, resulting in synthetic content that appears authentic. This can range from replicating a voice over the phone, for instance to sound like a CEO or CFO requesting a money transfer, to portraying an accurate likeness on video. Deepfakes can be generated in real time or applied to recorded media.

With the world more connected by digital media and the cost of creating deepfakes falling dramatically, this emerging technology poses a serious risk to your organization. In this article, we'll explain the risks of this technology and, more importantly, how you can prepare your organization to ward off this threat.

What are deepfakes?

You’re probably already familiar with a simpler version of deepfakes used by social media companies like TikTok and Snapchat. These face-changing filters take real-time data and feed it through an algorithm to produce a synthetic image.

The same AI technology can instead be trained, using sufficient audio and video material as input, to look and sound like you. Yes, artificial intelligence can learn what makes you look like yourself and what makes your voice sound like you do. This ‘you-filter’ can then be applied to any other person and make them look like you and sound like you.

Though in the past deepfakes appeared visibly doctored, advances in technology have made it harder to tell what is real and what is fake. As with many technologies, deepfakes have followed a maturity curve on the way to realizing their full potential. As algorithms improve, less source material is needed to produce a more convincing deepfake.

Deepfakes of influential political figures like Barack Obama, Donald Trump, and Nancy Pelosi have already fooled many with their accurate portrayals and could impact public opinion. With the technology further maturing, it’s inevitable that it will become indistinguishable from reality for uninformed observers. See also the example of a speaking Mona Lisa, created by the Samsung AI Center in Moscow.

[Video: "AI brings Mona Lisa to life" – the Samsung AI Center in Moscow convincingly turns artworks and celebrity photographs into moving images. Via The Telegraph.]

Risks that deepfakes pose

Beyond the technology maturing, the risk of deepfakes is further increased by the fact that workplaces have gone virtual overnight due to COVID-19. This type of digital transition increases the use of video conferencing and other digital workplace tools, meaning more access to deepfake material as well as more opportunity to be deceived.

We classify the type of threats that deepfakes pose to companies under two categories:

  • Social engineering is the act of manipulating individuals into performing malicious actions, such as sharing confidential information. This could mean tricking an employee into transferring money based on deepfaked audio or video material. High-profile scams have already been successfully carried out using deepfake technology. In 2019, a UK energy executive was scammed out of £200,000 when he received a faked phone call from his boss asking him to wire emergency funds.
  • Public opinion can be swayed by fake videos of CEOs and other influential people sharing disinformation about a company. When not adequately responded to, this can damage reputation, influence consumer behavior, and potentially affect the stock price.

Although this emerging technology poses a significant risk, organizations can already prepare for the malicious use of deepfake technology and minimize its potential impact. In the next section, we'll elaborate on how we view this.

How to guard your organization against deepfakes

To help define an adequate response to the growing deepfake threat, we’ve brought together experts from both our Security and Applied Intelligence practices. Through this multidisciplinary collaboration, we've designed a straightforward methodology that you can implement to help prepare for the impact of malicious deepfake attacks. This methodology is based on three pillars:  

  1. Employee training and awareness – By offering adequate training and creating awareness, employees can be turned into an additional line of defense. Training should focus on how the technology is leveraged in malicious attempts and how it can be detected, enabling employees to spot deepfake-based social engineering attempts. We've applied a similar approach with much success at clients to counter the threat of email-based phishing through security awareness programs.
  2. Detection model – Although complete risk mitigation is impossible, detecting false media early can help minimize the impact on your organization. We’ve partnered with startups to develop models that can detect fake images and videos. This is especially relevant when countering attempts by malicious actors to influence public opinion through deepfakes.
  3. Response strategy – Ensure that your organization is ready to respond adequately to a deepfake. Have a plan in place that can be set in motion when a deepfake is detected, with individual responsibilities and required actions clearly defined.

For example, the communication department will issue a press statement that exposes the malicious deepfake, including evidence from the software-based detection model provided by the IT team. This process should be integrated into your standard incident management or crisis response process.
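Such a plan can even be captured in a machine-readable form, so that responsibilities are unambiguous when an incident occurs. The sketch below is a purely illustrative example; the stage names, roles, and actions are hypothetical placeholders, not a prescribed process.

```python
# Hypothetical deepfake incident-response playbook. All stages, roles,
# and actions below are illustrative, not an actual prescribed process.
PLAYBOOK = {
    "detected": [
        ("IT", "run the detection model and archive the evidence"),
        ("Security", "open an incident ticket and notify the crisis team"),
    ],
    "confirmed": [
        ("Communications", "issue a press statement exposing the deepfake"),
        ("Legal", "assess regulatory obligations and takedown options"),
    ],
}

def actions_for(stage):
    """Return the (role, action) pairs defined for a given incident stage."""
    return PLAYBOOK.get(stage, [])
```

Keeping the playbook as data rather than prose makes it easy to review, version, and integrate into an existing incident management tool.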

Looking at the current trajectory of the technology, deepfake protocols will become part of every company's security and incident response strategy in the near future. To ensure your organization can act proactively, we advise getting a head start and preparing now. Include the role of deepfakes in security awareness training, draft a response strategy, and implement a detection model.
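Production detection models are typically deep neural networks trained on large labeled datasets of real and fabricated media. As a purely illustrative toy, the sketch below flags over-smoothed images, since heavy face blending can suppress natural high-frequency detail; the feature and threshold are hypothetical and not a real detection technique.

```python
def high_freq_energy(image):
    """Mean squared difference between horizontally adjacent pixels.

    `image` is a list of rows of grayscale values. Low energy means the
    image is very smooth; natural photos usually contain fine detail.
    """
    total, count = 0.0, 0
    for row in image:
        for a, b in zip(row, row[1:]):
            total += (a - b) ** 2
            count += 1
    return total / count

def looks_synthetic(image, threshold=5.0):
    """Toy heuristic: flag images whose high-frequency energy is
    suspiciously low. The threshold is arbitrary, for illustration only."""
    return high_freq_energy(image) < threshold

# A perfectly uniform patch scores 0.0 and is flagged;
# a patch with alternating pixel values scores high and is not.
smooth = [[100] * 8 for _ in range(8)]
noisy = [[0 if j % 2 == 0 else 10 for j in range(8)] for _ in range(8)]
```

In practice such hand-crafted features are far too weak on their own, which is why the detection pillar relies on trained models developed with specialized partners.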

Taking on future technologies together

In our organization, we explore the benefits and risks of new technologies by applying a multi-disciplinary lens. Our Applied Intelligence team focuses on getting value from data through the latest technologies. And our Security team helps organizations prepare, protect, detect, respond, and recover along all points of the security lifecycle. Together, they can assess the threat of new technology, like deepfakes, and help you define a response.

Though deepfakes pose a real threat, they also represent remarkable advances in technology. The Applied Intelligence team has been investigating some of the positive applications of deepfakes, including collaborations with academic institutions to apply the technology in healthcare. In a follow-up article, we'll look into this more positive side of deepfakes.

Have you already considered deepfakes and the risk they pose to your organization? We’d love to discuss designing your response together!

Koen Putman

Security Manager at Accenture Consulting

Dario Raffaele

A.I. Innovation Manager – Accenture Strategy & Consulting, Resources Industry, the Netherlands

Gerjen van den Dool

Applied Intelligence Consultant – Accenture the Netherlands
