Deepfakes: How to prepare your organization for a new type of threat
October 6, 2020
Advances in deepfake technology have made it harder to distinguish between real and fabricated media, posing a very real threat to organizations. However, with the right training, detection, and response strategy, you can tackle fraudulent media head-on.
Deepfake, a combination of the words ‘deep learning’ and ‘fake’, refers to an AI-based technology used to create or alter images, audio, and video, resulting in synthetic content that appears authentic. Applications range from replicating a voice over the phone, for example to sound like a CEO or CFO asking for a money transfer, to portraying an accurate resemblance of someone on video. The manipulation can be applied in real time or to recorded media.
With the world more connected by digital media and the cost of creating deepfakes falling dramatically, this emerging technology poses a serious risk to your organization. In this article, we explain the risks it brings and, more importantly, how you can prepare your organization to ward off this threat.
"Advances in technology have made it harder to tell what is real and what is fake."
You’re probably already familiar with a simpler version of deepfakes used by social media companies like TikTok and Snapchat. These face-changing filters take real-time data and feed it through an algorithm to produce a synthetic image.
The same AI technology can instead be trained, using sufficient audio and video material as input, to look and sound like you. Yes, artificial intelligence can learn what makes you look like yourself and what makes your voice sound the way it does. This ‘you-filter’ can then be applied to another person, making them look and sound like you.
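To make the idea of a ‘you-filter’ more concrete, the sketch below shows the classic face-swap setup often used for deepfakes: a shared encoder learns generic facial structure, while a separate decoder per identity learns what makes one specific face look like itself. Swapping decoders at inference time renders one person's expressions with another person's appearance. This is a minimal, illustrative sketch in PyTorch; the architecture, image size, and training loop are simplified assumptions, not the exact models used in production deepfake tools.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder face-swap idea.
# All names, dimensions, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code for one specific identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_you = Decoder()      # trained only on footage of "you"
decoder_target = Decoder()   # trained only on footage of the other person

# Training (simplified): each decoder learns to reconstruct its own identity
# from the shared latent space.
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_you.parameters())
    + list(decoder_target.parameters()), lr=1e-4)

def train_step(faces_you, faces_target):
    opt.zero_grad()
    loss = (loss_fn(decoder_you(encoder(faces_you)), faces_you)
            + loss_fn(decoder_target(encoder(faces_target)), faces_target))
    loss.backward()
    opt.step()
    return loss.item()

# The swap: encode the other person's face, then decode it with *your*
# decoder, so their expression and pose are rendered with your appearance.
with torch.no_grad():
    fake_you = decoder_you(encoder(torch.rand(1, 3, 64, 64)))  # dummy frame
```

Real deepfake pipelines add face detection and alignment, adversarial or perceptual losses, and far more data and model capacity, but the core swap trick is the same.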
Though in the past deepfakes often appeared visibly doctored, advances in technology have made it harder to tell what is real and what is fake. As with many technologies, deepfakes have followed a maturity curve on the way to realizing their full potential. As the algorithms improve, less source material is needed to produce a more convincing deepfake.
Deepfakes of influential political figures like Barack Obama, Donald Trump, and Nancy Pelosi have already fooled many with their accurate portrayals and could influence public opinion. As the technology matures further, it will inevitably become indistinguishable from reality for uninformed observers. See, for example, the speaking Mona Lisa created by the Samsung AI Center in Moscow.
Video: The Samsung AI Center in Moscow shows the ability to convincingly turn artworks and celebrity photographs into moving images. Source: The Telegraph.
Beyond the maturing technology, the risk of deepfakes is further increased by the fact that workplaces have gone virtual overnight due to COVID-19. This digital transition increases the use of video conferencing and other digital workplace tools, which means more audio and video source material for creating deepfakes as well as more opportunities to be deceived.
We classify the types of threats that deepfakes pose to companies into two categories:
Although this emerging technology poses a significant risk, organizations can already prepare for the malicious use of deepfake technology and minimize its potential impact. In the next section, we elaborate on how.
To help define an adequate response to the growing deepfake threat, we've brought together experts from both our Security and Applied Intelligence practices. Through this multidisciplinary collaboration, we've designed a straightforward methodology that you can implement to help prepare for the impact of malicious deepfake attacks. This methodology is based on three pillars:
1. Awareness: include the role of deepfakes in security awareness training, so employees recognize manipulated media as a possible attack vector.
2. Detection: implement a software-based detection model that can help expose manipulated audio and video.
3. Response: draft a response strategy that defines who does what when a deepfake targeting your organization surfaces.
For example, the communications department issues a press statement that exposes the malicious deepfake, including evidence from the software-based detection model provided by the IT team. This process should be integrated into your standard incident management or crisis response process.
Looking at the current trajectory of the technology, deepfake protocols will become part of every company's security and incident response strategy in the near future. To ensure your organization can react proactively, we advise you to get a head start and begin preparing now: include the role of deepfakes in security awareness training, draft a response strategy, and implement a detection model.
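As an illustration of what such a detection model could look like, the sketch below shows a simple frame-level approach: a small classifier scores individual face crops as real or manipulated, and a clip is flagged when the average score exceeds a threshold. The architecture, threshold, and helper names are assumptions made for illustration; production detectors typically rely on much larger models trained on dedicated deepfake datasets.

```python
# Minimal sketch of a frame-level deepfake detector; not production-grade.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Binary classifier over 128x128 RGB face crops (score near 1.0 = likely fake)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

def score_video(frames, model, threshold=0.7):
    """Average per-frame fake probabilities and flag the clip above the threshold.
    `frames` is a (num_frames, 3, 128, 128) tensor of aligned face crops."""
    model.eval()
    with torch.no_grad():
        score = model(frames).mean().item()
    return score, score > threshold

# Usage with dummy frames (in practice: extract and align face crops first,
# and train the classifier on labelled real/fake examples).
detector = FrameClassifier()
frames = torch.rand(30, 3, 128, 128)
score, flagged = score_video(frames, detector)
print(f"average fake score: {score:.2f}, flagged: {flagged}")
```

In practice, a detector like this would be one component of the detection pillar, feeding its output into the response process described above.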
"Organizations can already prepare for the malicious use of deepfake technology and minimize its potential impact."
In our organization, we explore the benefits and risks of new technologies by applying a multi-disciplinary lens. Our Applied Intelligence team focuses on getting value from data through the latest technologies. And our Security team helps organizations prepare, protect, detect, respond, and recover along all points of the security lifecycle. Together, they can assess the threat of new technology, like deepfakes, and help you define a response.
Though deepfakes pose a real threat, they also represent a remarkable advance in technology. The Applied Intelligence team has been investigating some of the positive applications of deepfakes, including a collaboration with academic institutions to apply the technology in healthcare. In a follow-up article, we'll look into this more positive side of deepfakes.
Have you already considered deepfakes and the risk they pose to your organization? We’d love to discuss designing your response together!