The term “deepfakes” has become well-known as AI-generated videos of real people, including prominent government and business leaders, have become more common. And while the original term referred to altered images and videos, its definition has since broadened to include spoofed audio as well. Earlier this year we talked about using deepfakes for good, an area that Labs continues to explore. At the same time, we know that bad actors will continue their efforts to use deepfakes for malicious reasons – so we’re also exploring ways to automatically detect them and stop their spread.

Many researchers have tried to train artificial intelligence models to detect deepfaked videos or images. One type of model in particular, the convolutional neural network (CNN), is very well known in this space. These models tend to perform well on their training data; they can teach themselves to detect deepfaked images and videos from a curated dataset. But they often struggle to detect deepfakes “in the wild” when presented with new data. It’s a bit like performing well on a practice exam but failing the real test.
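As a rough illustration of that conventional setup (not a description of any specific published detector), here is a minimal PyTorch sketch of a frame-level CNN classifier; the architecture, input size, and hyperparameters are arbitrary assumptions made for the example:

```python
import torch
import torch.nn as nn

# Illustrative frame-level CNN deepfake classifier (not any specific published model).
# It maps a 224x224 RGB face crop to a single "fake" logit.
class FrameCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = FrameCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()  # binary real-vs-fake objective

# One training step on a dummy batch of face crops with 0/1 labels (1 = fake).
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,)).float()
loss = loss_fn(model(frames).squeeze(1), labels)
loss.backward()
optimizer.step()
```

Trained this way on a curated dataset, a classifier like this can score very well on held-out clips from the same collection while still generalizing poorly to unseen manipulation methods.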

Why is this happening? It has a lot to do with the way these AI models learn. What we’ve seen is that when the models learn from the training data (curated collections of real and fake videos and images), they’re cheating. They’re not learning from the digital artifacts that would suggest something might be a deepfake, such as the blending that occurs between forged and non-forged regions of an image. Instead, they’re learning which faces belong to which category of data. If a certain face shows up repeatedly among the faked training data, the next time the model sees that face, it declares it has found a deepfake. The models have basically learned a shortcut for finding deepfakes in the training data – a cheat for the “practice test” that won’t work in the real world.
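One simple way to expose this kind of shortcut is to split the data so that no identity appears in both the training and evaluation sets; a model that has merely memorized faces then gains nothing at test time. The sketch below is purely illustrative: the metadata layout is hypothetical, and scikit-learn’s GroupShuffleSplit is just one convenient way to build an identity-disjoint split:

```python
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical sample metadata: label 1 = deepfake, "identity" = the person shown.
samples = [
    {"path": "real/alice_001.mp4", "label": 0, "identity": "alice"},
    {"path": "fake/alice_017.mp4", "label": 1, "identity": "alice"},
    {"path": "real/bob_004.mp4",   "label": 0, "identity": "bob"},
    {"path": "fake/carol_002.mp4", "label": 1, "identity": "carol"},
]

# Group the split by identity so every person ends up entirely in one set.
groups = [s["identity"] for s in samples]
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(samples, groups=groups))

# No identity appears on both sides, so memorizing which faces were "fake"
# during training is no longer a useful shortcut at evaluation time.
train_ids = {samples[i]["identity"] for i in train_idx}
test_ids = {samples[i]["identity"] for i in test_idx}
assert train_ids.isdisjoint(test_ids)
```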

We set out to build a better approach. We developed an ensemble of models, combining some previously known methods with patent-pending novel approaches developed here at Accenture’s DC Cyber Lab. Each model learns different features of the content being analyzed, which helps minimize the chance that the results rest on a “cheat.” Once each model has done its job, the solution calculates the likelihood that the content being examined is the result of deepfake technology.
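The patent-pending details are not something we can share here, but the overall shape of such an ensemble is easy to sketch. In the example below, the three detectors and the weighted-average combiner are placeholders standing in for models that look at different signals; they are not our actual models or our combination logic:

```python
from typing import Callable, Sequence

# Each detector maps a video path to a probability that the content is fake.
Detector = Callable[[str], float]

def ensemble_score(video_path: str,
                   detectors: Sequence[Detector],
                   weights: Sequence[float]) -> float:
    """Combine per-model fake probabilities into a single likelihood.

    A weighted average is shown purely for illustration; a stacked
    meta-model trained on the detectors' outputs is another option.
    """
    scores = [d(video_path) for d in detectors]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Placeholder detectors standing in for models that examine different
# features (e.g., blending artifacts, frequency statistics, temporal cues).
def blending_artifact_model(path: str) -> float: return 0.81
def frequency_model(path: str) -> float: return 0.64
def temporal_model(path: str) -> float: return 0.72

likelihood = ensemble_score(
    "suspect_clip.mp4",
    detectors=[blending_artifact_model, frequency_model, temporal_model],
    weights=[0.4, 0.3, 0.3],
)
print(f"Estimated likelihood of deepfake: {likelihood:.2f}")
```

Because the individual models attend to different characteristics of the content, a single memorized shortcut is far less likely to dominate the combined score.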

We tested our approach on the “Celeb-DF” dataset. Two other well-known models have also been benchmarked against this dataset, which gave us a chance to compare our solution’s performance against theirs. Using the same two metrics reported for those models, we found that our approach led to a more robust and reliable analysis. On average precision – a measure of how accurate a model’s “deepfake” predictions are across detection thresholds – our solution scored about 9% and 32% better than the other two models. Our approach also significantly outperformed those models on the ability to identify deepfakes without incorrectly flagging real data as fake.
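For readers who want to run this kind of comparison on their own detector, average precision is straightforward to compute with scikit-learn. The labels and scores below are made-up placeholders rather than Celeb-DF results, and ROC AUC is included only as one common way to measure detection without false alarms, not necessarily the exact second metric used in the benchmark:

```python
from sklearn.metrics import average_precision_score, roc_auc_score

# Illustrative ground-truth labels (1 = deepfake) and model scores; these are
# placeholders, not results from Celeb-DF or from our solution.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.92, 0.10, 0.75, 0.66, 0.31, 0.45, 0.88, 0.05]

# Average precision summarizes how precise the "deepfake" predictions are
# across all detection thresholds.
ap = average_precision_score(y_true, y_score)

# ROC AUC captures how well the model ranks fakes above real content,
# i.e., detection without incorrectly flagging real data as fake.
auc = roc_auc_score(y_true, y_score)

print(f"Average precision: {ap:.3f}, ROC AUC: {auc:.3f}")
```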

We see solutions like ours as part of a multi-pronged effort to protect and defend against malicious deepfakes. Faked videos and audio can be easily propagated through social media to spread misinformation or disinformation, which not only affects those being misled, but hurts social media platforms and news organizations themselves by eroding trust. These organizations have good reason for concern about malicious deepfake content being circulated on their platforms, but human content moderators need help from automated detection algorithms like ours to stay ahead of the increasing stream of fakes.

And while misinformation campaigns will certainly be fertile ground for deepfakes, the use cases will not end there. As time goes on, there will be many more direct attacks. A European energy firm has already fallen victim to a direct attack using a deepfaked voice: its CEO was on the phone with someone he believed was his boss, the CEO of the parent organization. Unfortunately, the person on the other end of the line was not his boss but rather software impersonating his boss. The caller requested an urgent transfer of funds to a supplier. Believing the call was authentic, the executive complied. The company was defrauded out of $243,000, and alarm bells only went off when the fraudster called back requesting a second transfer.

As time moves forward, integrating automated software solutions like ours into content screening, and perhaps even into direct communications systems, will be crucial to the battle against malicious deepfake content. Of course, as deepfake detection improves, so will the deepfakes themselves. That’s why it’s critical for researchers in this space to continue sharing ideas and building on previous efforts. Be on the lookout for our soon-to-be-published paper, “An Exploration into Multiple Deepfake Detection Approaches and the Case for Model Stacking,” which will outline both successful and failed approaches.

For more information about deepfake detection work from our DC Cyber Lab, contact Neil Liberman and Malek Ben Salem.

Neil Liberman

Technology Research and Development Senior Analyst
