Google Stocks Ammunition in the War Against Deepfakes

Google continues to be an ally in the war against deepfakes, synthetically produced videos that can be strikingly realistic. The company has released a large dataset of deepfake videos of its own making, with the intent of helping researchers detect visual deepfakes.

Google collaborated with Jigsaw, an incubator within Google’s parent firm Alphabet, to release the dataset through the FaceForensics benchmark, a project co-sponsored by Google and spearheaded by researchers at the Technical University of Munich and the University of Naples Federico II.

With a large dataset of known deepfakes and the source material they were created from, researchers exploring ways to combat deepfakes can benchmark their detection algorithms. The FaceForensics benchmark then tests any algorithm researchers submit and publishes how successfully it identifies the synthetic content.
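To illustrate the workflow, here is a minimal Python sketch of how a labeled dataset enables benchmarking. The `detect_deepfake` function is a hypothetical stand-in for a researcher’s model, and the file paths are invented; the real FaceForensics benchmark evaluates submissions against its own held-out test set.

```python
from pathlib import Path

def detect_deepfake(video_path: Path) -> bool:
    """Hypothetical detector stub: replace with a real model.
    This trivial baseline predicts 'real' for everything."""
    return False

def benchmark(dataset: list[tuple[Path, bool]]) -> float:
    """Fraction of (video, is_fake) pairs the detector labels correctly."""
    correct = sum(detect_deepfake(path) == is_fake for path, is_fake in dataset)
    return correct / len(dataset)

# Example: a real source clip paired with the deepfake derived from it.
labeled = [
    (Path("actor_03/original.mp4"), False),  # real source footage
    (Path("actor_03/deepfake.mp4"), True),   # synthetic derivative
]
print(f"accuracy: {benchmark(labeled):.2%}")  # 50.00% for this trivial baseline
```

Because the dataset pairs each deepfake with its source footage, a score like this measures how well a detector separates synthetic content from the genuine material it was built from.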

Google created the dataset without raising any user privacy issues by hiring paid actors to be filmed in a variety of scenes. In an animated GIF from Google, you can see the original, real actors (top) and the deepfakes (bottom), created by having another actor imitate the original actor’s movements and then altering the footage with artificial intelligence. (The original GIF is available on the Google AI Blog.)

[Image: Real actor (top) and deepfake (bottom). Courtesy: Google AI Blog]

Concerns about the negative implications of deepfakes are growing as the tools to create them become more accessible. As the US enters an election year in 2020, following unearthed examples of foreign interference in its previous presidential elections through digital means, there is a real risk that convincing deepfake videos could mislead voters and deepen distrust of media sources. It’s as if the “fake news” problem just took a big dose of steroids.

While many researchers will welcome Google’s contributions to fighting the impact of deepfakes with algorithmic countermeasures, others say that machine learning alone isn’t the answer. A report from Data & Society argues that a social remedy is required in addition to a technical one.

Our Take

The only way to truly know that video footage is unaltered between the moment it’s captured and the moment it’s viewed is to know what equipment it was recorded on and to have a failsafe mechanism to prove its provenance. Blockchain innovation firm Factom provides a system for video surveillance that anchors captured footage to a blockchain, so viewers can verify it hasn’t been tampered with.
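The general mechanism can be shown with a short Python sketch. This is not Factom’s actual API: the in-memory `ledger` dict stands in for a real append-only blockchain, and in practice only a cryptographic hash of the footage, not the footage itself, would typically be anchored.

```python
import hashlib

ledger: dict[str, str] = {}  # clip_id -> anchored SHA-256 digest (stand-in for a blockchain)

def anchor(clip_id: str, footage: bytes) -> None:
    """Record the footage's digest at capture time."""
    ledger[clip_id] = hashlib.sha256(footage).hexdigest()

def verify(clip_id: str, footage: bytes) -> bool:
    """A viewer recomputes the digest; any edit to the footage changes it."""
    return ledger.get(clip_id) == hashlib.sha256(footage).hexdigest()

original = b"raw camera frames..."
anchor("cam7-2019-09-24T10:00", original)
print(verify("cam7-2019-09-24T10:00", original))            # True
print(verify("cam7-2019-09-24T10:00", original + b"edit"))  # False: tampering detected
```

The blockchain doesn’t prevent editing; it makes any edit detectable, because the altered file no longer matches the digest recorded at capture time.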

Without knowing the source of a video, which is almost always the case with content on the internet, even the best deepfake detection algorithms can’t be trusted as foolproof. As cybersecurity researchers know, the more you train an algorithm to detect the signature of a bad actor, the more creative those bad actors will get in beating it. To avoid a never-ending “arms race,” deepfake detection should develop heuristic methods that consider both the content of a possible deepfake and the behavior of its distribution online. Who shares content and how it is distributed could point to an intent to mislead just as much as the synthetic video content itself does.
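As a rough illustration of such a heuristic, the Python sketch below blends a content-based detector score with distribution signals. Every field name, threshold, and weight here is invented for illustration; a production system would learn them from data.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    content_score: float           # detector's probability the video is synthetic
    burst_shares_per_hour: float   # sudden, coordinated spread
    new_account_ratio: float       # fraction of sharers with very young accounts

def suspicion(s: Signals) -> float:
    """Blend content and distribution behavior into one 0..1 score (illustrative weights)."""
    behavior = min(1.0, s.burst_shares_per_hour / 1000) * 0.5 + s.new_account_ratio * 0.5
    return 0.6 * s.content_score + 0.4 * behavior

# A clip whose content score alone looks benign can still rank as suspicious
# once its distribution pattern is taken into account.
print(suspicion(Signals(content_score=0.4,
                        burst_shares_per_hour=900,
                        new_account_ratio=0.8)))  # ~0.58
```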
