Facebook Scientists Say They Can Now Tell Where Deepfakes Have Come From

Kyle Walsh
  • Facebook researchers claim that their software can identify the AI that was used to create a deepfake.
  • Deepfakes are videos that have been digitally altered in some way with AI.
  • They've become increasingly realistic in recent years, making it harder for humans to tell what's real on the internet, including on Facebook, and what's not.

Artificial intelligence researchers at Facebook and Michigan State University (MSU) say they have developed a new piece of software that can reveal where so-called deepfakes have come from.

Deepfakes are videos that have been digitally altered in some way with AI. They've become increasingly realistic in recent years, making it harder for humans to tell what's real on the internet, including on Facebook, and what's not.

The Facebook researchers claim that their AI software — announced on Wednesday — can be trained to establish if a piece of media is a deepfake or not from a still image or a single video frame. Not only that, they say the software can also identify the AI that was used to create the deepfake in the first place, no matter how novel the technique.

Tal Hassner, an applied research lead at Facebook, told CNBC that it's possible to train AI software "to look at the photo and tell you with a reasonable degree of accuracy what is the design of the AI model that generated that photo."
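
Facebook and MSU haven't published implementation details here, but the general idea of attributing an image to the generative model that produced it can be pictured with a toy classifier. The sketch below is purely illustrative and not the researchers' actual system: it assumes a small PyTorch network (AttributionNet is an invented name) that maps a single frame to one of roughly 100 known generator classes plus a "real image" class, with layer sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

N_KNOWN_GENERATORS = 100  # the article notes ~100 catalogued deepfake models

class AttributionNet(nn.Module):
    def __init__(self, n_classes=N_KNOWN_GENERATORS + 1):  # extra class for "real image"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, 3, H, W) image tensor
        h = self.features(x).flatten(1)   # (batch, 64) pooled features
        return self.classifier(h)         # logits over generator classes

# A single still frame is enough input, per the researchers' claim.
# The network is untrained here, so the prediction is meaningless;
# this only shows the shape of the problem (image in, model class out).
model = AttributionNet()
frame = torch.rand(1, 3, 224, 224)        # stand-in for a real video frame
probs = model(frame).softmax(dim=-1)
print("most likely source model:", int(probs.argmax()))
```

In practice such a classifier would be trained on frames whose generating model is known, which is presumably where Facebook's catalogue of deepfake models comes in.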

The research comes after researchers at MSU showed last year that it's possible to determine which model of camera was used to take a specific photo; Hassner said that Facebook's work with MSU builds on this.

'Cat and mouse game'

Deepfakes are bad news for Facebook, which is constantly battling to keep fake content off its main platform, as well as Messenger, Instagram and WhatsApp. The company banned deepfakes in January 2020 but it struggles to swiftly remove all of them from its platform.

Hassner said that detecting deepfakes is a "cat and mouse game," adding that they're becoming easier to produce and harder to detect.

One of the main applications of deepfakes so far has been in pornography, where a person's face is swapped onto someone else's body, but they've also been used to make celebrities appear as though they're doing or saying something they're not.

Indeed, a set of hyperrealistic and bizarre Tom Cruise deepfakes on TikTok has now been watched over 50 million times, with many viewers struggling to tell that they're not real.

Today, it's possible for anyone to make their own deepfakes using free apps like FakeApp or Faceswap.

Deepfake expert Nina Schick, who has advised U.S. President Joe Biden and French President Emmanuel Macron, said at the CogX AI conference on Monday that detecting deepfakes isn't easy.

In a follow-up email she told CNBC that Facebook and MSU's work "looks like a pretty big deal in terms of detection" but stressed that it's important to find out how well deepfake detection models actually work in the wild.

"It's all well and good testing it on a set of training data in a controlled environment," she said, adding that "one of the big challenges seems that there are easy ways to fool detection models — i.e. by compressing an image or a video."

Hassner admitted that it might be possible for a bad actor to get around the detector. "Would it be able to defeat our system? I assume that it would," he said.

Broadly speaking, there are two types of deepfakes: those that are wholly generated by AI, such as the fake human faces on www.thispersondoesnotexist.com, and those that use elements of AI to manipulate authentic media.

Schick questioned whether Facebook's tool would work on the latter, adding that "there can never be a one size fits all detector." But Xiaoming Liu, Facebook's collaborator at Michigan State, said the work has "been evaluated and validated on both cases of deepfakes." Liu added that the "performance might be lower" in cases where the manipulation only happens in a very small area.

Chris Ume, the synthetic media artist behind the Tom Cruise deepfakes, said at CogX on Monday that deepfake technology is moving rapidly.

"There are a lot of different AI tools and for the Tom Cruise, for example, I'm combining a lot of different tools to get the quality that you see on my channel," he said.

It's unclear how, or indeed whether, Facebook will look to apply Hassner's software to its platforms. "We're not at the point of even having a discussion on products," said Hassner, adding that there are several potential use cases, including spotting coordinated deepfake attacks.

"If someone wanted to abuse them (generative models) and conduct a coordinated attack by uploading things from different sources, we can actually spot that just by saying all of these came from the same mold we've never seen before but it has these specific properties, specific attributes," he said.

As part of the work, Facebook said it has collected and catalogued 100 different deepfake models that are in existence.

Copyright CNBC