How Facebook Is Using Artificial Intelligence To Fight A Digital Danger


The fight against misinformation appears to be a never-ending battle, despite the best efforts of so many tech security companies. Even so, Facebook is pouring some of its resources into combating the spread of deepfakes on social media. How are they doing it, though?

Power of AI…again

Artificial Intelligence is at the forefront of many solutions nowadays, even though AI is also being used in less-than-benign ways to create these deepfakes in the first place. However, Facebook’s proposed model should be able to trace the “fingerprint” of whichever AI created a malicious image or video.


This content often takes the form of misinformation, but there have also been cases of non-consensual pornography. Facebook’s lead researcher on the project, Tal Hassner, says that identifying the telltale traits of unknown AI models is vital because deepfake software is easy to customize.

In an interview, he explained, “If this is a new AI model nobody’s seen before, then there’s very little that we could have said about it in the past. Now, we’re able to say, ‘Look, the picture that was uploaded here, the picture that was uploaded there, all of them came from the same model.'” Establishing that link makes finding the culprit a little easier.
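The core intuition behind this kind of attribution is that a generative model leaves a characteristic residue in the high-frequency noise of every image it produces, so two images from the same model should have correlated noise residuals. Facebook has not published the code for its system, so the sketch below is only a toy illustration of that idea: it uses a simple box blur in place of a learned denoiser, and the `fingerprint` and `same_source` functions (and the 0.5 threshold) are hypothetical names chosen for this example.

```python
import numpy as np

def fingerprint(image, k=3):
    """Estimate a noise-residual 'fingerprint': the image minus a
    smoothed copy, normalized to unit length. A real system would
    use a learned denoiser; a box blur stands in here."""
    padded = np.pad(image, k // 2, mode="edge")
    h, w = image.shape
    smoothed = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            smoothed += padded[dy:dy + h, dx:dx + w]
    smoothed /= k * k
    residual = image - smoothed
    return residual / (np.linalg.norm(residual) + 1e-8)

def same_source(img_a, img_b, threshold=0.5):
    """Cosine similarity between two fingerprints; a high score
    suggests the images came from the same generator."""
    score = float(np.sum(fingerprint(img_a) * fingerprint(img_b)))
    return score, score > threshold
```

With a fingerprint like this, every uploaded image can be matched against fingerprints of previously seen uploads, which is what lets investigators say that pictures uploaded in different places “all came from the same model.”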

The downside

Unfortunately, not only is this tech still in development, but its trials have not been wholly accurate either. When Facebook held a deepfake detection competition last year, the winning program identified fakes with an accuracy of just 65.18%, only modestly better than chance. Take a look at the faces below: all of them are computer-generated.

Image: The Verge

One reason for the low accuracy is that this field of AI is highly active, making it difficult for detectors to keep up with new generation techniques. Still, there’s hope that with Facebook’s backing, the “good” AI will eventually outpace the “bad” and end this cat-and-mouse game once and for all.

One major threat, as mentioned earlier, is non-consensual pornography. Because these AIs usually require large numbers of photos to generate illicit videos, celebrities have tended to be the victims. However, with so much of our lives displayed on social media, it’s only a matter of time before everyday citizens are at risk of being placed in compromising positions without their knowledge or consent.

Should there be more regulation and limited access to video editing software?