See why AI detection tools can fail to catch election deepfakes

The Washington Post processed three altered and three genuine photos using eight popular deepfake detectors, analyzing factors including face outlines, voice and lip movements, pixel anomalies, diffusion and more, with mixed and unreliable results.

The Washington Post
Content created with artificial intelligence is flooding the web, making it less clear than ever what’s real this election. From former President Donald Trump falsely claiming images from a Vice President Kamala Harris rally were AI-generated to a spoofed robocall of President Joe Biden telling voters not to cast their ballots, the rise of AI is fueling rampant misinformation.

Deepfake detectors have been marketed as a silver bullet for identifying AI fakes, or “deepfakes.” Social media giants use them to label fake content on their platforms. Government officials are pressuring the private sector to pour millions into building the software, fearing deepfakes could disrupt elections or allow foreign adversaries to incite domestic turmoil.

But the science of detecting manipulated content is in its early stages. An April study by the Reuters Institute for the Study of Journalism found that many deepfake detection tools can be easily duped with simple software tricks or editing techniques.

Meanwhile, deepfakes and manipulated video are proliferating.

This video of Harris resurfaced on X the day Biden dropped out of the race, quickly gaining over 2 million views. In the clip, she seems to ramble incoherently. But it’s digitally altered.
