How the FBI and NSA are preparing for deepfakes and misinformation ahead of the 2024 elections

Federal Bureau of Investigation Director Christopher Wray testifies before the Senate Judiciary Committee on Dec. 5 in Washington, D.C.

CNBC
More than half of the world’s population will cast votes this year, making election security a global risk to democracy that extends far beyond the U.S. A growing part of that risk is the use of artificial intelligence to spread misinformation that influences voters, including through deepfakes.

The risk isn’t new, according to FBI Director Christopher Wray, who noted last week at a CNBC CEO Council Virtual Roundtable on AI that information warfare, disinformation, and misinformation have been around for decades. What has taken place on social media in recent election cycles was the “next escalation of it,” Wray said, but he added, “It wasn’t a new weapon. It’s just a new way of making that weapon more effective,” with threat actors able to create more credible fake personas, craft more sophisticated false messages, and manufacture evidence that’s harder to discern as phony.

Experts are already noting the potential for generative AI to become a weapon of information warfare and spread disinformation or misinformation. This summer, for example, a super PAC supporting Florida Governor Ron DeSantis’ campaign used AI to make it seem like Donald Trump was reading his controversial tweets out loud. At the time, the ad did not disclose that Trump’s voice was AI-generated. And because AI makes it that much easier to produce content that perfectly aligns with a campaign’s ideology, its use has been ramping up in political campaigns.