Meta says AI content made up less than 1% of election-related misinformation on its apps

TechCrunch
At the start of the year, there were widespread concerns that generative AI could be used to interfere in global elections by spreading propaganda and disinformation. Fast-forward to the end of the year: Meta claims those fears did not pan out, at least on its platforms, saying the technology had limited impact across Facebook, Instagram, and Threads.

The company says its findings are based on content around major elections in the U.S., Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the U.K., South Africa, Mexico, and Brazil.

“While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content,” the company wrote in a blog post. “During the election period in the major elections listed above, ratings on AI content related to elections, politics, and social topics represented less than 1% of all fact-checked misinformation.”

Meta notes that its Imagine AI image generator rejected 590,000 requests to create images of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden in the month leading up to election day in order to prevent people from creating election-related deepfakes.

