AI Chatbots Like GPT-4 Share Harmful Misinformation, Study Says

Illustration: Getty Images

Artificial intelligence chatbots such as OpenAI's GPT-4 and Google's Gemini can't always be trusted when it comes to U.S. elections.

The AI models shared inaccurate information more than half the time when asked about election procedures, and 40 percent of their responses were judged "harmful," according to a new report from the AI Democracy Projects and Proof News. As Americans increasingly turn to generative AI chatbots, experts are sounding the alarm over the technology's potential impact on the upcoming U.S. election.

"People are using models as their search engine, and it's kicking out garbage. It's kicking out falsehoods. That's concerning," Bill Gates, a Republican election official from Arizona, told Proof News. "If you want the truth about the election, don't go to an AI chatbot. Go to the local election website."

Gates was part of a group of more than 40 election officials, journalists, and academics who gathered to test the accuracy of five leading AI models' election information. GPT-4, Gemini, Anthropic's Claude, Meta's Llama 2, and Mistral's Mixtral were each judged on their responses to 26 questions a voter might ask.
