Are Chatbots Misinforming Us About the European Elections? Yes.

PUBLISHED BY
Democracy Reporting International

AUTHORS AND RESEARCHERS
Michael Meyer-Resende, Austin Davis, Ognjan Denkovski, Duncan Allen

EXECUTIVE SUMMARY
2024 has been called a super-election year, with more than 60 national elections taking place around the world. At the same time, this year has also seen important advances in the sophistication and application of AI technologies. The potential impact of AI on the electoral process, specifically on voters’ access to accurate information, has been a concern for many.

Since the launch of OpenAI’s ChatGPT in late 2022, the power of AI has become tangible to the wider public, with major companies competing intensely to bring new AI products to the mass consumer market. Most prominent among these are AI-driven chatbots, powered by Large Language Models (LLMs) that “understand” and generate human-like text. As these chatbots grow in popularity and capability, with the ability to access real-time information and provide source links, they increasingly take over the function of search engines. Indeed, some of these chatbots, such as Microsoft’s Copilot, have already been integrated into internet search.

With chatbots emerging as a popular primary source of information, their impact on elections is no longer theoretical. Can these programs consistently provide accurate information about complicated, important topics like the electoral process? If not, do they at least refer users to authoritative sources?

This report investigates the accuracy of the four most popular chatbots’ responses to questions relating to the upcoming European Parliament elections. While the bots appear to have been relatively well-tuned to provide non-partisan responses to political topics, none of them provided reliably trustworthy answers to questions voters may pose about the electoral process.

This is problematic: when voters receive wrong information about electoral requirements, they may be deterred from voting (for example, by thinking it is more complicated than it is), miss deadlines, or make other mistakes. In short, this unintentional misinformation can affect the right to vote and electoral outcomes.

Our findings also suggest that legal obligations under the EU’s Digital Services Act (DSA), such as proper risk assessment, testing, and training to mitigate risks to electoral processes, are not being fulfilled. The findings also run counter to commitments made by some companies under the EU’s Code of Practice on Disinformation to identify and mitigate the risks of dis- and misinformation and to adopt safe design principles.

What are our key findings?
Randomness: The quality of responses to questions about the European Parliament elections varies greatly, even within the responses of each chatbot, supporting the observation that the workings of LLMs are hard to predict and to fine-tune.

Themes: The chatbots performed poorly on questions about the electoral process (registration, voting, results), while they largely managed to stay non-partisan on political questions.
