AI-generated misinformation: 3 teachable skills to help address it

The Conversation
In my digital studies class, I asked students to pose a query to ChatGPT and discuss the results. To my surprise, some asked ChatGPT about my biography.

ChatGPT said I received my PhD from two different universities, and in two different subject areas, only one of which represented the focus of my doctoral work.

This made for an entertaining class, but it also illuminated a major risk of generative AI tools: they make us more likely to fall for persuasive misinformation.

To counter this threat, educators need to teach the skills required to function in a world with AI-generated misinformation.

Generative AI stands to make the already difficult task of separating evidence-based information from misinformation and disinformation even harder.

Text-based tools like ChatGPT can create convincing-sounding academic articles on a subject, complete with citations that can fool people without a background in the topic of the article. Video-, audio- and image-based AI can successfully spoof people’s faces, voices and even mannerisms, to create apparent evidence of behaviour or conversations that never took place at all.
