Want to fight misinformation? Teach people how algorithms work
Nieman Lab
In an era dominated by social media, misinformation has become an all-too-familiar foe, infiltrating our feeds and sowing seeds of doubt and confusion. With more than half of social media users across 40 countries encountering false or misleading information weekly, it's clear that we're facing a misinformation crisis on a global scale.
At the heart of this issue lies social media algorithms — those mysterious computational formulas that determine what content appears on our feeds. These algorithms are designed to show users content that they are most likely to engage with, often leading to the proliferation of misinformation that aligns with our biases and beliefs. A prominent example is Facebook’s profit-driven algorithms, which supported a surge of hate-filled misinformation targeting the Rohingya people, contributing to their genocide by the Myanmar military in 2017.
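To make the mechanism concrete, here is a minimal sketch of engagement-based ranking as described above. The post fields, weights, and function names are illustrative assumptions, not any platform's actual formula:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click probability (assumed)
    predicted_shares: float   # model's estimate of share probability (assumed)

def engagement_score(post: Post) -> float:
    # Rank purely by predicted engagement; the accuracy of the content
    # plays no role, which is how misinformation can rise to the top.
    # Weights are arbitrary placeholders.
    return 0.6 * post.predicted_clicks + 0.4 * post.predicted_shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement appears first in the feed.
    return sorted(posts, key=engagement_score, reverse=True)
```

Note that nothing in this scoring function rewards truthfulness: a sensational false post that models predict will be clicked and shared outranks a sober, accurate one.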
But here’s the kicker: Social media algorithms remain largely opaque to users. The information-feeding mechanism driven by algorithmic decisions is often perceived as a black box, because it is almost impossible for users to see how an algorithm reached its conclusions. It’s like driving a car without knowing how the engine works. Lacking insight into the algorithmic mechanism impairs people’s ability to critically evaluate the information they come across. There has been a growing call for algorithmic knowledge — understanding how algorithms filter and present information. However, it is still unclear whether having algorithmic knowledge actually helps social media users combat misinformation.