The efficacy of Facebook’s vaccine misinformation policies and architecture during the COVID-19 pandemic
Published in Science Advances
AUTHORS
David A. Broniatowski, Department of Engineering Management and Systems Engineering, Institute for Data, Democracy, and Politics, The George Washington University, Washington, DC
Joseph R. Simons, Office of the Assistant Secretary for Financial Resources, United States Department of Health and Human Services, Washington, DC
Jiayan Gu, Department of Prevention and Community Health, The George Washington University, Washington, DC
Amelia M. Jamison, Department of Health, Behavior, and Society, Johns Hopkins University, Baltimore, MD
Lorien C. Abroms, Institute for Data, Democracy, and Politics, Department of Prevention and Community Health, The George Washington University, Washington, DC
ABSTRACT
Online misinformation promotes distrust in science, undermines public health, and may drive civil unrest. During the coronavirus disease 2019 pandemic, Facebook—the world’s largest social media company—began to remove vaccine misinformation as a matter of policy. We evaluated the efficacy of these policies using a comparative interrupted time-series design. We found that Facebook removed some antivaccine content, but we did not observe decreases in overall engagement with antivaccine content. Provaccine content was also removed, and antivaccine content became more misinformative, more politically polarized, and more likely to be seen in users’ newsfeeds. We explain these findings as a consequence of Facebook’s system architecture, which provides substantial flexibility to motivated users who wish to disseminate misinformation through multiple channels. Facebook’s architecture may therefore afford antivaccine content producers several means to circumvent the intent of misinformation removal policies.