Is AI Biased against Some Groups and Spreading Misinformation and Extreme Views?
The Brink/Boston University
Millions of us have played with artificial intelligence tools like ChatGPT, testing their ability to write essays, create art, make films, and improve internet searches. Some have even explored whether they can provide friendship and companionship—perhaps a dash of romance.
But can we, should we, trust AI? Is it safe, or is it perpetuating biases and spreading hate and misinformation?
Those are questions that Boston University computer scientist Mark Crovella will investigate with a new project backed by a first-of-its-kind National Science Foundation (NSF) and Department of Energy program. The National Artificial Intelligence Research Resource (NAIRR) Pilot aims to bring a new level of scrutiny to AI’s peril and promise by giving 35 projects, including Crovella’s, access to advanced supercomputing resources and data.
Crovella, a professor of computer science in the BU College of Arts & Sciences and professor and chair of academic affairs in the Faculty of Computing & Data Sciences, will use the high-powered assist to examine a type of AI known as large language models, or LLMs. His goal is to audit LLMs—AI programs trained to study and summarize data, produce text and speech, and make predictions—for “socially undesirable behavior.” LLMs help drive everything from ChatGPT to automated chatbots to your smart speaker assistant. Crovella will be joined on the project by Evimaria Terzi, a CAS professor of computer science.
According to the NSF, the research resource pilot grew out of President Joe Biden’s October 2023 executive order calling for a federally coordinated approach to “governing the development and use of AI safely and responsibly.”
The Brink asked Crovella about the rapid expansion of AI, how it’s already part of our everyday lives, and how the NAIRR award will help his team figure out if it’s trustworthy and safe.