OpenAI Releases ‘Deepfake’ Detector to Disinformation Researchers
The New York Times
As experts warn that images, audio and video generated by artificial intelligence could influence the fall elections, OpenAI is releasing a tool designed to detect content created by its own popular image generator, DALL-E. But the prominent A.I. start-up acknowledges that this tool is only a small part of what will be needed to fight so-called deepfakes in the months and years to come.
On Tuesday, OpenAI said it would share its new deepfake detector with a small group of disinformation researchers so they could test the tool in real-world situations and help pinpoint ways it could be improved.
“This is to kick-start new research,” said Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy. “That is really needed.”
OpenAI said its new detector could correctly identify 98.8 percent of images created by DALL-E 3, the latest version of its image generator. But the company said the tool was not designed to detect images produced by other popular generators like Midjourney and Stability AI's Stable Diffusion.
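A figure like the 98.8 percent above is a true-positive rate: the fraction of known AI-generated images that the detector correctly flags. The sketch below shows how such a rate could be measured over a labeled test set. The `is_ai_generated` stub and the directory layout are hypothetical, since OpenAI has not published the detector's interface.

```python
# A minimal sketch of how a detection figure like "98.8 percent of
# DALL-E 3 images correctly identified" could be computed.
# The classifier stub and file layout below are hypothetical.

from pathlib import Path


def is_ai_generated(image_path: Path) -> bool:
    """Hypothetical stand-in for the detector: returns True if the
    image is flagged as AI-generated. A real detector would analyze
    the image's pixel data or embedded provenance metadata."""
    raise NotImplementedError("placeholder for an actual detection model")


def true_positive_rate(generated_dir: Path) -> float:
    """Fraction of known AI-generated images the detector flags."""
    images = list(generated_dir.glob("*.png"))
    if not images:
        raise ValueError(f"no images found in {generated_dir}")
    hits = sum(is_ai_generated(p) for p in images)
    return hits / len(images)


# Usage, assuming a hypothetical folder of DALL-E 3 outputs:
# rate = true_positive_rate(Path("dalle3_samples"))
# print(f"Detected {rate:.1%} of generated images")  # e.g. "Detected 98.8% ..."
```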