The Integrity Project

Google made a watermark for AI images that you can’t edit out

The Verge
The Google DeepMind team has believed for years that building great generative AI tools also requires building great tools to detect what has been created by AI. There are plenty of obvious, high-stakes reasons why, says Google DeepMind CEO Demis Hassabis. “Every time we talk about it and other systems, it’s, ‘What about the problem of deepfakes?’” With another contentious election season coming in 2024 in both the US and the UK, Hassabis says that building systems to identify and detect AI imagery is more important all the time.

Hassabis and his team have been working on a tool for the last few years, which Google is releasing publicly today. It’s called SynthID, and it’s designed to essentially watermark an AI-generated image in a way that is imperceptible to the human eye but easily caught by a dedicated AI detection tool. 

The watermark is embedded in the pixels of the image, but Hassabis says it doesn’t alter the image itself in any noticeable way. “It doesn’t change the image, the quality of the image, or the experience of it,” he says. “But it’s robust to various transformations — cropping, resizing, all of the things that you might do to try and get around normal, traditional, simple watermarks.” As SynthID’s underlying models improve, Hassabis says, the watermark will be even less perceptible to humans but even more easily detected by DeepMind’s tools.
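
DeepMind hasn’t published the details of how SynthID embeds or detects its signal, but the general idea in that description (a pattern spread invisibly across pixel values, recovered only by a detector that knows what to look for) can be sketched with a classic spread-spectrum watermark. The toy Python/NumPy example below is purely illustrative and is not SynthID’s method; note that, unlike SynthID, this naive fixed-pattern scheme would not survive cropping or resizing.

```python
import numpy as np

def embed_watermark(image, key, strength=3.0):
    # Spread a key-seeded +/-1 pattern across every pixel at an
    # amplitude of a few grey levels, far below what the eye notices.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    marked = image.astype(np.float64) + strength * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect_watermark(image, key, threshold=1.5):
    # Correlate the image with the same key-seeded pattern.
    # An unmarked image scores near zero; a marked one scores
    # near `strength`, even after mild noise or recompression.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    score = float(np.mean(image.astype(np.float64) * pattern))
    return score > threshold, score

# Toy usage with a random greyscale "image" and a secret key.
img = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
marked = embed_watermark(img, key=42)
print(detect_watermark(marked, key=42))  # (True, score near 3.0)
print(detect_watermark(img, key=42))     # (False, score near 0.0)
```

Surviving crops, resizes, and recompression is exactly what makes the real problem hard, which is why, per the article, SynthID relies on underlying models that keep improving rather than a fixed pattern like the one above.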