This AI Company Releases Deepfakes Into the Wild. Can It Control Them?


Erica is on YouTube, detailing how much it costs to hire a divorce attorney in the state of Massachusetts. Dr. Dass is selling private medical insurance in the UK. But Jason has been on Facebook spreading disinformation about France’s relationship with its former colony, Mali. And Gary has been caught impersonating a CEO as part of an elaborate crypto scam.

These people aren’t real. Or at least, not really. They’re deepfakes, let loose into the wild by Victor Riparbelli, CEO of Synthesia. The London-based generative AI company has around 150 of these digital humans for hire. All Synthesia’s clients have to do to get this glossy cast to read their scripts is type in the text they want brought to life and press “generate.”

Riparbelli’s vision is for these avatars to serve as a glitzy alternative to Microsoft PowerPoint, carrying out corporate training and giving company handbooks a little pizzazz. But Synthesia’s deepfakes have found an appeal beyond the corporate world; they’ve caught the attention of more controversial users, who have been putting the avatars to work spreading disinformation and running crypto scams across multiple continents.

“We’re doing a lot. We won’t claim that we’re perfect,” says Riparbelli. “It’s work that’s constantly evolving.”

The challenges facing Riparbelli are a precursor of what’s to come. As companies commercialize synthetic media, turning generative AI from a niche product into an off-the-shelf tool, bad actors are going to take advantage. Businesses at the forefront of the industry need to figure out how far they will go to stop that from happening, and whether they are willing to take responsibility for the AI they create, or push that onto the platforms that distribute it.