We Haven’t Seen the Worst of Fake News

Aimee Rinehart of Harvard University discusses the advent of realistic voiceover technologies and the dangers they present on “Good Morning America.”

The Atlantic
It was 2018, and the world as we knew it—or rather, how we knew it—teetered on a precipice. Against a rising drone of misinformation, The New York Times, the BBC, Good Morning America, and just about everyone else sounded the alarm over a new strain of fake but highly realistic videos. Using artificial intelligence, bad actors could manipulate someone’s voice and face in recorded footage almost like a virtual puppet and pass the product off as real. In a famous example engineered by BuzzFeed, Barack Obama seemed to say, “President Trump is a total and complete dipshit.” Synthetic photos, audio, and videos, collectively dubbed “deepfakes,” threatened to destabilize society and push us into a full-blown “infocalypse.”

More than four years later, despite a growing trickle of synthetic videos, the deepfake doomsday hasn’t quite materialized. Deepfakes’ harms have certainly been seen in the realm of pornography—where individuals have had their likeness used without their consent—but there’s been “nothing like what people have been really fearing, which is the incriminating, hyperrealistic deepfake of a presidential candidate saying something which swings major voting centers,” says Henry Ajder, an expert on synthetic media and AI. Compared with 2018’s disaster scenarios, which predicted outcomes such as the North Korean leader Kim Jong-un declaring nuclear war, “the state we’re at is nowhere near that,” says Sam Gregory, who studies deepfakes and directs the human-rights nonprofit Witness.