Deepfake videos created with artificial intelligence are becoming harder to spot. This article considers their corrosive effect on news media and public discussion.
"Deepfakes are so named because they rely on “deep learning,” a branch of AI. ... [D]irty tricksters now have the technology to create videos in which it really does look like a prominent politician is violently cursing at a baby — or worse. ... AI researchers say it might not be technically possible to spot deepfakes before they spread virally on social media. While there’s plenty of reason to fear such false videos may mislead voters, our research finds the real problem is a bit different: It’s likely to spread distrust of all news on social media, further eroding public debate. ... Crucially, we found when people were uncertain whether the deceptive deepfake was real or not, they also had less trust in news on social media than did those who were not uncertain — even after controlling for participants’ levels of trust, as measured before the experiment. Why does this matter? Declining trust may be a rational response to the wave of online disinformation scandals in the past few years. But most Americans now get their news online, and almost half of Americans get their news on social media. ... In other words, deepfakes’ biggest threat to democracy may not be direct but indirect. Deepfakes might not always fool viewers into believing in something false, but they might contribute to skepticism and distrust of news sources, further eroding our ability to meaningfully discuss public affairs."
Blog sharing news about geography, philosophy, world affairs, and outside-the-box learning
This blog also appears on Facebook.