Deepfakes, videos generated or manipulated by artificial intelligence, allow people to create content at a level of sophistication once only available to major Hollywood studios. Since the first deepfakes arrived seven years ago, experts have feared that doctored videos would undermine politics, or, worse, delegitimize all visual evidence. In this week’s issue of The New Yorker, Daniel Immerwahr, a professor of history at Northwestern University, explores why little of this has come to pass. As realistic as deepfakes can be, people seem to have good instincts for when they are being deceived. But Immerwahr makes the case that our collective imperviousness to deepfakes also points to a deeper problem: that our politics rely on emotion rather than evidence, and that we don’t need to be convinced of what we already believe.
You can read Daniel Immerwahr’s essay in The New Yorker’s first-ever special issue about artificial intelligence—out now.