I saw a video last week that looked completely legitimate. Professional framing, credible-looking sources, the works. Took me maybe thirty seconds before something felt off. Another minute to confirm it was fake. But here’s the thing—I was actively looking for it. I was already suspicious.
Most people aren’t.
I came across Jacob Landry’s piece on AI-generated hoaxes, and it crystallised something I’ve been mulling over. We keep talking about AI like it’s creating some new category of deception. It’s not. Hoaxes have been around forever. Conspiracy theories, fabricated evidence, elaborate lies—none of this is new.
What’s new is how stupidly easy it’s become to make them convincing.
Take the Bean conspiracy from Chicago. A group called the “Man in Bean Coalition” started spreading the story that someone was trapped inside the Cloud Gate sculpture (you know, the giant reflective bean thing). Ridiculous premise, right? Except they didn’t just tweet about it. They organised protests and generated AI videos showing purchase records and fake X-ray imagery. The whole evidence trail you’d expect from a real incident.
The fabrication wasn’t sophisticated in concept. It was just thorough. And that thoroughness is what AI enables at scale. You no longer need a team of forgers. You don’t need weeks of preparation. You need a decent prompt and maybe an hour.
We saw the same thing with the comet 3I/ATLAS situation. Someone spots an unusual object, and suddenly there are AI-generated deepfakes of physicists like Brian Cox and Michio Kaku claiming it’s an alien spacecraft, fabricated research videos, the whole apparatus of credibility, manufactured in real time to feed people who want to believe NASA is hiding something.
It’s not that the lies got smarter. They got faster and more complete.
And this is where we’re screwed, honestly. Because the old model of “trust but verify” assumes verification is possible. It assumes you can trace sources, check credentials, and find the original. But when the sources themselves are generated, when the credentials are synthetic, when there is no original—what exactly are you verifying against?
Landry makes the point that we’re losing our ability to distinguish truth from fiction, and he’s not wrong. But I’d put it more bluntly: we’re entering an era where believing anything requires work most people aren’t going to do.
Think about your own media diet. How many things do you see in a day? How many articles, videos, and social posts? Now ask yourself how many of those you actually verify. Not “seem credible so I accept them,” but actually check the sources, run a reverse image search, dig into who’s making the claim.
Yeah, me neither. Not most of them.
And we saw this coming. This isn’t a surprise. We’ve known for years that synthetic media was improving, that the next generation would outpace detection, and that the tools would become cheaper and more accessible. We knew. We didn’t really reckon with what it would feel like when it arrived.
It feels like standing in a hall of mirrors where some of the reflections are real and some aren’t, and you’re just supposed to figure out which is which while you’re walking through. Exhausting. That’s what it feels like.
The problem isn’t the technology itself. AI has legitimate uses. The problem is that we’re primates who evolved to trust what we see with our eyes, and that’s no longer a reliable heuristic. Our brains aren’t built for this. We’re pattern-matching machines in an environment where the patterns lie.
So what do we do? I don’t have a great answer. Landry doesn’t either, really. He ends on this note about losing the ability to separate truth from fiction being “terrifying,” and yeah, it is. But terror doesn’t solve the problem.
Perhaps the answer is that we accept that belief now comes at a cost. It costs time and energy; it demands scepticism. Nothing gets the benefit of the doubt anymore. Everything requires work to trust.
Which brings me to the question I keep coming back to: how much energy are you actually willing to spend verifying everything you see? Because if the answer is “not much,” then you’re operating on faith in an environment specifically designed to exploit that faith.
And that’s not a comfortable position to be in.
Read the full piece: “Elaborate Hoaxes in the Age of AI” by Jacob Landry