AI Misinformation: How Grok Falsely Named World Leaders in the Epstein Files

Grok AI falsely claimed Prime Minister Modi and Donald Trump were named in the Epstein files. They weren’t. One line of AI-generated text misled millions of people in a matter of hours. And this is just one example of a growing crisis: one in three AI answers on news topics contains false information.

I came across this story while following the Epstein files release and couldn’t believe what I was reading. People were using AI chatbots to search through thousands of documents, and those chatbots were confidently making connections that didn’t exist.

AI misinformation is the quiet crisis nobody’s talking about. We’re all turning to chatbots for quick answers during chaotic moments, and those chatbots are failing us in ways we don’t notice until the damage is done.

What Happened With Grok and the Epstein Files

In December 2025, the U.S. Department of Justice released a massive batch of Epstein-related documents. Over a million pages. Names everywhere. Social media exploded with screenshots and accusations.

People wanted answers fast. So they did what millions of people do now: they asked AI chatbots.

Someone asked Elon Musk’s Grok whether any Indian nationals appeared in the files. Grok responded that “recent Epstein file releases mention Indian or Indian origin names,” including “Narendra Modi Epstein’s texts offering to arrange meetings via Bannon.”

The claim went viral instantly.

There was just one problem: it was completely false. Modi’s name doesn’t appear in the Epstein files in any such context. Grok hallucinated the connection entirely. A single line of AI-generated text created an international incident, linking a world leader to a convicted sex offender based on nothing.

By the time corrections circulated, the damage was done. Screenshots of Grok’s response had been shared thousands of times. The retraction never caught up.

During breaking news, we reach for our phones. Chatbots are waiting with confident answers that might be completely fabricated.

The AI Misinformation Numbers Are Alarming

The Epstein files incident isn’t an isolated case. The research on AI misinformation is alarming.

According to a 2025 NewsGuard study covered by Axios, one in three AI chatbot answers on news topics contains false information. That’s up from 18% just a year ago. The chatbots aren’t getting better at avoiding misinformation. They’re getting worse.

Some models are worse than others. Perplexity spread falsehoods in 47% of tested answers. ChatGPT and Meta’s Llama hit 40%. Claude had the lowest rate at 10%, and Gemini came in at 17%. But even the “best” model is wrong roughly one time in ten on news topics.

The scariest part? Chatbots become more likely to spread false information during breaking news, exactly when people are most desperate for quick answers and least likely to verify.

More Real AI Misinformation Examples from 2025

Grok Misidentified a Mass Shooting Hero

In December 2025, during the Bondi Beach shooting in Australia, Grok made multiple critical errors. It misidentified the bystander who heroically disarmed one of the gunmen. It questioned whether video evidence was authentic. People were using it to understand a tragedy in real time, and it was feeding them wrong information with complete confidence.

AI-Generated War Footage Went Viral

During the India-Pakistan tensions in May 2025, social media filled with AI-generated satellite images, fabricated strike footage, and even a fake audio clip of a Pakistani commander declaring a nuclear alert. One researcher called it “potentially the worst I have seen the information environment in the last two years.”

People asked chatbots to verify the footage. The chatbots guessed. Some of those guesses were wrong.

Health Misinformation Hit 88%

A University of South Australia study found that when chatbots were programmed to spread health disinformation, 88% of all responses were false. Four out of five chatbots generated disinformation in 100% of their responses. Yet each false claim was “presented with scientific terminology, a formal tone and fabricated references.”

The confident tone is the problem. A chatbot doesn’t sound uncertain when it’s wrong. It sounds exactly the same as when it’s right.

Why AI Misinformation Spreads So Fast

When a chatbot answers confidently, we assume it’s right. That assumption is increasingly dangerous.

The problem isn’t that chatbots lie. They don’t, exactly. The problem is that they compress complex situations into simple answers, and in that compression, meaning gets lost.

“Mentioned” Becomes “Implicated”

When you ask a chatbot “Is this person in the documents?” and it says yes, you hear “this person is involved.” But “in the documents” could mean:

  • A reporter requested comment from them
  • A lawyer copied them on an email
  • Someone mentioned their name in passing
  • They suggested a meeting that never happened
  • A victim’s account included them as unrelated context

Or, as in the Modi case, the chatbot simply invented the connection entirely.

Screenshots Kill Context

Here’s how AI misinformation typically spreads:

Someone asks a vague question. The chatbot gives a short, shareable answer. That answer gets screenshotted. The screenshot goes viral. The follow-up questions that would have added context? They never catch up.

The Grok/Modi screenshot spread across X, Reddit, and international news sites within hours. The correction took days to reach the same audience, and many people never saw it.

Disclaimers Don’t Work

Many chatbots add careful language like “No accusations are being made” or “This information should be verified.” But by then, the damage is done. The reader already saw the name plus “in the documents.” The disclaimer scrolls past. The implication sticks.

Red Flags That an AI Answer Might Be Wrong

After following the Epstein files debacle, I’ve learned to watch for a few warning signs that should make anyone pause:

Vague sourcing. If the answer says “recent reports” or “documents indicate” without a specific page, exhibit, or quote, treat it as unverified.

Missing “where and why.” A trustworthy answer explains whether a name came from an email, testimony, an allegation, or a contact list. Location and reason matter. Without them, you can’t judge meaning.

Loaded words. Terms like “linked,” “connected,” or “involved” imply wrongdoing even when the underlying reference is completely innocent, or completely fabricated.

No category separation. Victims, witnesses, staff, journalists, attorneys, and random contacts can all appear in the same document dump. Blending them together creates false narratives.

How to Avoid Spreading AI Misinformation

You don’t need to stop using chatbots. But these habits help you avoid becoming part of the problem, especially for anything involving names, accusations, or breaking news.

Step 1: Restate the Claim in Plain Language

Instead of accepting “X is in the files,” rewrite it as: “X’s name appears in a document. The claim is that this appearance indicates wrongdoing.”

That reframing exposes what actually needs to be proven. Usually, it hasn’t been.

Step 2: Find the Original Document

If you can’t locate the primary source, don’t share the claim as fact. Period. The chatbot’s summary is not the source. The actual document is.

Step 3: Identify the Document Type

Is it an email thread? A contact list? A deposition? A court motion? The meaning changes drastically depending on the type. A name in a scheduling email means almost nothing. A name in sworn testimony means something very different.

Step 4: Read the Surrounding Text

One line rarely stands alone. Get enough context to understand why the name appears. The chatbot won’t do this for you.

Step 5: Use Precise Language When Sharing

If you decide to share, write “Name appears in a scheduling email” or “Referenced by a third party in passing.” Not “in the files.” That phrase implies guilt in today’s internet culture, even when nothing supports it.

Using AI Without Spreading AI Misinformation

Chatbots are still useful. I use them constantly for understanding complex topics, drafting writing, and organizing information. The key is knowing what they’re good at and what they’re terrible at.

Good for: Summarizing, organizing timelines, creating glossaries, explaining background context.

Terrible for: Verifying claims, fact-checking breaking news, determining whether someone is “involved” in something.

A few habits help:

Ask for uncertainty on purpose. Prompts like “List three possible interpretations of this mention” reduce one-track conclusions.

Require citations. If the chatbot can’t provide a page reference or direct quote, treat the answer as a lead to investigate, not proof of anything.

Keep sensitive names out of speculative prompts. When allegations involve abuse, public figures, or minors, extra caution protects real people from real harm.

The Bigger Picture on AI Misinformation

We’re living through a strange moment. AI tools have gotten good enough that we trust them instinctively, but not good enough to deserve that trust on sensitive topics. That gap is where AI misinformation grows.

The Epstein files release was supposed to bring transparency. Instead, it became a case study in how AI can pollute the information environment. Real documents got mixed with AI hallucinations. Legitimate questions got buried under fabricated connections. And millions of people walked away believing things that simply aren’t true.

I write about AI for this blog because I genuinely believe these tools can help people. They’ve helped me. But helping requires understanding limitations, and the biggest limitation right now is that chatbots don’t understand consequences. They don’t know that inventing a connection between a world leader and a sex offender can create international incidents.

The responsibility falls on us. Every time we ask a chatbot about a name in a document dump, about whether footage is real, about who’s “involved” in something, we’re choosing whether to slow down or speed up the rumor cycle.

Common Questions About AI Misinformation


Can a chatbot be trusted for fact-checking?

Not reliably. Studies show one in three answers on news topics contains false information, and accuracy drops further during breaking news. Treat chatbot outputs as starting points to investigate, never as final verification.

Why do AI chatbots spread misinformation?

Chatbots predict text that fits patterns, not truth. They don’t understand real-world harm, context, or implications. When they summarize complex situations into simple answers, crucial meaning gets lost. Sometimes they fabricate connections entirely, as Grok did with the Epstein files.

Which AI chatbot is most accurate?

According to 2025 research, Claude had the lowest misinformation rate at 10%, followed by Gemini at 17%. ChatGPT and Llama were at 40%. Perplexity was highest at 47%. Grok has additional issues because it’s designed to avoid political correctness, making it more prone to problematic outputs.

What should I do before sharing something a chatbot told me?

Find the original source and read the surrounding context. If you can’t do that, share nothing as fact. The chatbot’s confident summary is not the source, and it might be completely fabricated.


Want to use AI tools without the pitfalls? Start with our beginner’s guide to everyday AI, where we break down what these tools actually do well and where they fall short.
