I asked Perplexity a question yesterday. It gave me a detailed answer with citations. One of those citations linked to a New York Times article I couldn’t read without a subscription.
But I didn’t need to click through. Perplexity had already told me everything the article said.
That’s the problem The New York Times is suing Perplexity over. And if you use AI tools to search for information, this lawsuit could change how they work.
The quick answer: The Times claims Perplexity scrapes their articles without permission and reproduces the content in AI responses. If news organizations win these lawsuits, expect AI search tools to show less direct information and more links to sources. The era of getting paywalled content summarized for free may be ending.
Here’s what happened and why it matters.
What Happened
On December 5, 2025, The New York Times filed a lawsuit against Perplexity AI in federal court in New York. The Chicago Tribune filed a separate but related suit the same day.
The core accusation: Perplexity is scraping news articles without permission and using that content to power its AI search engine. When you ask Perplexity a question, it often pulls information directly from news sources like the Times, sometimes reproducing articles nearly word-for-word.
The Times says this is copyright infringement. They want it to stop, and they want compensation.
This isn’t the first time Perplexity has faced legal trouble over copyright; there’s a history of similar disputes, covered below.
The Two Main Accusations
The lawsuit makes two key claims:
1. Scraping without permission. Perplexity allegedly crawls the Times website (sometimes in real-time) to grab articles, videos, and podcasts. This content feeds into Perplexity’s AI models and search results. The Times says this is unauthorized copying.
2. Reproducing content in outputs. When users ask Perplexity questions, the AI sometimes spits out responses that are “identical or substantially similar” to Times articles. Instead of just summarizing or linking, Perplexity allegedly reproduces the journalism itself.
There’s also a third issue: hallucinations. The Times claims Perplexity has falsely attributed completely made-up information to the newspaper, damaging their reputation.
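The scraping dispute intersects with how publishers signal crawler restrictions in the first place. The standard mechanism is a robots.txt file at the site root, which names crawlers by their user-agent string and tells them what they may access. Compliance is voluntary, which is part of why these fights end up in court. Below is a hypothetical publisher configuration (the specific paths are illustrative, not taken from any real site):

```text
# Hypothetical robots.txt for a news publisher.
# Note: robots.txt is advisory (RFC 9309); crawlers can choose to ignore it.

# Block Perplexity's declared crawler site-wide
User-agent: PerplexityBot
Disallow: /

# Block OpenAI's training crawler site-wide
User-agent: GPTBot
Disallow: /

# Allow everyone else, except internal paths (illustrative)
User-agent: *
Disallow: /internal/
Allow: /
```

PerplexityBot and GPTBot are the user agents those companies publicly document for their crawlers; whether a given crawler honors these rules is exactly the kind of conduct the lawsuits put at issue.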

This Isn’t the First Lawsuit
The Times has been aggressive about protecting its content from AI companies:
In 2023, they sued OpenAI and Microsoft for similar alleged copyright violations with ChatGPT and Bing.
Dow Jones (owner of the Wall Street Journal) sued Perplexity back in October 2024.
There are now more than 40 similar cases working through courts around the country.
The Times actually sent Perplexity cease-and-desist letters in October 2024 and again in July 2025. According to the lawsuit, Perplexity kept using their content anyway.
Why This Matters for AI Users
If you use Perplexity (or similar AI search tools), here’s what you should understand:
The information has to come from somewhere. AI tools don’t create information out of thin air. They summarize, synthesize, and sometimes copy from sources. When you get a detailed answer about current events, that content was originally reported by journalists.
These lawsuits could change how AI tools work. If news organizations win, AI companies might need to license content or change how they display information. That could mean more links to sources, fewer direct answers, or paywalled content becoming less accessible through AI.
Free access isn’t free. Journalism costs money to produce. If AI tools can serve up the same information without compensation, it threatens the business model that pays for news gathering in the first place.

What Perplexity Says
Perplexity has generally argued that their use of content falls under fair use, similar to how Google displays snippets in search results. They provide citations and links back to original sources.
The courts will ultimately decide whether AI-generated summaries cross the line from helpful tool to copyright infringement.
My Take
I use Perplexity regularly for research. It’s genuinely useful. But reading about this lawsuit made me think differently about where those convenient answers come from.
There’s a real tension here. AI search tools are incredibly helpful for users. But they’re built on content that other people created, often without permission or payment. That’s not sustainable.
I don’t know how this will shake out legally. But I suspect we’ll see AI companies eventually need to cut deals with major publishers, similar to how Spotify licenses music. The era of just scraping everything and asking for forgiveness later seems to be ending.
For now, I’m continuing to use these tools. But I’m also more conscious about clicking through to original sources when I want the full story, especially for news.

What to Watch
This lawsuit will take time to resolve. But here’s what to keep an eye on:
Settlement talks. Many of these cases end in licensing deals rather than court decisions. If Perplexity settles with major publishers, expect other AI companies to follow.
Product changes. Watch for AI tools to start showing more prominent source attribution, limiting how much content they reproduce, or blocking certain publishers entirely.
Legislation. Courts are deciding these cases one by one, but Congress could step in with clearer rules about AI and copyright. That would affect all AI tools, not just Perplexity.
For more AI news and practical guides, check out our Start Here page. If you’re curious about how AI tools work, our AI Content Summarizers Guide explains the technology behind tools like Perplexity.