ℹ️ Quick Answer: The New York Times is suing Perplexity AI for scraping articles without permission and reproducing that journalism in AI search answers. If publishers win these lawsuits, AI tools like Perplexity, ChatGPT Browse, and Google AI Overviews will likely show less direct information and more links to original sources.
📋 WHAT’S INSIDE
- What Happened
- The Two Main Accusations
- This Isn’t the First Lawsuit
- Why This Matters for AI Users
- What Perplexity Says
- My Take
- What to Watch
I asked Perplexity a question yesterday. It gave me a detailed answer with citations. One of those citations linked to a New York Times article I couldn’t read without a subscription.
But I didn’t need to click through. Perplexity had already told me everything the article said.
That’s the problem The New York Times is suing Perplexity over. And if you use AI tools to search for information, this lawsuit could change how they work.
What Happened
On December 5, 2025, The New York Times filed a copyright infringement lawsuit against Perplexity AI in the Southern District of New York federal court.
The lawsuit landed alongside a related suit from the Chicago Tribune, owned by Tribune Publishing, filed the same day.
The core accusation. Perplexity is scraping news articles without permission and using that content to power its AI search engine. When you ask Perplexity a question, it often pulls information directly from news sources like the Times, sometimes reproducing articles nearly word for word.
The Times says this is copyright infringement. They want it to stop, and they want compensation.
This isn’t the first time Perplexity has faced legal trouble over copyright. Here’s some background on the ongoing disputes.
The Two Main Accusations
The lawsuit alleges Perplexity scrapes Times articles in real time without authorization, then reproduces that journalism “identical or substantially similar” to the original in its AI answers instead of linking to the source.
1. Scraping without permission. Perplexity allegedly crawls nytimes.com (sometimes in real time) to grab articles, videos, and podcasts. This content feeds into Perplexity’s AI models and search results. The Times says this is unauthorized copying that violates their terms of service and robots.txt directives.
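For context on what “robots.txt directives” means here: publishers signal crawling restrictions in a plain-text file at the site root, which well-behaved crawlers are expected to read before fetching pages. Below is a simplified, illustrative sketch (not the Times’ actual file; PerplexityBot is the user-agent name Perplexity publishes for its crawler):

```text
# robots.txt served at a publisher's site root, e.g. nytimes.com/robots.txt
# (simplified sketch for illustration, not the actual file)

User-agent: PerplexityBot
Disallow: /

# Other crawlers remain free to index the site
User-agent: *
Allow: /
```

The catch, and part of what the lawsuit turns on, is that robots.txt is purely voluntary: it only restricts crawlers that choose to honor it.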
2. Reproducing content in outputs. When users ask Perplexity questions, the AI sometimes spits out responses that are “identical or substantially similar” to Times articles. Instead of summarizing or linking, Perplexity allegedly reproduces the journalism itself.
There’s also a third issue: hallucinations. The Times claims Perplexity has falsely attributed completely made-up information to the newspaper, damaging its reputation.

This Isn’t the First Lawsuit
The Times previously sued OpenAI and Microsoft in December 2023 over ChatGPT, Dow Jones sued Perplexity in October 2024, and there are now over 40 similar AI copyright cases working through U.S. courts.
The Times has been aggressive about protecting its content from AI companies. In December 2023, they sued OpenAI and Microsoft for similar alleged copyright violations with ChatGPT and Bing Chat (now Microsoft Copilot).
Dow Jones (owner of the Wall Street Journal) and the New York Post sued Perplexity back in October 2024.
There are now more than 40 similar cases working through courts around the country, involving publishers like Condé Nast, the Intercept, and the Center for Investigative Reporting.
The Times actually sent Perplexity cease-and-desist letters in October 2024 and again in July 2025. According to the lawsuit, Perplexity kept using its content anyway.
Why This Matters for AI Users
If publishers win these cases, AI search tools like Perplexity, ChatGPT’s Browse feature, and Google AI Overviews may need to license content or limit how much they reproduce, making detailed AI answers shorter and less useful.
The information has to come from somewhere. AI tools don’t create information out of thin air. They summarize, synthesize, and sometimes copy from sources. When you get a detailed answer about current events, that content was originally reported by journalists at organizations like the Times, Washington Post, or Reuters.
These lawsuits could change how AI tools work. If news organizations win, AI companies might need to license content or change how they display information. This could mean more links to sources, fewer direct answers, or paywalled content becoming less accessible through AI.
Free access isn’t free. Journalism costs money to produce. The Times employs over 1,700 journalists worldwide. If AI tools can serve up the same information without compensation, it threatens the business model that pays for news gathering in the first place.

What Perplexity Says
Perplexity, led by CEO Aravind Srinivas, argues its use of content falls under fair use and compares its approach to how Google displays search snippets with links back to original sources.
Perplexity also launched a “Publishers’ Program” in mid-2024 to share revenue with participating media companies. But the Times did not participate, and the lawsuit suggests these offers weren’t sufficient to address the underlying copyright concerns.
The courts will ultimately decide whether AI generated summaries cross the line from helpful tool to copyright infringement.
My Take
AI search is useful, no question. But it’s unsustainable in its current form. AI companies will likely need to license publisher content, similar to how Spotify licenses music, and the era of scraping everything without permission is ending.
I use Perplexity regularly for research. It saves me a ton of time. But reading about this lawsuit made me think differently about where those convenient answers come from.
There’s a real tension here. AI search tools are incredibly helpful for users. But they’re built on content that other people created, often without permission or payment. That’s not sustainable.
I don’t know how this will shake out legally. But I suspect we’ll see AI companies eventually need to cut deals with major publishers, similar to how Spotify licenses music from Universal, Sony, and Warner. The era of just scraping everything and asking for forgiveness later seems to be ending.
For now, I’m continuing to use these tools. But I’m also more conscious about clicking through to original sources when I want the full story, especially for news.

What to Watch
Three developments will shape how this plays out: settlement talks that set licensing precedents, product changes like stricter source attribution, and potential federal legislation to clarify AI copyright rules.
This lawsuit will take time to resolve. But here’s what to keep an eye on.
Settlement talks. Many of these cases end in licensing deals rather than court decisions. OpenAI has already signed deals with the Associated Press and Axel Springer. If Perplexity settles with major publishers, expect other AI companies to follow.
Product changes. Watch for AI tools to start showing more prominent source attribution, limiting how much content they reproduce, or blocking certain publishers entirely.
Legislation. Courts are deciding these cases one by one, but Congress could step in with clearer rules about AI and copyright. That would affect all AI tools, not just Perplexity.
This story is still developing, and the outcome will affect every AI search tool you use.
Related reading: AI Content Summarizers Guide | Latest AI News | New to AI? Start here