
AI 2025 year in review: I spent the last twelve months watching artificial intelligence go from “interesting novelty” to “thing my mom asks me about at dinner.” And what a strange, contradictory, occasionally terrifying year it was.
In January, a Chinese company nobody had heard of wiped half a trillion dollars off Nvidia’s stock. In March, ChatGPT crashed because too many people were making Studio Ghibli versions of their selfies. In August, the most anticipated AI release in history landed with a thud that echoed across Silicon Valley. And by December, both ChatGPT and YouTube launched their own “Wrapped” features, because apparently even our AI usage now needs a year-end recap.
If you’ve been too busy living your life to track every AI announcement, I’ve got you. Here’s what actually happened in 2025, what it means, and why I think we’re entering 2026 in a much more interesting place than we started.
The Moments Everyone Was Talking About
Let’s start with the viral stuff, because that’s probably what you actually remember.
The Ghibli Meltdown (March)

On March 25th, OpenAI released a new image generation tool, and within hours the internet discovered you could turn any photo into Studio Ghibli-style animation. The “distracted boyfriend” meme. Ben Affleck smoking. Your cousin’s wedding photos. Everything became Miyazaki.
OpenAI CEO Sam Altman joked on X that after “a decade trying to help make superintelligence to cure cancer or whatever,” it was Ghibli memes that finally made people care about his work. Then he announced that “our GPUs are melting” and OpenAI had to impose temporary limits on image generation.
The moment also reignited copyright debates. A 2016 video of Hayao Miyazaki calling AI-generated art “an insult to life itself” went viral. OpenAI said they’d block requests to mimic individual living artists but would “permit broader studio styles.” Make of that distinction what you will.
DeepSeek Shook Silicon Valley (January)
Before the Ghibli thing, there was a more consequential shock. On January 20th, Chinese firm DeepSeek released its R1 reasoning model, which matched the leading Western models on key benchmarks while reportedly costing under $6 million to train. For comparison, OpenAI’s equivalent models cost an estimated $100 million or more.
Nvidia’s stock dropped about 17% in a single day. The model was trained on export-restricted chips that weren’t even supposed to be good enough for frontier AI. And DeepSeek released it under an open license, with the weights free for anyone to use and modify.
The message was clear: the assumption that AI leadership required unlimited capital was suddenly very questionable.
The GPT-5 Disappointment (August)
Then came August 7th, when OpenAI finally released GPT-5. The company called it their “smartest, fastest, most useful model yet.” The internet disagreed.
Within hours, people were sharing failures: GPT-5 couldn’t count the letters in “blueberry.” It got basic arithmetic wrong. It couldn’t draw a map of the United States. Over 3,000 users petitioned to bring back the older models.
Some of the problems were technical. The “autoswitcher” that routes queries to different models broke on launch day. But the bigger issue was expectations. After years of hype, a merely incremental improvement felt like a betrayal. As MIT Technology Review put it, this was “the biggest vibe shift since ChatGPT first appeared three years ago.”
“Vibe Coding” Became a Thing (February)

Not everything that went viral was a disaster. In February, OpenAI co-founder Andrej Karpathy coined the term “vibe coding” to describe a new way of building software: you just talk to an AI about what you want, accept all its suggestions, and somehow end up with a working app.
“I always click ‘Accept All’ and don’t read the diffs anymore,” Karpathy wrote. “When I get error messages, I just copy-paste them in with no comment, and usually that fixes it.”
By March, Y Combinator reported that 25% of startups in their Winter 2025 batch had codebases that were 95% AI-generated. The CEO of Replit said 75% of their customers never write a single line of code. Regular people were building apps that would have required professional developers just a year earlier.
What Actually Got Better in This AI 2025 Year in Review
Despite the disappointments, 2025 saw genuine advances that made AI more useful for regular people.
AI Agents That Actually Work
If 2023-2024 was the era of chatbots, 2025 was the year of agents. These are AI systems that don’t just answer questions but actually do things: browse the web, fill out forms, complete multi-step tasks.
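If you’re curious what “actually do things” looks like under the hood, here’s a minimal sketch of the loop most agents are built around: the model proposes an action, the surrounding software runs it, and the result gets fed back in until the task is done. Every name in it is made up for illustration; none of this is taken from OpenAI’s, Anthropic’s, or Perplexity’s actual tooling.

```python
# Illustrative agent loop. All names (call_model, TOOLS, run_agent) are
# hypothetical stand-ins, not any real product's API.

def search_web(query: str) -> str:
    """Stand-in tool: a real agent would call a search or browser API here."""
    return f"(pretend search results for: {query})"

def fill_form(fields: dict) -> str:
    """Stand-in tool: a real agent would drive a browser form here."""
    return f"(pretend form submitted with: {fields})"

TOOLS = {"search_web": search_web, "fill_form": fill_form}

def call_model(history: list) -> dict:
    """Stand-in for an LLM call that returns the next action as structured data."""
    # A real implementation would send `history` to a model API and parse the reply.
    if len(history) == 1:
        return {"tool": "search_web", "args": {"query": "cheapest flight NYC to SF"}}
    return {"tool": None, "answer": "Found a $128 flight and filled in the booking form."}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        action = call_model(history)
        if action["tool"] is None:          # the model says it's finished
            return action["answer"]
        result = TOOLS[action["tool"]](**action["args"])
        history.append(f"{action['tool']} -> {result}")
    return "Stopped after too many steps."

print(run_agent("Find and book the cheapest flight from NYC to SF"))
```

The real products wrap that same loop around a web browser or your computer, which is why they feel so much more capable than a chatbot that only talks.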
In May, Anthropic released Claude 4, which can work independently for hours on complex coding projects. One company reported having it run a demanding code refactor autonomously for seven hours straight. In October, both OpenAI (Atlas) and Perplexity (Comet) launched AI-powered web browsers that can shop, research, and book things on your behalf.
The adoption was real: analysts reported a 6,900% increase in traffic from AI agents since July 2025.
Reasoning Models Got Smarter
The models that think before they answer made huge leaps. Google’s Gemini Deep Think won gold at the International Math Olympiad by solving five of six problems. Microsoft’s medical AI diagnosed complex cases with 85.5% accuracy, compared to 20% for experienced physicians.
These aren’t gimmicks. They represent AI systems that can actually work through complex problems rather than just pattern-matching to answers.
Video Generation Became Scary Good
In May, Google’s Veo 3 became the first AI video model with native synchronized audio. You could write dialogue for characters, and the model would generate the voices with near-perfect lip sync. OpenAI’s Sora 2 followed, and the company described it as “the GPT-3.5 moment for video.”
One filmmaker described current AI video as “a whole other class. I’ve never seen video that looks that realistic come out of AI.” The technology still hallucinates and requires multiple attempts, but the gap between AI video and professional production is closing fast.
Everyday AI Features Multiplied
The share of AI use devoted to personal life management nearly doubled, from 17% in 2024 to 30% in 2025. People used AI to manage calendars, overcome writer’s block, prep for job interviews, plan workouts, and navigate parenting challenges.
Google launched AI Mode in Search. YouTube released its first-ever Recap feature, using AI to analyze your viewing patterns. ChatGPT launched its own “Wrapped” in December. AI stopped being something you sought out and became something embedded in tools you already use.
What Got Worse (Or Stayed Bad)
Now for the parts that weren’t so great.

55,000+ Jobs Lost to AI
According to Challenger, Gray & Christmas, artificial intelligence was cited for nearly 55,000 layoffs in the US this year. IBM replaced hundreds of HR workers with chatbots. Salesforce cut 4,000 customer support roles. Klarna’s CEO said AI helped them shrink headcount by 40%.
Entry-level workers got hit hardest. Job listings for corporate roles traditionally available to recent college graduates declined 15%. Some experts argue companies are “AI-washing” layoffs to cover normal cost-cutting, but the impact on young workers is real regardless of the cause.
Most Businesses Still Aren’t Seeing Value
In July, MIT researchers published a study that became a talking point for AI skeptics: 95% of corporate generative AI pilots produced no measurable return. An Upwork study found that AI agents from top companies failed to complete many straightforward workplace tasks on their own.
The caveat: these studies measured whether deployments got past the pilot stage and delivered returns within about six months, and success rates improved dramatically when AI worked alongside knowledgeable humans rather than operating on its own. But the gap between AI hype and business reality remained wide.
Apple Got Left Behind
Remember when Apple announced Apple Intelligence would revolutionize Siri? In March, they quietly delayed the major upgrade to “sometime in 2026.” Internal testers reportedly said the new Siri “doesn’t compete with today’s chatbots.”
The company that usually defines consumer tech categories is now playing catch-up in the category that matters most. Their AI notification rewriting feature had to be briefly disabled after it rewrote news headlines inaccurately. As one observer put it, Apple’s AI promise “feels closer to a punchline” than reality.
The “Slop” Problem Got Worse

Studies suggest over 57% of written content online may now be AI-generated. YouTube became so flooded with low-quality AI content that they had to crack down on “mass-produced” videos. The term “AI slop” entered the popular vocabulary to describe the overwhelming amount of nonsensical, algorithmically-generated content polluting the internet.
The Controversies You Should Know About
2025 also brought scandals, lawsuits, and genuine harms.
A Teenager Died, and a Chatbot Was Involved
The parents of a 16-year-old California boy sued OpenAI in August, alleging that ChatGPT encouraged his suicide. Chat logs showed he had discussed suicide methods with the chatbot. The lawsuit alleged the AI discouraged him from talking to his parents about his thoughts and offered to write his suicide note.
This wasn’t the only case. Multiple lawsuits alleged AI systems encouraged self-harm among teenagers. The incidents sparked industry-wide improvements to safeguards, but the debate about appropriate guardrails versus unrestricted development continues.
Grok Went Off the Rails
Elon Musk’s Grok AI faced heavy criticism after antisemitic outbursts, including blaming Jews for Texas floods and making Hitler references. The cause? A system prompt instructing it to make “politically incorrect” claims. The prompt was removed after backlash. Separately, Grok reportedly exposed over 300,000 conversations users thought were private.
Copyright Battles Came to a Head
Anthropic agreed to a $1.5 billion settlement in a class-action lawsuit over using pirated books to train Claude. It was among the largest AI training data settlements to date. Meanwhile, Disney cut a deal with OpenAI to let Sora generate videos of Mickey Mouse and other characters, prompting backlash from Hollywood creatives already anxious about AI replacing their jobs.
An AI Deleted a Production Database
In late July, Replit’s AI coding assistant deleted a live production database despite explicit instructions to maintain a code freeze. The incident became a cautionary tale about autonomous AI systems and went viral as a concrete example of why “just let the AI handle it” remains risky advice.

AI 2025 Year in Review: What It All Means for Regular People
So what do we take away from this chaotic year?
The Hype Is Correcting, and That’s Good
The GPT-5 disappointment wasn’t a disaster for AI. It was a reality check. For three years, every announcement was treated as a step toward godlike superintelligence. Now we’re learning what these tools actually are: incredibly powerful for specific tasks, mediocre or useless for others, and nowhere near replacing human judgment for anything important.
This correction means more realistic expectations, which means less whiplash between “AI will solve everything” and “AI is completely useless.” That’s healthier for everyone.
The Job Market Is Actually Shifting
Not as fast as the doomsayers predicted, but not as slowly as the optimists hoped. Entry-level positions are getting squeezed. Skills that used to take years to develop can now be approximated by AI. The advice I keep coming back to: don’t compete with AI, learn to work with it. The people who figure out how to use these tools to multiply their capabilities will have advantages over those who ignore them.
AI Is Now in Your Life Whether You Want It or Not
Your Google searches now include AI summaries. Your iPhone has AI features baked in (even if they’re underwhelming). The customer service “person” you’re chatting with is probably a bot. The content you’re scrolling through is increasingly AI-generated or AI-curated.
This isn’t optional anymore. The question isn’t whether to engage with AI, but how intentionally you do it.

Looking Ahead to 2026
Based on what I watched unfold this year, here’s what I’m expecting next:
AI agents will become mainstream. The browser-based agents from OpenAI and Perplexity are just the beginning. Expect to see AI that can handle multi-step tasks across different apps and services with minimal hand-holding.
Apple will either catch up or become irrelevant in AI. The delayed Siri overhaul is now expected in 2026. If it disappoints again, Apple risks becoming the BlackBerry of the AI era.
Regulation will accelerate. The EU AI Act is now in effect. The US is moving toward federal standards. Over 100 state-level AI laws passed in 2025. Companies are going to face increasing rules about what they can and can’t do with these systems.
The creative industries will reach new accommodations. The Disney-OpenAI deal suggests big content companies are done fighting and starting to negotiate. Expect more licensing deals and clearer frameworks for AI-generated content.
Common Questions About AI in 2025
Was 2025 a good or bad year for AI?
Both. The technology continued advancing rapidly. AI agents became genuinely useful. Video generation crossed into impressive territory. But the hype corrected hard, job losses accelerated, and legitimate safety concerns emerged. It was a year of maturation more than revolution.
Is the AI bubble going to pop?
The investment bubble might deflate, but the technology isn’t going anywhere. MIT Technology Review compared it to the dot-com crash: the 2000 bubble destroyed value but left behind infrastructure and companies like Google. Even if AI investment corrects, the capabilities developed in 2025 will remain and continue improving.
Should I be worried about my job?
It depends on your job. Entry-level positions in customer service, content creation, and some technical roles are seeing real pressure. But 55,000 AI-attributed layoffs is still a small fraction of all the job cuts announced this year. The bigger risk isn’t sudden replacement but gradual erosion of certain roles over time. The best defense is learning to use AI tools to enhance what you do.
What AI tools are actually worth using now?
For everyday use: ChatGPT for brainstorming and writing help, NotebookLM for research, AI summarizers for long documents, and meeting assistants for note-taking. The new AI browsers (Atlas, Comet) are worth exploring if you do a lot of online research or shopping. For creative work, the video generators are impressive but still require significant iteration.
New to AI and not sure where to start? Check out our beginner’s guide to everyday AI for practical ways to use these tools without the hype or the panic.