ℹ️ Quick Answer: Cursor claimed GPT-5.2 agents built a web browser from scratch in one week. Developers found the code won’t compile, with 32+ build errors and every CI run failing. The “from scratch” claim also falls apart since the project relies on existing Mozilla Servo libraries.
📋 WHAT’S INSIDE
- What Went Wrong With Cursor’s AI Browser
- The ‘From Scratch’ Claim Doesn’t Hold Up Either
- What Cursor Actually Said (And Didn’t Say)
- Why This Matters
Cursor’s AI browser project doesn’t work. The company announced that GPT-5.2 agents wrote over 1 million lines of code for a web browser in just one week. Developers who tried to build the code found it won’t compile. The GitHub repository has 32+ build errors, and none of the last 100 commits compiles cleanly.
Last week, Cursor CEO Michael Truell announced something that sounded incredible: hundreds of AI agents had built a web browser from scratch in seven days. The project, called FastRender, supposedly included an HTML parser, CSS engine, layout system, and custom JavaScript VM.
Then developers actually looked at the code.
What Went Wrong With Cursor’s AI Browser
The FastRender repository has 32+ build errors including duplicate struct definitions and missing field references, and every GitHub Actions CI run on the main branch has failed.
A detailed analysis by a developer who dug into the repository confirmed the obvious: the code doesn’t compile. Running the standard build command produces dozens of errors and nearly 100 warnings.

The errors aren’t subtle edge cases. They’re basic problems: duplicate struct definitions, conflicting trait implementations, and references to fields that don’t exist. These are the kinds of errors that fail on the very first build attempt.
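To make those categories concrete, here is a minimal Rust sketch. The names are invented for illustration, not taken from FastRender; the commented-out lines show which compiler diagnostics each error class produces.

```rust
// Miniature illustration of the reported error classes. `Node` and its
// fields are hypothetical, not from the FastRender codebase.

struct Node {
    tag: String,
}

// Uncommenting a second definition of the same name triggers
// error[E0428]: the name `Node` is defined multiple times
// struct Node { id: u64 }

impl Default for Node {
    fn default() -> Self {
        Node { tag: String::new() }
    }
}

// A second `impl Default for Node` would trigger
// error[E0119]: conflicting implementations of trait `Default` for type `Node`

fn main() {
    let n = Node { tag: String::from("div") };
    println!("{}", n.tag);

    // Referencing a field that doesn't exist triggers
    // error[E0609]: no field `parent` on type `Node`
    // println!("{}", n.parent);
}
```

Any one of these stops compilation cold, which is why a repository full of them cannot produce a binary no matter how many lines of code it contains.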
Even more telling, every GitHub Actions run on the main branch has failed. The CI pipeline shows red across the board.
The ‘From Scratch’ Claim Doesn’t Hold Up Either
FastRender’s dependency list includes html5ever and cssparser, both major components from Mozilla’s Servo browser engine, plus rquickjs, bindings to the existing QuickJS JavaScript engine, undermining Cursor’s “built from scratch” marketing.
Cursor claimed the browser was built “from scratch” with a “custom JS VM.” Hacker News commenters quickly pointed out that the dependency list tells a different story.
The project uses html5ever (Servo’s HTML parser), cssparser (Servo’s CSS parser), and rquickjs (Rust bindings to the QuickJS JavaScript engine). The first two are core components of Mozilla’s Servo browser engine, and the third wraps an existing JS engine rather than implementing a custom VM. Calling this “from scratch” is a stretch.
As one commenter put it, “The first thing a programmer does is check the dependencies, and they can immediately tell it’s just using existing packages.”
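That check really does take seconds. A dependency section along these lines is what commenters would have found in the manifest (version numbers here are illustrative; only the crate names come from the analysis):

```toml
# Hypothetical sketch of the relevant Cargo.toml section.
# Versions are invented; crate names are the ones reported.
[dependencies]
html5ever = "0.27"   # Servo's HTML parser
cssparser = "0.31"   # Servo's CSS parser
rquickjs  = "0.6"    # Rust bindings to the QuickJS JS engine
```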
What Cursor Actually Said (And Didn’t Say)
Cursor’s blog post used phrases like “meaningful progress” and showed an 8-second demo video, but never claimed the browser compiles, runs, or includes build instructions.
This is the tricky part. Cursor’s blog post never explicitly claimed the browser works. They said agents “made meaningful progress” and that “hundreds of workers run concurrently.” They showed an 8-second video that looks like a screenshot.
But they never said, “Here’s how to run it” or “This commit compiles” or “Here’s what to expect when you try it.”
The marketing created the impression of a working prototype without actually claiming one exists. Technically not lying, but pretty misleading.
⚠️ The GitHub issue about build failures remains open with no response from Cursor. The repository has no releases, no tags, and no stable branch.
Why This Matters
The FastRender project illustrates the growing gap between AI coding tool marketing and real-world output. GPT-5.2 agents generated 1 million lines of Rust that look like a browser but cannot compile into one.

I’ve written before about how AI companies market their products. This is a textbook example of the hype problem.
AI coding tools are useful. I use them daily. But the gap between “AI can help you code faster” and “AI built a browser in a week” is enormous. When companies blur that line, it sets unrealistic expectations and makes it harder to have honest conversations about what AI can actually do.
The experiment proved something, just not what Cursor intended. It proved AI agents can generate millions of tokens of code that looks like a browser but doesn’t actually work.
That’s not nothing. But it’s not a browser either.
If you’re interested in what AI coding tools can realistically do for you, check out my Start Here guide for practical use cases that actually work. You can also browse more AI news coverage on the blog.