
ℹ️ Quick Answer: Claude Code ultrareview is a new research-preview command that spins up a fleet of bug-hunting agents in Anthropic’s cloud sandbox to review your branch or pull request before you merge. Each finding gets independently reproduced, so you get a short list of real bugs instead of a wall of style suggestions. Pro and Max users get 3 free runs through May 5, 2026.
📋 WHAT’S INSIDE
- What Claude Code ultrareview Actually Does
- How It’s Different From Copilot and the Regular /review
- Why Claude Code ultrareview Caught My Attention
- When You Should Actually Use It
- What the 3 Free Runs Actually Cover
I already have a code-review stack. It’s mostly a handful of Claude Code plugins I run before every push. I still read every block Claude Code writes for me before I merge it. So when Anthropic quietly dropped a new slash command whose entire job is hunting for bugs before you merge, I paid attention.
Not because I need another review layer, but because the idea of a tool whose one purpose is deep, independently-verified bug review is genuinely different from anything else I’m running today.
What Claude Code ultrareview Actually Does
Claude Code ultrareview sends your branch or pull request to a fleet of reviewer agents running in Anthropic’s remote cloud sandbox. The agents explore your diff in parallel, independently reproduce each finding, and deliver a short list of confirmed bugs 10 to 20 minutes later.
You type /ultrareview inside Claude Code (version 2.1.86 or newer), confirm the scope in a dialog, and the work moves off your machine. No local bundling. If you point it at a GitHub PR, it pulls the PR directly. Findings come back as CLI notifications with file locations and fix context, and they also show up in Claude Desktop if you’re signed in. The official Anthropic documentation went live this week.
The part that makes it different from a normal review pass is the verification step. Every agent is expected to reproduce its own finding before it lands in your inbox. Style nitpicks get filtered out. What’s left is supposed to be actual bugs.
How It’s Different From Copilot and the Regular /review

GitHub Copilot’s PR review is fast, inline, and focused on suggestions. Claude Code’s own local /review runs on your machine in a single pass. /ultrareview is the cloud-scale, multi-agent version that only reports bugs it could reproduce.
| Tool | What You Get |
|---|---|
| GitHub Copilot PR Review | Inline suggestions across the diff, fast, minimal workflow friction |
| Claude Code /review (local) | Single-pass review on your machine, uses your current session context |
| Claude Code /ultrareview (cloud) | Parallel agents, independent bug verification, 10 to 20 minute runtime |
| CodeRabbit | Automated PR summaries and inline review comments on every push |
DataCamp’s head-to-head between Claude Code and Copilot put it cleanly. Copilot is the day-to-day driver. Claude earns its keep on big, cross-file refactors. Ultrareview leans all the way into that second lane, running its agents in parallel and filtering out false positives before you ever see them.
Why Claude Code ultrareview Caught My Attention
I already have a layered review stack. What’s missing is a reviewer that behaves like a senior engineer staring at a scary diff, not an autocompleter pattern-matching style rules.
Here’s where I landed after a day of looking at it. I build iOS apps and I run SwiftLint plus Xcode’s analyzer on every commit. I also have a couple of Claude Code plugins that review code as I go, and I switched to Claude Code from Cursor partly because that review workflow felt tighter. Like I mentioned earlier, I still read every bit of code that Claude Code writes before it ships. That setup catches most of what matters.
What my process doesn’t catch is the “this works but it silently corrupts a migration three months from now” class of bug. That’s the kind of issue you need a second brain to find, one that isn’t racing to finish the task.
A tool whose one job is to spend 15 minutes trying to break your diff before you merge it feels like that second brain. If ultrareview delivers on the “confirmed bugs only” promise, it could genuinely change how I sign off on AI-generated code. I’ll still review every line, but this feature gives me more confidence in what ships. Anthropic Labs is shipping fast right now, and this lands a few days after Claude Design went live.
When You Should Actually Use It
Anthropic recommends reaching for ultrareview on critical changes like auth flows, data migrations, and refactors. Don’t burn runs on typo fixes or dependency bumps.
I’d extend that list in practice to:
- Anything touching authentication, session tokens, or permissions
- Any change to a database schema or migration script
- Any refactor that crosses module boundaries
- Any PR where a regression would cost you more than a weekend to fix
Running it on small, obvious diffs wastes a free run and clutters your inbox with “no issues found” notifications. Save it for the merges that would keep you up at night if something slipped through. To see it in action, view the video below.
What the 3 Free Runs Actually Cover
Pro and Max subscribers get 3 free ultrareview runs through May 5, 2026. After that, usage rolls into your existing Claude Code plan limits.
Pro plans start at $20 a month. Max 5x at $100 a month runs five times the Pro usage. Max 20x at $200 a month gets you around 800 prompts. Each ultrareview run is more expensive than a typical prompt because it spins up multiple agents, so the post-trial cost structure will matter more than the free runs do. Anthropic hasn’t published exact post-trial pricing yet.
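To make the tier math concrete, here is a quick back-of-envelope sketch using only the figures quoted above. The usage multiples and prices come from this article; Anthropic hasn’t published exact post-trial ultrareview pricing, so treat the “usage units per dollar” figure as a rough comparison, not real billing math.

```python
# Rough comparison of the Claude plan tiers mentioned above.
# "usage_multiple" is relative to the Pro plan's usage allowance.
plans = {
    "Pro":     {"price": 20,  "usage_multiple": 1},
    "Max 5x":  {"price": 100, "usage_multiple": 5},
    "Max 20x": {"price": 200, "usage_multiple": 20},
}

for name, p in plans.items():
    per_dollar = p["usage_multiple"] / p["price"]
    print(f"{name}: {p['usage_multiple']}x usage for ${p['price']}/mo "
          f"= {per_dollar:.2f} usage units per dollar")
```

The takeaway: Pro and Max 5x buy usage at the same rate per dollar, while Max 20x doubles it, which matters if multi-agent ultrareview runs end up counting against those limits.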
If you’re on the free tier, ultrareview isn’t available. If you’re still weighing which coding assistant belongs in your 2026 AI assistant stack, this is another point in Claude Code’s column, at least for the pre-merge step.
✅ Worth Using For Critical Merges: If you’re on Pro or Max, spend your 3 free runs on changes you actually care about. Auth, migrations, or hairy refactors. You’ll know in 20 minutes whether Claude Code ultrareview finds something your other reviewers missed.
FAQ

Do I need to be on Pro or Max to use /ultrareview?
Yes. It’s a Pro and Max feature during the research preview. Free and API-only users don’t have access yet. Each Pro or Max account gets 3 free runs through May 5, 2026.
Does /ultrareview run locally or in the cloud?
Cloud. Your branch or PR is pushed to a remote Anthropic sandbox where the reviewer agents run in parallel. Your local Claude Code session stays usable while the review works in the background.
How long does a typical ultrareview take?
Ten to twenty minutes, depending on the size of your diff. Findings drop into your Claude Code CLI and into Claude Desktop automatically when they’re ready.
Can I use ultrareview on any repo or only GitHub?
Both. You can run it on a local git branch or point it directly at a GitHub pull request. The GitHub PR mode skips local bundling entirely.
Anthropic keeps shipping Claude Code features that narrow the gap between “AI writes the code” and “AI actually ships working code.” ultrareview is the first tool I’ve seen that treats pre-merge review as its own job instead of a courtesy check.
Related reading: Why the $20 AI Coding Plan Is a Trap | The 2026 AI Assistant Stack | Claude Design Review | New to AI? Start here