You know the feeling. It’s Friday afternoon, you open GitHub, and there are three PRs waiting for review. One spans 40 files. Another touches the registration flow you haven’t looked at in months. The third is “just a small refactor” — 800 lines changed.

Code review is essential. It catches bugs, maintains consistency, and spreads knowledge across the team. But it’s also slow, mentally draining, and the first thing that slips when deadlines are tight.

What if Claude Code could do the first pass?


Review locally before you push

The simplest way to start is before you even create a PR. You’ve made your changes, everything compiles, tests pass. Before you push, ask Claude Code:

Review my staged changes. Focus on potential bugs, missing validation, and anything that deviates from our coding standards.

Claude Code reads the diff, follows references into the rest of your codebase, and gives you feedback. Not generic advice — feedback based on your actual code, your actual patterns.

I’ve caught embarrassing mistakes this way. A FirstOrDefault() without a null check two lines later. A new endpoint missing the [Authorize] attribute that every other endpoint in the controller has. Things you stop seeing after staring at your own code for hours.
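The missing-attribute case looks something like this — a contrived sketch, with controller and endpoint names invented for illustration:

```csharp
[ApiController]
[Route("api/reports")]
public class ReportsController : ControllerBase
{
    [HttpGet]
    [Authorize(Policy = "Admin")]
    public IActionResult GetReports() => Ok();

    // Claude flags this: every other endpoint in the controller
    // is protected, but this one has no [Authorize] attribute.
    [HttpPost("export")]
    public IActionResult ExportReports() => Ok();
}
```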


Reviewing a PR with the gh CLI

When someone else’s PR needs reviewing, Claude Code can help there too. You can feed it the PR diff directly:

gh pr diff 42 | claude -p "Review this .NET PR. Focus on bugs, missing error handling, and deviations from our patterns."

This pipes the full diff into Claude Code as a prompt. Claude reads every changed line, understands the context from file names and surrounding code, and gives you a structured review.

For a deeper review where Claude needs to see the full codebase — not just the diff — check out the PR locally first:

gh pr checkout 42
claude

Then ask Claude Code to review the changes against the main branch. This way it can follow references, check if new code matches existing patterns, and spot inconsistencies you’d miss by reading the diff alone.


What Claude Code catches well

After months of using Claude Code for reviews on .NET projects, I’ve noticed clear strengths.

Null reference risks. Claude is remarkably good at tracing nullable values through your code. It catches FirstOrDefault() calls where the result is used without a null check, nullable reference types that aren’t handled, and string.IsNullOrEmpty() checks that should be IsNullOrWhiteSpace().

// Claude flags this immediately
var user = await _context.Users.FirstOrDefaultAsync(u => u.Email == email);
var displayName = user.FirstName + " " + user.LastName; // potential NullReferenceException

Missing validation. If your other endpoints validate request models with FluentValidation, Claude notices when a new endpoint skips it. Same for missing [Required] attributes, unchecked enum values, or request models without any validation at all.
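If your project uses FluentValidation, the convention Claude checks for is a validator class per request model — something like this sketch (model and rules are invented):

```csharp
// Hypothetical request model and its FluentValidation validator.
public class CreateOrderRequest
{
    public string Email { get; set; } = string.Empty;
    public int Quantity { get; set; }
}

public class CreateOrderRequestValidator : AbstractValidator<CreateOrderRequest>
{
    public CreateOrderRequestValidator()
    {
        RuleFor(x => x.Email).NotEmpty().EmailAddress();
        RuleFor(x => x.Quantity).GreaterThan(0);
    }
}
```

A new request model that ships without a matching validator is exactly the kind of omission that stands out against this pattern.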

Inconsistent patterns. This is where Claude really shines. It spots when one controller returns NotFound() while all others return Problem(). It notices when a new service doesn’t follow the same DI registration pattern. It catches when someone uses DateTime.Now while the rest of the project uses IDateTimeProvider.
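The DateTime.Now case, for example, looks like this — an illustrative sketch assuming an injected clock abstraction:

```csharp
// Hypothetical abstraction; the interface name follows the text above.
public interface IDateTimeProvider
{
    DateTime UtcNow { get; }
}

public class OrderService
{
    private readonly IDateTimeProvider _clock;
    public OrderService(IDateTimeProvider clock) => _clock = clock;

    // Consistent with the project pattern: testable, injected time
    public bool IsExpired(DateTime deadline) => _clock.UtcNow > deadline;

    // Inconsistent — Claude flags a direct DateTime.Now here:
    // public bool IsExpired(DateTime deadline) => DateTime.Now > deadline;
}
```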

Dependency injection issues. Scoped services injected into singletons, missing registrations, circular dependencies — Claude reads your Program.cs and catches mismatches.
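A typical lifetime mismatch it flags, sketched as a Program.cs fragment (service names are invented):

```csharp
// A scoped service...
builder.Services.AddScoped<ITenantContext, TenantContext>();

// ...consumed by a singleton. Claude flags this: with scope
// validation enabled the container throws at startup; without
// it, the singleton silently captures a single stale scope.
builder.Services.AddSingleton<CacheWarmer>();
```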


What it misses

Let’s be honest about the gaps.

Business logic correctness. Claude can tell you the code compiles and handles nulls. It can’t tell you whether the discount calculation is actually correct for your business rules. It doesn’t know that customers in Belgium have different VAT rules than customers in the Netherlands.

Architectural context it doesn’t have. If your team decided last month that all new services should use the mediator pattern, Claude doesn’t know that — unless you’ve documented it. It reviews what it sees, not what you’ve discussed in meetings.

Performance at scale. Claude won’t flag that your new LINQ query does a full table scan on a table with 50 million rows. It doesn’t know your data volumes. It catches obvious N+1 queries, but subtle performance issues require human judgment and knowledge of your production environment.
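For contrast, here is the kind of obvious N+1 it does reliably catch — an EF Core sketch with hypothetical entities:

```csharp
// One query per order — flagged in review
var orders = await _context.Orders.ToListAsync();
foreach (var order in orders)
{
    var customer = await _context.Customers.FindAsync(order.CustomerId);
}

// Suggested fix: load the related data in a single query
var ordersWithCustomers = await _context.Orders
    .Include(o => o.Customer)
    .ToListAsync();
```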

Cross-system impact. If your API change breaks a downstream consumer that lives in a different repository, Claude has no way of knowing.


Automated review with /install-github-app

If you want Claude Code to review every PR automatically, you can install the GitHub app:

/install-github-app

This sets up Claude Code as an automated reviewer on your repository. Every new PR gets a review comment with findings — before any human even looks at it.

The automated reviews follow the same patterns: null checks, validation gaps, inconsistencies. The difference is that it happens without anyone needing to remember to ask.


CLAUDE.md as your team’s style guide

Here’s where it gets really powerful. Claude Code reads your project’s CLAUDE.md file before every review. That means you can teach it your team’s conventions.

## Code Review Standards

- All controllers must use the `[Authorize]` attribute unless explicitly public
- Use FluentValidation for all request models, no data annotations
- Return ProblemDetails for all error responses (RFC 7807)
- All async methods must accept CancellationToken
- Use IDateTimeProvider instead of DateTime.Now/UtcNow
- Repository methods return domain models, never EF entities

Now every review — manual or automated — checks against your team’s specific rules. Not generic best practices. Your rules.

This is the multiplier. Without CLAUDE.md, Claude gives you a solid generic review. With it, Claude gives you a review that sounds like your most experienced team member who has memorized the style guide.


Example: a real review session

Here’s what a typical review looks like. I asked Claude Code to review a PR that adds a new endpoint for exporting user data:

Review the changes in this PR. Check for bugs, missing validation,
and deviations from our existing patterns in this project.

Claude came back with five findings:

  1. Missing null check — GetUserById returns null when the user doesn’t exist, but the export method doesn’t check for it
  2. No authorization — The endpoint is missing [Authorize(Policy = "Admin")], while all other admin endpoints have it
  3. Inconsistent error response — Returns BadRequest("User not found") instead of NotFound() with ProblemDetails
  4. Missing CancellationToken — The async method doesn’t accept or forward a CancellationToken, unlike every other endpoint in the controller
  5. No input validation — The ExportUserRequest model has no FluentValidation validator, while all other request models do
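To make the findings concrete, a corrected version of such an endpoint might look like this. This is entirely illustrative — the actual PR code isn’t shown in this post, and all names are invented:

```csharp
[HttpGet("{id:guid}/export")]
[Authorize(Policy = "Admin")]                        // finding 2
public async Task<IActionResult> ExportUser(
    Guid id,
    CancellationToken cancellationToken)             // finding 4
{
    var user = await _userService.GetUserById(id, cancellationToken);
    if (user is null)
        return NotFound();                           // findings 1 and 3

    var export = await _exportService.ExportAsync(user, cancellationToken);
    return Ok(export);
}
```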

Four out of five were legitimate issues I would have flagged in my own review. The fourth (CancellationToken) was technically correct but low priority. Zero false positives, zero hallucinated problems.

That’s a good first pass.


Tips for useful reviews

Not all review prompts are equal. Here’s what I’ve learned:

Be specific about what to look for. “Review this code” gives you generic feedback. “Check for missing validation, null reference risks, and deviations from our existing patterns” gives you actionable findings.

Ask Claude to check against existing code. “Compare this new controller with OrdersController.cs and flag any inconsistencies” forces Claude to actually look at your conventions instead of assuming generic ones.

Request a severity level. Ask Claude to categorize findings as “must fix,” “should fix,” and “consider.” This helps you triage quickly.

Don’t ask it to approve. Claude will always find something. That’s the point. Use it as a bug finder, not as a gatekeeper.


Start with your next PR

You don’t need to set up automation to benefit from this. Next time you have a PR to review — yours or someone else’s — try this:

gh pr checkout 42
claude

“Review the changes in this PR compared to main. Focus on bugs, missing error handling, and anything that doesn’t match our existing patterns.”

You’ll find that the tedious part of code review — reading every line, checking for obvious mistakes, verifying consistency — takes seconds instead of minutes. What remains is the part that actually needs your brain: evaluating architecture, business logic, and trade-offs.

AI review doesn’t replace human review. It makes human review worth doing.