Your API endpoint takes 3 seconds to respond. You’ve been staring at the code for an hour. The profiler points at the data access layer, but the queries look fine. You paste the controller and service class into Claude Code and ask “why is this slow?” Ten seconds later, it highlights a lazy-loading loop you completely overlooked.
That moment — when a second pair of eyes catches what yours can’t — is where Claude Code earns its keep for performance work.
The N+1 query problem
This is the classic Entity Framework performance killer. You load a list of orders, then access each order’s customer in a loop. EF Core dutifully fires a separate SQL query for every single customer.
// The problem: N+1 queries
public async Task<List<OrderDto>> GetRecentOrders()
{
    var orders = await _context.Orders
        .Where(o => o.CreatedAt > DateTime.UtcNow.AddDays(-30))
        .ToListAsync();

    return orders.Select(o => new OrderDto
    {
        Id = o.Id,
        CustomerName = o.Customer.Name, // Lazy load: 1 query per order
        Total = o.Total
    }).ToList();
}
With 200 orders, that’s 201 database queries. Claude Code spots this pattern immediately and suggests eager loading:
// The fix: eager loading with .Include()
public async Task<List<OrderDto>> GetRecentOrders()
{
    var orders = await _context.Orders
        .Include(o => o.Customer)
        .Where(o => o.CreatedAt > DateTime.UtcNow.AddDays(-30))
        .Select(o => new OrderDto
        {
            Id = o.Id,
            CustomerName = o.Customer.Name,
            Total = o.Total
        })
        .ToListAsync();

    return orders;
}
One query. The projection into OrderDto happens at the database level. Claude also catches that moving the Select before ToListAsync avoids materializing the full Order entities — a detail I initially missed. (Strictly speaking, once you project with Select, EF Core ignores the Include anyway; the projection alone is enough to produce a single query.)
LINQ that looks clean but isn’t
LINQ’s readability can hide real performance costs. Here are patterns Claude consistently flags.
Premature materialization with .ToList():
// Bad: materializes everything, then filters in memory
var activeUsers = _context.Users.ToList()
    .Where(u => u.IsActive)
    .ToList();

// Good: filter at the database
var activeUsers = await _context.Users
    .Where(u => u.IsActive)
    .ToListAsync();
Multiple enumeration of an IEnumerable:
// Bad: enumerates the query twice
var products = GetFilteredProducts();
var count = products.Count(); // First enumeration
var first = products.First(); // Second enumeration
// Good: materialize once
var products = GetFilteredProducts().ToList();
var count = products.Count;
var first = products[0];
Sorting before taking one item:
// Wasteful: sorts the entire collection
var oldest = users.OrderBy(u => u.CreatedAt).FirstOrDefault();
// Better: MinBy avoids a full sort (.NET 6+)
var oldest = users.MinBy(u => u.CreatedAt);
These are small on their own. In a hot path called thousands of times per minute, they add up fast.
Async anti-patterns
Blocking on async code is one of the most common performance issues in ASP.NET Core. It looks harmless but it starves the thread pool.
// Dangerous: blocks a thread pool thread
public OrderDto GetOrder(int id)
{
    var order = _orderService.GetOrderAsync(id).Result; // Blocks; deadlock risk under a SynchronizationContext
    return MapToDto(order);
}
Claude flags .Result and .Wait() immediately and suggests making the entire call chain async:
// Correct: async all the way down
public async Task<OrderDto> GetOrderAsync(int id)
{
    var order = await _orderService.GetOrderAsync(id);
    return MapToDto(order);
}
Another pattern Claude catches: wrapping synchronous code in Task.Run inside ASP.NET Core controllers. This doesn’t make the code faster — it just shifts the work to a different thread pool thread, adding overhead for no benefit.
// Pointless in ASP.NET Core — you're already on a thread pool thread
var result = await Task.Run(() => _service.CalculateReport(data));
// Just call it directly
var result = _service.CalculateReport(data);
Memory and allocations
String concatenation in loops is a classic allocation trap. Every += creates a new string object, and the old one becomes garbage.
// Bad: O(n²) character copies — every += allocates a new string
var csv = "";
foreach (var item in items)
{
    csv += $"{item.Name},{item.Value}\n";
}

// Good: StringBuilder grows its buffer, amortizing allocations
var sb = new StringBuilder();
foreach (var item in items)
{
    sb.AppendLine($"{item.Name},{item.Value}");
}
var csv = sb.ToString();
Claude also catches unnecessary boxing — value types being cast to object — and suggests generic alternatives. It flags List<object> where List<int> would avoid boxing entirely.
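A minimal sketch of the boxing cost it flags (the element count is illustrative):

```csharp
using System;
using System.Collections.Generic;

// Each Add boxes the int into a heap-allocated object; reading it back unboxes.
var boxed = new List<object>();
for (int i = 0; i < 1_000; i++)
    boxed.Add(i); // one allocation per element

// Generic version: ints are stored inline in the list's backing array.
var unboxed = new List<int>();
for (int i = 0; i < 1_000; i++)
    unboxed.Add(i); // no per-element allocation
```

Same logical contents, very different allocation profiles — a thousand small heap objects versus one backing array.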
For large collections, Claude sometimes suggests Span<T> or ArrayPool<T> to reduce large object heap pressure. Whether those suggestions are worth the added complexity depends on your context — more on that in the limitations section.
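When pooling is a fit, the suggestion usually takes a shape like this sketch (the buffer size and workload are illustrative assumptions):

```csharp
using System;
using System.Buffers;

// Rent a buffer instead of allocating a new large array per request.
byte[] buffer = ArrayPool<byte>.Shared.Rent(128 * 1024); // may return a larger array than requested
try
{
    // ... fill and process buffer[0..length) ...
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer); // contents are not cleared by default
}
```

The try/finally matters: a rented buffer that's never returned is just a slow allocation with extra steps.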
How to prompt for performance reviews
Generic prompts give generic answers. Scoped prompts give actionable results.
Instead of: “Review this code for performance.”
Try these:
- “Look at this EF Core repository class. Are there any N+1 query patterns or missing eager loading?”
- “This endpoint handles 500 requests per second. Review the allocation patterns in the hot path — the ProcessOrder method and everything it calls.”
- “Check this async code for thread pool starvation. Are there any blocking calls or unnecessary Task.Run usage?”
- “This LINQ pipeline runs on a collection of 50,000 items. Are there any unnecessary materializations or multiple enumerations?”
The more context you give about the scale and the hot path, the more useful Claude’s suggestions become. Telling it “this runs once at startup” versus “this runs on every HTTP request” changes which optimizations actually matter.
Honest limitations
Claude Code is a code reviewer, not a profiler. Here’s what it cannot do:
It can’t measure. Claude can spot patterns that are typically slow, but it can’t tell you whether they’re actually slow in your application. An N+1 query on a table with 5 rows is a non-issue. The same pattern on a table with 50,000 rows is a production incident. You still need BenchmarkDotNet, dotnet-trace, or Application Insights to measure what matters.
It suggests heuristics, not guarantees. “Use StringBuilder instead of string concatenation” is good general advice. But if your loop runs 3 times, the difference is negligible and the simpler code wins.
It can over-optimize. Claude sometimes suggests Span<T>, stackalloc, or object pooling in code that processes 10 items. Those optimizations have real cognitive costs — more complex code that’s harder to maintain. Don’t optimize code that isn’t slow.
It can’t see the runtime. Memory pressure, garbage collection pauses, connection pool exhaustion, thread pool starvation under load — these are runtime behaviors that don’t show up in a code review. Claude can warn you about patterns that cause these issues, but it can’t confirm they’re happening.
Use Claude Code as the first pass. Use your profiler as the final word.
Start here
Pick your slowest API endpoint. Paste the controller action and the service methods it calls into Claude Code. Ask: “This endpoint is slow under load. What performance issues do you see in the data access patterns and the async code?”
Review what Claude finds. Then measure with BenchmarkDotNet or your APM tool to confirm the suggestions actually move the needle. The combination of Claude’s pattern recognition and your profiler’s measurements is more effective than either one alone.
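For the measuring half, a minimal BenchmarkDotNet harness for a before/after check might look like this sketch (requires the BenchmarkDotNet NuGet package; class, method, and data names are illustrative):

```csharp
using System.Linq;
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // report allocations alongside timings
public class CsvBenchmarks
{
    private readonly (string Name, int Value)[] _items =
        Enumerable.Range(0, 1_000).Select(i => ($"item{i}", i)).ToArray();

    [Benchmark(Baseline = true)]
    public string Concatenation()
    {
        var csv = "";
        foreach (var item in _items)
            csv += $"{item.Name},{item.Value}\n";
        return csv;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        foreach (var item in _items)
            sb.AppendLine($"{item.Name},{item.Value}");
        return sb.ToString();
    }
}

// Run with: BenchmarkRunner.Run<CsvBenchmarks>();
```

The [MemoryDiagnoser] attribute is the key detail: for allocation-heavy fixes like the StringBuilder change, the Gen0 and allocated-bytes columns tell you more than the mean time does.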