You’re working on a new API endpoint. Claude Code writes the controller, the service layer, the DTOs. Now you need tests. And a code review. And XML docs for the public methods. That’s three different tasks, each requiring a different mindset.

What if you didn’t have to do them one at a time?


What are sub-agents?

When you give Claude Code a task, it works as a single agent. It reads files, edits code, runs commands — all in one conversation with one set of instructions. That works well for most things.

But Claude Code can also spawn sub-agents: focused child agents that each handle a specific part of the work. You define what each sub-agent does, what it has access to, and what it should return. The parent agent coordinates.

Think of it like a senior developer delegating work to specialists. “You write the tests. You review the code. You update the docs.” Each specialist focuses on one thing and does it well.


The Task tool

Sub-agents are powered by the Task tool in Claude Code. When Claude uses the Task tool, it spawns a new agent with its own context window and a specific prompt you define. The sub-agent does its work and returns the result to the parent.

You don’t need to install anything. The Task tool is built into Claude Code. What makes it powerful is how you instruct it — and that starts with your CLAUDE.md.

Here’s the basic pattern. In your CLAUDE.md, you describe the sub-agents you want Claude to use:

## Sub-agents

When working on .NET code, use these specialized agents via the Task tool:

### Test Writer
Spawn a sub-agent with this prompt:
"You are a .NET test specialist. Given a class, write comprehensive xUnit tests
with FluentAssertions. Cover happy path, edge cases, and error scenarios.
Use Arrange-Act-Assert pattern. Mock dependencies with NSubstitute."

### Code Reviewer
Spawn a sub-agent with this prompt:
"You are a .NET code reviewer. Review the given file for: security issues,
performance problems, adherence to SOLID principles, null safety, async/await
correctness. Return a structured list of findings with severity levels."

### Documentation Agent
Spawn a sub-agent with this prompt:
"You are a .NET documentation specialist. Generate XML documentation comments
for all public types and members. Follow Microsoft's documentation guidelines.
Be concise but complete."

When Claude encounters a situation where one of these agents is useful, it spawns it automatically. You don’t have to ask for each one individually.
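
Newer versions of Claude Code also let you define sub-agents as standalone files under .claude/agents/, each with YAML frontmatter and a system prompt. The exact fields can vary by version, so treat this as a sketch and check the current documentation:

```markdown
---
name: test-writer
description: Writes xUnit tests for .NET classes. Use after new code is added.
---

You are a .NET test specialist. Given a class, write comprehensive xUnit tests
with FluentAssertions. Cover happy path, edge cases, and error scenarios.
Use Arrange-Act-Assert. Mock dependencies with NSubstitute.
```

The CLAUDE.md approach works either way; file-based agents simply keep long prompts out of your main instructions.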


Example 1: The test writer

Let’s say you’ve just built a PaymentService with three public methods. You tell Claude:

“Write tests for PaymentService.”

With the sub-agent configuration above, Claude spawns a test writer agent. That agent receives the PaymentService.cs file, reads the dependencies, and produces a full test class:

public class PaymentServiceTests
{
    private readonly IPaymentGateway _gateway;
    private readonly IOrderRepository _orderRepository;
    private readonly PaymentService _sut;

    public PaymentServiceTests()
    {
        _gateway = Substitute.For<IPaymentGateway>();
        _orderRepository = Substitute.For<IOrderRepository>();
        _sut = new PaymentService(_gateway, _orderRepository);
    }

    [Fact]
    public async Task ProcessPayment_ValidOrder_ReturnsSuccess()
    {
        // Arrange
        var order = new Order { Id = 1, Amount = 99.99m, Status = OrderStatus.Pending };
        _orderRepository.GetByIdAsync(1).Returns(order);
        _gateway.ChargeAsync(Arg.Any<PaymentRequest>()).Returns(new PaymentResult { Success = true });

        // Act
        var result = await _sut.ProcessPaymentAsync(1);

        // Assert
        result.Should().NotBeNull();
        result.Success.Should().BeTrue();
    }

    [Fact]
    public async Task ProcessPayment_OrderNotFound_ThrowsNotFoundException()
    {
        _orderRepository.GetByIdAsync(99).Returns((Order?)null);

        var act = () => _sut.ProcessPaymentAsync(99);

        await act.Should().ThrowAsync<NotFoundException>();
    }
}

The sub-agent focuses exclusively on testing. It doesn’t try to refactor the service or suggest improvements. It writes tests — that’s its job.


Example 2: The code reviewer

Now you want a second pair of eyes on that same PaymentService. Claude spawns the code reviewer agent, which returns structured feedback:

## Code Review: PaymentService.cs

**HIGH** - No cancellation token support on async methods.
ProcessPaymentAsync and RefundAsync should accept CancellationToken.

**MEDIUM** - Floating-point comparison without tolerance in CalculateDiscount.
If the discount math uses double, compare with a small tolerance — or better,
switch to decimal for monetary values.

**LOW** - Consider extracting magic number 3 (max retry count) to a constant.

**OK** - Null checks on constructor parameters ✓
**OK** - Async/await pattern used correctly ✓
**OK** - No obvious SQL injection or security concerns ✓

The reviewer doesn’t fix the code. It identifies issues. You decide what to act on. If you want, you can tell Claude: “Fix the HIGH and MEDIUM issues” — and the parent agent takes it from there.
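
For the HIGH finding, the fix is mechanical. Here is a hedged sketch of what it might look like; since this article never shows PaymentService itself, the dependency calls and overloads below are assumptions:

```csharp
// Sketch only: assumes IOrderRepository and IPaymentGateway expose
// overloads that accept a CancellationToken. Adjust to your interfaces.
public async Task<PaymentResult> ProcessPaymentAsync(
    int orderId,
    CancellationToken cancellationToken = default)
{
    var order = await _orderRepository.GetByIdAsync(orderId, cancellationToken)
        ?? throw new NotFoundException($"Order {orderId} was not found.");

    var request = new PaymentRequest { OrderId = order.Id, Amount = order.Amount };
    return await _gateway.ChargeAsync(request, cancellationToken);
}
```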


Example 3: The documentation agent

The documentation agent handles the tedious-but-necessary work of XML docs:

/// <summary>
/// Processes a payment for the specified order.
/// </summary>
/// <param name="orderId">The unique identifier of the order to process payment for.</param>
/// <returns>A <see cref="PaymentResult"/> indicating the outcome of the payment.</returns>
/// <exception cref="NotFoundException">Thrown when no order exists with the specified ID.</exception>
/// <exception cref="PaymentException">Thrown when the payment gateway rejects the transaction.</exception>
public async Task<PaymentResult> ProcessPaymentAsync(int orderId)

It reads the implementation, understands the exceptions, and writes docs that actually match the code. Not boilerplate — real documentation.


Running agents in parallel

Here’s where it gets interesting. You can tell Claude:

“For every service in the Services folder: run the test writer and code reviewer in parallel.”

Claude spawns multiple sub-agents simultaneously. The test writer works on PaymentService while the reviewer works on OrderService; then each moves on to the next service. The parent agent collects all results and presents them to you.

For a solution with ten services, this can save you serious time. Instead of writing tests, reviewing, and documenting one step at a time, it all happens at once.
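
One way to make this behavior the default is a short note in your CLAUDE.md; the wording below is illustrative, not a required syntax:

```markdown
### Batch mode
When asked to process multiple services, use the Task tool to spawn the
Test Writer and Code Reviewer sub-agents in parallel, one pair per service.
Collect all results and present a single combined summary.
```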


When sub-agents make sense

Sub-agents shine when:

  • The task is clearly separable. Testing and reviewing are independent activities. They don’t need each other’s output.
  • You have consistent patterns. If every service follows the same structure, one prompt covers them all.
  • The work is repetitive. Writing tests for 15 services manually is tedious. Delegating it to a sub-agent is efficient.

Sub-agents are overkill when:

  • The task is small. If you’re writing tests for one method, just ask Claude directly. Spawning a sub-agent adds overhead.
  • The tasks are deeply connected. If the test depends on the review findings, running them in parallel doesn’t help.
  • You’re exploring. When you don’t know what you want yet, a single conversation with Claude is more flexible than a structured agent pipeline.


Limitations to know about

Sub-agents have isolated context. The test writer doesn’t know what the reviewer found. The documentation agent doesn’t see the test output. Each sub-agent starts fresh with only the files and prompt you give it.

There’s no shared state between agents. If the reviewer identifies a bug and the test writer needs to account for it, you have to coordinate that manually through the parent agent.

Sub-agents also consume tokens. Each one gets its own context window. Running five agents in parallel on a large codebase uses significantly more tokens than a single agent doing everything sequentially. Watch your usage.

And finally: sub-agents are only as good as their prompts. A vague prompt like “review this code” gives vague results. The more specific you are about frameworks, patterns, and expectations, the better the output.


Start with one agent

Don’t build a fleet of sub-agents on day one. Start with one. The test writer is a great first choice — it’s the most mechanical task, and the output is immediately verifiable with dotnet test.

Add it to your CLAUDE.md. Try it on a few services. Tune the prompt until the tests match your team’s style. Then add the reviewer. Then the documentation agent.
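
Tuning mostly means folding your team's conventions into the prompt. An illustrative refinement of the test writer prompt (the specifics here are assumptions; swap in your own):

```markdown
### Test Writer (tuned)
"You are a .NET test specialist. Write xUnit tests with FluentAssertions and
NSubstitute. Name tests MethodName_Scenario_ExpectedResult. Mark Arrange, Act,
and Assert sections with comments. Target .NET 8. Do not test private members;
exercise behavior through the public API only."
```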

The goal isn’t to replace your judgment. It’s to multiply your capacity — by having specialists handle the work that follows a pattern, so you can focus on the work that doesn’t.