You know that service. The one with 400 lines of business logic, zero tests, and a backlog item that says “add unit tests” that’s been sitting there since Q3. Nobody picks it up because writing tests after the fact is tedious. You already know what the code does — why spend half a day proving it?

But then someone changes a method signature and the whole order flow breaks in production. And you think: those tests would have caught this.

Claude Code can write them for you. Not perfect tests. But a solid starting point that would have taken you hours to write manually.


Step 1: Let Claude Code analyze what needs testing

Start Claude Code in your project and point it at the class:

“Analyze OrderService.cs and tell me what test cases I need. Don’t write code yet — just list the cases.”

Claude Code reads the file, follows the dependencies, and comes back with something like:

  • PlaceOrder — happy path with valid order
  • PlaceOrder — throws when customer is null
  • PlaceOrder — throws when order items are empty
  • PlaceOrder — applies discount when total exceeds threshold
  • PlaceOrder — calls repository exactly once
  • CalculateTotal — sums items correctly
  • CalculateTotal — handles zero-quantity items

This is the step most people skip. They jump straight to “write tests.” But having a test plan first means you can judge whether the cases make sense before any code is generated.


Step 2: Generate the first test class

Now tell Claude Code to write the tests:

“Create an xUnit test class for OrderService. Use FluentAssertions for assertions and NSubstitute for mocking. Start with the happy path and the validation cases.”

Claude Code generates something like this:

using FluentAssertions;
using NSubstitute;
using Xunit;

public class OrderServiceTests
{
    private readonly IOrderRepository _repository;
    private readonly IDiscountCalculator _discountCalculator;
    private readonly OrderService _sut;

    public OrderServiceTests()
    {
        _repository = Substitute.For<IOrderRepository>();
        _discountCalculator = Substitute.For<IDiscountCalculator>();
        _sut = new OrderService(_repository, _discountCalculator);
    }

    [Fact]
    public async Task PlaceOrder_WithValidOrder_SavesOrderToRepository()
    {
        // Arrange
        var customer = new Customer { Id = 1, Name = "Acme Corp" };
        var items = new List<OrderItem>
        {
            new() { ProductId = 10, Quantity = 2, UnitPrice = 25.00m }
        };

        // Act
        await _sut.PlaceOrder(customer, items);

        // Assert
        await _repository.Received(1).Save(Arg.Is<Order>(o =>
            o.CustomerId == 1 && o.Items.Count == 1));
    }

    [Fact]
    public async Task PlaceOrder_WithNullCustomer_ThrowsArgumentException()
    {
        // Arrange
        var items = new List<OrderItem> { new() { ProductId = 1, Quantity = 1, UnitPrice = 10m } };

        // Act
        var act = () => _sut.PlaceOrder(null!, items);

        // Assert
        await act.Should().ThrowAsync<ArgumentException>()
            .WithMessage("*customer*");
    }
}

Notice the Arrange-Act-Assert structure, the _sut naming convention, and the readable FluentAssertions syntax. Claude Code picks up on your project’s patterns if you have a CLAUDE.md that describes your testing conventions.
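If you don't have a CLAUDE.md yet, a short testing section is enough. The exact wording is up to you — treat this as an illustrative sketch, not a required format:

```markdown
## Testing conventions

- Framework: xUnit; assertions with FluentAssertions; mocking with NSubstitute
- Test names follow MethodName_Scenario_ExpectedResult
- Structure every test as Arrange-Act-Assert with comments
- Name the system under test `_sut`
- Test behavior, not implementation details
```

Once this file exists, you stop repeating the conventions in every prompt — Claude Code reads it at the start of each session.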


Step 3: Add Theory tests for multiple scenarios

For cases with varying input, [Theory] with [InlineData] keeps things clean:

“Add a Theory test for CalculateTotal with different combinations of quantities and prices.”

[Theory]
[InlineData(1, 10.00, 10.00)]
[InlineData(3, 15.50, 46.50)]
[InlineData(0, 99.99, 0.00)]
public void CalculateTotal_WithVariousItems_ReturnsExpectedTotal(
    int quantity, decimal unitPrice, decimal expectedTotal)
{
    // Arrange
    var items = new List<OrderItem>
    {
        new() { Quantity = quantity, UnitPrice = unitPrice }
    };

    // Act
    var result = _sut.CalculateTotal(items);

    // Assert
    result.Should().Be(expectedTotal);
}

This is where Claude Code saves real time. Writing out five [InlineData] variations by hand is boring. Describing the pattern in one sentence is fast.


Step 4: NSubstitute for dependencies

The mocking setup is where things get interesting. Claude Code understands constructor injection and sets up the substitutes correctly:

[Fact]
public async Task PlaceOrder_WhenTotalExceedsThreshold_AppliesDiscount()
{
    // Arrange
    var customer = new Customer { Id = 1, Name = "Big Spender" };
    var items = new List<OrderItem>
    {
        new() { ProductId = 1, Quantity = 10, UnitPrice = 100.00m }
    };
    _discountCalculator.Calculate(Arg.Any<decimal>())
        .Returns(50.00m);

    // Act
    await _sut.PlaceOrder(customer, items);

    // Assert
    await _repository.Received(1).Save(Arg.Is<Order>(o =>
        o.DiscountApplied == 50.00m));
}

The Arg.Any<>() and .Returns() syntax is NSubstitute at its cleanest. Claude Code uses the right patterns without you having to explain the mocking framework.


Step 5: Ask for missing edge cases

This is my favorite trick. After the initial tests are written:

“Look at the test class and the source code. What edge cases am I missing?”

Claude Code typically finds things like:

  • What happens when the repository throws an exception?
  • What if the discount calculator returns a negative value?
  • What about concurrent calls with the same customer?
  • What if the order items list is null (not empty)?

Some of these are genuinely useful. Others are over-engineering. You decide which ones to keep. But identifying edge cases is something Claude Code does well — it's systematic in a way humans often aren't.
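The repository-failure case is usually worth keeping. A sketch of what that test might look like, assuming `Save` returns a `Task` and `OrderService` lets the exception propagate (whether it should propagate or be wrapped is a design decision — adjust the assertion to match your service). Note the extra `using NSubstitute.ExceptionExtensions;` for `ThrowsAsync`:

```csharp
[Fact]
public async Task PlaceOrder_WhenRepositorySaveFails_PropagatesException()
{
    // Arrange — assumes OrderService does not swallow repository errors
    var customer = new Customer { Id = 1, Name = "Acme Corp" };
    var items = new List<OrderItem>
    {
        new() { ProductId = 1, Quantity = 1, UnitPrice = 10m }
    };
    _repository.Save(Arg.Any<Order>())
        .ThrowsAsync(new InvalidOperationException("database unavailable"));

    // Act
    var act = () => _sut.PlaceOrder(customer, items);

    // Assert
    await act.Should().ThrowAsync<InvalidOperationException>();
}
```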


Step 6: Run and iterate

Now run the tests:

dotnet test

Some will fail. Maybe Claude Code assumed a method returns Task when it returns Task<Order>. Maybe a property name is slightly wrong. This is normal.

Tell Claude Code what failed, and it fixes the test. The iteration loop is fast: run, fix, run, fix. Within a few rounds you have a green suite.


When Claude Code writes bad tests

Let’s be honest about this. Claude Code produces bad tests in predictable ways:

Testing implementation details. It loves verifying that a method was called with specific arguments. That creates brittle tests that break when you refactor the internals, even if the behavior hasn’t changed. If you see Received(1) in every test, push back.
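The difference is easier to see side by side. Both tests below could pass against the same code (assuming, hypothetically, that CalculateTotal delegates discounting to IDiscountCalculator), but only the first survives a refactor that inlines the discount logic:

```csharp
[Fact]
public void CalculateTotal_WithSingleItem_ReturnsLineTotal()
{
    // Behavior: only the result matters — internals can change freely.
    var items = new List<OrderItem> { new() { Quantity = 2, UnitPrice = 25.00m } };

    var result = _sut.CalculateTotal(items);

    result.Should().Be(50.00m);
}

[Fact]
public void CalculateTotal_DelegatesToDiscountCalculator()
{
    // Implementation: pins *how* the answer is produced. Inline the
    // discount logic and this fails, even though every total is unchanged.
    var items = new List<OrderItem> { new() { Quantity = 2, UnitPrice = 25.00m } };

    _sut.CalculateTotal(items);

    _discountCalculator.Received(1).Calculate(Arg.Any<decimal>());
}
```

Interaction checks like the second test belong only where the call itself is the contract — saving an order, sending an email — not where a return value already tells you everything.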

Too many mocks. When your service has five dependencies and every test sets up all five substitutes, the tests are telling you something — but about the design, not the tests. Claude Code won’t tell you “this class has too many responsibilities.” That’s your judgment call.

Trivial assertions. Testing that a property you just set has the value you set it to. That’s not a test, it’s a mirror. Delete these.

Copy-paste patterns. Claude Code sometimes generates ten tests that are 90% identical. Ask it to refactor the shared setup into helper methods.
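One way to do that is a pair of private factory helpers so each test states only what differs. The helper names here are hypothetical — a sketch of the shape, not a prescribed API:

```csharp
// Shared setup pulled into helpers — tests read as: given these inputs,
// expect this outcome.
private static Customer AnyCustomer() => new() { Id = 1, Name = "Acme Corp" };

private static List<OrderItem> ItemsWorth(decimal unitPrice, int quantity = 1) =>
    new() { new() { ProductId = 1, Quantity = quantity, UnitPrice = unitPrice } };

[Fact]
public async Task PlaceOrder_WithValidOrder_SavesOrder()
{
    await _sut.PlaceOrder(AnyCustomer(), ItemsWorth(25.00m, quantity: 2));

    await _repository.Received(1).Save(Arg.Any<Order>());
}
```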

The fix is simple: review the tests like you’d review any code. Don’t merge AI-generated tests without reading them.


Tips for better prompts

A few things I’ve learned about prompting for tests:

  • “Test the behavior, not the implementation” — say this explicitly. It reduces mock-heavy tests.
  • “One assertion per test” — forces focused tests instead of test methods that check seven things.
  • “Use descriptive test names that explain the scenario” — you’ll thank yourself when a test fails in CI six months from now.
  • “Don’t test private methods directly” — Claude Code sometimes tries to use reflection. Stop it.
  • Describe your conventions in CLAUDE.md — if you use a naming pattern like MethodName_Scenario_ExpectedResult, tell Claude Code once and it follows it everywhere.

Start today

Pick a service with zero tests. Open Claude Code and say:

“Analyze this class and write xUnit tests with FluentAssertions and NSubstitute. Test behavior, not implementation. Use Arrange-Act-Assert.”

Review what comes back. Delete the bad tests. Keep the good ones. Run dotnet test.

You’ll go from zero to 80% coverage in an afternoon. The last 20% — the edge cases, the integration scenarios, the things that require human judgment — that’s still your job. But the grind of writing the first 80% doesn’t have to be.