A recent study by METR got me thinking. They had sixteen experienced open-source developers — averaging five years of experience with their own codebase — work with and without AI tools. The result: with AI they were 19% slower. But the most striking part? They thought they were 20% faster.
Researchers from Tilburg University found something similar: AI helps less experienced programmers write code faster, but experienced developers shift to reviewing and correcting — and end up no more productive on balance.
This is not an argument against AI tools. It’s an argument for a better way of collaborating. Because the difference between the developers who do get faster and those who don’t isn’t the tool. It’s the approach.
1. Think first, prompt second
The biggest anti-pattern I see — including in myself early on — is generating code immediately. “Build a notification system.” And then being surprised that the result technically works but doesn’t fit your architecture at all.
Addy Osmani, engineering lead at Google, calls his approach “waterfall in 15 minutes”: first write a short spec with the AI, describe edge cases, and only then let it build. The planning takes a few minutes but saves you an hour of corrections.
In Claude Code, this means: start in Plan Mode. Let Claude read your codebase, understand your architecture, and propose an approach before a single line of code gets written.
Without a plan:
“Build a notification system for status changes.”
Claude creates its own NotificationService, a new table, and a polling mechanism. Technically correct. Not at all how your team does it.
With a plan:
“We want notifications on status changes. First look at how we handle events in /Application/EventHandlers. Create a plan that fits our existing architecture.”
Claude sees that you already have an IEventHandler<T> pattern and proposes building on it. No new system, but an extension of what’s already there.
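To make that concrete, here is a rough sketch of what building on an existing IEventHandler&lt;T&gt; pattern could look like. Everything except the interface shape is invented for illustration: the OrderStatusChanged event, the INotificationSender abstraction, and the handler itself are assumptions, not code from a real codebase.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// The pattern the codebase already has (per the example above).
public interface IEventHandler<TEvent>
{
    Task HandleAsync(TEvent @event, CancellationToken ct);
}

// Hypothetical event and sender abstraction, named for illustration only.
public record OrderStatusChanged(Guid OrderId, string OldStatus, string NewStatus);

public interface INotificationSender
{
    Task SendAsync(string message, CancellationToken ct);
}

// The new feature plugs into the existing dispatch mechanism instead of
// introducing a parallel NotificationService with its own table and polling loop.
public sealed class StatusChangeNotificationHandler : IEventHandler<OrderStatusChanged>
{
    private readonly INotificationSender _sender;

    public StatusChangeNotificationHandler(INotificationSender sender) => _sender = sender;

    public Task HandleAsync(OrderStatusChanged @event, CancellationToken ct) =>
        _sender.SendAsync($"Order {@event.OrderId}: {@event.OldStatus} -> {@event.NewStatus}", ct);
}
```

The point is not this specific code but the shape of the change: one new handler, registered like every other handler, rather than a second notification subsystem.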
2. Provide context, not just an instruction
Researchers from UC San Diego and Cornell studied 99 professional developers and observed 13 of them in their daily work. One thing stood out: experienced developers always include file names, function names, and error messages. They don’t let the AI guess.
The difference:
Vague:
“The build fails. Fix it.”
Sharp:
“The build fails with this error: [error message]. Check the validation logic in OrderValidator.cs. Fix the root cause, not the symptom. Run dotnet test after the fix.”
In the previous post I wrote about CLAUDE.md as structural context — the things Claude should always know. But equally important is the ad-hoc context you provide per prompt. The more specific you are about which files, which patterns, and what end result you expect, the better the output.
3. Steer on direction, not on every line
There are two extremes. On one end: dictating everything, prescribing every line of code. Then you might as well type it yourself. On the other end: accepting everything without looking. That’s what they call “vibe coding” — and the Stack Overflow survey of 2025 shows that 72% of professional developers deliberately don’t do that.
The sweet spot is in between. The same UC San Diego study shows: no experienced developer let AI work fully autonomously. But they course-corrected on average every two steps — not every line.
Think of managing a colleague. You don’t say: “Type an if-statement on line 14 with this exact condition.” You say: “The validation should also account for inactive users. Look at how we do it for orders.” Giving direction, not dictating.
Addy Osmani puts it this way: “Every engineer is a manager now.” You orchestrate and validate. You don’t have to write everything yourself, but you do have to understand everything.
4. Know when to start over
This might be the most important lesson I’ve learned: sometimes course-correcting is more expensive than starting fresh.
The rule of thumb I use: if you’ve corrected Claude twice on the same point, your context is polluted with failed attempts. Start over. In Claude Code: /clear, revert to your last good commit, and write a better prompt that incorporates what you learned the first time.
John Lindquist from egghead.io puts it simply: “Starting over works every time.” And he’s right. A clean session with a sharp prompt almost always yields a better result than a long session full of corrections.
The signals that you’re better off starting over:
- You’re repeating the same correction for the third time
- The responses are going in circles
- The code gets more complex with each iteration instead of simpler
- You notice you’re getting frustrated
That last one is serious. Frustration is a signal that you’re wrestling with the context, not the problem.
5. Trust, but always verify
A study by Sonar among 1,100 developers revealed a telling paradox: 96% don’t fully trust AI-generated code. But 48% don’t always check it before committing.
That’s like not trusting a colleague but also not reviewing their code.
Anthropic’s own documentation is crystal clear on this: “Give Claude a way to verify its work. This is the single highest-leverage thing you can do.” Verification isn’t a nice-to-have; it’s the core of the workflow.
Concretely in a .NET project:
“Implement the endpoint, write tests for the happy path and the key edge case, and run dotnet test. Fix any failures.”
By making verification part of your prompt, you build in a feedback loop. Claude writes code, tests it itself, and fixes what doesn’t work — before you even look at it.
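That feedback loop only works if the tests exist and actually pin down behavior. As a sketch, the happy-path and edge-case tests the prompt asks for could look like this in xUnit. The Order record, the OrderValidator, and its quantity rule are invented here purely for illustration:

```csharp
using Xunit;

// Hypothetical domain types, standing in for whatever the endpoint validates.
public record Order(int Quantity);

public class OrderValidator
{
    public bool IsValid(Order order) => order.Quantity > 0;
}

public class OrderValidatorTests
{
    private readonly OrderValidator _validator = new();

    [Fact]
    public void ValidOrder_PassesValidation()      // happy path
        => Assert.True(_validator.IsValid(new Order(Quantity: 1)));

    [Fact]
    public void ZeroQuantity_FailsValidation()     // key edge case
        => Assert.False(_validator.IsValid(new Order(Quantity: 0)));
}
```

With tests like these in place, “run dotnet test and fix any failures” gives Claude an objective pass/fail signal instead of your patience as the only feedback.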
Kent Beck — the creator of Test-Driven Development — says that TDD becomes more important with AI, not less. It’s the safety net that ensures speed doesn’t come at the cost of quality.
It’s a skill, not a switch
The Stack Overflow survey of 2025 shows that 66% of developers spend more time fixing “almost-right” AI code than writing it themselves would have taken. That number sounds like an indictment of AI tools. But I think it’s more a signal that most developers are still learning this collaboration.
Effective collaboration with AI is a skill you develop. Just like pair programming, code review, or managing a team. It’s not about the perfect prompt or the right tool. It’s about a way of working: plan before you start, provide context, course-correct at the right moment, and always — always — evaluate the end result yourself.
Start with one thing. Next time you type a prompt, first ask yourself: does Claude know enough to do this well? If the answer is no, give it what it needs. You’ll notice the difference.