A developer pastes a connection string into ChatGPT to debug a configuration issue. A colleague copies a chunk of customer data into an AI tool “just to test the mapping logic.” Someone else shares an internal API contract to generate client code faster.

Recognizable? Probably. Risky? Definitely.

None of these people had bad intentions. They were trying to get work done. But in the rush to be productive, they shared things that should never have left the building.


The problem: where does your input go?

When you type a prompt into an AI tool, your input doesn’t just disappear after you get a response. Depending on the service, it might be stored, logged, or even used to train future models. Most services are transparent about this in their terms — but let’s be honest, who reads those?

The key question isn’t “is this AI tool good?” It’s: where does my input end up, and who can access it?

For tools running on your own infrastructure, the answer is straightforward. For external services, it’s more nuanced. Some providers offer enterprise plans where your data isn’t used for training. Others don’t make that distinction. And even if they do — data still travels over the internet, gets stored on their servers, and is subject to their security practices.

This doesn’t mean you shouldn’t use external AI tools. It means you should think about what you put into them.


What you should NEVER share

This list is shorter than you think, but the consequences of getting it wrong are severe:

  • Credentials: API keys, connection strings, passwords, tokens. Not even “just this once” to debug something. Strip them out first.
  • Customer data: Names, email addresses, financial records, health data — anything that’s personally identifiable or covered by GDPR/privacy regulations.
  • Production data: Real database exports, log files with user information, internal system configurations.
  • Trade secrets: Proprietary algorithms, unreleased product details, internal business strategy documents.

The common thread: if it would be a problem when it shows up in a data breach report, don’t paste it into an external AI tool.
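
Stripping secrets before pasting is easy to automate. Here is a minimal sketch in Python; the patterns and the redact helper are illustrative naming of my own, not a standard tool, and a real secret scanner uses a much larger rule set:

```python
import re

# Illustrative patterns only; real scanners cover many more secret formats.
REDACTION_PATTERNS = [
    # key=value credentials in config files and connection strings
    (re.compile(r"(?i)(password|pwd|secret|token|api[_-]?key)\s*=\s*[^;\s]+"),
     r"\1=<REDACTED>"),
    # Bearer tokens in HTTP headers
    (re.compile(r"(?i)bearer\s+[A-Za-z0-9._\-]+"), "Bearer <REDACTED>"),
    # Email addresses, which are often personally identifiable
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<EMAIL>"),
]

def redact(text: str) -> str:
    """Replace obvious secrets and PII before text leaves your machine."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

conn = "Server=db.prod.internal;Database=orders;User Id=app;Password=hunter2;"
print(redact(conn))  # the password value is masked, the rest stays readable
```

Run something like this over a snippet before it goes into a prompt. It won’t catch everything, which is exactly why the review habit still matters.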


What you CAN safely share

The good news: there’s plenty you can share with little to no risk.

  • Public code and open-source patterns: Anything that’s already on GitHub or in public documentation.
  • Generic architecture questions: “How do I implement the mediator pattern in .NET?” doesn’t expose anything sensitive.
  • Error messages: as long as you strip out sensitive context. A NullReferenceException at line 42 in OrderService.cs is fine. The same error with a full stack trace including customer IDs is not.
  • Test data and mock objects: Synthetic data you’ve created for testing purposes.
  • General best practices: Questions about design patterns, coding conventions, or framework usage.
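
On the test-data point: instead of exporting real records, you can generate deterministic synthetic fixtures. A quick sketch, where the synthetic_customer helper and its fields are hypothetical and just shaped like a generic e-commerce record:

```python
import json
import random

def synthetic_customer(seed: int) -> dict:
    """Build a fake-but-realistic customer record for AI-assisted debugging.

    Nothing here maps to a real person: names come from a fixed pool,
    emails use reserved example domains, and IDs derive from the seed.
    """
    rng = random.Random(seed)  # seeded, so fixtures are reproducible
    first = rng.choice(["Alex", "Sam", "Robin", "Kim", "Jo"])
    domain = rng.choice(["example.com", "example.org"])
    return {
        "customer_id": f"CUST-{seed:05d}",
        "name": first,
        "email": f"{first.lower()}.{seed}@{domain}",
        "order_total": round(rng.uniform(10, 500), 2),
    }

fixtures = [synthetic_customer(i) for i in range(3)]
print(json.dumps(fixtures, indent=2))
```

Seeding makes the fixtures stable across runs, so you can paste the same sample into a prompt today and tomorrow without touching production.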

The rule of thumb: could you post this on Stack Overflow without a second thought? Then it’s fine for an AI tool too.


The core principle: you are responsible

I wrote an AI guidelines document for my team, and the opening line says it all:

AI is a tool, not the responsible party. When work is delivered, AI is not responsible for it. The responsibility always lies with the person who creates, reviews, and commits the work.

This boils down to four rules:

1. You review everything. Code, tests, documentation — everything AI generates must be checked by you before it gets committed. This includes checking for accidentally included secrets.

2. You understand what you commit. Never commit code you can’t explain. If AI generates something you don’t understand, ask for an explanation or write it yourself.

3. You test the result. AI-generated tests need to be validated too. A green test suite means nothing if the tests don’t actually verify the right behavior.

4. You own it. The moment you hit “commit,” it’s your work. Not AI’s, not a colleague’s. If there’s a bug in production, “but AI wrote it” is not an acceptable answer.


Governance in a team: make it explicit

Individual awareness is great. But in a team, you need shared agreements. Because the developer who accidentally shares a connection string isn’t careless — they just never had a conversation about what’s okay and what isn’t.

Here’s what works:

Document the rules. Write down what can and can’t be shared with AI tools. Keep it short — a bullet list is enough. Put it somewhere everyone sees it: your wiki, your onboarding docs, or your CLAUDE.md.

Speaking of which — if you use Claude Code, your CLAUDE.md file is the perfect place for “don’t do” rules. Things like:

## Security
- Never include API keys, connection strings, or secrets in prompts
- Don't process real customer data — use test fixtures from /tests/fixtures/
- Strip personally identifiable information from error logs before sharing

Claude reads this file at the start of every session. It’s not just documentation — it’s active context that shapes how the AI works with your codebase.

Talk about it. Have a ten-minute conversation in your next retro or standup. Not to scare people, but to create a shared understanding. “What do we think is okay to share with AI tools?” That one question will surface more edge cases than any policy document.

Review for it. Add “no secrets in AI prompts” to your code review checklist. Check .env files, and scan commit history for accidentally committed credentials. This is good practice regardless of AI — but AI makes it more urgent.
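
That checklist item can be partly automated. A rough sketch of a secret scan you could run over a diff or a config file; the three rules below are illustrative, and a dedicated scanner such as gitleaks covers far more cases:

```python
import re
from pathlib import Path

# Illustrative detection rules; dedicated tools ship hundreds of these.
SECRET_HINTS = {
    "aws access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic password": re.compile(r"(?i)password\s*[=:]\s*\S+"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of secret patterns found in a blob of text."""
    return [name for name, pattern in SECRET_HINTS.items() if pattern.search(text)]

def scan_file(path: Path) -> list[str]:
    """Scan a file on disk, e.g. a .env file or an export of a diff."""
    return scan_text(path.read_text(errors="ignore"))

# Example: an .env-style snippet that should never reach a prompt or a commit.
findings = scan_text("DB_HOST=localhost\nPASSWORD=s3cret\n")
print(findings)
```

Wire something like this into a pre-commit hook or CI step and the checklist item stops depending on everyone remembering it.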


It’s not about fear

I want to be clear: this is not an argument against using AI tools. I use them every day. They make me faster, they catch things I miss, and they’re genuinely useful for thinking through problems.

But “useful” and “safe by default” are not the same thing. AI tools don’t know what’s confidential in your context. They don’t know that the database URL you just pasted contains production credentials. They don’t know that the JSON blob you’re debugging contains real customer email addresses.

You do. That’s the point.

Governance isn’t about restricting what you can do. It’s about knowing what you’re doing. It’s the difference between driving fast because you know the road, and driving fast because you’re not paying attention.


Start here

If you don’t have any AI governance in your team yet, here are three things you can do today:

  1. Make a “never share” list. Five minutes, five bullet points. Credentials, customer data, production configs, tokens, trade secrets. Pin it in your team channel.

  2. Check your CLAUDE.md (or equivalent config). Add a security section with the rules your AI tools should follow. It takes two minutes and it works from day one.

  3. Ask the question. In your next team meeting: “What are we comfortable sharing with AI tools?” You’ll be surprised how many people have been wondering the same thing but didn’t bring it up.

AI governance sounds big and corporate. It doesn’t have to be. It starts with one conversation and a few clear rules. The tools are powerful — make sure you’re using them with your eyes open.