BYOT: Bring Your Own Tokens

I keep hearing versions of the same story from consultants and in-house devs alike: "I just used Claude Code to fix that bug." Own API key, own account, company codebase. Wild west.

Does the CTO know? Compliance? Legal?

What's actually happening

When an AI agent "reads" a file or "investigates" a database issue (even with read-only access), that content gets pulled into the context window. From there it makes a round trip to an LLM API. Every turn. Source code, schemas, migration files, logs, error messages with real user data baked in.
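To make the mechanics concrete, here is a minimal sketch of how a file an agent "reads" lands in the request payload. The file contents, model name, and message shape are all illustrative (real providers each have their own request format), but the structure is the same everywhere: the file is spliced into the conversation history, and the whole history is resent to the API on every turn.

```python
import json

# Simulated file read: in a real agent this would be open(path).read()
# on your actual source tree. Hypothetical content for illustration.
file_contents = "SELECT balance FROM accounts WHERE user_id = %s;  -- plus the rest of the file"

# The next API call carries the file verbatim, inside the message history.
request_body = {
    "model": "example-model",  # placeholder, not a real model name
    "messages": [
        {"role": "user", "content": "Why does user 48291 see the wrong balance?"},
        {"role": "tool", "content": file_contents},  # the file is now in the context window
    ],
}

payload = json.dumps(request_body)  # this byte string is what crosses the wire
assert file_contents in payload     # the source code travels verbatim, every turn
```

There is no transformation or minimisation step in between: whatever the agent read is whatever the provider receives.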

This is not a theoretical attack surface. This is the default behaviour, happening right now, in organisations that haven't thought carefully about it. The developer isn't being malicious. They're being productive. That's exactly what makes it hard to catch.

Four questions worth asking before the next sprint

1. Which LLM APIs are actually being used in your codebase right now?

Not which ones you've approved. Which ones are actually running. Personal accounts don't show up in your procurement or security tooling. You won't find them in your firewall logs unless you're specifically looking for the right hostnames. Start by asking your developers directly. You may be surprised.
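One cheap way to start is a mechanical sweep of the source tree for known inference hostnames. A rough sketch (the hostname list below is illustrative, not exhaustive, and won't catch SDK usage that never spells out a URL):

```python
import re
from pathlib import Path

# Illustrative, not exhaustive: hostnames of some widely used LLM APIs.
LLM_HOSTS = [
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.mistral.ai",
]

PATTERN = re.compile("|".join(re.escape(h) for h in LLM_HOSTS))

def find_llm_endpoints(root: str) -> list[tuple[str, int, str]]:
    """Walk a source tree and report every line mentioning a known LLM API host."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file, skip
        for lineno, line in enumerate(text.splitlines(), start=1):
            if PATTERN.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

The same hostname list is what you'd search for in firewall and DNS logs. It finds hardcoded endpoints, not personal keys configured in a developer's shell environment, which is why asking people directly still matters.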

2. Do the terms of those APIs exclude training on your data?

Most major providers offer enterprise tiers with explicit no-training commitments. Personal accounts are a different story. Read the terms. Some providers are unambiguous; others are not. "Your data is not used for training" in a consumer product FAQ is not a legal guarantee. Get it in writing, in a contract, before your code is in the context window.

3. Where does the data land: EU, US, somewhere else entirely?

Data residency matters under GDPR, and increasingly under other frameworks too. A Finnish developer chasing a bug with a personal API key may be routing company data through inference infrastructure in a jurisdiction your DPA has never heard of. Saying "we didn't know" is not a GDPR defence. Saying "it was just a developer tool" is not a GDPR defence either.

4. What happens when an agent pulls PII into context while chasing a user-reported bug?

This one deserves to be said plainly. A developer gets a bug report: "user ID 48291 sees the wrong balance." They hand it to their agent. The agent reads the relevant database query, checks the logs, pulls a sample row to understand the schema. That sample row may contain a real name, a real address, a real transaction history. It is now in the context window, on its way to an API running on infrastructure you don't control, under terms you haven't reviewed, in a jurisdiction you may not know.

GDPR doesn't care that it was a quick fix.

The answer is not to ban agentic tooling

That ship has sailed, and it was a good ship. Agentic development workflows are genuinely useful. Trying to prohibit them will just push usage further underground.

The answer is to own your LLM APIs. At the very least, understand how the technology works and buy tokens from a provider whose terms, data residency, and security posture actually match your compliance requirements. Enterprise agreements with the major providers are not expensive relative to the risk. They give you audit trails, data processing agreements, and no-training commitments in writing.

Set up a company account. Give your developers access. Make it easier to do the right thing than to reach for a personal key.

Where a lot of organisations are right now

Half their codebase round-tripping through three different inference providers on personal accounts. No audit trail. No DPA. No idea.

It sounds unfathomable. And yet, exactly where a lot of organisations are right now.

BYOT is not a developer problem. It's a leadership problem. The people setting engineering culture and tooling policy need to get ahead of this. Before the compliance team finds out the hard way.

Hello, world.

First post. The log is open.