Agentic drift is killing your codebase. Here's how to stop it.
I read Kevin Kern's post on agentic drift last week and it put words to something I've been watching happen across every codebase I touch.
The short version: your AI coding agent is slowly wrecking your architecture, one helpful fix at a time. Not because it's bad at coding. Because it's too good at preserving what already exists.
What agentic drift actually looks like
You ask Claude Code or Cursor to fix a bug. The agent looks at the existing code, figures out the contract, and adds a small compatibility shim to make things work. Problem solved. Ship it.
Except that shim is now permanent. The next time the agent touches that area, it treats the shim as intended architecture. It adds a normalization layer to work around the shim. Then a guard clause to make sure the normalization layer doesn't break the original shim.
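That accretion can be sketched in TypeScript. Every name below is hypothetical, but the shape is the one described above: each session's change is locally reasonable, and the composition is where the contract dissolves.

```typescript
// Original contract: getUser returns a User or throws.
interface User { id: string; name: string }

function getUser(id: string): User {
  if (!id) throw new Error("missing id");
  return { id, name: "Ada" };
}

// Session 1: a bug fix adds a compatibility shim that swallows
// the error and returns null instead of throwing.
function getUserSafe(id: string): User | null {
  try {
    return getUser(id);
  } catch {
    return null;
  }
}

// Session 2: the agent treats the shim as intended architecture
// and wraps its nullable result in a normalization layer.
function normalizeUser(u: User | null): User {
  return u ?? { id: "unknown", name: "unknown" };
}

// Session 3: a guard clause protects the normalization layer
// from the very shim it was written to work around.
function renderUser(id: string): string {
  const u = normalizeUser(getUserSafe(id));
  if (u !== null && u !== undefined && u) {
    return u.name;
  }
  return "unknown";
}
```

Three layers deep, and the original "returns or throws" contract is gone. Nobody decided that; it accreted.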
Six months later you've got a codebase full of:
- Empty stubs that used to do something
- Duplicate ownership paths where two modules both think they're in charge
- Normalization layers wrapping normalization layers
- Type coercion scattered everywhere because the agent kept widening types to avoid errors
None of these changes are wrong in isolation. Every single one made sense when the agent wrote it. But zoomed out, your architecture is dissolving.
Kevin calls it "small fixes, duplicate ownership paths, and other small compatibility fixes" that seem harmless but create architectural decay. I've seen this happen on real projects. It's not hypothetical.
Why agents do this
The root cause is simple. Agents are trained to preserve existing contracts. When an agent sees code that works, it doesn't ask "should this exist?" It asks "how do I make my change without breaking this?"
That's the right behavior for a junior developer who doesn't understand the system. It's the wrong behavior for someone making architectural decisions. And right now, most of us are letting agents make architectural decisions without realizing it.
Every time you accept a diff without reading it, you're letting the agent's local optimization become your global architecture. Anthropic's 2026 Agentic Coding Trends Report found that this is one of the biggest challenges teams face as they scale agent usage.
The vibe coding trap
There's a connection here to vibe coding that people miss. Andrej Karpathy coined the term in early 2025 - you describe the vibe, the agent writes the code, you run it and see if it works.
Vibe coding is fine for throwaway projects and prototypes. The problem is when people vibe code a production system and then keep vibing. Each session adds more drift. The Stack Overflow research from January 2026 found that AI-generated code introduced security bugs at 1.5 to 2x the rate of human code, and was twice as likely to make concurrency mistakes.
That's not because AI is dumb. It's because nobody told it the architecture. It vibed its way through.
How to actually fix this
Kevin lays out several solutions in his post. Here's what I've found works in practice, having shipped 15 products with AI agents over the past year.
1. Set the rails before you start
Linter. Formatter. Type checker. Test setup. Folder structure. CI pipeline. All of this has to exist before you let an agent touch the code.
If the agent doesn't have rails to run on, it'll lay its own. And its rails will be different every session.
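For a TypeScript project, the rails can be as simple as a scripts block that both the agent and CI must pass before anything merges. The tool choices here are just one plausible setup, not a prescription:

```json
{
  "scripts": {
    "lint": "eslint .",
    "format": "prettier --check .",
    "typecheck": "tsc --noEmit",
    "test": "vitest run",
    "check": "npm run lint && npm run format && npm run typecheck && npm run test"
  }
}
```

The point isn't the specific tools. It's that `npm run check` gives the agent a single, unambiguous definition of "done" that doesn't change between sessions.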
2. Use an AGENTS.md or skill file
This is the one that changed everything for us. Kevin recommends an AGENTS.md file - a concise guide that tells the agent how to work with your codebase. What to build, what not to touch, what patterns to follow.
We took this further and turned them into Claude skill files. A skill file is an AGENTS.md on steroids. It doesn't just say "follow this pattern." It contains the actual architecture patterns, component structures, and production code examples. The agent doesn't guess. It copies.
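A minimal AGENTS.md might look something like this. Every path and stack detail here is a made-up example; the structure is what matters:

```markdown
# AGENTS.md

## Architecture
- Next.js app router; all data access goes through `src/lib/db/`
- Components never fetch directly; use server actions

## Patterns to follow
- Validate at every external boundary; never widen types to `any`
- One owner per domain concept: extend the owner, don't duplicate it

## Do not touch
- `src/lib/auth/` (security-reviewed; open an issue instead)
```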
This is why adworthy.ai shipped in 3 days without drift - the SaaS Builder skill gave the agent the full architecture upfront. No room to improvise.
3. Review like you mean it
Kevin's quote on this is perfect: "Manual review is still the best review."
I scan every diff for these red flags:
- Excessive normalization: if you see data being transformed more than once before use, something drifted
- Type widening: `any` types or union types that keep growing are a sign the agent is patching instead of fixing
- Duplicate ownership: two modules doing the same thing slightly differently
- Guard clauses that guard nothing: `if (x !== null && x !== undefined && x)` everywhere
If you see these patterns, don't just fix the symptom. Find where the drift started and cut it there.
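Here's a hypothetical sketch of the difference between patching the symptom and cutting at the source, using the type-widening and guard-clause patterns from the list above:

```typescript
// Drifted: the type was widened to `any` and the guard clause
// gets copy-pasted to every call site. Note it also silently
// drops a legitimate price of 0, because 0 is falsy.
function formatPriceDrifted(price: any): string {
  if (price !== null && price !== undefined && price) {
    return `$${Number(price).toFixed(2)}`;
  }
  return "$0.00";
}

// Cut at the source: narrow the type once, at the boundary,
// so downstream code never needs the guard at all.
function parsePrice(raw: unknown): number {
  const n = Number(raw);
  if (!Number.isFinite(n) || n < 0) throw new Error(`invalid price: ${raw}`);
  return n;
}

function formatPrice(price: number): string {
  return `$${price.toFixed(2)}`;
}
```

One validation at the boundary replaces a guard clause at every call site, and the type system enforces the contract from there on.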
4. Give agents context, not just code
One thing Kevin emphasizes that most people skip: give your agent access to the running app. Screenshots, browser dev tools, actual UI. An agent working from code alone will make different decisions than one that can see what the user sees.
This is similar to what we do with the Taste & Design skill - it gives the agent visual context about what good UI looks like, so it doesn't just write code that compiles but code that looks right.
5. Don't over-parallelize
Running 5 agents on 5 branches sounds fast. But when you merge them, you've got 5 different interpretations of the same architecture. Kevin prefers single-branch development, and after trying both approaches, I agree.
One agent, one branch, one human reviewing. Slower but the codebase survives.
The real lesson
Here's what I keep coming back to after reading Kevin's post: someone still has to lead.
The agent is not the architect. The agent is a fast, tireless junior developer who will do exactly what you tell it and fill in the gaps with whatever seems reasonable. If you don't tell it the architecture, it'll invent one. And it'll be different every time.
That's why skill files exist. Not because agents are bad, but because agents without structure produce drift. Give them structure and they produce production code.
Kevin puts it well: "If you one shot a simple webapp, that's different from doing domain work on a codebase that should survive when winter is coming."
If your codebase needs to survive, give your agent the patterns. Take the skill quiz to figure out which skill file matches what you're building, or grab the full bundle if you want all of them.
The agent does the work. You do the thinking. That's the split that actually works.
Further reading and watching:
- Kevin Kern's original post on agentic drift
- Anthropic's 2026 Agentic Coding Trends Report
- Vibe Coding vs. Agentic Coding (academic paper)
- Are bugs inevitable with AI coding agents? (Stack Overflow)
- Claude Code: Eight trends defining how software gets built in 2026
- Structuring codebases for AI tools (Propel)