There’s a dirty secret about AI-assisted coding: it’s often too helpful.
Ask an AI to implement a feature and it will. Quickly and confidently. It’ll reach across your codebase, modify whatever it needs, create utilities that duplicate existing ones, deliver working code that makes you uncomfortable.
The code works. It doesn’t belong.
The AI isn’t bad at coding. It’s bad at not coding. It lacks the pause that experienced developers have: the “wait, where should this actually live?” moment. The awareness that every line you write exists in a larger context.
SOLID principles are the fix. Specifically, SOLID principles as constraints on AI behavior.
The Problem: No Boundaries
When I’m deep in a module, I have tunnel vision. I know what’s in scope, what interfaces I can depend on. If I need something from another layer, I feel friction. Import it. Check how it works. Consider whether I’m creating an awkward dependency.
AI feels none of this friction.
It sees your entire codebase as flat. Need a helper? Create one, even if three similar ones exist. Need to call into another module? Reach right in; maybe modify that module’s internals to make things easier. Need a new method on an interface? Added. No thought given to other implementations or callers.
The result: code that works in isolation but degrades the system.
So the question is whether you can give AI the same constraints that make experienced developers productive. Boundaries that force a pause. A reason to consider context before acting.
Single Responsibility as Silos
The Single Responsibility Principle says a module should have one reason to change. More useful for AI: a module should have clear boundaries about what it knows and doesn’t know.
When I work in a well-structured codebase, I don’t think about the whole system. I think about my context:
- What module am I in?
- What can I depend on?
- What interfaces are available?
This is cognitive load reduction. I only need to understand my slice and its contracts.
AI has no cognitive load. It can hold your entire codebase in context. This sounds like a superpower. It’s actually a liability; there are no natural stopping points, no friction that says stop.
Tell the AI: here’s your working context, here’s what dependencies are allowed, here’s what you do when you need something outside these boundaries. Not “figure out how to make this work.” Instead: “Stop. Tell me what you need. Let’s solve that problem in the right place first.”
The Evaluate, Decide, Act Framework
The instinct to “just add a method” is strong in AI because it’s the path of least resistance. Need GetByIdWithOrders()? Add it.
But this leads to interface sprawl:
```csharp
interface IUserRepository
{
    User GetById(int id);
    User GetByIdWithOrders(int id);
    User GetByIdWithProfile(int id);
    User GetByIdWithOrdersAndProfile(int id);
    User GetByIdForDashboard(int id);
}
```
Each method made sense when added. Together, they’re chaos. What you probably wanted:
```csharp
interface IUserRepository
{
    User GetById(int id, UserInclude includes = UserInclude.None);
}
```
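The post leaves UserInclude undefined. One plausible sketch, assuming a C# `[Flags]` enum (the member names and the `User` record are illustrative, not from the post):

```csharp
using System;

// Illustrative type; the post doesn't define User's shape.
record User(int Id, string Name);

// Hypothetical sketch: UserInclude as a [Flags] enum, so callers compose
// the related data they need instead of requesting new repository methods.
[Flags]
enum UserInclude
{
    None    = 0,
    Orders  = 1 << 0,
    Profile = 1 << 1,
}

interface IUserRepository
{
    User GetById(int id, UserInclude includes = UserInclude.None);
}

// A call site composes flags rather than getting its own
// GetByIdForDashboard method:
//   var user = repo.GetById(42, UserInclude.Orders | UserInclude.Profile);
```

Under this design, each new projection becomes a flag value instead of a method, so the interface stays at one member as requirements grow.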
The issue isn’t modification. Sometimes that’s exactly right. The issue is the AI didn’t pause to consider whether modification was the right call.
Prompt with an explicit framework:
- Evaluate: Does the current design support this? If not, design gap or design smell?
- Decide: Add something new, or refactor what exists? Surface the decision.
- Act: If modifying, understand impact. Update all callers. Leave it consistent.
Make the AI show you the fork in the road instead of picking a direction silently:
“I need to add GetUserWithOrderHistory() to IUserRepository. I notice there are already 3 similar methods. Combinatorial explosion.
Options:
- Add another method (quick, worsens sprawl)
- Refactor to composable design (better long-term, affects 4 call sites)
Which approach?”
Now you’re making the architectural call. The AI does analysis and implementation. Right division of labor.
Open/Closed as Change Control
The Open/Closed Principle is often read as “never modify existing code.” Too rigid. Sometimes refactoring is exactly right.
But there’s a useful interpretation: modification requires justification.
When AI wants to change existing code, it should articulate:
- Why can’t this be done through extension?
- Who depends on what I’m changing?
- How does this affect existing consumers?
If it can’t answer these, it should stop and say so.
This isn’t preventing all modification. It’s preventing unconsidered modification. The kind where AI tweaks shared utilities for one use case and breaks three others.
Dependency Inversion as Architectural Awareness
Depend on interfaces, not implementations. When AI reaches for a concrete class instead of its interface, it’s treating the codebase as flat. It doesn’t understand the architecture.
Forcing that constraint makes AI:
- Discover what abstractions exist
- Understand what capabilities are meant to be available at each layer
- Recognize when it’s trying to do something the architecture doesn’t support
That last point is the important one. If the AI can’t do what you asked through the available interfaces, that tells you something about the architecture. The interfaces might need to grow. The feature might belong in a different layer entirely. Or the request doesn’t fit the system’s design, and that’s a conversation worth having before anyone writes a workaround.
These are architectural conversations. But they only happen if the AI hits friction and stops, rather than routing around the architecture.
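That friction can be made concrete. A minimal C# sketch, with all names assumed for illustration (none come from the post):

```csharp
using System;

// Illustrative types; the post doesn't define these.
record User(int Id, string Name);

interface IUserRepository
{
    User GetById(int id);
}

// The service depends on the IUserRepository abstraction, not on some
// concrete SqlUserRepository, so it can only do what the interface offers.
class NotificationService
{
    private readonly IUserRepository _users;

    public NotificationService(IUserRepository users) => _users = users;

    public void Notify(int id)
    {
        var user = _users.GetById(id);
        // Suppose this feature also needed the user's order history.
        // IUserRepository can't provide it, and that is the friction point:
        // grow the interface, move the feature to another layer, or question
        // the request. Reaching into a concrete repository class directly
        // would route around the architecture.
        Console.WriteLine($"Notifying {user.Name}");
    }
}
```

The constraint is what surfaces the gap: an AI allowed to depend only on the abstraction has to stop and report that the capability is missing, rather than quietly taking a dependency on the concrete class.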
Considered Change
All of this ladders up to one idea: AI should make considered changes, not expedient ones.
Expedient: “I need X, here’s the fastest path.”
Considered: “I need X. Here’s where it should live. Here’s how it relates to what exists. Here’s the impact. Here’s my recommendation.”
The difference is awareness. Awareness of boundaries, patterns, other code affected, the gap between “works” and “belongs.”
This is what senior developers do naturally. They’ve felt the weight of maintenance. They’ve lived with decisions for years and wished they’d paused first.
AI has no scar tissue. No weight of maintenance. So you encode it explicitly.
How
An instruction file encoding these principles:
Boundary awareness:
“Before writing code, establish which module you’re working in and what dependencies are allowed. Don’t modify code outside your working context without acknowledging the boundary crossing.”
Decision surfacing:
“When you need to modify an interface or shared code, show me the tradeoff. What exists now, what you’d change, what breaks if you’re wrong. Ask before proceeding.”
Duplication detection:
“Before creating utilities or helpers, search for existing implementations. Ask: am I about to write something that already exists?”
Impact awareness:
“When modifying shared code, identify all callers and implementations. Verify your changes work for existing uses. Don’t leave the codebase inconsistent.”
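Collected in one place, the four rules above might read like the following sketch. The filename, headings, and exact wording are hypothetical; most AI coding tools accept some form of project-level instruction file, so adapt it to whatever format your tool expects.

```markdown
<!-- Hypothetical project instruction file; adapt to your tool's format. -->
# AI Working Agreement

## Boundary awareness
Before writing code, state which module you are working in and which
dependencies are allowed. Do not modify code outside this context without
calling out the boundary crossing.

## Decision surfacing
Before modifying an interface or shared code, present the tradeoff:
what exists now, what you would change, and what breaks if you are wrong.
Wait for approval.

## Duplication detection
Before creating a utility or helper, search for existing implementations
and report what you found.

## Impact awareness
When modifying shared code, list all callers and implementations, and
verify the change works for each. Leave the codebase consistent.
```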
The point isn’t to slow the AI down. It’s to aim it. Let it do the analysis, the pattern-matching, the implementation grunt work. But within guardrails that mirror how experienced developers actually think.
You make the architectural calls. The AI does the heavy lifting. The code that comes out of that arrangement actually belongs in the codebase, which is more than you can say for most AI-generated PRs.