There’s a dirty secret about AI-assisted coding: it’s often too helpful.
Ask an AI to implement a feature and it will. Quickly. Confidently. It’ll reach across your codebase, modify whatever it needs, create new utilities that duplicate existing ones, and deliver working code that makes you slightly uncomfortable.
The code works. But it doesn’t belong.
After months of working with AI coding assistants, I’ve started to recognize the pattern. The AI isn’t bad at coding. It’s bad at not coding. It lacks the instinct that experienced developers have: the pause before typing, the “wait, where should this actually live?” moment, the awareness that every line you write exists in a larger context.
And I think the solution has been sitting in our collective consciousness for decades: SOLID principles. Specifically, SOLID principles as constraints on AI behavior.
The Problem: AI Has No Boundaries
When I’m deep in a module, I have a kind of tunnel vision. I know what’s in scope. I know what interfaces I can depend on. If I need something from another layer, I feel a little friction. I have to import it, maybe check how it works, consider whether I’m creating an awkward dependency.
AI feels none of this friction.
It sees your entire codebase as one flat surface. Need a helper function? It’ll create one, even if three similar ones exist. Need to call into another module? It’ll reach right in, maybe even modify that module’s internals to make things easier. Need a new method on an interface? Added. No thought given to the other implementations or callers.
The result is code that works in isolation but degrades the system.
I started thinking: what if we could give AI the same constraints that make me productive? Design boundaries that force it to pause, consider context, and work within the architecture.
Single Responsibility as Silos
The Single Responsibility Principle says a module should have one reason to change. But there’s a corollary that’s more useful for AI: a module should have clear boundaries about what it knows and doesn’t know.
When I work in a well-structured codebase, I don’t think about the whole system. I think about my current context:
- What module am I in?
- What can I depend on?
- What interfaces are available to me?
This is cognitive load reduction. I don’t need to understand the entire codebase. Just my little slice and the contracts it exposes.
AI has no cognitive load to reduce. It can hold your entire codebase in context. This sounds like a superpower, but it’s actually a liability. It means the AI has no natural stopping points, no friction that says “you’re reaching too far.”
The fix: Explicitly define silos. Tell the AI what its working context is, what dependencies are allowed, and (critically) what it should do when it needs something outside those boundaries.
Not “figure out how to make this work.” Instead: “Stop. Tell me what you need. Let’s solve that problem in the right place first.”
The Evaluate, Decide, Act Framework
Here’s where it gets interesting. The instinct to “just add a method” is strong in AI. It’s the path of least resistance. Need GetUserWithOrders()? Add it to the interface. Done.
But experienced developers know this leads to interface sprawl:
interface IUserRepository
{
    User GetById(int id);
    User GetByIdWithOrders(int id);
    User GetByIdWithProfile(int id);
    User GetByIdWithOrdersAndProfile(int id);
    User GetByIdForDashboard(int id); // what does this even include?
}
Each method made sense when it was added. Together, they’re a mess. What you probably wanted was a composable design:
interface IUserRepository
{
    User GetById(int id, UserInclude includes = UserInclude.None);
}
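For that signature to compose, UserInclude would typically be a flags enum. Here's a minimal sketch; the member names are my assumption, not something taken from the interface above:

// Hypothetical flags enum behind the composable GetById signature;
// the member names are illustrative.
[System.Flags]
enum UserInclude
{
    None    = 0,
    Orders  = 1 << 0,
    Profile = 1 << 1
}

// Callers compose exactly what they need instead of demanding new methods:
// var user = repository.GetById(42, UserInclude.Orders | UserInclude.Profile);

One signature covers every combination the five-method version was creeping toward, and the implementation decides how to translate the flags into loading behavior.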
The issue isn’t that the AI modified the interface. Sometimes that’s exactly right. The issue is that it didn’t pause to consider whether modification was the right call.
I’ve started prompting AI with an explicit decision framework:
Evaluate: Does the current design support what I need? If not, is this a design gap or a design smell?
Decide: Should I add something new, or refactor what exists? Surface this decision. Don’t bury it.
Act: If modifying, understand the impact radius. Update all callers. Leave the codebase consistent.
The key insight is making the AI surface the decision point rather than blowing past it:
“I need to add GetUserWithOrderHistory() to IUserRepository. Before I do, I notice there are already 3 similar methods. This looks like the start of a combinatorial explosion.
Options:
- Add another method (quick, but worsens sprawl)
- Refactor to composable design (better long-term, affects 4 call sites)
Which approach would you prefer?”
Now I’m making the architectural decision. The AI is doing the analysis and implementation. That’s the right division of labor.
Open/Closed as Change Control
The Open/Closed Principle (open for extension, closed for modification) often gets interpreted as “never modify existing code.” That’s too rigid. Sometimes refactoring is exactly what you need.
But there’s a useful interpretation for AI: modification requires justification.
When AI wants to change existing code, it should be able to articulate:
- Why can’t this be done through extension?
- Who else depends on what I’m changing?
- How does my change affect existing consumers?
If it can’t answer these questions, it shouldn’t make the change yet. It should surface the uncertainty and let a human decide.
This isn’t about preventing all modification. It’s about preventing unconsidered modification. The kind where AI reaches into a shared utility, tweaks it for one use case, and breaks three others.
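To make extension versus modification concrete, here's a minimal sketch; the types are hypothetical, not from any particular codebase:

// Shared abstraction that several callers already depend on.
interface IDiscountPolicy
{
    decimal Apply(decimal price);
}

// Existing behavior stays untouched: closed for modification.
class StandardDiscount : IDiscountPolicy
{
    public decimal Apply(decimal price) => price * 0.95m;
}

// The one use case that needs something different arrives as a new
// implementation (open for extension), not an if-branch wedged into
// StandardDiscount for every other caller to carry.
class BlackFridayDiscount : IDiscountPolicy
{
    public decimal Apply(decimal price) => price * 0.70m;
}

If a new implementation genuinely can't express the change, that's exactly the justification the AI should surface before it edits the shared code.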
Dependency Inversion as Architectural Awareness
The Dependency Inversion Principle says high-level modules shouldn’t depend on low-level modules. Both should depend on abstractions.
For AI, this translates to: depend on interfaces, not implementations.
When AI reaches for a concrete class instead of its interface, it’s not just a code style issue. It’s a signal that the AI doesn’t understand the architecture. It’s treating the codebase as a flat surface when it’s actually a layered structure.
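A hypothetical before-and-after makes the signal visible; none of these types come from a real codebase:

record Order(string CustomerEmail);

interface INotificationSender
{
    void Send(string recipient, string message);
}

// Low-level detail: one concrete way to deliver a message.
class SmtpEmailClient : INotificationSender
{
    public void Send(string recipient, string message) { /* SMTP plumbing */ }
}

// What the AI tends to write: the high-level service reaches straight for
// the concrete class and couples itself to one delivery mechanism.
class CoupledOrderService
{
    private readonly SmtpEmailClient _email = new();

    public void Confirm(Order order) => _email.Send(order.CustomerEmail, "Order confirmed");
}

// What the layering intends: the service depends only on the abstraction,
// and composition decides which implementation sits behind it.
class OrderService
{
    private readonly INotificationSender _notifications;

    public OrderService(INotificationSender notifications) => _notifications = notifications;

    public void Confirm(Order order) => _notifications.Send(order.CustomerEmail, "Order confirmed");
}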
Instructing AI to work through interfaces forces it to:
- Discover what abstractions exist
- Understand what capabilities are meant to be available at each layer
- Recognize when it’s trying to do something the architecture doesn’t support
That last point is important. If the AI can’t implement a feature through the available interfaces, that’s valuable information. Maybe the interfaces need to evolve. Maybe the feature belongs in a different layer. Maybe the original request doesn’t fit the system’s design.
These are architectural conversations worth having. But they only happen if the AI hits friction and stops, rather than routing around the architecture.
The Meta-Principle: Considered Change
All of this ladders up to one idea: AI should make considered changes, not expedient ones.
Expedient: “I need X, here’s the fastest path to X.”
Considered: “I need X. Here’s where X should live. Here’s how it relates to what exists. Here’s the impact of adding it. Here’s my recommendation.”
The difference is awareness. Awareness of boundaries, of existing patterns, of other code that might be affected, of the difference between “works” and “belongs.”
This is what senior developers do naturally. They’ve been burned enough times that they pause before typing. They know that code is read more than it’s written, that systems evolve over years, that the shortcut today is the tech debt tomorrow.
AI has no scar tissue. It hasn’t been burned. It doesn’t feel the weight of maintenance. So we have to encode that wisdom explicitly.
Practical Implementation
I’ve started using an instruction file that encodes these principles:
Boundary awareness:
“Before writing code, establish which module you’re working in and what dependencies are allowed. Do not modify code outside your working context without explicitly acknowledging the boundary crossing.”
Decision surfacing:
“When you need to modify an interface or shared code, surface the decision. Show what exists, explain your options, and ask before proceeding.”
Duplication detection:
“Before creating new utilities or helpers, search for existing implementations. Ask: am I about to write something that already exists in slightly different form?”
Impact awareness:
“When modifying shared code, identify all callers and implementations. Verify your changes work for all existing uses. Do not leave the codebase inconsistent.”
The goal isn’t to hamstring the AI. It’s to channel its capabilities toward considered changes rather than expedient ones. The AI can still do the heavy lifting: analyzing code, finding patterns, implementing solutions. But it does so within guardrails that mirror how experienced developers actually work.
The Bigger Picture
There’s something almost philosophical about this approach. We’re essentially trying to teach AI judgment. The accumulated wisdom that tells experienced developers when to push forward and when to pause.
You can’t teach judgment directly. It comes from experience, from mistakes, from maintaining code you wrote years ago and wishing you’d done it differently.
But you can encode the outputs of judgment as constraints. Respect boundaries. Surface decisions. Consider impact. Prefer belonging over working.
Will AI eventually develop something like genuine architectural judgment? Maybe. But until then, we can use principles like SOLID not as design patterns to follow, but as guardrails that make AI collaboration actually collaborative.
The AI writes the code. We make the architectural calls. And together, we build systems that actually belong.