Human On the Loop
There’s a moment every developer hits with AI coding agents: you’re running parallel tasks, the agent asks for approval on one, you’re thinking about another, and a third just finished with a decision you wouldn’t have made.
You realize the "human in the loop" model doesn't scale for coding with AI. It is great for production workflows and business processes: you do want a confirmation step before granting someone a large discount. But for AI-assisted development, it is a bottleneck.
Agentic coding promises speed. But if an agent needs approval at every step — file edits, tool usage, clarifications — you're not scaling; you're adding context-switching overhead. This works for one sequential task. It collapses with two or three agents. Your attention becomes the scarce resource.
Human On the Loop
“In the loop” means a human must approve every action. “On the loop” means the system acts autonomously while a human monitors and can intervene. This model is gaining traction in AI-assisted development — developers become builders of the code factory rather than writers of individual code.
Instead of approving micro-steps:
- Define constraints upfront
- Let the agent execute autonomously
- Verify the final output against those constraints
In the loop: approve → approve → approve
On the loop: define → execute → verify
This model scales. You review outputs sequentially after agents finish instead of interrupting yourself constantly. Even when the implementation is partly wrong, this is still more efficient: an agentic coding assistant fixes most issues quickly once pointed at them.
You don’t start fully autonomous. You earn it. Early on, you approve more, observe how the agent interprets your constraints, and tighten the rules where it drifts. Over weeks, the constraints mature and the approval surface shrinks. Trust is built incrementally, not granted.
When you adopt “human on the loop,” the effort doesn’t disappear — it moves into documentation.
My CLAUDE.md consists of:
- Architecture patterns
- Module blueprints and templates
- Naming conventions
- Dependency rules
- Testing requirements
- Forbidden patterns
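As a sketch, a fragment of such a CLAUDE.md might look like the following. The specific rules are invented for illustration; yours will reflect your own system:

```markdown
## Architecture
- Backend modules follow the 3-layer pattern: route -> service -> repository.
- Routes never access the database directly.

## Dependencies
- Caching goes through the platform KV store; do not introduce Redis.

## Testing
- Every service method gets a unit test with mocked boundaries.

## Forbidden
- No `any` types.
- No new top-level directories without approval.
```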
These markdown files are the product. They’re the constraints that make autonomy safe. Writing them forces you to articulate architecture you previously kept in your head. It makes you a better architect.
Architecture Is the Multiplier
Loose architecture doesn’t survive agents.
Every ambiguity becomes a decision point — and agents resolve ambiguity using generic training patterns, not your system’s intent. Ask an agent to “add caching” and it reaches for Redis. Your system uses Cloudflare KV. Without a constraint, you get the wrong cache every time.
Strict, repeatable structure changes everything:
- Backend modules follow the same 3-layer pattern
- Frontend features share a directory convention
- Scaffolding templates define structure
- Agents fill in domain logic, not architecture
Humans define structure. Agents implement within it. That’s the right division of labor.
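To make the 3-layer pattern concrete, here is a minimal sketch of one backend module. All names (`User`, `userRepository`, `handleRegister`) are invented for illustration, not taken from any real codebase:

```typescript
// Hypothetical 3-layer module: route -> service -> repository.

type User = { id: string; email: string };

// Layer 3: repository — the only layer that touches storage.
const userRepository = {
  store: new Map<string, User>(),
  findById(id: string): User | undefined {
    return this.store.get(id);
  },
  save(user: User): void {
    this.store.set(user.id, user);
  },
};

// Layer 2: service — domain logic, no storage or HTTP details.
const userService = {
  register(id: string, email: string): User {
    const user = { id, email };
    userRepository.save(user);
    return user;
  },
};

// Layer 1: route — translates HTTP-ish input into service calls.
function handleRegister(body: { id: string; email: string }) {
  const user = userService.register(body.id, body.email);
  return { status: 201, body: user };
}
```

When every module looks like this, the agent's job reduces to filling in the service layer; the shape of the module is never up for negotiation.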
Constraint Engineering
Production agentic coding is constraint engineering. What actually works:
1. Specialized Skills
Instead of one mega-prompt, use focused skills:
- Testing
- Refactoring
- Verification
- Deployment
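In Claude Code, each focused skill can live as its own file. A minimal sketch, loosely following the SKILL.md frontmatter convention (field names and the body are illustrative; check your tool's docs for the exact format):

```markdown
---
name: testing
description: Writes and runs tests following the project's layered testing rules.
---

When asked to test a module:
1. Add unit tests with mocked boundaries first.
2. Add one integration test per public entry point.
3. Run the suite and report failures before editing further.
```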
2. Storybook as a Design Contract
Documented components prevent UI drift. Agents reference existing props and patterns instead of inventing new ones.
3. End-to-End Type Safety
Shared schemas. Generated clients. Types flowing from DB to frontend. When agents modify APIs, TypeScript exposes every affected consumer instantly. Without this, silent contract drift happens at machine speed.
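A stripped-down sketch of what "types flowing from DB to frontend" means in practice. The endpoint and field names are invented; a real setup would typically derive these types from a schema library or generated client:

```typescript
// shared/contracts.ts — single source of truth for the API shape.
interface InvoiceDto {
  id: string;
  amountCents: number;
  // Renaming or removing a field here makes every consumer below
  // fail to compile, so contract drift is caught at build time.
}

// server/handlers.ts — must return the shared type.
function getInvoice(id: string): InvoiceDto {
  return { id, amountCents: 1999 };
}

// client/render.ts — consumes the same type.
function renderInvoice(invoice: InvoiceDto): string {
  return `Invoice ${invoice.id}: $${(invoice.amountCents / 100).toFixed(2)}`;
}
```

If an agent renames `amountCents` on the server, the client stops compiling immediately — that is the "every affected consumer exposed instantly" property.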
4. Templates for Everything
Routes, services, repositories, tests — all templated. Templates eliminate creative interpretation. Consistency becomes automatic.
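A scaffolding template can be as simple as a function that stamps out the fixed structure, leaving only domain logic for the agent. This helper and its output format are hypothetical:

```typescript
// Hypothetical scaffolder: generates a service file from a fixed
// template so agents fill in domain logic, not architecture.
function scaffoldService(name: string): string {
  return [
    `// ${name}.service.ts — generated from template; edit logic only`,
    `export class ${name}Service {`,
    `  // TODO(agent): implement domain methods here`,
    `}`,
    ``,
  ].join("\n");
}
```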
5. Layered Testing
Constraints define:
- Unit tests (mocked boundaries)
- Integration tests (full stack)
- Frontend semantic tests
- E2E flows
Agents can write tests — but only if you define what “good” testing means.
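The unit-versus-integration split above can be sketched framework-free. In practice you would use a test runner like Vitest or Jest; the `PriceService` example and its numbers are invented for illustration:

```typescript
// The boundary we will mock in unit tests, and use for real in
// integration tests.
interface RateProvider {
  taxRate(region: string): number;
}

class PriceService {
  constructor(private rates: RateProvider) {}
  total(net: number, region: string): number {
    return Math.round(net * (1 + this.rates.taxRate(region)));
  }
}

// Unit test: the boundary (RateProvider) is mocked out.
const mocked: RateProvider = { taxRate: () => 0.2 };
const unitResult = new PriceService(mocked).total(100, "anywhere");

// Integration test: the real provider participates.
const realRates: RateProvider = {
  taxRate: (region) => (region === "EU" ? 0.21 : 0.0),
};
const integrationResult = new PriceService(realRates).total(100, "EU");
```

Writing this split down in your constraints is what lets an agent decide, unsupervised, which kind of test a given change needs.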
6. Dead Code Analysis
Agents over-generate. Run dead code analysis after sessions. Delete unused exports. Prevent bloat. Verification becomes a feedback loop.
7. Context and Memory
Agents forget between sessions. Structure your context so they don’t have to remember — session summaries, decision logs, and semantic memory mean each new session starts with the full picture, not a blank slate. Without this, agents re-discover (and re-decide) the same things repeatedly.
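One lightweight way to implement this is a decision log and session summary the agent reads at start-up. The format below is just one possible sketch, and the entries are invented examples:

```markdown
## Decision log
- Caching uses Cloudflare KV, not Redis (latency and platform fit).
- All API schemas live in shared/contracts; clients are generated.

## Last session
- Finished: invoice export endpoint plus tests.
- Open: dead-code pass pending on the reporting module.
```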
Summary
The ceiling for agentic coding isn’t prompt engineering. It’s architecture. If your system lacks clear boundaries, consistent patterns, end-to-end types, and a real testing strategy — agents amplify the chaos. If your architecture is disciplined, agents amplify leverage.
Constraints also go stale. When your CLAUDE.md references a pattern you’ve since replaced, the agent faithfully reproduces the old way. Maintaining constraints is ongoing work — but it’s architect-level work, not busywork.
The markdown files, templates, type systems, and test patterns — that's the real product now. The code is just the output. The job of a developer is shifting from writing code to writing constraints and maintaining high-quality documentation. Like it or not, this might be the future.
About the Author
Low-code enthusiast, automation advocate, open-source supporter, digital transformation lead consultant, Pega LSA (certified since 2018), AI practitioner, JavaScript full-stack developer, and people manager.
13+ years of experience in IT, focused on designing and implementing large-scale systems for the world's biggest companies. Professional knowledge of software design, enterprise architecture, project management and delivery methods, BPM, CRM, low-code platforms, and the Pega 8/23/24 suite.