
You can safely ignore advice about coding agents that doesn’t mention loop structure. The single biggest mistake people make when using AI coding agents is writing prompts that have no loop condition — no way for the agent to validate its own work, iterate, and converge on a correct solution. Once you understand this, your results will improve dramatically. “Agent” is an overloaded term, so to be precise: throughout this post, the word “agent” refers to simonw’s definition — “a language model which runs tools in a loop to achieve a goal.”
If that definition is new to you, read fly.io’s guide on writing agents or ampcode’s guide and try building your own micro-agent first before continuing. It’s a worthwhile exercise.

What is this loop you speak of?

Consider this prompt asking Claude Code to write a generateSlug function:
add a new function to the @index.ts file called `generateSlug` which accepts a blog post title and returns a URL-friendly slug
Sounds clear, and like it should work, right? It won't. The result will be missing unicode handling, special-character handling, multiple-space collapsing, and edge cases. You can curse JavaScript all you want, but this is your fault, not Claude's. The prompt gives the agent nothing to test against. It takes one shot and stops. You want something more like this instead:
look at @index.ts and start by making sure you have a way to unit test functions which get added to that file. Add jest and new ts files if you need to, add a new script to @package.json. Then scaffold a new generateSlug function in @index.ts, write robust tests for it covering unicode, special chars, multiple spaces, edge cases, run them, watch them fail, implement the slug generator until they all pass. Unit tests should be in the same index.ts file as the function implementation.
The second prompt gives the agent a loop: write tests, run them, fix the implementation, repeat until everything passes.
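For reference, here is the kind of implementation the loop tends to converge on — a sketch, not the one true answer, since the exact behaviors are whatever your tests pin down:

```typescript
// A plausible generateSlug after the test loop finishes. The behaviors
// (unicode normalization, special-char stripping, whitespace collapsing)
// are the ones the second prompt asks the tests to cover.
function generateSlug(title: string): string {
  return title
    .normalize("NFKD")                // decompose accented characters
    .replace(/[\u0300-\u036f]/g, "")  // strip the combining diacritics
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, "")     // drop remaining special characters
    .replace(/[\s-]+/g, "-")          // collapse spaces/hyphens to one "-"
    .replace(/^-+|-+$/g, "");         // trim leading/trailing hyphens
}

// e.g. generateSlug("Héllo,  Wörld!") → "hello-world"
```

The point isn't this particular regex chain — it's that the agent only arrives at something this thorough because failing tests keep sending it back.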

It’s all the agent loop

The key difference is this phrase from the second prompt:
“run them, watch them fail, implement the slug generator until they all pass”
This gives the agent a loop condition — a test to run repeatedly while iterating. Without it, the agent takes one shot and stops. With it, the agent keeps working until satisfied. The agent loop is the mechanism that transforms a language model from “a thing that predicts text” into “a thing that actually solves problems.” When you give an agent a concrete, runnable validation condition, you’re engaging the loop. When you don’t, you’re just asking for a first draft.

Common loop condition patterns

The pattern is always the same: describe the work, specify the validation, then tell the agent to iterate until the validation passes. Here are three templates you can adapt directly.
1. [describe feature], write tests for [key behaviors], run them, fix until they all pass
Example: “Add a formatCurrency function to utils.ts, write tests covering negative numbers, zero, large values, and locale formatting, run them, fix until they all pass.”
2. reproduce the bug in [file/test], fix the issue, verify the bug no longer occurs and all tests pass
Example: “In the auth module, reproduce the session expiry bug with a test that logs in, waits, and verifies the token is invalid. Fix the issue, verify the bug no longer occurs and all tests pass.”
3. [make changes], run the build, fix any errors or type issues until it compiles successfully
Example: “Migrate the config module to strict TypeScript, run the build, fix any type errors until it compiles successfully.”
All three patterns include a concrete, runnable check: tests pass or fail, builds succeed or error, bugs reproduce or don’t. Agents can take these abstract descriptions and turn them into concrete tools they can run over and over until the condition is met.
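To make the first template concrete, here is what the formatCurrency example might turn into — the function name and behaviors come from the example prompt, but the implementation and assertions are assumptions for illustration (jest's expect calls shown as plain assertions for brevity):

```typescript
// Hypothetical formatCurrency from the example prompt — a sketch assuming
// Intl.NumberFormat is acceptable for locale-aware formatting.
function formatCurrency(
  amount: number,
  locale: string = "en-US",
  currency: string = "USD"
): string {
  return new Intl.NumberFormat(locale, { style: "currency", currency }).format(amount);
}

// The loop condition: concrete checks the agent runs, watches fail,
// and fixes the implementation against until they all pass.
console.assert(formatCurrency(0) === "$0.00");                  // zero
console.assert(formatCurrency(-5) === "-$5.00");                // negative numbers
console.assert(formatCurrency(1234567.89) === "$1,234,567.89"); // large values
formatCurrency(1234.56, "de-DE", "EUR");                        // locale formatting
```

Each assertion maps directly to a behavior named in the prompt, which is what makes the loop condition runnable rather than aspirational.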
Prompting outside of this pattern is the equivalent of grabbing a junior dev three shots past Ballmer’s Peak and asking them to fix a bug. They’ll make a change, call it done, and hand it back — whether it works or not.

Pairing loop conditions with planning mode

Different tasks call for different loop conditions, and knowing which condition to use for a given situation is a skill that comes with experience. Senior developers recognize these intuitively: unit tests for pure functions, integration tests for API calls, end-to-end tests for user flows, build checks for type safety. If you’re less experienced and unsure what validation makes sense for a task, use the planning mode offered by Claude Code, Cursor, or your coding agent of choice. Describe the task and ask the agent to suggest an approach including what it will use to validate its work. That meta-prompt will often surface the right loop condition for you.
When in doubt, ask the agent: “What tests or checks would prove this is working correctly?” Then include the answer in your prompt as the loop condition.

Conclusion

The gap between a prompt that produces a mediocre first draft and a prompt that produces working code is almost always the loop condition. Describe the work, specify validation, and tell the agent to iterate until it passes. That’s the whole pattern. Good luck out there.