Free course: How to write good prompts for AI coding

Iterating and refining: from draft to production

Learn how to review AI output critically, refine your prompts incrementally, and build a personal prompt library.

10 min

Your first prompt rarely produces production-ready code. That's fine — the skill isn't writing a perfect prompt once. It's knowing how to review, refine, and converge fast.

Review before you accept

Treat AI output like a pull request from a new teammate. Check for:

  • Correctness: does it actually do what you asked?
  • Side effects: did it change files or logic you didn't mention?
  • Style: does it follow your project's patterns, or did it invent new ones?

Never merge AI code you haven't read. The time you save generating it is wasted if you debug mystery code later.

Refine incrementally, don't restart

When the output is close but not right, refine — don't rewrite your prompt from scratch.

First attempt:

Context: src/lib/cache.ts exports a getCache function using Redis.
Task: Add a TTL parameter to getCache with a default of 60 seconds.

The agent adds the parameter but uses EX instead of your project's PX (milliseconds) convention. Follow up with a targeted fix:

Change the TTL in getCache to use PX (milliseconds) instead of EX.
Convert the seconds default to 60000ms.

This is faster and more reliable than rewriting the whole prompt. Each refinement pass narrows the gap.

Stack your context across turns

AI agents keep conversation history. Use that. After a few rounds, you can write shorter prompts because the agent already knows your files and patterns:

Now add a unit test for the getCache TTL behavior.
Use the same vitest setup as the other tests in src/lib/__tests__/.

You don't need to re-explain your stack every turn — just reference what's already in the conversation.

Build a personal prompt library

When a prompt produces great results, save it. Over time, you'll build templates for your common tasks:

  • "Add a new API route with error handling" → saved prompt skeleton.
  • "Create a React component with props, types, and tests" → saved prompt skeleton.
  • "Refactor function X to use pattern Y" → saved prompt skeleton.

Store these in a markdown file in your repo, a notes app, or a tool like 99prompt. Reusing proven prompts beats improvising every time.
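As a concrete — and entirely hypothetical — example, the first skeleton might look like this, with placeholders you fill in per task:

```
## Saved prompt: add API route with error handling

Context: <file path> — <framework>, <validation library>.
Task: Add a <METHOD> route for <resource>.
Constraints:
- Validate input and return 400 on bad requests.
- Match the error-handling pattern in <existing route file>.
- Follow the response shape used by the other routes.
```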

The feedback loop

The full cycle looks like this:

  1. Write a structured prompt (context, task, constraints).
  2. Review the output critically.
  3. Refine with a short follow-up prompt.
  4. Save prompts that work well.
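Step 1's structure can itself be kept as a reusable template:

```
Context: <files, stack, and conventions the agent needs>
Task: <one concrete change>
Constraints: <style rules, APIs to use or avoid, output format>
```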

Each cycle makes you faster. After a few weeks, writing good prompts becomes second nature — and your AI agent starts feeling like a reliable pair programmer instead of a wild card.