How to write good prompts for AI coding

Why prompt quality decides output quality

Understand why vague prompts produce vague code and learn the mental model for writing clear, scoped instructions.

AI coding agents don't read your mind. They read your prompt. A vague prompt forces the model to guess — and guesses compound into code you'll spend longer fixing than writing yourself.

The core mental model

Think of a prompt as a function call to the AI:

  • Input: your prompt (arguments).
  • Processing: the model's training and context window.
  • Output: generated code.

Garbage in, garbage out. The clearer your input, the more predictable the output.
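The analogy can be made concrete in code. Here is a minimal sketch (the function names and `PromptSpec` shape are invented for illustration, not part of any real API): a vague prompt is like calling a loosely typed function that must guess what you meant, while a specific prompt is like a precise signature where every decision point is an explicit argument.

```typescript
// Vague prompt: a single free-form string. The callee must guess
// the stack, auth strategy, and design system on its own.
function generateCode(request: string): string {
  return `/* best guess for: ${request} */`;
}

// Specific prompt: each decision point is an explicit, named argument,
// so nothing is left for the model to invent.
interface PromptSpec {
  framework: string;    // e.g. "Next.js server component"
  task: string;         // e.g. "email/password login form"
  libraries: string[];  // e.g. ["react-hook-form", "zod"]
  doneCriteria: string; // e.g. "inline field errors + toast on failure"
}

function generateCodeFromSpec(spec: PromptSpec): string {
  return `/* ${spec.framework}: ${spec.task} using ${spec.libraries.join(", ")} */`;
}

const out = generateCodeFromSpec({
  framework: "Next.js server component",
  task: "email/password login form",
  libraries: ["react-hook-form", "zod"],
  doneCriteria: "inline field errors + toast on failed login",
});
```

The typed version forces the caller to make every choice up front, which is exactly what a well-scoped prompt does.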

What "vague" actually looks like

Here's a prompt a developer might write on their first try:

Make a login page

The model doesn't know your stack, your auth strategy, your design system, or whether "login" means email/password, OAuth, magic links, or all three. It will pick defaults that probably aren't yours.

Now compare:

Create a Next.js server component for an email/password login form.
Use react-hook-form with zod validation.
On submit, call the signIn server action from @/actions/auth.
Show inline field errors and a toast on failed login.

Same task, completely different output. The second prompt removes ambiguity at every decision point.

The two failure modes

Most bad prompts fail in one of two ways:

  • Too broad: "Build me a dashboard." The model invents requirements you never wanted.
  • Too narrow without context: "Add margin-top: 12px to the card." The model applies a fix but doesn't understand the layout problem you're solving, so the next change breaks it again.

The sweet spot is scoped but contextual — tell the agent what you want, where it lives, and why it matters.

A practical test

Before you send a prompt, ask yourself:

  1. Could a junior developer follow this instruction without asking me a question?
  2. Did I name the specific files, functions, or libraries involved?
  3. Did I state what "done" looks like?

If any answer is no, your prompt needs more detail. The ten seconds you spend clarifying will save minutes of back-and-forth or manual fixes.
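The three-question test can even be approximated mechanically. The sketch below is a toy heuristic, not a real linter: the regexes and the eight-word threshold are illustrative assumptions, chosen only to show how each question maps to a concrete check.

```typescript
// Toy heuristic that flags prompts failing the three-question test.
// The patterns below are illustrative assumptions, not a real linter.
function promptWarnings(prompt: string): string[] {
  const warnings: string[] = [];

  // Q2: does it name specific files, paths, or libraries?
  // Matches file-like names (foo.ts), aliased paths (@/actions/auth),
  // or hyphenated package names (react-hook-form).
  const namesSpecifics =
    /[\w-]+\.(ts|tsx|js|py|css)\b|@\/[\w/]+|\b[a-z][\w-]*-[\w-]+\b/.test(prompt);
  if (!namesSpecifics) warnings.push("no specific files or libraries named");

  // Q3: does it state what "done" looks like?
  const statesDone =
    /\b(show|return|display|render|on (submit|success|failure)|should)\b/i.test(prompt);
  if (!statesDone) warnings.push('no statement of what "done" looks like');

  // Q1 (rough proxy): very short prompts rarely survive without questions.
  if (prompt.trim().split(/\s+/).length < 8) {
    warnings.push("too short to be unambiguous");
  }

  return warnings;
}
```

Run against the two prompts from earlier, the vague one trips every check while the specific one passes clean, which mirrors the manual review you would do yourself.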