There’s an entire cottage industry built on “prompt engineering secrets.” YouTube thumbnails with red arrows pointing at magical incantations. Twitter threads promising “the one prompt that will change everything.” PDF collections of “500 prompts every developer needs.”

Most of it is noise. The developers who get the best results from AI don’t use tricks. They think clearly, and that clarity shows up in their prompts naturally.

Prompt tricks vs. clear thinking

Let’s look at a real example. Say you need to build a rate limiter for an API.

The “prompt trick” approach:

You are an expert senior backend engineer with 20 years of experience. Think step by step. Take a deep breath. Write the best possible rate limiter implementation. Be very careful and thorough. This is very important for my career.

The clear thinking approach:

Build a token bucket rate limiter for our Express API.

Requirements:
- 100 requests per minute per API key
- Headers: X-RateLimit-Remaining, X-RateLimit-Reset
- Return 429 with retry-after when exceeded
- Store state in Redis (we use ioredis)
- Must work behind our nginx load balancer (multiple Node processes share state)

Our existing middleware pattern:
export const middleware = (req, res, next) => { ... }

The first prompt wraps vague instructions in performative decoration. The second one communicates exactly what’s needed. No “you are an expert” preamble changes the model’s architecture. But specifying Redis, ioredis, the middleware signature, and the multi-process constraint absolutely changes the output you get.
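To make the difference concrete, here's roughly the kind of output the second prompt steers toward. This is a minimal in-memory sketch, not a definitive implementation: a real version would keep each bucket in Redis (via ioredis) so the multiple Node processes behind nginx share state, and all names here are illustrative.

```javascript
// Minimal token bucket middleware sketch. The in-memory Map stands in for
// Redis; behind a load balancer the { tokens, lastRefill } state must live
// in Redis so every Node process sees the same counts.
const LIMIT = 100;            // 100 requests...
const WINDOW_MS = 60_000;     // ...per minute
const REFILL_PER_MS = LIMIT / WINDOW_MS;

const buckets = new Map();    // apiKey -> { tokens, lastRefill }

// In the project's middleware pattern this would be `export const rateLimit`.
const rateLimit = (req, res, next) => {
  const key = req.get("X-API-Key") || req.ip;
  const now = Date.now();
  const b = buckets.get(key) || { tokens: LIMIT, lastRefill: now };

  // Refill in proportion to elapsed time, capped at the bucket size.
  b.tokens = Math.min(LIMIT, b.tokens + (now - b.lastRefill) * REFILL_PER_MS);
  b.lastRefill = now;

  if (b.tokens < 1) {
    buckets.set(key, b);
    const retryAfterSec = Math.ceil((1 - b.tokens) / REFILL_PER_MS / 1000);
    res.set("Retry-After", String(retryAfterSec));
    res.set("X-RateLimit-Remaining", "0");
    return res.status(429).json({ error: "rate limit exceeded" });
  }

  b.tokens -= 1;
  buckets.set(key, b);
  res.set("X-RateLimit-Remaining", String(Math.floor(b.tokens)));
  res.set("X-RateLimit-Reset",
    String(Math.ceil(now / 1000 + (LIMIT - b.tokens) / REFILL_PER_MS / 1000)));
  next();
};
```

Notice that every line above traces back to a requirement in the prompt: the limit, the headers, the 429, the shared-state caveat. Nothing in the "expert engineer" preamble could have produced that specificity.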

The three layers of clear thinking

Good prompts aren’t written – they’re thought through. Here’s the mental model we’ve found most useful.

Layer 1: Intent

What are you actually trying to accomplish? Not “write a function” but “solve this specific problem in this specific context.” Most developers jump to implementation language before they’ve articulated intent, and AI mirrors that vagueness right back.

Before you type anything, ask yourself: If a smart colleague who just joined the team asked “What are you building and why?” – what would you say?

That answer is your prompt’s foundation.

Layer 2: Constraints

Every real project has constraints that exist nowhere in the code itself. The database you’re using. The framework version. The deployment environment. The team’s coding standards. Performance requirements. Compliance rules.

Constraints are the most undervalued part of any prompt. They eliminate entire categories of wrong answers before the AI even starts generating. When someone says “AI gave me a terrible answer,” 80% of the time the AI didn’t know about a constraint the developer considered obvious.

Here’s a useful exercise: before you prompt, list every constraint that would matter if you were handing this task to a contractor you’ve never worked with. That’s the context gap AI needs filled.

Layer 3: Shape

What should the output look like? Not the content – the shape. Do you want a single function or a class? A module with exports or an inline implementation? Should it include error handling, or will you add that yourself? Do you want tests alongside the implementation?

Specifying shape isn’t micromanagement – it’s communication. When you tell a colleague “just give me a quick utility function, nothing fancy,” you’re shaping the output. Do the same with AI.

Return this as:
- A single exported function (not a class)
- With JSDoc comments
- No external dependencies
- Include 3-4 unit tests using vitest
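As a sketch of what that shape spec might produce, here's a hypothetical utility in that form. `slugify` is an invented example, not from any real codebase, and the `export` keyword and vitest tests are omitted so the snippet stands alone.

```javascript
/**
 * Converts arbitrary text into a URL-safe slug.
 * (Hypothetical output matching the shape spec above: one small
 * function, JSDoc, no external dependencies.)
 *
 * @param {string} input - Raw text to convert.
 * @returns {string} Lowercased, hyphen-separated slug.
 */
const slugify = (input) =>
  input
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics to "-"
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
```

The point isn't this particular function. It's that the spec ruled out a class, a dependency, and an undocumented one-liner before generation even started.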

Why “think step by step” actually works (but not for the reason you think)

The famous “chain of thought” prompting technique does improve results – but not because you’re unlocking a secret mode. It works because asking AI to show its reasoning forces it to decompose problems the same way good engineers do.

The useful version isn’t “think step by step.” It’s being explicit about which steps matter:

1. First, identify which existing endpoints need rate limiting
2. Then design the token bucket data structure for Redis
3. Then implement the middleware
4. Finally, add the response headers

This mirrors how you’d actually approach the problem. You’re not tricking AI into being smarter – you’re sharing your problem decomposition, which is genuinely useful context.
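Sharing the decomposition also gives you something concrete to review the output against. Step 2, for instance, reduces to a small piece of arithmetic. Here's a sketch with in-memory state standing in for Redis; the names are illustrative.

```javascript
// Step 2 in miniature: the bucket state and its refill rule.
// In the Redis design, { tokens, lastRefill } would live under a
// per-API-key key and this update would run atomically (e.g. in a
// Lua script) so concurrent Node processes can't race each other.
const LIMIT = 100;        // requests per window
const WINDOW_MS = 60_000; // one minute

// Tokens grow in proportion to elapsed time and never exceed LIMIT.
const refill = (bucket, now) => ({
  tokens: Math.min(LIMIT, bucket.tokens + ((now - bucket.lastRefill) * LIMIT) / WINDOW_MS),
  lastRefill: now,
});
```

If the AI's output for step 2 doesn't reduce to something this simple, that's a signal to push back before moving on to steps 3 and 4.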

Common thinking failures (and how to fix them)

The vague handwave: “Make it better” or “optimize this.” Better how? Faster execution? More readable? Better error messages? AI can’t optimize for a metric you haven’t defined.

The kitchen sink: Dumping an entire file and saying “fix the bugs.” Which bugs? What behavior are you seeing vs. expecting? The more specific the symptom, the more accurate the diagnosis.

The assumption of omniscience: “Update the user model to match our new schema.” What new schema? AI doesn’t have your latest Slack conversation or your Jira ticket. If the context exists only in your head, it doesn’t exist for AI.

The premature how: “Use a recursive CTE to query the org chart.” Maybe a recursive CTE is the right answer, maybe it isn’t. If you prescribe the implementation, you’ll get exactly what you asked for – even when a simpler approach would work better. Lead with the problem, not the solution, unless you’ve already decided on the approach.

The conductor mental model

An orchestra conductor doesn’t play every instrument. They don’t even tell musicians which notes to play – that’s in the score. What a conductor provides is interpretation, timing, and coordination.

Apply that to AI development: the model plays the instruments. It supplies the syntax, the patterns, the boilerplate. You conduct. You decide what the piece is for, which constraints matter, and when the result is good enough.

That’s the prompt mindset. You’re not writing instructions for a machine. You’re providing the context and judgment that turn raw capability into useful output.

One habit that changes everything

Before you send any prompt to an AI coding tool, pause for ten seconds and ask:

“If I gave this instruction to a competent developer who knows nothing about my project, would they have enough to produce what I need?”

If the answer is no, you’re about to waste a round-trip. Add what’s missing. It takes thirty seconds of thinking to save five minutes of iteration.


The best prompt engineers aren’t engineers at all. They’re clear thinkers who happen to be writing prompts. Master the thinking, and the prompting takes care of itself.