The gap between “meh” and “genuinely useful” almost always comes down to how you set up the conversation — not how smart the model is.


Most people use AI the way they’d use a search engine: type something vague, hope for the best, get frustrated when the output is wrong, then either give up or spend longer fixing the result than they would have spent doing the work themselves.

This module covers the patterns that reliably produce better output across Claude, ChatGPT, and any other LLM you’ll encounter. They’re not theoretical. They’re the difference between “I tried AI and it didn’t work” and “I can’t believe I used to do this manually.”


Set the Frame Before You Ask the Question

The single highest-leverage prompting technique is telling the AI who it is before you tell it what to do. This isn’t cute roleplay — it activates domain-specific knowledge and shifts vocabulary, tone, and depth in ways that generic prompting doesn’t.

Role Assignment

Instead of:

“Review this vendor contract for issues.”

Try:

“You are a contracts attorney with 15 years of experience in SaaS vendor agreements. You specialize in identifying terms that disadvantage the buyer. Review this vendor contract and flag any clauses that are unusual, one-sided, or that I should push back on during negotiation.”

The generic prompt gets you a surface-level summary. The framed prompt gets you specific clause-level analysis with negotiation recommendations, because you’ve told the model what lens to look through and what kind of output matters.

How to Build Good Frames

A strong frame has three parts:

Identity: Who is this expert? Not just “a lawyer” but “a contracts attorney who specializes in SaaS vendor agreements.” Specificity unlocks depth.

Orientation: What’s their stance? “You specialize in identifying terms that disadvantage the buyer” tells the model which side of the table to sit on. Without this, you get neutral analysis when you wanted advocacy.

Task context: What are they being asked to do and why? “Flag clauses that are unusual, one-sided, or that I should push back on during negotiation” is a completely different job than “summarize this contract.”

Frames Worth Keeping in Your Back Pocket

Financial review: “You are a CFO reviewing budget projections for a board presentation. You are naturally skeptical of optimistic assumptions and want to stress-test every number. Your reputation depends on these being defensible.”

Writing feedback: “You are a senior editor at a publication known for clear, concise writing. You believe most drafts are 30% too long. Your feedback is specific, actionable, and blunt — you don’t soften criticism because you know the writer wants to improve.”

Technical decision: “You are a senior software architect advising a small team. You value simplicity over cleverness, and you’ve seen too many teams over-engineer solutions. When in doubt, you recommend the boring technology that works.”

Strategy review: “You are a management consultant conducting a strategic review. You are paid to find the things the internal team is too close to see. You ask uncomfortable questions.”

The pattern: be specific about the expertise, give them a disposition, ground them in why the task matters.
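If you call models programmatically, the three frame parts map naturally onto a small helper. A minimal sketch (the function and argument names are my own, not from any SDK):

```python
def build_frame(identity: str, orientation: str, task: str) -> str:
    """Compose a framed prompt from the three parts described above:
    identity (who the expert is), orientation (what stance they take),
    and task context (what they're asked to do and why)."""
    return f"You are {identity}. {orientation} {task}"

prompt = build_frame(
    identity="a contracts attorney with 15 years of experience in SaaS vendor agreements",
    orientation="You specialize in identifying terms that disadvantage the buyer.",
    task=(
        "Review the contract below and flag any clauses that are unusual, "
        "one-sided, or worth pushing back on during negotiation."
    ),
)
```

The same string works as a system prompt or as the opening of a chat message; the point is that each of the three parts gets filled in deliberately rather than left implicit.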


Force a Plan Before Execution

Left to its own devices, an AI will start producing output immediately. This is almost never what you want for anything complex. The fix: make it plan first.

The Check-In Pattern

Add this to any non-trivial request:

“Before you start, present a brief plan of how you’ll approach this. Include what you’ll cover, what assumptions you’re making, and anything you’d want to clarify. I’ll review and give feedback before you proceed.”

This does three things simultaneously: surfaces misunderstandings before they propagate through an entire document, gives you a map of what’s coming so you can redirect early, and forces the model to organize its thinking — which almost always improves what follows.
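If you template prompts in code, the check-in instruction can be appended mechanically to any non-trivial request. A minimal sketch (names are my own):

```python
# The check-in instruction from above, kept as a reusable constant.
CHECK_IN = (
    "Before you start, present a brief plan of how you'll approach this. "
    "Include what you'll cover, what assumptions you're making, and anything "
    "you'd want to clarify. I'll review and give feedback before you proceed."
)

def with_plan_first(request: str) -> str:
    # Turn a direct request into a plan-first request.
    return f"{request}\n\n{CHECK_IN}"

framed = with_plan_first("Draft a competitive analysis of three CRM vendors.")
```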

When to Use It

You don’t need a plan for “What’s the capital of France?” You absolutely need one for anything multi-part or long-form: reports, analyses, project plans, anything where a misunderstanding at the start compounds through the entire output.

The Outline Variant

For documents and reports, ask for an outline specifically:

“I need a competitive analysis of [three vendors] for [specific use case]. Before writing anything, give me a proposed outline with the key comparison dimensions you’d evaluate. I want to make sure we’re covering what matters before you invest time in the full analysis.”

Faster than a full plan and gives you enough to steer. If the outline is missing a dimension you care about — say, vendor lock-in risk — you catch it before a 2,000-word report gets built on an incomplete framework.


Review Like You’re Managing, Not Rubber-Stamping

The AI gives you something, you skim it, say “looks good,” and move on. You’ve just accepted a first draft from a junior employee with no feedback loop. Stop doing that.

The Coaching Mindset

Treat AI output the way you’d treat work from a smart but inexperienced team member: read it critically, name exactly what’s weak and why, and send it back for revision instead of quietly fixing it yourself.

What Good Feedback Looks Like

Bad: “This is pretty good, maybe tighten it up a bit.”

Better: “Three things. First, the executive summary buries the recommendation — lead with it. Second, Section 3 makes a claim about market size without a source — either cite something or flag it as an estimate. Third, the risk section reads like an afterthought — give each risk a likelihood and impact rating so the reader can prioritize.”

The AI will execute on specific feedback with impressive precision. It just can’t generate that feedback for itself. That’s your job.


QA Through a Clean Instance

One of the most underused techniques, and devastatingly effective. After the AI produces work, take the output to a fresh conversation and have the AI critique its own work from the opposite perspective.

The Counter-Reviewer Pattern

You’ve just had Claude build a budget projection. Open a new conversation:

“You are a CEO reviewing budget numbers prepared by your finance team. You are concerned they are too optimistic and need to present defensible numbers to your board. Review the following budget and identify every assumption that could be challenged, every number that seems aggressive, and any gaps in the analysis.”

Then paste the budget.

Why a fresh instance? The original conversation has a form of sunk-cost bias: the model that built the thing is working from a context full of its own reasoning, and it tends to defend that reasoning. A fresh conversation has no context to protect. And you’re explicitly framing it as an adversary, giving it permission to be critical.

Other Counter-Review Frames

For proposals: “You are the decision-maker receiving this proposal. You’re looking for reasons to say no because your budget is tight. What weaknesses would you point to?”

For technical plans: “You are a skeptical senior engineer doing a code review. You’ve seen a lot of over-engineered solutions and you value simplicity. What concerns do you have?”

For marketing copy: “You are the target customer. You’ve seen a hundred pitches this week and you’re tired of hype. Does this copy make you want to learn more, or does it trigger your BS detector?”

For contracts you’ve drafted: “You are the other party’s lawyer. Find everything that favors the drafter and suggest modifications.”

Use counter-review for anything with stakes: financial projections going to stakeholders, proposals going to clients, plans you’ll spend real time and money executing. Five minutes of counter-review catches problems that hours of self-review won’t.
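The counter-reviewer prompt itself is just the adversarial frame plus the pasted work product, which makes it easy to template. A minimal sketch (names are my own); the key step, running it in a fresh conversation, happens outside the code:

```python
def counter_review_prompt(role: str, concern: str, work_product: str) -> str:
    """Build an adversarial review prompt. Paste the result into a fresh
    conversation so the reviewing instance has no context to defend."""
    return (
        f"You are {role}. {concern} "
        "Review the following and identify every assumption that could be "
        "challenged, anything that seems aggressive, and any gaps in the "
        "analysis.\n\n"
        f"{work_product}"
    )

review = counter_review_prompt(
    role="a CEO reviewing budget numbers prepared by your finance team",
    concern=(
        "You are concerned they are too optimistic and need to present "
        "defensible numbers to your board."
    ),
    work_product="[paste the budget here]",
)
```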


Give Permission to Push Back

By default, AI assistants are agreeable. They’ll run with your assumptions even when those assumptions have holes. You need to explicitly break this pattern.

The Permission Prompt

Add some version of this to requests where your assumptions might be wrong:

“I think [X approach] makes sense, but tell me if there’s a problem in my assumptions. Don’t just agree with me — if there’s a better way to frame this, say so.”

One sentence. Changes the dynamic from “execute my plan” to “help me pressure-test my plan.”

Variations

For scoping: “I want to build [X]. Before we start, tell me what I might be underestimating. What questions should I be asking that I’m not?”

For strategy: “My plan is to [X]. Play devil’s advocate — what’s the strongest argument against this approach?”

For estimates: “I think this will take about two weeks. Am I being realistic? What factors might make this take longer?”

For decisions: “I’m leaning toward [Option A] over [Option B]. Before I commit, make the best case for Option B. What am I not seeing?”

Most bad AI interactions fail not because the model can’t do the work, but because it optimized for the wrong objective. When you say “build X” and X is based on a flawed assumption, you get a beautifully executed wrong thing. Permission to push back turns the AI from a compliant executor into a thinking partner.


The Structural Stuff That Makes Everything Better

Beyond the five core patterns above, here are a handful of structural techniques worth internalizing.

Show, Don’t Just Tell (Few-Shot Examples)

If you want output in a specific format, show the AI what good looks like. One example is usually enough. Two or three if the pattern is unusual.

“Write product descriptions for these items. Here’s the tone and format I want:

Example: Alpine Trail Runner 3.0 — Built for people who think ‘casual hike’ means 12 miles with 3,000 feet of elevation gain. Vibram outsole grips everything short of ice. Weighs 9.8 oz so you forget you’re wearing shoes until the descent reminds your knees. $149.

Now write descriptions for: [list of products]”

Without the example, you get generic marketing copy. With it, you get copy that matches your voice.
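A few-shot prompt is just the instruction, one or more examples, and the new items, concatenated in a fixed order. A minimal sketch (the function name and the bracketed placeholder items are my own):

```python
def few_shot_prompt(instruction: str, examples: list[str], items: list[str]) -> str:
    """Build a few-shot prompt: one or two concrete examples anchor tone
    and format far better than adjectives like 'punchy' or 'concise'."""
    example_block = "\n\n".join(f"Example: {e}" for e in examples)
    item_block = "\n".join(f"- {item}" for item in items)
    return (
        f"{instruction}\n\n{example_block}\n\n"
        f"Now write descriptions for:\n{item_block}"
    )

prompt = few_shot_prompt(
    instruction="Write product descriptions for these items. Here's the tone and format I want:",
    examples=[
        "Alpine Trail Runner 3.0 -- Built for people who think 'casual hike' "
        "means 12 miles with 3,000 feet of elevation gain. [...] $149."
    ],
    items=["[product 1]", "[product 2]"],
)
```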

Give It an Out for Uncertainty

“If you’re not sure about something, say so rather than guessing. ‘I’m not confident about this, but…’ is always better than a wrong answer presented with confidence.”

One instruction. Meaningfully reduces hallucination. The model would rather be helpful than accurate by default — this reweights toward accuracy.

Structure Complex Requests

For anything with multiple parts, use explicit structure:

“I need three things:

  1. A summary of the current state (2-3 paragraphs)
  2. Three options for how to proceed, with pros and cons for each
  3. Your recommendation with reasoning

Handle them in that order.”

Numbered, sequenced requests get dramatically better compliance than a paragraph that embeds multiple asks.

Use XML Tags for Complex Prompts

When you’re giving Claude a lot of structured input — context documents, examples, constraints — XML tags keep things organized:

“Review the following contract: [paste contract here]

Focus on these areas:

<focus_areas>
  • Termination clauses
  • Liability caps
  • IP assignment
</focus_areas>

Flag anything that deviates from market standard.”

XML tags are a technique Anthropic specifically recommends for Claude — they help the model parse complex inputs more reliably than markdown headers or numbered sections alone. Use them when you’re passing in multiple distinct pieces of context.
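Wrapping each distinct input in its own tag is simple to do mechanically. A minimal sketch (the helper and tag names are my own; nothing here is an Anthropic API):

```python
def xml_tag(name: str, content: str) -> str:
    # Wrap one distinct piece of context in an XML tag so the model can
    # tell exactly where it begins and ends.
    return f"<{name}>\n{content}\n</{name}>"

prompt = "\n\n".join([
    "Review the contract below and flag anything that deviates from market standard.",
    xml_tag("contract", "[paste contract here]"),
    xml_tag(
        "focus_areas",
        "- Termination clauses\n- Liability caps\n- IP assignment",
    ),
])
```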

Set Constraints That Actually Help

Constraints aren’t limitations — they’re guardrails that focus the output:

“Keep the executive summary under 200 words.”

“Write this for an audience that understands accounting but not software engineering.”

“If a claim needs a source and you can’t provide one, flag it with [NEEDS SOURCE] so I can verify.”

“Don’t use jargon unless you define it in context.”


Putting It All Together

Here’s what a well-structured prompt looks like when you combine these techniques:

“You are a financial analyst preparing a quarterly business review for a consulting firm with 12 employees. You are thorough but concise — you know the partners will skim anything longer than 3 pages.

I’m going to share our Q1 financials. Before you build the review, outline what sections you’d include, what metrics you’d highlight, and what comparisons you’d draw. Present this outline for my feedback before writing anything.

I think we had a strong quarter, but push back if the numbers tell a different story. If any figures seem unusual or if you’d want additional context to give a fair assessment, ask.

Format the final output as: executive summary (under 200 words), key metrics table, narrative analysis by business line, and forward-looking section with risks and opportunities.”

That prompt takes about 60 seconds to write and will produce something dramatically better than “Summarize our Q1 financials.” The five techniques — frame, plan, review, counter-review, push back — add maybe two minutes to each interaction and save you from the “redo it three times because the first attempt missed the point” cycle.


Further Reading

Anthropic’s Prompting Guide — the official reference from the team that built Claude. Clear and practical.

Prompt Engineering Guide — community-maintained; covers techniques across all major models, with examples.

Anthropic’s Prompt Best Practices Blog Post — condensed practical tips from Anthropic, good for a quick refresher.

Next: Module 02 — The Vibe Coding Landscape