How to Write Better AI Prompts: 5-Part Framework
A simple 5-part checklist for writing prompts that reliably produce useful outputs—plus a worked example you can copy and adapt.
- LLM
- AI
- Prompting
- Chatbots
- Productivity
When people say they got “AI slop,” what they usually mean is: their prompt didn’t give the model enough structure to aim at.
A large language model is trying to infer what you want from the text you provide. The easiest way to get consistently good results is to treat prompting like a small spec: clear goal, relevant inputs, and unambiguous success criteria.
This post gives you a simple framework you can reuse for almost any prompt.
TL;DR: the 5 parts of a good prompt
A strong prompt usually includes:
- Task / Goal: what you want done
- Context / Source: the information it should use and who it’s for
- Quality / Constraints: rules, boundaries, and how to judge success
- Output format: how you want the response structured
- Role / Persona: who the model should act as
In practice: Task is required. Context, format, and constraints are strongly recommended. Role is optional.
For clarity of teaching, the sections below are ordered by importance. In your final prompt, though, a role (if you use one) typically still appears first.
You don’t always need all five. But if your results feel vague, adding the missing pieces is the fastest fix.
1) Task / Goal
What it is: the specific thing you want the model to do.
Why it helps: without a clear task, the model will guess your intent.
Example phrases:
- “Summarise the following…”
- “Draft a plan for…”
- “Rewrite this in a friendlier tone…”
Common mistake: mixing multiple tasks (“summarise, critique, rewrite, and brainstorm”) without prioritising or splitting them.
2) Context / Source
What it is: the relevant info the model should use, and the audience/purpose for the output.
There are two useful subtypes:
- Source material: text, data, links, constraints it must use
- Situational context: who it’s for, what it will be used for, what matters most
Example phrases:
- “This is for a LinkedIn post aimed at team leads…”
- “Use only the text inside <source>…</source>”
Common mistake: giving a task with no audience or input, then being surprised the response doesn’t match your intent.
3) Quality / Constraints
What it is: boundaries and evaluation rules: length limits, tone, exclusions, citation rules, “don’t invent facts,” and so on.
Why it helps: constraints act like guardrails. They reduce failure modes such as hallucinations, overconfidence, or overly long output.
Example phrases:
- “Keep it under 2,000 characters.”
- “If you’re unsure, say so and list assumptions.”
- “Do not use information outside the provided source.”
Common mistake: leaving “quality” implied (e.g., wanting it concise, but not saying so).
4) Output format
What it is: the shape of the response: paragraphs vs bullets, headings vs none, markdown vs plain text, etc.
Why it helps: it prevents the model from “choosing” a structure you didn’t want.
Example phrases:
- “Return a markdown table with columns X/Y/Z.”
- “Only paragraphs. No bullets or headings.”
Common mistake: asking for multiple formats at once (“a short summary and a detailed plan and a table”) without specifying where each part begins/ends.
5) Role / Persona
What it is: the perspective, expertise, or style you want the model to adopt.
Optional but often high impact: you can get good outputs without a role, but adding one often improves specificity and voice.
Placement note: if you include a role, the common convention is to put it at the top of the prompt.
Why it helps: it narrows the range of possible responses and reduces “generic” output.
Example phrases:
- “You are a senior technical writer…”
- “Act as a pragmatic engineering manager…”
Common mistake: using a role that’s too vague (“be an expert”) without defining what “expert” means in this context.
A practical prompt in the wild
Here’s a realistic prompt someone might send before refining it:

“Hey, can you give me a detailed description of general prompting advice for tools like ChatGPT? Keep it less than 2,000 characters and use a friendly tone, it’s for a LinkedIn post.”
Even though it’s informal and intentionally omits the role, it still contains the main components:
- Task: “give me a detailed description”
- Constraints: “less than 2,000 characters”, “friendly tone”
- Format: implied paragraphs (not explicit, but hinted)
- Context: “for a LinkedIn post”
- Role: missing
If you look back at prompts that worked well for you, you’ll usually find the same pattern: clear task + enough context + explicit constraints.
Upgrade: make each component explicit
This prompt might produce decent output, but we can do better. If we stop implying requirements, state them explicitly, and add a role, we usually get higher-quality results.
Here’s a more explicit version (still asking for the same outcome):
Role: You are an expert communication coach and AI educator who writes engaging, high-performing LinkedIn content.
Your task: Write a detailed description of general prompting advice for using tools like ChatGPT. Focus on practical, real-world tips that help everyday professionals get clearer, more useful answers.
Context: I am writing a LinkedIn post aimed at knowledge workers, team leads, and managers who are curious about AI but not deep technical experts. They want practical guidance so they can use AI more effectively and with more confidence.
Style and tone: Friendly, confident, and practical. Grounded and specific rather than hypey.
Format: Produce one continuous piece of text made up only of paragraphs. Do not use bullet points, numbered lists, headings, emojis, or hashtags.
Constraints and controls:
- Keep the total length under 2,000 characters (including spaces).
- Focus on general prompting advice, not specific tools or API details.
- Prioritise clarity for a LinkedIn feed: short sentences, strong opening, smooth flow.
- If you include examples, keep them simple and relatable to office or knowledge-work scenarios.
Why this tends to work better
- The model doesn’t have to guess the audience or voice.
- The output shape is explicit (so you don’t get surprise bullets/headings).
- Constraints are measurable, so the model can check its draft against clear requirements.
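That last point is worth emphasising: measurable constraints can be checked mechanically, by you or by the model itself. Here is a minimal sketch (the function name and the formatting-marker check are my own illustrative choices, not from the post) that validates a draft against the hard limits in the example prompt:

```python
def check_draft(draft: str, max_chars: int = 2000) -> list[str]:
    """Return a list of constraint violations for a LinkedIn-style draft."""
    problems = []
    # Measurable constraint: total length, including spaces.
    if len(draft) > max_chars:
        problems.append(f"Too long: {len(draft)} characters (limit {max_chars}).")
    # Crude proxy for the "no bullets, headings, or hashtags" rule.
    for marker in ("#", "- ", "* "):
        if any(line.lstrip().startswith(marker) for line in draft.splitlines()):
            problems.append(f"Contains disallowed formatting marker {marker!r}.")
    return problems

print(check_draft("A short, friendly paragraph about prompting."))  # → []
```

Vague constraints like “make it good” can’t be checked this way; “under 2,000 characters, paragraphs only” can.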
A copy/paste checklist you can reuse
When you’re stuck, fill in this skeleton:
Role: You are a [persona].
Task: [exact outcome].
Context: [audience, purpose, source material].
Format: [structure, markdown/plaintext, headings?].
Constraints: [length, tone, exclusions, citation rules, “don’t invent facts”, etc.].
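If you assemble prompts programmatically, the same skeleton translates directly into code. A minimal sketch (the helper name and parameters are my own, assuming the five-part structure above): only the task is required, and the role, when present, goes first.

```python
def build_prompt(task, context=None, constraints=None, fmt=None, role=None):
    """Assemble a prompt from the five parts; only the task is required."""
    parts = []
    if role:
        parts.append(f"Role: You are {role}.")  # optional, but placed first by convention
    parts.append(f"Task: {task}")
    if context:
        parts.append(f"Context: {context}")
    if fmt:
        parts.append(f"Format: {fmt}")
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Summarise the attached meeting notes for an executive update.",
    context="Audience: senior leadership; purpose: weekly status email.",
    constraints=["Keep it under 150 words.", "Do not invent facts."],
    fmt="Three short paragraphs, plain text.",
    role="a concise executive communications assistant",
)
print(prompt)
```

The labels-and-colons format matches the examples earlier in this post; swap in headings or tags if your prompts grow more complex.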
A final note on format
In the examples above, I separated the different parts using simple labels and colons. Other guides may suggest Markdown headings or XML tags instead. In my experience, for low- to medium-complexity tasks, the exact format matters less than clarity.
For more complex prompts, stronger structure (for example, Markdown sections or XML tags) can improve reliability. Anthropic's Claude documentation explicitly recommends XML tags for prompt structuring across Claude models. I'll cover this in more detail in a future post on advanced prompt engineering.
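As a quick illustration of the XML-tag style, here is the same kind of prompt wrapped in tags. This is a sketch: the tag names are arbitrary, and the point is only that each part has an unambiguous beginning and end.

```python
# A prompt structured with XML-style tags (tag names chosen for illustration).
prompt = """
<role>You are a senior technical writer.</role>
<task>Summarise the document in the source tags for team leads.</task>
<constraints>
Keep it under 150 words.
Use only information inside the source tags.
</constraints>
<source>
...paste the document here...
</source>
""".strip()

print(prompt)
```

The tags make it easy for both you and the model to tell where the source material stops and the instructions begin.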
If you want help applying this inside your business
If you're exploring how to make AI useful inside your existing tools (support, CRM, internal docs) and want a practical, low-pressure starting point, I offer a free 45-60 minute AI opportunity & automation review.
You can reach me here: Contact