I recently taught a workshop on moving past basic prompting, and thought I’d share the content here.
Let’s face it, we’re all supposed to be using these tools, and getting better, faster, stronger, etc., but how many people have actually been taught about prompting?
LLMs are powerful but lazy, literal, and very keen to please
Do not prompt them the way you do a (standard) Google search. A prompt is a mini brief, not a question. Another way to think about it is that you’re creating a search term for something that doesn’t yet exist. So the more clearly you can picture (and describe) it, the better your result is going to be.
Complex tasks fail because:
- Goals are vague
- Constraints are missing
- The model doesn’t know what “good” looks like
If you find yourself arguing with the AI, it’s usually because the brief wasn’t finished.
So, before we go any further, here’s a basic prompt template. You’ll see all sorts of variations on this if you go hunting, but this covers the basics.
Context + Intent + Specific Info + Response Format
- Context – what situation has triggered this request
- Intent – what do I want to do with this info?
- Specific info – are there details I either need, or want to supply?
- Response format – how do I want the answer presented?
Of these, the most important is context. The more you can share about why you’re doing what you’re doing, and what your situation is, the better your response will be.
Example prompt: I’m creating a marketing strategy for a new product and need to better understand the target market. The product is [whatever the product is] and we’re intending to sell to [basic market/audience info]. Please create a target audience persona, including basic demographic details and relevant audience insights. Give me a one page summary of this person, and three image prompts I could use to get a conceptual portrait of them.
These tools aren’t like (standard) Google, where a search is just 3-5 words long. A decent prompt is likely to start at around 21 words (the one above is 70 words and doesn’t have the product and audience details), and can comfortably head into the hundreds of words to ensure a decent brief.
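If you’re building prompts in code rather than typing them into a chat window, the same four-part structure still applies. Here’s a minimal Python sketch of assembling a brief from those parts – `ask_llm()` is a hypothetical stand-in for whichever API or client you actually use, and the bracketed placeholders are left exactly as they were in the example above.

```python
# Minimal sketch: build a prompt from Context + Intent + Specific Info + Response Format.
# ask_llm() is a hypothetical stand-in for your chat API of choice.

def build_prompt(context: str, intent: str, specifics: str, response_format: str) -> str:
    """Assemble the four parts of the brief into a single prompt."""
    return "\n\n".join([context, intent, specifics, response_format])

prompt = build_prompt(
    context="I'm creating a marketing strategy for a new product and need to "
            "better understand the target market.",
    intent="Please create a target audience persona, including basic demographic "
           "details and relevant audience insights.",
    specifics="The product is [whatever the product is] and we're intending to "
              "sell to [basic market/audience info].",
    response_format="Give me a one page summary of this person, and three image "
                    "prompts I could use to get a conceptual portrait of them.",
)

# response = ask_llm(prompt)  # swap in your actual client call here
print(prompt)
```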
Tips and tricks
- Ask the LLM to adopt a persona: a French language tutor, a developmental editor, a social media marketing expert (this is getting less important as the tools evolve but can be useful sometimes).
- Ask for more ideas / options than you need.
- Prompt to combat stereotypes and biases.
- Specify complexity – ‘explain X to a 5-year-old’ (a lot of fun with topics like quantum physics and stoicism).
- If it’s a complex task, split the steps into a conversation and check progress along the way.
- Ask it what other information it needs (my fave).
- Put the most important thing first (especially in an image prompt).
- Be explicit about trade-offs (what is more important, what is lower priority).
- Tell it what not to do (note: don’t do this in image prompts – if you want ‘no clouds in the sky’, ask for a ‘clear blue sky’).
- Don’t rely on “use best practice” or “world class” unless you define it.
- Use delimiters (e.g., <>, “””) to separate instructions from data (there’s a short sketch of this after the list).
- Make sure your data is organised.
- ITERATE – this is a conversation, not a Google search.
- CHECK YOUR ANSWERS (but you already know that).
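Here’s the delimiter tip from above as a minimal Python sketch. The instruction text and feedback are just illustrative placeholders, and nothing here is tied to a particular LLM tool – the point is simply that the model can tell exactly where the data starts and ends.

```python
# Minimal sketch: use delimiters to keep instructions and data clearly separated.
instructions = (
    "Summarise the customer feedback below in three bullet points. "
    "Only use the text between the triple quotes as your source."
)

feedback = "Replace this with the raw feedback you want summarised."

prompt = f'{instructions}\n\n"""\n{feedback}\n"""'
print(prompt)
```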
Beyond the ‘simple’ prompt
Chain of thought
If you don’t want to break a conversation into steps (or it’s not feasible for some reason), tell the model to show its reasoning before giving you the final answer.
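If you’re scripting it, that instruction is just appended to the task. A rough Python sketch – `ask_llm()` is a hypothetical stand-in, and the task text is made up:

```python
# Minimal sketch: ask for visible reasoning before the final answer.
task = "Recommend a pricing model for our new subscription product."

cot_prompt = (
    f"{task}\n\n"
    "First, show your reasoning step by step: list the factors you're weighing "
    "and the assumptions you're making. Then give your final recommendation "
    "under a heading called 'Answer'."
)

# response = ask_llm(cot_prompt)  # hypothetical client call
```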
Two-step
Step 1: plan / outline / ask questions.
Step 2: generate final output.
Tools such as Gemini Deep Research do this automatically – proposing a plan, then waiting for approval before executing.
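Scripted, the two-step pattern is just two calls, with the approved plan carried into the second prompt. A minimal sketch, with `ask_llm()` again standing in for your actual client and an invented task:

```python
# Minimal sketch: two-step prompting – plan first, then generate.
task = "Write a one-page brief for our Q3 customer newsletter."

# Step 1: ask for a plan and clarifying questions, not the finished output.
plan_prompt = (
    f"{task}\n\n"
    "Don't write it yet. First give me an outline and list any questions "
    "you need answered before you start."
)
# plan = ask_llm(plan_prompt)   # review and edit the plan yourself here

# Step 2: generate the final output against the approved plan.
# final_prompt = f"{task}\n\nFollow this approved outline exactly:\n{plan}"
# final = ask_llm(final_prompt)
```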
Self-checking LLM prompts
- Review your answer against the constraints/goals and improve it.
- List any assumptions you’re making.
- Give me two options optimised for different priorities (specify the varying priorities).
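If you’re doing this programmatically, the self-checks are simply follow-up messages in the same conversation. A sketch, assuming a hypothetical `ask_llm()` that keeps the earlier conversation as context; the two priorities in the last prompt are only examples:

```python
# Minimal sketch: follow-up self-check prompts sent after the first answer.
self_check_prompts = [
    "Review your answer against the constraints and goals above, then improve it.",
    "List any assumptions you're making.",
    "Give me two options: one optimised for cost, one optimised for reach.",
]

# for follow_up in self_check_prompts:
#     answer = ask_llm(follow_up)  # ask_llm is a hypothetical stand-in that
#                                  # keeps the earlier conversation as context
```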
Reverse engineer your prompt
When you’ve finally got the output you want from a conversation, and this is something you’re going to be doing again, say:
Analyse our conversation above. I want to achieve this same result next time in a single step. Please write a comprehensive prompt that includes all the context, rules, and formatting we just discussed, which I can use to generate this exact output immediately in the future.
Then open a new chat and test it.
This prompt is, of course, an example of…
Meta Prompting
Meta prompting is where you get the LLM to come up with the prompt.
- Describe what you want
- Ask for a prompt template (if this is something you’re going to reuse)
- Review the template
- Test and refine
If you’re using it to create image prompts, tell it which tool you’re using (Canva, Midjourney, Nano Banana, etc.).
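As a rough sketch in Python, the meta prompt is just another prompt: describe the task, then ask for a reusable template with marked variables. The task description below is invented, and the wording isn’t a fixed formula – `ask_llm()` is again a hypothetical stand-in.

```python
# Minimal sketch: ask the model to write the prompt template for you.
task_description = (
    "Every month I turn raw survey results into a two-page summary for the "
    "leadership team, with key themes, three quotes, and recommended actions."
)

meta_prompt = (
    task_description
    + "\n\nWrite a reusable prompt template I can paste in each month to produce "
    "this. Mark every place I need to insert data with a placeholder like {SURVEY_DATA}."
)

# template = ask_llm(meta_prompt)  # hypothetical call; then review, test, refine
```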
Build a shared prompt library
If you’re using the same prompts regularly, especially in a work context where you can pool resources (and brains) to collect the most useful prompt ideas and templates, build a shared library.
Create a database (mine’s in Notion, but go with any app you’ll actually use) where you record:
- Title: (e.g., “Q3 Report Generator”)
- Task Type: analysis, writing, coding, etc.
- Description: When to use it.
- The Prompt: The text block.
- Variables: Clearly mark where the user needs to input data (e.g., using [INSERT DATA HERE] or {CLIENT_NAME}).
- Anything else that’s relevant.
- Flag if there’s a preferred tool, and if there’s anything that breaks it.
- Note: Prompts “rot” as models change; review them quarterly.
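If your team keeps things in version control rather than a notes app, the same fields map neatly onto a small data structure. A minimal Python sketch – the field names mirror the list above, and the example entry is invented:

```python
# Minimal sketch: one prompt-library entry, mirroring the fields above.
from dataclasses import dataclass, field

@dataclass
class PromptEntry:
    title: str                  # e.g. "Q3 Report Generator"
    task_type: str              # analysis, writing, coding, etc.
    description: str            # when to use it
    prompt: str                 # the text block, with variables marked
    variables: list[str] = field(default_factory=list)
    preferred_tool: str = ""    # flag a preferred tool, if any
    known_breakages: str = ""   # anything that breaks it
    last_reviewed: str = ""     # prompts "rot" as models change – review quarterly

entry = PromptEntry(
    title="Q3 Report Generator",
    task_type="analysis",
    description="Turns quarterly metrics into a one-page narrative summary.",
    prompt="Summarise the metrics in {METRICS_TABLE} for {CLIENT_NAME}...",
    variables=["{METRICS_TABLE}", "{CLIENT_NAME}"],
)
```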
I hope you find this helpful!