Prompting patterns for practical work: a step-by-step tutorial

This article explains actionable prompting patterns for practical work and walks through a reproducible, step-by-step process for building prompts you can apply to routine tasks: summarising meetings, extracting data, drafting emails and creating checklists for automation pipelines.

Begin by organising the problem into a clear goal, inputs, constraints and desired output format, because a precise goal reduces back-and-forth and improves reliability. State the intended role for the model and the context it must use, for example "You are a technical writer summarising design decisions from meeting notes". Describe inputs explicitly, such as "meeting transcript" or "raw CSV column", and list constraints like maximum length, style, or required fields. Decide the output format — a JSON array, bullet list, or a short paragraph — and give a concrete example of correct output to avoid ambiguity.
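As a sketch of how those components fit together, the helper below assembles a prompt from a goal, role, constraints, output format and one example. The function name `build_prompt` and its layout are hypothetical illustrations, not a fixed API:

```python
def build_prompt(goal, role, input_text, constraints, output_format, example):
    """Assemble a prompt from the components described above.

    `constraints` is a list of strings; `example` is one concrete
    instance of correct output, included to avoid ambiguity.
    """
    parts = [
        f"GOAL: {goal}",
        f"ROLE: {role}",
        "CONSTRAINTS:",
        *[f"- {c}" for c in constraints],
        f"OUTPUT FORMAT: {output_format}",
        "EXAMPLE OF CORRECT OUTPUT:",
        example,
        "INPUT:",
        input_text,
    ]
    return "\n".join(parts)
```

Keeping the goal on the first line mirrors the advice above: a precise, up-front objective reduces back-and-forth.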

The following patterns are easy to mix and match depending on the task, and they work well in practical workflows.

  • Role and context framing to set behaviour and perspective.
  • Stepwise decomposition to break the task into subtasks and avoid hallucination.
  • Input-output exemplars (few-shot) to show the model how to format responses.
  • Constraint anchoring to enforce length, tone or field presence.
  • Verify-and-refine loops to check and improve outputs.

Now follow a step-by-step recipe to combine those patterns into a robust prompt template you can reuse in production.

  1. Define the primary objective in one sentence and write it at the top of the prompt.
  2. Add a short role instruction to set the model's perspective.
  3. Insert the input placeholder and any context lines that your automation will fill dynamically.
  4. Provide one or two few-shot examples showing valid input and the exact output format you expect.
  5. Add constraints and explicit instructions for edge cases, for example how to handle missing data.
  6. End with a clear output instruction such as "Return a JSON array with keys 'action', 'owner', 'due_date' and no extra commentary".
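The recipe can be captured as a reusable template that your automation fills in at run time. This is a minimal sketch: the template layout and the `render_prompt` helper are assumptions, and the final output instruction uses the JSON keys quoted above.

```python
PROMPT_TEMPLATE = """OBJECTIVE: {objective}
ROLE: {role}

EXAMPLES (input followed by expected output):
{examples}

RULES:
{rules}

INPUT:
{input_text}

Return a JSON array with keys 'action', 'owner', 'due_date' and no extra commentary."""


def render_prompt(objective, role, examples, rules, input_text):
    """Fill the template. `examples` and `rules` are lists of strings;
    rules cover edge cases such as missing data (step 5 of the recipe)."""
    return PROMPT_TEMPLATE.format(
        objective=objective,
        role=role,
        examples="\n\n".join(examples),
        rules="\n".join(f"- {r}" for r in rules),
        input_text=input_text,
    )
```

Because the objective, examples and rules are plain data, the same template serves many tasks with no prompt rewriting.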

Apply this recipe to a concrete task to see how it looks in practice, for example extracting action items from meeting notes. Start the prompt with the role, paste the meeting transcript under an INPUT heading, then present two short examples showing transcripts and the expected JSON outputs. Include explicit rules such as "If a due date is not found, set due_date to null" and "If an owner cannot be determined, set owner to 'Unassigned'". After the model responds, run a quick verification step: a smaller follow-up prompt checks the output for schema compliance and asks the model to correct any schema errors. This reduces downstream parsing failures and is cheap to automate.
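The verification step can be done locally before any correction prompt is sent. The sketch below, assuming the three keys named above, returns a human-readable failure message that a correction prompt can quote back to the model; `validate_actions` is a hypothetical helper name.

```python
import json

REQUIRED_KEYS = {"action", "owner", "due_date"}


def validate_actions(raw):
    """Check that model output parses as a JSON array whose items all
    carry the required keys. Returns (ok, message); the message is
    written so it can be pasted into a correction prompt."""
    try:
        items = json.loads(raw)
    except json.JSONDecodeError as exc:
        return False, f"Output is not valid JSON: {exc}"
    if not isinstance(items, list):
        return False, "Output must be a JSON array, not a single object"
    for i, item in enumerate(items):
        missing = REQUIRED_KEYS - set(item)
        if missing:
            return False, f"Item {i} is missing keys: {sorted(missing)}"
    return True, "ok"
```

On failure, a follow-up prompt such as "Your previous output failed validation: {message}. Return only the corrected JSON array." closes the verify-and-refine loop.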

Finally, treat prompting as an iterative process rather than a one-off task, and integrate testing into your workflow to improve reliability. Keep a small test suite of representative inputs and record model outputs against expected results so you can spot regressions after model updates. Automate the verify-and-refine loop so that an initial generation is followed by a validation prompt and, if necessary, a correction prompt, and measure key metrics such as accuracy, rejection rate and time to usable output.
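A regression suite of this kind needs very little machinery. The sketch below assumes a `generate` callable that wraps your model call; both that name and `run_regression` are hypothetical, and real suites would compare parsed JSON rather than raw strings.

```python
def run_regression(cases, generate):
    """Run representative inputs through the model wrapper and collect
    mismatches. `cases` is a list of (input_text, expected_output)
    pairs; a non-empty return value signals a regression."""
    failures = []
    for input_text, expected in cases:
        actual = generate(input_text)
        if actual != expected:
            failures.append(
                {"input": input_text, "expected": expected, "actual": actual}
            )
    return failures
```

Rerun the suite after every model or prompt change; the failure list doubles as a log for computing the accuracy and rejection-rate metrics mentioned above.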
