The structure and format of your Prompt content affect your model's inference results, because earlier tokens shape the probability distribution of the tokens that follow. So order matters.

Beyond ordering, a set structure with well-known sections helps you, as the prompt's author, give the LLM more complete instructions.

Here is a suggested structure, synthesized from guidance published by various model providers. Remember that this is not an exact science; experimentation, a core Prompt Engineering practice, is still the best way to get good results.

Structure

  1. Role – Define the AI’s persona (e.g., “You are a professional editor”).
  2. Objective/Task – What should it accomplish?
  3. Instructions – Clear, ordered steps (including sub-steps).
  4. Reasoning Steps (optional) – Explicitly outline step-by-step logic (if not using a reasoning model).
  5. Output Format (optional) – Specify response format (XML is preferred).
  6. Few-Shot Examples (optional) – Include examples to guide response structure.
  7. Context – Large middle section; reiterate key instructions at both top and bottom to ensure model retention.
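The ordering above can be sketched programmatically. A minimal Python sketch (the section names and the `build_prompt` helper are illustrative, not a standard API) that joins sections in the recommended order and repeats key instructions before and after the large context block:

```python
# Recommended section order; optional sections may simply be omitted.
SECTION_ORDER = [
    "role", "objective", "instructions",
    "reasoning", "output_format", "examples", "context",
]

def build_prompt(sections: dict[str, str], key_instructions: str = "") -> str:
    """Join the provided sections in the recommended order.

    Key instructions are repeated at the top and bottom of the
    context block to help the model retain them.
    """
    parts = []
    for name in SECTION_ORDER:
        text = sections.get(name)
        if not text:
            continue  # skip optional sections that were not provided
        if name == "context" and key_instructions:
            text = f"{key_instructions}\n\n{text}\n\n{key_instructions}"
        parts.append(f"<{name}>\n{text}\n</{name}>")
    return "\n\n".join(parts)
```

For example, `build_prompt({"role": "You are an editor.", "context": "Doc text."}, key_instructions="Keep it concise.")` emits the role section first and the key instructions twice around the context.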

Notes

New models can behave differently: what works well today may work less well in the future, so keep researching and experimenting.

Example

<role>
You are a senior technical writer with experience documenting APIs for cloud services.
</role>

<objective>
Generate clear and concise documentation for a new REST API endpoint.
</objective>

<instructions>
1. Begin with a short summary of what the endpoint does.
2. List the request format, including HTTP method and path.
3. Describe each request parameter.
4. Include example requests and responses.
5. End with any relevant notes or caveats.
</instructions>

<reasoning>
Think step by step about what a developer would need to successfully use this endpoint and what pitfalls they might face.
</reasoning>

<output_format>
Markdown format with headers for each section (## Summary, ## Request, ## Parameters, etc.).
</output_format>

<example>
## Summary
This endpoint retrieves the profile details of a user based on their user ID.

## Request
**Method:** GET  
**Path:** `/users/{userId}`

...

</example>

<context>
The API is part of a broader microservices architecture. Authentication is handled via a bearer token in the `Authorization` header.
</context>
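Tagged sections like those in the example above can also be generated programmatically. A minimal sketch (the `tag` helper is hypothetical, not part of any library):

```python
def tag(name: str, body: str) -> str:
    """Wrap a prompt section in a named XML-style tag."""
    return f"<{name}>\n{body.strip()}\n</{name}>"

# Assemble the first two sections of the example prompt.
prompt = "\n\n".join([
    tag("role", "You are a senior technical writer with experience "
                "documenting APIs for cloud services."),
    tag("objective", "Generate clear and concise documentation for a "
                     "new REST API endpoint."),
])
```

Keeping each section in its own tag makes it easy to reorder or drop optional sections without rewriting the whole prompt.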
