This guide helps you get started with prompting. It provides an overview of different prompting methods and principles.

User Prompt

A prompt is a specific instruction or question given to a language model to guide its response. It sets the context and gives the model a starting point for generating relevant output. Depending on the task, a prompt can be a sentence, a phrase, or a longer paragraph. By providing a prompt, we direct the model’s attention to a particular topic or task and ensure that the generated response aligns with our intentions. Prompts shape the behavior and output of the model, giving us more control over the generated content.


To receive a good response from the AI model, provide as much specific and detailed information as possible. Here’s a short checklist to help structure your prompts for optimal results:

1. Clear Objective:

State a clear objective that defines the result you want. Formulate open-ended questions when you need extensive information. Specify the format the outcome should be in (e.g., list, email, blog post, JSON). If you have several questions, sequence them logically.

Not: Write me an email to Julia.
Instead: Write me an email to my colleague Julia about the status of the ticket ABC-123 regarding our Jira integration.

2. Relevant Context:

What are you generally working on, and how does this prompt help you achieve that goal? Who is the audience for the response that will be generated? (experts/beginners, colleagues/the public,…).

Not: Write marketing copy for biscuits.
Instead: Write marketing copy for a newspaper advertisement about biscuits. The target group is adults and young families.

3. Defined Style:

Specify if the response should emulate a famous person, profession, or role. Define the tone of the reply (formal, friendly, funny, etc.).

If available, provide examples of the expected answers and their desired structure to guide the model effectively.

Working with AI models often involves some experimentation and iteration. You can adjust your prompts as needed to achieve the desired results.
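The checklist above can be sketched as a small prompt builder. The following is a minimal Python sketch; the function and field names (`build_prompt`, `objective`, `context`, `style`, `output_format`) are illustrative, not part of any library or API:

```python
def build_prompt(objective, context="", style="", output_format=""):
    """Assemble a prompt from the checklist items:
    clear objective, relevant context, and defined style."""
    parts = [objective]
    if output_format:
        parts.append(f"Format the result as: {output_format}.")
    if context:
        parts.append(f"Context: {context}")
    if style:
        parts.append(f"Style: {style}")
    return "\n".join(parts)

prompt = build_prompt(
    objective="Write me an email to my colleague Julia about the status "
              "of the ticket ABC-123 regarding our Jira integration.",
    context="Julia is a colleague on the project; the email is internal.",
    style="Formal but friendly.",
    output_format="email",
)
print(prompt)
```

Separating the checklist items like this makes it easy to iterate: adjust one field at a time and compare the model’s responses.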

Advanced Tricks

  • Avoid negations: Phrase instructions positively rather than as don’ts. Because of its probabilistic nature, an LLM often fails to pick up the semantic meaning of a negation.

    For example: Be polite instead of Don’t be rude.

  • Use examples: Provide examples of the desired output to guide the model’s behavior. This helps ensure that the generated output aligns with your expectations. This technique is also known as “Few-Shot Learning”.

    For example: Generate three titles for a fantasy book. Examples: ‘The Lord of the Rings’, ‘Harry Potter’, ‘The Hobbit’.

  • Provide step-by-step instructions: This sequential approach can help the model process complex tasks more effectively.

    For example: First, summarize the given text. Then, translate the text to Spanish.

  • Let the model think: Asking the model to explain its reasoning step by step helps avoid errors and promotes more reliable answers.

  • Highlight parts of the input so they are treated differently (using triple quotation marks, XML tags, titles, …):

    For example:
    Improve the following email in terms of professionalism and formality. """ insert email """
    Improve the following email in terms of professionalism and formality. <p> insert email here </p>
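Several of these tricks combine naturally: few-shot examples plus a clearly delimited input. The sketch below assembles such a prompt as a plain string in Python; the function name `few_shot_prompt` and its parameters are illustrative assumptions, not a library API:

```python
def few_shot_prompt(task, examples, user_input):
    """Build a prompt that combines few-shot examples
    with an input delimited by triple quotation marks."""
    lines = [task, "Examples:"]
    lines += [f"- {ex}" for ex in examples]
    # The triple quotes mark where the instruction ends and the input begins,
    # so the model treats the input as data rather than as instructions.
    lines.append('Input: """' + user_input + '"""')
    return "\n".join(lines)

prompt = few_shot_prompt(
    task="Generate three titles for a fantasy book in the style of the examples.",
    examples=["The Lord of the Rings", "Harry Potter", "The Hobbit"],
    user_input="A young blacksmith discovers a dragon egg.",
)
print(prompt)
```

The same structure works with XML-style tags instead of triple quotes; what matters is that the delimiter is unambiguous and consistent within the prompt.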
