Zero-shot Prompting
Now that we have covered the basics of prompting, it’s time to dive into advanced techniques that will refine your ability to craft precise and powerful prompts, unlocking new possibilities and deeper interactions with LLMs.
There are a few techniques you can use when prompting LLMs. The first is “zero-shot prompting”. Because these models have been trained on massive datasets, their internal knowledge lets them perform a wide range of tasks without any examples or demonstrations in the prompt. As soon as we add guiding examples, we’re talking about Few-shot Prompting.
Think of zero-shot prompting like asking a guitar player to play the piano, even though they’ve never played piano before. They’d apply their previous knowledge about music and instruments to figure it out.
Most prompts we use are, by default, zero-shot prompts.
An example could be:
Prompt:

Classify the text into the categories of satisfied, neutral, or unsatisfied.

Text: I was happy with the customer support today.

Output:

Satisfied
The model processes the input and produces the correct label because it has seen countless similar classification tasks during training.
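To make this concrete, here is a minimal sketch of sending that zero-shot prompt through an API. It assumes the OpenAI Python SDK and an API key in your environment; the model name is an illustrative assumption, and any instruction-tuned chat model would work the same way.

```python
from openai import OpenAI

# Assumes the OpenAI Python SDK with OPENAI_API_KEY set in the environment.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; the technique is model-agnostic
    messages=[
        {
            "role": "user",
            # The zero-shot prompt from above: an instruction and an input,
            # with no guiding examples.
            "content": (
                "Classify the text into the categories of "
                "satisfied, neutral, or unsatisfied.\n\n"
                "Text: I was happy with the customer support today.\n"
                "Output:"
            ),
        }
    ],
)

print(response.choices[0].message.content)  # expected: "Satisfied"
```

Note that nothing in the code is specific to zero-shot prompting; what makes it zero-shot is simply that the prompt string contains no worked examples.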
When to use each approach:
- Zero-shot prompting: Perfect for general tasks like classification, translation, and answering questions with established knowledge
- Few-shot prompting: Better when you need nuanced results, complex reasoning, or specific output formats (a minimal contrast is sketched below)
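For contrast, here is a hypothetical few-shot version of the same classification prompt. The two added examples are invented for illustration; their only job is to pin down the label format before the new input appears.

```python
# Few-shot variant of the same task: labeled examples (invented for
# illustration) precede the new input, fixing the exact output format.
few_shot_prompt = (
    "Classify the text into the categories of satisfied, neutral, or unsatisfied.\n\n"
    "Text: The delivery took two weeks longer than promised.\n"
    "Output: Unsatisfied\n\n"
    "Text: The package arrived on the expected date.\n"
    "Output: Neutral\n\n"
    "Text: I was happy with the customer support today.\n"
    "Output:"
)

# Send few_shot_prompt through the same chat API call as in the
# zero-shot sketch above; only the prompt string changes.
```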