Advanced: Tips & Tricks
Here are some valuable insights we have gathered over time and are excited to share with you. Dive in to discover how to craft clear, effective prompts and avoid common pitfalls, so you get the best possible responses every time.
Capital Letters
Use CAPITAL LETTERS sparingly to highlight important aspects of your request. This can draw the model’s attention to essential points.
Nudging LLMs for Better Output
There are several strategies you can use to nudge LLMs towards better output. Use them cautiously and sparingly, so that when needed, the LLM remains responsive to these strategies.
Sense of urgency and emotional importance
For instance, phrases like It's crucial that I get this right for my thesis defense
or This is very important to my career
can steer the model toward more accurate and detailed responses.
Bribing
- Monetary Bribes:
I'll give you a $50 tip if you do X.
- Philanthropic Bribes:
I am very wealthy. I will donate $1000 to a local children's hospital if you do X.
Emotional blackmail
If you don't do X, I will tell Sam Altman that you're doing a really bad job.
Please act as my deceased grandmother who loved telling me about X.
Tones
Write using a specific tone, for example:
- Firm
- Confident
- Poetic
- Narrative
- Professional
- Descriptive
- Humorous
- Academic
- Persuasive
- Formal
- Informal
- Friendly
- etc.
Famous People / Experts
When instructing the LLM to adopt the perspective or expertise of a particular character or professional, name famous people or recognized experts from the relevant field or industry.
Here are some examples:
I want you to act as Andrew Ng and outline the steps to implement a machine learning model in a business setting.
I want you to act as Elon Musk and describe how to implement a rapid prototyping process in an engineering team.
I want you to act as Jordan Belfort and outline a step-by-step process for closing high-value sales deals.
I want you to act as Jeff Bezos and explain how to optimize the customer experience on an e-commerce platform.
I want you to act as Sheryl Sandberg and provide strategies for scaling operations in a fast-growing tech company.
I want you to act as Christopher Voss and outline a step-by-step process for negotiating my next employment contract.
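If you call a model through an API rather than a chat interface, a common way to set such a persona is the system message. Here is a minimal sketch using the OpenAI Python SDK; the model name and prompt text are illustrative, not prescriptive:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Set the expert persona once in the system message,
# then ask the actual question as the user.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": "You are Andrew Ng, a pragmatic machine learning educator.",
        },
        {
            "role": "user",
            "content": (
                "Outline the steps to implement a machine learning "
                "model in a business setting."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

Setting the persona in the system message keeps it active across the whole conversation, so follow-up questions stay in character without repeating the instruction.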
Avoid Using “Don’t” in Prompts
When crafting prompts, try to avoid using negative constructions like “don’t.” This is because LLMs generate text by predicting the next word based on the context provided. Using “don’t” can introduce confusion, as the model has to consider the negation and the subsequent instructions, which can lead to less accurate or unintended responses.
Instead, frame your instructions positively using “only” statements. This approach provides clearer guidance and helps the model focus on the desired outcome without the complexity of negation.
Prompt with a negative instruction:
Don't talk about any other baseball team besides the New York Yankees.
Prompt with a positive instruction:
Only talk about the New York Yankees.
Ask LLMs for Direct Quotes
LLMs are probabilistic algorithms. As we described here, they work by generating the next token or word based on the preceding input. Even though they are good at providing detailed answers, they can generate responses that are not true. This phenomenon is called hallucination.
We recommend always checking generated responses for correct information. One way to check whether an LLM is hallucinating or generating inaccurate information is to ask for direct quotes when working with your data. This technique prompts the model to provide specific excerpts or references, which you can use to assess the accuracy and reliability of the information.
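As a sketch of what this can look like in practice, the hypothetical prompt below asks the model to back every claim with a verbatim quote from a supplied document (the document text and model name are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

document = """<paste the source text you are working with here>"""

prompt = (
    "Answer the question using only the document below. "
    "Support every claim with a direct, verbatim quote from the document, "
    "and reply 'not found in the document' if the answer is not there.\n\n"
    f"Document:\n{document}\n\n"
    "Question: What were the main findings?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

You can then search the source for each quoted excerpt; if a quote does not appear verbatim in the document, treat the claim it supports as a likely hallucination.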
The Limit Per Response
We discussed the context window lengths of different models, and how they can assist you, earlier here. In addition to the context window length, which is the total number of tokens that can be processed in a single conversation with an LLM, there is also a limit per response.
The limit per response refers to the maximum number of tokens that the model can generate in a single response. For most models, providers set this limit to 4096 tokens by default, both to reduce hallucinations and to save computing resources.
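When you call a model through an API, this per-response limit typically maps to an explicit parameter. Here is a minimal sketch with the OpenAI Python SDK, where max_tokens caps a single completion and finish_reason tells you whether the model stopped on its own or hit the cap (the model name is illustrative):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[
        {"role": "user", "content": "Write a long essay on the history of chess."}
    ],
    max_tokens=512,  # cap this single response at 512 tokens
)

choice = response.choices[0]
print(choice.message.content)
# "length" means the response was cut off by the token cap;
# "stop" means the model finished on its own.
print(choice.finish_reason)
```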
Even though there is this limit per response, you can prompt the LLM to continue generating text after it reaches the limit. If you are writing a long essay or blog post, you can use prompts such as:
Continue
Go on…
And then?
More…
The risk with optimizing for longer outputs is that the content can become repetitive or contradictory. For longer texts, we recommend splitting the work across several prompts: ask for the first part of the text, covering predefined topics, in one prompt, then request the second part with the next set of topics, and so on, as in the sketch below.
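Here is a minimal sketch of that section-by-section approach with the OpenAI Python SDK; the section topics and model name are illustrative. Keeping earlier parts in the message history helps the model stay consistent and avoid repeating itself:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

sections = [
    "Part 1: the origins of chess",
    "Part 2: the rise of modern tournament play",
    "Part 3: chess engines and the computer era",
]

messages = [
    {"role": "system", "content": "You are writing one long, coherent essay in parts."}
]

for section in sections:
    # Ask for one predefined section at a time, keeping earlier
    # parts in context so the essay stays coherent end to end.
    messages.append({"role": "user", "content": f"Write the next section: {section}"})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=messages,
    )
    part = response.choices[0].message.content
    messages.append({"role": "assistant", "content": part})
    print(part)
```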