When working with an LLM, you can dramatically improve response quality by assigning it a specific role. This technique, often called “priming” (or role prompting), works because it steers the model toward the knowledge and language style associated with that domain.

The technical intuition: LLMs are trained on vast amounts of text written by different professionals in different contexts. When you specify a role, you’re effectively telling the model which patterns from its training data to prioritize, leading to more focused, expert-level responses. For instance, asking for project management advice from a “product manager” persona taps into PM methodologies, terminology, and best practices that would otherwise be diluted in a generic response.

Prompt without role:

Help me to develop a marketing strategy.
Prompt with role:

As a growth marketing expert, can you help me write a marketing strategy?
The second prompt will generate responses using growth marketing frameworks, metrics, and tactical approaches that a specialist would actually use.
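In practice, the role can be attached in two common ways: prepended to the prompt text itself, or placed in a system message when using a chat-style API. Here is a minimal sketch of both; the helper names (`with_role`, `role_prompt`) are illustrative, not from any particular library:

```python
def with_role(role: str, task: str) -> str:
    """Prefix a task with a persona so the model adopts that role."""
    return f"As a {role}, {task}"


def role_prompt(role: str, task: str) -> list[dict]:
    """Chat-style message list: the system message primes the persona,
    and the user message carries the actual task."""
    return [
        {"role": "system", "content": f"You are a {role}."},
        {"role": "user", "content": task},
    ]


# Inline persona, as in the example prompt above:
print(with_role("growth marketing expert",
                "can you help me write a marketing strategy?"))
# As a growth marketing expert, can you help me write a marketing strategy?

# System-message persona, for chat-based APIs:
messages = role_prompt("growth marketing expert",
                       "Help me develop a marketing strategy.")
```

The system-message variant is generally preferable with chat APIs, since the persona then applies to the whole conversation rather than a single turn.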