Prompt Engineering for AI-Driven Product Managers: An Introduction to Using LLMs Effectively
Posted: Wed Dec 04, 2024 6:48 am
In recent years, large language models (LLMs) such as ChatGPT and Claude have emerged, bringing great potential to improve the work efficiency of product managers (PMs). In this article, we take a detailed look at prompt engineering techniques that allow PMs to use LLMs effectively.
The Innovation LLM Brings to PM Work
LLMs are powerful tools for PMs throughout the product development lifecycle, supporting tasks such as:
Analyzing customer data and qualitative feedback
Conducting competitive analysis and market research
Brainstorming ideas
Creating PRDs (Product Requirements Documents) and feature release plans
Preparing product review presentations
Simulating difficult conversations with stakeholders and team members
Preparing for product-related interviews
What is Prompt Engineering?
Prompt engineering is the art of giving the LLM the right information and instructions to generate useful output for a specific purpose. Creating effective prompts requires optimizing five elements:
Role
Task
Context
Tone and Format
Examples
Furthermore, after combining these elements, it is important to make iterative improvements.
Let's take a closer look at each element.
1. Role Settings
First, be clear about the role you want the LLM to play: for example, a "travel agent" when planning a trip.
Specifying a role helps the LLM know what to prioritize when answering, resulting in higher-quality output. This is especially useful when asking for feedback on a document or requesting work outside your own expertise (e.g., feedback as an Engineering Manager, or data analysis as a Data Analyst).
When defining roles, it is good to include the following elements:
Role: Depending on the use case, specify a role such as top PM, product lead, product coach, customer, or stakeholder.
Expertise: Specifying expertise in a particular task or industry encourages the LLM to give more specific, higher-quality answers. Adjectives such as "world-class" and "highly effective" tend to improve the quality of answers.
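As a minimal sketch, the role and expertise elements can be combined into a reusable prompt prefix. The `build_role_prompt` helper below is hypothetical, not part of any library:

```python
def build_role_prompt(role: str, expertise: str, task: str) -> str:
    """Prefix a task with an explicit role and expertise statement."""
    return (
        f"You are a {expertise} {role}. "
        f"Answer with the priorities of that role in mind.\n\n{task}"
    )

prompt = build_role_prompt(
    role="product lead",
    expertise="world-class",
    task="Give feedback on the attached PRD draft.",
)
print(prompt)
```

The same helper can be reused across use cases by swapping in "customer", "stakeholder", or "product coach" as the role.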
2. Describing the Task
Explain clearly what you want the LLM to do, as if you were asking an intern to do a job. Include task details, context, expectations, and justification.
Start with direct, simple instructions, such as "Generate five ideas" or "Summarize this document."
As your prompting skills improve, include more detailed guidance on how to approach the task. This is called chain-of-thought prompting, and it tends to produce higher-quality results.
Also, by asking the LLM to explain how it generated its output, you can understand how its approach affected the results.
Example: new feature ideas for a music streaming app
Simple instruction: "Generate five new feature ideas for a music streaming app."
A more detailed, chain-of-thought version would first ask the model to list common user frustrations and then generate ideas that address them.
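The contrast between a simple instruction and a chain-of-thought variant can be sketched as plain prompt strings. The exact wording below is illustrative, not a prescribed template:

```python
SIMPLE = "Generate five new feature ideas for a music streaming app."

# Chain-of-thought variant: spell out the intermediate steps to follow.
CHAIN_OF_THOUGHT = (
    "List the top frustrations users report with music streaming apps. "
    "Then generate five feature ideas, each addressing one frustration, "
    "and explain the reasoning behind each idea."
)

def with_explanation(prompt: str) -> str:
    """Additionally ask the model to describe how it produced its output."""
    return prompt + " Finally, explain how you arrived at your answer."

print(with_explanation(SIMPLE))
print(with_explanation(CHAIN_OF_THOUGHT))
```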
3. Providing Context
Provide relevant context to improve the quality of your output, but keep it brief and just the essential background information. The types of context that are particularly useful are:
Details of the industry and any special characteristics the model must keep in mind (e.g. regulatory considerations in healthcare or fintech)
A brief description of your product or service, including its unique features and current challenges
Customer profiles including user demographics, behavior, recent survey results, etc.
Existing data, qualitative research, and model output
Make the context more focused. Models perform better with shorter prompts. Think of it like telling an intern to read a specific chapter instead of giving them the whole book. This will keep your model from going off topic.
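One way to keep context brief and focused is to cap the number of context points attached to a task. The `add_context` helper below is a hypothetical sketch; the context points shown are invented examples:

```python
def add_context(task: str, context_points: list[str], max_points: int = 4) -> str:
    """Prepend a short, focused context block; cap the number of points."""
    trimmed = context_points[:max_points]
    bullets = "\n".join(f"- {p}" for p in trimmed)
    return f"Context:\n{bullets}\n\nTask: {task}"

prompt = add_context(
    "Suggest three onboarding improvements.",
    [
        "B2B fintech product, so outputs must respect compliance constraints",
        "Core feature: automated invoice reconciliation",
        "Recent survey: 40% of trial users drop off during setup",
    ],
)
print(prompt)
```

The cap plays the role of "a specific chapter instead of the whole book": anything beyond the most essential points is simply dropped.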
4. Specifying Tone and Format
This element specifies what the output should look and sound like.
Format: Describe or illustrate how you want the LLM to deliver the output. You can include examples or templates of the ideal output, specify how long it should be, and state the structure you want (bullet points, long-form text, tables, graphs, etc.).
Tone: Choose 2-3 key attributes, such as professional, conversational, persuasive, or friendly. It is also effective to ask the model to be concise and leave out extra words and modifiers. The LLM can also infer tone from the examples and templates that you provide.
For most PM use cases, being more specific about format rather than tone will result in higher quality responses.
Tip
Consider your target audience, communication purpose, and medium when defining your tone and format.
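Appending format and tone instructions can also be sketched as a small helper. `add_format_and_tone` is hypothetical, and it caps the tone attributes at three, as suggested above:

```python
def add_format_and_tone(task: str, fmt: str, tone_attrs: list[str]) -> str:
    """Append explicit format and tone instructions (up to 3 tone attributes)."""
    tone = ", ".join(tone_attrs[:3])
    return (
        f"{task}\n\nFormat: {fmt}\n"
        f"Tone: {tone}. Be concise; omit filler words."
    )

prompt = add_format_and_tone(
    "Summarize this quarter's user feedback.",
    "a table with columns Theme, Frequency, Example quote",
    ["professional", "conversational"],
)
print(prompt)
```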
5. Providing Examples
Providing real-world examples improves the quality and relevance of the output from an LLM, especially when the attributes of the desired output are too complex to describe in words (e.g., tone).
It is also useful to explain which aspects of the examples you would like the LLM to focus on (structure, level of detail, tone, etc.).
For example, you can say: "Match the structure and level of detail of this example, but use a more conversational tone."
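This few-shot pattern, attaching worked examples and naming what to imitate, can be sketched as follows. The helper and the sample release note are invented for illustration:

```python
def few_shot_prompt(task: str, examples: list[str], focus: str) -> str:
    """Attach worked examples and tell the model which aspects to imitate."""
    shots = "\n\n".join(f"Example {i + 1}:\n{e}" for i, e in enumerate(examples))
    return f"{shots}\n\nMatch the {focus} of the examples above.\n\nTask: {task}"

prompt = few_shot_prompt(
    "Write a release note for the new offline mode.",
    ["Dark mode is here! Flip the switch in Settings and give your eyes a rest."],
    "tone and level of detail",
)
print(prompt)
```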
6. The Importance of Iterating
Interacting with an LLM is an inherently iterative process. You are unlikely to get the ideal output from the first version of a prompt.
Ways to iterate:
Provide feedback on the output: Be specific about which aspects of the output need to improve and how, just as you would when giving actionable feedback to an intern or direct report.
Improve your prompts : If you plan to return to the LLM frequently or use the prompts multiple times, edit the original prompts and retry them to see if that improves the output. It's worth investing time in improving your prompts, especially if you use the LLM for repetitive or frequent tasks such as analyzing survey data, synthesizing interview notes, or creating PRDs from templates.
Ask the LLM to improve the prompt: After you have iterated on a prompt or given feedback on its output, try asking the LLM to summarize the discussion and create a "round-up prompt": a single new prompt containing all the important information, which you can reuse later. You can also have it generate several variations of the original prompt to test. This is especially useful for tasks you do often or where a specific output format matters. It works like a quick A/B test: by having the LLM generate several prompt variations (without writing them yourself), you can improve your prompts faster.
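The feedback loop above can be sketched as a small driver function. `call_llm` here is a stand-in for whatever model client you use; the stub below only echoes, so the real behavior is an assumption:

```python
from typing import Callable

def iterate_prompt(
    call_llm: Callable[[str], str],  # stand-in for a real model client
    prompt: str,
    feedback_rounds: list[str],
) -> str:
    """Re-run the prompt, appending specific feedback after each round."""
    output = call_llm(prompt)
    for feedback in feedback_rounds:
        prompt = f"{prompt}\n\nRevise the previous answer. Feedback: {feedback}"
        output = call_llm(prompt)
    return output

# Stub model for illustration only; a real client call would go here.
echo = lambda p: f"[model output for prompt of {len(p)} chars]"
result = iterate_prompt(
    echo,
    "Draft a PRD summary.",
    ["Make it shorter.", "Add success metrics."],
)
print(result)
```

Keeping the feedback as explicit strings also makes it easy to fold them back into a single "round-up prompt" afterwards.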
Summary: 6 ways to effectively engineer prompts
There are six ways PMs can use prompt engineering to get higher-quality output from LLMs:
Role : Ask the model to act as a PM, a customer, or a stakeholder.
Task : Describe the task, just like you would when working with an intern.
Context : Include brief details about your industry, customer, or product.
Format & tone : Specify how the answer should look and sound.
Examples : Share examples of what you're looking for.
Iterate : Give feedback to the LLM to improve its output.
Conclusion
Prompt engineering is a key skill for PMs who want to use LLMs effectively. By applying the six methods outlined in this article, you will be able to produce higher-quality, more fit-for-purpose output.
However, LLM output should always be critically evaluated and supplemented with human expertise where necessary. An LLM is a powerful tool, but it does not replace a PM's creativity or strategic thinking.
By honing your prompt engineering techniques and properly incorporating LLMs into PM workflows, you can achieve a more efficient and innovative product development process.
Practice and find the best way to create prompts for your workflow. Collaborating with AI has great potential to make PMs more efficient and more creative.