One of the first use cases I tried with an LLM was generating content. The prompt was probably something along the lines of “write a blog article about {{topic}}”. Needless to say, the output wasn’t great. But even with just a little bit of prompt engineering, my outputs got a lot better.
Over the past year, we’ve written about tons of different ways to get better outputs from LLMs, focusing on prompt engineering methods and prompt patterns. Today we’ll flip that around and focus on the type of task, rather than the method.
Specifically, we are going to focus on prompt engineering for content creation when interacting with LLMs via an API, rather than a web interface like ChatGPT.
Additionally, we pulled insights from people who actually use these LLMs in production environments. In other words, the advice here is extremely practical, grounded in real experience refining prompts for content creation.
Principles of prompt engineering for content creation
We’ll start with some basic best practices/tips and make our way to more advanced methods.
Throughout, we'll take a basic prompt, "write a LinkedIn post about prompt engineering," and refine it using various prompt engineering principles.
Principle #1: Prompt Structure and Clarity
Original Prompt: "Write a LinkedIn post about prompt engineering."
Refined Prompt: "Write a LinkedIn post about the key benefits of prompt engineering for AI content creation. Aim it at professionals in the AI industry. The post should be engaging and informative, with a call to action encouraging readers to explore prompt engineering techniques."
Prompt structure is extremely important. Here’s how Stefan Keranov, the Co-founder and engineering leader at Mindstone, structures his content generation prompts:
Principle #2: Specificity and Information
Original Prompt: "Write a LinkedIn post about prompt engineering."
Refined Prompt: "Write a LinkedIn post about the key benefits of prompt engineering for AI content creation, aimed at professionals in the AI industry. Include three specific benefits and a real-world example of how prompt engineering improved content quality. The post should be engaging and informative, with a call to action encouraging readers to explore prompt engineering techniques."
No matter what prompt you're working on, the more specific you can be upfront, the better the model can follow your instructions.
Principle #3: Use of Affirmative Directives
Original Prompt: "Write a LinkedIn post about prompt engineering."
Refined Prompt:"Create an engaging and informative LinkedIn post about the key benefits of prompt engineering for AI content creation. Highlight three specific benefits and provide a real-world example of success. End with a call to action urging professionals in the AI industry to explore prompt engineering techniques."
Telling the model what to do, rather than what not to do, helps to guide the model to a desired outcome. It's one of the top prompt engineering best practices from OpenAI.
Principle #4: Incorporating Examples
Original Prompt: "Write a LinkedIn post about prompt engineering."
Refined Prompt: "Write a LinkedIn post about the key benefits of prompt engineering for AI content creation. Include three specific benefits and a real-world example of how prompt engineering improved content quality. Make the post engaging and informative, with a call to action encouraging readers to explore prompt engineering techniques. Here is an example of a recent post that performed well: {{Example_LinkedIn_Post}}"
Using examples in your prompt, also known as few-shot prompting, may be the most effective and efficient method to get better outputs from LLMs, regardless of the task. For more info on few-shot prompting, check out our guide: The Few Shot Prompting Guide.
Erich Hellstrom, founder of PromptPerfect, saves examples of well-performing content to inject into his prompt templates, ensuring the outputs mimic the desired tone and structure.
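Here's a rough sketch of that workflow, assuming you store strong past posts somewhere and inject one at request time (the template and helper below are illustrative, not Erich's actual setup):

```python
FEW_SHOT_TEMPLATE = """Write a LinkedIn post about the key benefits of prompt \
engineering for AI content creation. Include three specific benefits and a \
real-world example of how prompt engineering improved content quality.

Here is an example of a recent post that performed well:
{example_post}"""

def build_prompt(example_post: str) -> str:
    # Inject a saved, well-performing post so the output mimics its tone and structure
    return FEW_SHOT_TEMPLATE.format(example_post=example_post)

prompt = build_prompt("<paste a strong past post here>")
```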
Principle #5: Role Assignment (Persona)
Original Prompt: "Write a LinkedIn post about prompt engineering."
Refined Prompt: "Act as a prompt engineer and write a LinkedIn post about the key benefits of prompt engineering for AI content creation. Highlight three specific benefits and provide a real-world example of success. Ensure the tone is engaging and motivational, ending with a call to action for AI professionals."
Tailoring the AI’s tone and style to a specific role or audience increases the specificity and relatability of the content.
Next, we’ll take a look at a few prompt patterns that can be applied when prompt engineering for content creation. Prompt patterns are high-level methods that provide reusable, structured solutions to overcome common LLM output problems. For more information about prompt patterns, check out our guide here: Prompt Patterns: What They Are and 16 You Should Know
Prompt patterns for content generation prompts
Prompt Pattern #1: Template Pattern
Send a specific template format that you want the LLM to follow when it produces an output.
Example Prompt:
"Provide a LinkedIn post about the key benefits of prompt engineering for AI content creation, with sections:
Introduction:
Key Benefits:
Real-World Example:
Call to Action:"
Prompt Pattern #2: Reflection Pattern
Prompt the LLM to introspect and suggest improvements after generating the first draft.
Example Prompt Refinement: "Write a LinkedIn post about prompt engineering and then reflect on the output to suggest improvements."
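Over an API, one way to run this pattern is as a two-turn exchange: generate the draft, then feed it back and ask for a critique and rewrite. A minimal sketch (model name illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Turn 1: generate the first draft
messages = [{"role": "user", "content": "Write a LinkedIn post about prompt engineering."}]
draft = client.chat.completions.create(model="gpt-4o", messages=messages)
draft_text = draft.choices[0].message.content

# Turn 2: ask the model to reflect on its own output and improve it
messages += [
    {"role": "assistant", "content": draft_text},
    {"role": "user", "content": "Reflect on the post above: list its weaknesses, then rewrite it with those fixes applied."},
]
revised = client.chat.completions.create(model="gpt-4o", messages=messages)
print(revised.choices[0].message.content)
```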
Advanced prompt engineering methods for content creation
Next up, we’ll take a look at some more advanced prompt engineering techniques that you can add to your tool belt to generate better content. All of these methods have free templates you can copy, use, and save.
Multi-Persona Prompting
Multi-Persona Prompting involves the LLM identifying multiple personas to collaboratively work on the task at hand.
For example, going back to our original prompt "Write a LinkedIn post about prompt engineering", the LLM may identify a prompt engineer, a social media marketer, and a LinkedIn influencer to collaboratively work on crafting the post.
This pattern is really fun to watch, as you can see the different personas working together.
By simulating a group of experts, the AI can produce richer, more nuanced outputs.
How It Helps:
- Diverse Perspectives: Incorporating different viewpoints can lead to more comprehensive and balanced content.
- Engagement: A variety of voices can make the content more engaging and relatable.
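There's no single canonical phrasing for this technique, but a multi-persona instruction might look something like this sketch:

```python
MULTI_PERSONA_PROMPT = """You will simulate a panel of three experts: a prompt
engineer, a social media marketer, and a LinkedIn influencer.

Task: write a LinkedIn post about prompt engineering.

Have each persona propose ideas and critique the others' suggestions, then
write a final post that incorporates the panel's best ideas."""
```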
"According to" Prompting
One of our favorite prompt engineering methods, due to its simplicity, this technique involves grounding the AI’s responses in specific, reliable sources, which can help improve accuracy and reduce hallucinations.
For more info on this method, check out our blog post here: Improve accuracy and reduce hallucinations with a simple prompting technique.
Example Prompt : "Write a LinkedIn post about prompt engineering, citing information according to the PromptHub blog."
How It Helps:
- Accuracy: Grounding responses in reliable sources reduces hallucinations.
- Credibility: Citing reputable sources enhances the credibility of the content.
EmotionPrompt
LLMs, having been trained on human data, have a funny way of holding up a mirror to us. EmotionPrompt is an example of this. By incorporating emotional stimuli into prompts, we can get better and more accurate outputs.
Example Prompt: "Write a LinkedIn post about the key benefits of prompt engineering for AI content creation; this is crucial for my job"
How It Helps:
- Increased accuracy: Emotional language has been shown to produce more accurate outputs.
Chain of Density (CoD) Prompting
Chain of density prompting aims to improve summaries by iteratively integrating relevant entities into the summary, balancing detail and brevity.
How It Helps:
- Brevity with Depth: Ensures the content is both informative and succinct.
- Readability: Helps in producing summaries that are easy to read and understand.
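The original CoD prompt targets article summarization; here's a simplified sketch adapted from the paper's structure (the iteration count and exact wording are illustrative):

```python
COD_PROMPT = """Article: {article}

You will generate increasingly dense summaries of the article above.
Repeat the following steps 5 times:
1. Identify 1-3 informative entities from the article that are missing
   from the previous summary.
2. Write a new summary of identical length that covers every entity from
   the previous summary plus the missing ones.

A missing entity is relevant, specific, and not yet in the summary.
Never drop entities that appeared in a previous summary.
Output all 5 summaries."""
```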
Real world challenges and solutions
Even with all the tips, templates, and methods we've covered, there are still plenty of hurdles when writing prompts for content generation, from 'sounding like AI' to outputs that are just too long.
Let's look at some challenges and solutions from people working in the field.
Peter Gostev, head of AI at Moonpig and a great follow on LinkedIn, says the most common challenge he faces is “the LLMs change the style too much, making it generic.”
To address this common problem, Peter spun up a pretty cool custom GPT called “Slight Re-Writer”, which has the model spell out how much it is allowed to change the content, keeping it from changing too much and making the text sound like AI.
Erich Hellstrom, the PromptPerfect founder mentioned earlier, says that brevity is one of the bigger challenges he runs into. This complaint seems to be becoming more common with GPT-4o.
Erich mentions, “The best way I’ve found to overcome overly long outputs is to prompt the LLM to write based on examples, or to iteratively tell it how to edit the content after it first generates it, with specific things to remove.”
Here's one of Erich’s content generation prompts:
Stefan Keranov, Co-founder at Mindstone, is constantly shipping content generation prompts to production for the courses they run. Unsurprisingly, he notes that the biggest issues they bump up against are related to hallucinations. Stefan and the team at Mindstone elect to have a human-in-the-loop for any course content that isn’t generated on the fly.
For more help on reducing hallucinations, check out our blog post: Three Prompt Engineering Methods to Reduce Hallucinations
Danai Myrtzani, a prompt engineer at Sleed, notes a few common challenges. One of them is maintaining context and continuity when generating longer pieces, like white papers. One prompt engineering method to reach for in those situations is Skeleton of Thought Prompting.
Additionally, Danai spends a lot of time generating content in languages other than English, like Greek. She notes that Google’s Gemini tends to do a better job at this compared to OpenAI’s models, but generating content in Greek still requires a lot of post-editing compared to generating content in English.
Here's a prompt Danai uses often for writing content:
Parameters for content generation prompts
Parameters are LLM settings you can adjust to affect the outputs generated. For more info on Anthropic's and OpenAI's model parameters, check out our guides:
- Understanding OpenAI parameters: Optimize your Prompts for Better Outputs
- Using Anthropic: Best Practices, Parameters, and Large Context Windows
We'll go over the most important parameters, and some guidelines about what values work best for content generation.
Temperature
Temperature influences how deterministic the response from the model will be. The lower the temperature, the more deterministic. The higher the temperature, the more creative and chaotic the response will be.
For OpenAI models, temperature can be between 0 and 2. Higher values (e.g., 0.8) will make outputs more random and creative; lower values will make them more deterministic.
For Anthropic models, temperature can be between 0 and 1.
For both providers, the default value is 1.
Some values to test:
- Creative Tasks (e.g., blog posts, social media content): 0.7 to 0.9
- Factual Tasks (e.g., technical writing): 0.2 to 0.5
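In code, temperature is just another request parameter. Reusing the client from the earlier sketch (model name still illustrative):

```python
# Creative task: lean into randomness
creative = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a playful LinkedIn post about prompt engineering."}],
    temperature=0.8,
)

# Factual task: keep the output tight and repeatable
factual = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain what the temperature parameter does."}],
    temperature=0.3,
)
```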
Top-p (Nucleus Sampling)
Top-p, similar to temperature, influences the creativity of the model’s responses. It limits the model to consider only the most probable tokens until the cumulative probability reaches the chosen threshold.
For example, a top-p value of 0.5 means only the tokens comprising the top 50% of probability mass are considered.
It’s recommended to adjust either temperature or top-p, but not both. In practice, we see more teams use temperature rather than top-p.
For both OpenAI and Anthropic, top-p defaults to a value of 1 and can range between 0 and 1.
Some values to test:
- Recommended Values: 0.85 to 0.95 to balance creativity and coherence.
Max Tokens
Max tokens sets the maximum length of the model's output, in tokens. The model can stop before that value, but it will not exceed it.
So if the model is producing more content than you'd like, try using the max tokens parameter to constrain it.
Some values to test:
- Short Content (e.g., Tweets): 50-100 tokens.
- Medium Content (e.g., LinkedIn posts): 150-250 tokens.
- Long Content (e.g., Blog posts): 500-1000 tokens.
Frequency Penalty
The frequency penalty parameter reduces the likelihood of repeated phrases or words by penalizing frequent tokens. This decreases the likelihood that the model repeats something verbatim.
It is an optional parameter that defaults to 0 and can range between -2.0 and 2.0. The higher the number, the larger the penalty.
This is one of the more finicky parameters. For example, the OpenAI documentation mentions the value can be between -2.0 and 2.0, but in their own playground, you can’t set the value to anything below 0. Anthropic doesn’t offer this parameter.
Our recommendation would be to leave this value at the default of 0. If you’re having problems with repetitive content, a good place to start testing this parameter would be around 0.5.
Presence Penalty
Presence penalty is very similar to frequency penalty, and the same guidance applies here.
Stop Sequences
Stop sequences are text sequences that will cause the model to stop generating text.
- Example: Use a stop sequence like "\n\n" to end a response neatly.
Examples
- Parameter Settings for Different Scenarios:
- Creative LinkedIn Post:
- Temperature: 0.8
- Max Tokens: 200
- Top-p: 1 (default)
- Technical Blog Post:
- Temperature: 0.4
- Max Tokens: 800
- Top-p: 1 (default)
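Translating the creative LinkedIn post settings into an actual request looks like this (model name illustrative; frequency penalty left at its default of 0):

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a LinkedIn post about prompt engineering."}],
    temperature=0.8,  # creative LinkedIn post
    max_tokens=200,   # roughly LinkedIn-post length
    top_p=1,          # default; steer with temperature instead
)
```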
How to evaluate outputs
When it comes to content generation prompts, evaluating output quality is much more art than science. Frankly, the best method here is human review. Here are a couple of ways to make this process more systematic.
Step 1: Model Selection
If you already know which model you want to use, great. But it may be worth revisiting, as LLMs can drift and change over time.
Test an initial version of your prompt across a variety of models to see how the outputs look. You're not looking for perfect outputs, just a general sense of which model produces the highest-quality output. Look for things like structure, tone, and length.
Batch testing in PromptHub is one way to do this very easily.
Step 2: Parameter Tweaks
Okay, we’ve got our model set; now let’s turn to the parameters. We gave some broad guidelines above, but you’ll want to do some direct testing yourself as each use case differs.
The first parameter to test, and arguably the most important, is temperature. You’ll want to test your prompt across the same model but vary the temperature. This will allow you to see how the outputs change in style as the temperature changes.
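A simple way to do this programmatically is to hold everything else constant and sweep temperature (values and model are illustrative; the client and prompt come from the earlier sketches):

```python
for temp in (0.2, 0.5, 0.8, 1.0):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=temp,
    )
    print(f"--- temperature={temp} ---")
    print(response.choices[0].message.content)
```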
Step 3: Prompt Iteration
Now you’ll want to start testing different versions of the prompts. You can test them side-by-side, in batches, or however works best for you. You should spend the most time in this step. Once you’re confident in the prompt, you can move on to the last step.
Step 4: Test with Different Data
Now that the model, parameters, and prompt are all set, let’s make sure this prompt works well with different data injected, rather than with just the base data we’ve been using. You can do this via datasets in PromptHub. Upload a CSV or create a quick dataset by hand, and then run your prompt over that dataset.
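If you want to approximate this outside of PromptHub, a small script works: read rows from a CSV and inject each row's values into your prompt template (the file name and the topic column are assumptions about your data):

```python
import csv

TEMPLATE = "Write a LinkedIn post about {topic}, aimed at professionals in the AI industry."

with open("topics.csv", newline="") as f:
    for row in csv.DictReader(f):
        filled = TEMPLATE.format(topic=row["topic"])  # assumes a 'topic' column
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": filled}],
        )
        print(filled, "->", response.choices[0].message.content)
```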
Step 5: Get Your Team Involved to Review!
After each step, you should commit your changes or open up a merge request so that your team can review what you’ve done, test it further, and give you notes. Prompt engineering is better done with various perspectives, and you get better outputs by getting fresh eyes on the problem. PromptHub has a variety of prompt versioning features to help make this easy.
Wrapping up
We've covered a lot of ground here. We looked at foundational principles, prompt patterns, and advanced methods you can leverage to get better outputs when writing prompts to generate content. By understanding and applying these techniques, you can generate content that is genuinely high quality and relevant.
We say it all the time, but prompt engineering is an iterative process. It takes some work. Utilize the insights from the experts quoted here and get to testing. Whether you're crafting LinkedIn posts or detailed reports, these strategies will help you when writing content generation prompts.