I spent three months wondering why my ChatGPT results were mediocre while others were getting incredible outputs from the same AI. Was I using it wrong? Did they have some secret access to better models?
Turns out, the difference was entirely in how we wrote prompts.
I was typing things like "write a blog post about productivity" and getting generic, forgettable content. Meanwhile, people who understood prompt engineering were getting specific, nuanced, publication-ready writing from the exact same AI.
Once I learned proper prompt engineering, my results transformed overnight. The AI went from giving me basic responses to producing work that often only needed minor edits.
Here's everything I learned about writing prompts that actually work, with real examples of terrible prompts versus great ones, and a framework you can start using immediately.
Why most prompts fail (and yours probably do too)
Let me show you a prompt I used to write all the time:
Bad prompt: "Write a blog post about time management"
What I got: A generic 400-word essay that could have been written in 2005, with obvious points like "make a to-do list" and "prioritize important tasks." Completely useless.
Why it failed:
- No context about audience or purpose
- No tone or style guidance
- No structure requirements
- No length specifications
- No unique angle or perspective
The AI had to guess what I wanted, and it guessed wrong.
Most people write prompts the way you'd give directions to someone who's never been to your city: "Go to the coffee shop." Which coffee shop? What's it near? What should they do when they get there?
Good prompts are specific, contextual, and leave no room for misinterpretation.
The anatomy of a great prompt
After analyzing hundreds of effective prompts, I've identified the core components that separate amateur prompts from professional ones.
The 7 essential elements
1. Role/Persona
Tell the AI what perspective to take.
Bad: "Write about marketing"
Good: "You are a senior marketing consultant advising a B2B SaaS startup"
2. Task
Clearly state what you want created.
Bad: "Tell me about SEO"
Good: "Create a step-by-step SEO audit checklist for a 10-page business website"
3. Context
Provide relevant background information.
Bad: "Write an email"
Good: "Write an email to a client who missed their payment deadline. This is their first late payment in 2 years of working together. We want to maintain the relationship while being firm about payment."
4. Format
Specify the structure and style.
Bad: "Give me some ideas"
Good: "Provide 10 ideas in a numbered list, with each idea explained in 2-3 sentences"
5. Constraints
Set boundaries and limitations.
Bad: "Write something"
Good: "Write exactly 300 words. Must include 3 statistics. Use conversational tone. Avoid jargon."
6. Examples (optional but powerful)
Show the AI what "good" looks like.
Bad: "Write in an engaging way"
Good: "Write in an engaging way. Example of the tone I want: 'Look, nobody actually enjoys meal planning. But imagine opening your fridge on Tuesday night and knowing exactly what you're making – ingredients ready, recipe saved, no last-minute pizza order.'"
7. Desired Output
Clarify what success looks like.
Bad: "Help me with this"
Good: "Your output should be a single-page executive summary that a non-technical CEO can read in 3 minutes and make a decision from".
The formula: Putting it all together
Here's the framework I use for almost every prompt:
[ROLE] You are [specific expertise/perspective]
[CONTEXT] [Relevant background information about the situation]
[TASK] [Exact action you want performed]
[FORMAT] Output should be [structure/style specifications]
[CONSTRAINTS] Requirements:
- [Constraint 1]
- [Constraint 2]
- [Constraint 3]
[EXAMPLE] (if needed) Here's an example of what I'm looking for: [example]
[OUTPUT] The final result should [description of successful outcome]
Let me show you this in action.
Real examples: Bad vs. Good prompts
Example 1: Content creation
Amateur prompt: "Write a blog post about AI"
What you get: Generic content that says obvious things everyone already knows.
Professional prompt:
You are a technology journalist writing for business executives who are curious about AI but skeptical of hype.
Context: Many executives have heard about AI but don't understand practical applications beyond chatbots. They need concrete examples relevant to their industries.
Task: Write a 1,200-word blog post titled "5 AI Applications That Actually Deliver ROI (With Real Numbers)"
Format:
- Engaging hook (2 paragraphs)
- 5 sections, one per application
- Each section: what it does, real company example, ROI metrics, implementation difficulty
- Conclusion with action steps
Constraints:
- Use conversational but professional tone
- Include at least 3 specific dollar amounts or percentages
- Avoid AI jargon or explain it when used
- No hype or exaggeration
Example of tone: "Look, AI isn't magic. But when a mid-sized manufacturer cuts quality control costs by 40% using computer vision, that's not hype – that's a business case."
Output: A blog post that executives could share with their board to justify exploring AI, with enough specifics to be credible but accessible enough for non-technical readers.
What you get: Specific, useful content with real examples and data that actually helps the target audience make decisions.
Example 2: Code generation
Amateur prompt: "Write Python code to analyze data"
What you get: Basic code that might not even run or address your actual needs.
Professional prompt:
You are an experienced Python developer helping a data analyst who knows Python basics but isn't an expert.
Context: I have a CSV file with 50,000 rows of sales data (columns: date, product_id, quantity, price, region). I need to find patterns in sales by region and product over time.
Task: Write Python code to:
1. Load the CSV file
2. Calculate total sales by region for each month
3. Identify the top 5 products in each region
4. Create a simple visualization showing regional trends
Format: Provide complete, runnable code with:
- Comments explaining each section
- Error handling for common issues (missing file, bad data)
- Print statements showing progress
- Simple matplotlib visualizations
Constraints:
- Use pandas and matplotlib only (no other libraries)
- Code must run on Python 3.8+
- Include example output so I know what to expect
- Keep it under 100 lines
Output: Copy-paste-ready code that I can run immediately with my CSV file, with enough explanation that I understand what each part does.
What you get: Working code with explanations, error handling, and exactly what you need.
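For reference, here is a condensed sketch of the kind of script that prompt asks for. It assumes the sales.csv file and column names from the prompt above; treat it as an illustration, not the output of any particular model:

```python
# Condensed sketch of the script the prompt above asks for. Assumes
# sales.csv with columns: date, product_id, quantity, price, region.
import pandas as pd
import matplotlib.pyplot as plt

try:
    df = pd.read_csv("sales.csv", parse_dates=["date"])
except FileNotFoundError:
    raise SystemExit("sales.csv not found - check the file path")

df["revenue"] = df["quantity"] * df["price"]
df["month"] = df["date"].dt.to_period("M")

# 1-2. Total sales by region for each month
monthly = df.groupby(["region", "month"])["revenue"].sum().reset_index()
print(monthly.head())

# 3. Top 5 products in each region by total revenue
top5 = (df.groupby(["region", "product_id"])["revenue"].sum()
          .groupby("region", group_keys=False).nlargest(5))
print(top5)

# 4. Simple visualization of regional trends
monthly.pivot(index="month", columns="region", values="revenue").plot(
    title="Monthly revenue by region")
plt.ylabel("Revenue")
plt.tight_layout()
plt.show()
```

Every numbered requirement in the prompt maps to a visible block in the script, which is exactly what a well-specified prompt buys you.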
Example 3: Business strategy
Amateur prompt: "Help me with my marketing strategy"
What you get: Generic advice that could apply to any business.
Professional prompt:
You are a fractional CMO with 15 years' experience in B2B SaaS marketing, specifically for companies in the $1M-10M ARR range.
Context: I run a project management SaaS tool competing with Asana and Monday.com. Current ARR: $2.3M. Team of 8. Marketing budget: $30K/month. Struggling with customer acquisition cost ($850 CAC, need to get under $500). Best customer segment: creative agencies 10-50 employees.
Task: Analyze our situation and recommend 3 specific marketing strategies we should test in the next quarter.
Format: For each strategy, provide:
1. One-sentence description
2. Why it fits our specific situation
3. Step-by-step implementation plan (4-6 steps)
4. Expected budget allocation
5. Success metrics to track
6. Timeline to see results
7. Potential risks
Constraints:
- Strategies must fit within $30K monthly budget
- Must be executable by a team of 8
- Focus on reducing CAC, not just increasing volume
- Consider our competitive position against established players
Output: A strategic memo that I can present to my team tomorrow with clear priorities and action items.
What you get: Actionable, specific advice tailored to your exact situation.
Advanced techniques that multiply your results
Once you master the basics, these techniques will take your prompts to the next level.
Technique 1: Chain of thought prompting
Instead of asking for a final answer, ask the AI to show its reasoning.
Basic: "Is this marketing copy effective?"
Chain of thought: "Analyze this marketing copy step by step:
- First, identify the target audience implied by the language
- List the specific persuasion techniques being used
- Evaluate clarity of the value proposition
- Assess the call-to-action strength
- Finally, give an overall effectiveness rating with specific improvements"
This produces better answers because the AI has to reason through the problem instead of jumping to conclusions.
Technique 2: Few-shot learning (examples)
Show the AI 2-3 examples of what you want.
Basic: "Write product descriptions"
With examples:
Write product descriptions for these items. Here are examples of the style I want:
Example 1: "This isn't just another water bottle. It's the one that'll actually make you drink water. The time markers guilt you into staying hydrated, the straw makes it effortless, and the size ensures you're not refilling every 20 minutes."
Example 2: "Your back hurts because your chair is terrible. This one isn't. Lumbar support that actually supports. Armrests that adjust to where your arms actually are. Mesh that breathes. Stop suffering."
Now write descriptions for: [your products]
The AI will match your style much more accurately.
Technique 3: Iterative refinement
Don't expect perfection on the first try. Build on previous outputs.
First prompt: "Write a blog post intro about remote work productivity"
Second prompt: "That's good, but make it more conversational and add a surprising statistic in the first sentence"
Third prompt: "Better. Now cut it to half the length while keeping the impact"
Each iteration gets closer to what you actually want.
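The same technique works over the API, as long as you keep the conversation history so each refinement sees the previous draft. A minimal sketch with the OpenAI Python client (v1+); the model name is illustrative, and an API key is assumed in the OPENAI_API_KEY environment variable:

```python
# Iterative refinement over the API: append each draft and follow-up so
# the model always sees the full conversation. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user",
             "content": "Write a blog post intro about remote work productivity"}]
refinements = [
    "That's good, but make it more conversational and add a surprising "
    "statistic in the first sentence",
    "Better. Now cut it to half the length while keeping the impact",
]

reply = client.chat.completions.create(model="gpt-4o", messages=messages)
draft = reply.choices[0].message.content

for step in refinements:
    # Keep the previous draft in context, then ask for the refinement
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": step})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    draft = reply.choices[0].message.content

print(draft)  # the final, twice-refined intro
```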
Technique 4: Persona specificity
Instead of generic roles, create detailed personas.
Generic: "You are a marketing expert"
Specific: "You are Maya Rodriguez, a 38-year-old CMO who built two startups from $0 to $50M. You're skeptical of marketing trends and focus obsessively on ROI. You communicate in short, direct sentences and use data to back every claim. You've seen every marketing gimmick fail and have strong opinions about what actually works."
Specific personas produce more distinctive, valuable outputs.
Technique 5: Constraint-based creativity
Counterintuitively, adding constraints often produces better creative results.
Weak: "Write a creative product description"
Strong: "Write a product description that:
- Uses no adjectives
- Is exactly 50 words
- Includes a question
- Mentions a specific pain point
- Ends with a concrete benefit"
Constraints force the AI (and you) to be more creative within boundaries.
Common mistakes (and how to fix them)
Mistake 1: Being too vague
Bad: "Write something good"
Fix: Define "good" with specific criteria
Mistake 2: Asking multiple things at once
Bad: "Write a blog post and also some social media posts and maybe an email"
Fix: One task per prompt, or clearly separate multiple tasks
Mistake 3: Assuming context
Bad: "Improve this" [paste text without explaining what needs improving]
Fix: Explain what's wrong and what "improved" means to you
Mistake 4: No quality criteria
Bad: "Analyze this data"
Fix: Specify what insights you're looking for and how thorough the analysis should be
Mistake 5: Ignoring format
Bad: "Tell me about SEO"
Fix: Specify if you want a list, essay, step-by-step guide, comparison, etc.
Mistake 6: Unrealistic expectations
Bad: "Write a viral tweet" (AI can't predict virality)
Fix: "Write a tweet with strong emotional hook, under 280 characters, about [topic]"
Mistake 7: Not iterating
Bad: Getting mediocre output and giving up
Fix: Refine your prompt based on what's missing from the first result
Prompt templates you can steal
Here are my most-used prompt templates. Replace the bracketed sections with your specifics.
Template 1: Content creation
You are [expert role] writing for [specific audience].
Write a [content type] about [topic] that [specific goal].
Format: [structure requirements]
Include:
- [Element 1]
- [Element 2]
- [Element 3]
Tone: [style description with example]
Length: [word count or time to read]
The output should [success criteria].
Template 2: Analysis and feedback
You are [expert role] reviewing [what's being reviewed].
Context: [background information]
Analyze [specific aspect] and provide:
1. [First type of feedback]
2. [Second type of feedback]
3. [Specific recommendations]
Format as: [structure preference]
Be [tone – critical/supportive/balanced]
Most important: [priority criteria]
Template 3: Problem-solving
You are [expert role] solving [type of problem].
Problem: [clear description of the issue]
Context: [relevant constraints, resources, requirements]
Provide [number] solutions that:
- [Requirement 1]
- [Requirement 2]
- [Requirement 3]
For each solution include:
1. [What to include]
2. [What to include]
3. [What to include]
Prioritize [what matters most]
Template 4: Learning and explanation
You are [teaching role] explaining [concept] to [specific learner type].
Assume I understand [what they know] but not [what they don't know].
Explain [topic] by:
1. [Teaching approach]
2. [Teaching approach]
3. [Teaching approach]
Use [type of examples]
Avoid [what not to do]
Format: [structure preference]
Success = [learning outcome]
How to test if your prompts are working
Don't guess if your prompts are effective. Test them.
The consistency test: Run the same prompt 3 times. Do you get similar quality? If results vary wildly, your prompt is too vague. (A sketch for automating this follows the list of tests below.)
The specificity test: Can you point to specific prompt elements that influenced specific output elements? If not, your prompt isn't directing the AI effectively.
The time test: Does the output save you time versus doing it yourself? If you'd have written it yourself in 15 minutes but spend 20 minutes editing AI output that took 10 seconds to generate, that's a loss, not a win. Count the full cycle, not just generation time.
The quality test: Would you share this output with a colleague, client, or boss? If you'd be embarrassed to, your prompt needs work.
The repeatability test: Can you use this prompt again for similar tasks? Good prompts are reusable templates, not one-offs.
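If you work through the API, the consistency test is easy to automate. A minimal sketch with the OpenAI Python client (v1+), again assuming an API key in OPENAI_API_KEY; the model name and prompt are illustrative:

```python
# The consistency test: send the identical prompt three times and compare.
# Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = ("You are a senior marketing consultant advising a B2B SaaS startup. "
          "List 3 concrete ways to reduce customer acquisition cost.")

for run in range(1, 4):
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Run {run} ---")
    print(reply.choices[0].message.content)
```

Read the three outputs side by side. Big swings in structure or quality usually mean the prompt is underspecified.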
Building your prompt library
Don't start from scratch every time. Build a library of proven prompts.
I keep a simple document with:
- Prompts that worked exceptionally well
- What made them effective
- When to use each one
- Variations for different situations
Categories in my library:
- Content creation (blog posts, social media, emails)
- Code and technical (scripts, debugging, documentation)
- Analysis (data, text, strategy)
- Creative (naming, taglines, concepts)
- Communication (difficult conversations, pitches)
- Learning (explanations, tutorials)
When I need something, I start with a proven template from my library rather than writing from scratch.
How to build yours:
- Every time an AI output exceeds your expectations, save the prompt
- Note what made it work
- Strip out the specific details to create a template
- Test the template on a new situation
- Refine based on results
Within a month, you'll have 20-30 solid prompts covering 90% of your AI use cases.
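If you'd rather keep your library in code than in a document, templates with named placeholders make the fill-in step explicit. A minimal sketch; the template text and field names are just one way to slice it:

```python
# A prompt library as code: strip the specifics out of a winning prompt,
# keep named placeholders, and fill them in per task. Field names are
# illustrative, not a required format.
CONTENT_TEMPLATE = """You are {role} writing for {audience}.
Write a {content_type} about {topic} that {goal}.
Tone: {tone}
Length: {length}
The output should {success_criteria}."""

prompt = CONTENT_TEMPLATE.format(
    role="a technology journalist",
    audience="business executives who are skeptical of hype",
    content_type="blog post",
    topic="AI applications with measurable ROI",
    goal="helps them decide whether to fund a pilot project",
    tone="conversational but professional",
    length="about 1,200 words",
    success_criteria="be credible enough to share with a board",
)
print(prompt)
```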
The meta-skill: Learning from AI responses
Here's something most people miss: the AI's response tells you how to improve your prompt.
If the output is too generic, you needed more specificity.
If it's the wrong tone, you needed clearer tone guidance.
If it's too short/long, you needed length requirements.
If it misunderstood, you needed more context.
Each mediocre response is feedback about your prompt. The best prompt engineers learn from every interaction.
I keep a running note: "When this happens in output → add this to prompt next time"
Example learnings:
- AI using buzzwords → Add "avoid jargon" constraint
- AI being overly formal → Provide casual tone example
- AI missing key points → Add explicit requirement to cover those points
- AI output too generic → Add context about what makes this situation unique
FAQ
How long should prompts be?
Long enough to be clear, short enough to be practical.
My prompts range from 50–500 words depending on complexity.
Don't pad for length, but don't sacrifice clarity for brevity.
Do I need to use special formatting?
No, but it helps with readability (for you).
I use line breaks, bullets, and clear sections because it helps me scan and reuse prompts — not because the AI needs it.
Should I be polite to AI?
Interesting question.
Being polite doesn't change AI output quality, but it does keep you in the habit of clear communication.
I say "please" and "thank you" sometimes just for me, not the AI.
Can I save time by making prompts shorter?
Ironically, no.
Vague short prompts cost more time because the output is wrong and you have to redo it.
Specific longer prompts save time overall.
Does prompt engineering work the same across different AI models?
Mostly yes, but nuances exist.
GPT-4 handles complexity better than GPT-3.5.
Claude is better with long documents.
Each model has quirks you learn through use.
Your action plan: Getting better at prompts
Don't try to master everything at once. Here's how to actually improve:
Week 1: Use the formula
For every AI interaction this week, consciously include: role, task, context, format, constraints. Even if it feels verbose.
Week 2: Add examples
Keep using the formula, but now add 1-2 examples of what you want. See how much better outputs become.
Week 3: Iterate
Stop accepting first outputs. Take mediocre results and refine your prompt. Learn what changes produce what improvements.
Week 4: Build your library
Start saving prompts that work. Create your first 5-10 templates for tasks you do regularly.
Week 5+: Experiment
Try advanced techniques. Chain of thought prompting. Persona specificity. Constraint-based creativity.
Give yourself a month of conscious practice. Your prompt quality will transform.
The real skill isn't writing prompts – it's thinking clearly
Here's what I learned after months of prompt engineering: the skill isn't manipulating AI. It's learning to communicate what you want with precision.
Good prompt engineering forces you to think clearly about:
- What exactly do I want?
- What context matters?
- What does success look like?
- What style/tone/format do I need?
These questions improve every form of communication, not just AI prompts.
I've become better at briefing freelancers, writing project specs, and even explaining what I need from colleagues. Because prompt engineering is really just clarity training.
The AI is your mirror. If your prompts are vague, it's because your thinking is vague. If your outputs are mediocre, maybe your requirements are unclear.
Master prompt engineering and you master the art of precise communication.
And that skill is valuable whether AI exists or not.
Start today: Take your next AI prompt and run it through the 7 elements framework. Add role, context, format, constraints. See what happens.
That's where prompt engineering mastery begins.