A practical prompt library by use case — writing, coding, research, sales, productivity — with the prompt-engineering principles behind each.
- Most viral "best ChatGPT prompts" lists are theater — the "I am sad, my grandmother died" jailbreaks and 5,000-character meta-prompts almost never beat a boring, specific request.
- The real formula is unglamorous: Role + Context + Task + Format/Examples. Anything else is decoration.
- Specificity beats clever phrasing. "Write me a cold email" loses to "Write a 75-word cold email to a Series B SaaS VP of Sales who books 8 demos a month."
- Custom Instructions and Memory now do more work than the prompt itself. Fill them in once and never write your title or audience again.
- GPT-5 in 2026 is more direct, hallucinates less, and grounds answers with web search by default — which means most "trick" prompts from 2023 are obsolete.
The "I am sad, my grandmother used to read me Windows product keys" prompts are theatrical. They went viral because they were funny, not because they produced better output. The actual best ChatGPT prompts are boringly specific: a clear role, the context the model needs, the task in plain language, and a format spec it can follow. The rest is folklore.
I use ChatGPT every day for real work — cold emails, debugging TypeScript, summarizing research, planning weeks. The prompts that earn their keep look more like a brief you would give a junior employee than a magic spell. This post is that library: fifty-plus prompts grouped by use case, plus the principles behind why each one works, so you can adapt them instead of copy-pasting blindly. No jailbreaks. No "act as a 200-IQ prompt engineer." Just prompts that hold up when the task is real and the deadline is today.
What changed about prompting in 2026
If your prompt habits froze around GPT-4, half of what you know is now wrong. GPT-5 follows instructions on the first pass instead of needing three rounds of "no, shorter." Hallucinations are down because web search is on by default for time-sensitive queries, and the model grounds claims in citations — which means the elaborate "only answer if you are 100% certain" preambles are obsolete.
The bigger shift is structural. Custom GPTs and Memory replaced most one-off prompts. Instead of pasting "I run a B2B SaaS, my tone is direct" at the top of every chat, you set it once in Custom Instructions and it carries forward. Power users in 2026 spend 80% of their effort there, 20% on the actual prompt. Long context also changed document handling — you no longer chunk a 40-page PDF, you paste it whole.
The 4-part prompt formula
Every prompt that consistently works contains four pieces. Miss one and quality drops. Add fluff beyond these four and you are wasting tokens. Here is the structure, with an example showing each part.
Step 1 — Role
Who is ChatGPT supposed to be? Not "a 10x engineer" — that is theater. Use a real role: "a senior backend engineer who has shipped Stripe integrations." This narrows vocabulary and reasoning patterns.
Step 2 — Context
What does the model need to know that it cannot guess? Audience, constraints, what you already tried, the codebase or document. Most failed prompts fail here — the user assumes ChatGPT knows things it does not.
Step 3 — Task
One imperative verb. "Write," "summarize," "rewrite," "extract," "compare." If you have two tasks, run two prompts.
Step 4 — Format and examples
Specify output format ("3 bullets, max 12 words each") and, when stakes are high, paste one or two examples of what good looks like. Examples beat instructions for tone, structure, and length.
Assembled: "You are a senior B2B copywriter who has written for Stripe and Notion. Context: pitching a PM tool to engineering managers at 50–500 person SaaS companies, skeptical of new tools, care about migration cost. Task: write a 75-word cold email opening that leads with a specific pain point, not a feature. Format: plain text, no greeting, no signature. Tone: peer-to-peer, not salesy."
Writing prompts
Writing is where vague prompts produce the most generic slop. The fix is specificity — about audience, length, voice, and the one job the piece needs to do. Here are the writing prompts I reuse weekly.
Blog post outline
"You are a content strategist for [niche]. Outline a 1,800-word post titled '[title]' for [persona] who knows [baseline] and is searching because [intent]. Output: H2/H3 structure, one-sentence summary under each H2, 3 questions the post must answer."
Sales email rewrite
"Rewrite this cold email 40% shorter, lead with a pain point not a feature, end with one low-friction CTA (15-minute call, not a demo). Keep my voice: direct, no marketing words, no 'hope this finds you well.' [paste email]"
Proofread + rewrite
"Proofread for grammar, clarity, flow in two passes: pass 1 fixes errors only; pass 2 rewrites awkward sentences. Show both side by side, with one line per rewrite explaining why. [paste text]"
Twitter thread from blog post
"Turn this post into a 9-tweet thread. Tweet 1: hook with a counterintuitive claim from the post (no emoji, no 'Here's why'). Tweets 2–8: one section each. Tweet 9: link back. Max 270 chars. No threadboi tropes. [paste post]"
Summarize a long document
"Summarize the doc in three layers: (1) one-sentence TL;DR, (2) 5-bullet executive summary, (3) detailed summary by section with 1–2 top quotes each. Flag any claim that lacks supporting evidence. [paste doc]"
Rewrite for a different audience
"Rewrite for [new audience]. Replace every term they will not know with their vocabulary. Cut analogies that assume their context. Keep structure. After the rewrite, list every term you changed and why. [paste]"
Coding prompts
Coding is where ChatGPT shines brightest in 2026 — but only if you paste actual code, error messages, and constraints. The model cannot guess your stack, your style, or whether you care about performance over readability. Tell it.
Debug an error
"Error: [paste]. Code: [paste function]. Stack: [Node 20, TS 5.4, Express]. Tried: [list]. List the 3 most likely root causes ranked by probability, the diagnostic for each, and the minimal fix. Do not suggest 'add try/catch.'"
Write tests
"Write Jest unit tests for the function. Cover: happy path, empty input, null input, boundaries, one realistic failure mode. Use describe/it. No mocks unless there is an external dependency. [paste function]"
Refactor a function
"Refactor for readability without changing behavior. No new deps, same signature, preserve comments. After, list every change one line each and why. If you cannot improve meaningfully, say so. [paste function]"
Explain unfamiliar code
"Walk me through this as if I am a mid-level engineer new to this codebase. Cover: (1) what it does in one sentence, (2) data flow line by line, (3) any non-obvious gotcha, (4) a test that breaks it. [paste code]"
Generate types from JSON
"Generate TypeScript types for the JSON. Use interfaces, not types. Mark optional fields based on whether they appear in every object. Use string literal unions where values are fixed. JSDoc any non-obvious field. [paste JSON]"
Translate between languages
"Translate this Python to idiomatic TypeScript. Preserve behavior exactly. Use native TS over Python-style code (e.g., Map over plain object for dynamic keys). Flag anywhere the translation is not 1:1 and why. [paste]"
Research prompts
Web-grounded ChatGPT in 2026 finally makes research prompts useful. The trick is forcing citations and forcing the model to disagree with itself. Without those constraints, you get a confident-sounding paragraph that might be half wrong.
Compare two products with citations
"Compare [A] and [B] for [use case]. Table: feature, A's approach, B's approach, source URL per row. End with one paragraph naming which I should pick and the single deciding factor. Use web search."
Summarize 10 papers
"Abstracts from 10 papers on [topic]. For each: one-sentence finding, sample size, methodology in 5 words, strongest critique. End with a synthesis: agreements and disagreements. [paste abstracts]"
Find counterarguments
"My claim: '[paste]'. Steel-man the 3 strongest counterarguments. For each: the argument in plain English, the evidence that would support it, a credible person who has made it. Do not soften or hedge."
Fact-check a claim
"Fact-check: '[paste]'. Use web search. Tell me: (1) true / partial / false / unverifiable, (2) primary source if any, (3) most common misstatement, (4) any caveat that matters. If sources disagree, say so."
Build a literature review
"Literature review on [topic] for [audience]. Find 12 foundational and recent works, grouped into 3–4 themes. Per work: full citation, one-sentence contribution, how it relates to the others. Use web search."
Map a market
"Map the [niche] tools market in 2026. Group competitors into 3–5 clusters by approach. Per cluster: 2–3 representative tools, the shared thesis, the customer they win. End with the gap nobody is filling and why it is opportunity vs. graveyard."
Sales and outreach prompts
Sales prompts fail when the model defaults to LinkedIn-influencer voice. The fix is over-specifying the persona on both sides — who you are, who they are, what you have learned about them — and forbidding the words that signal AI ("hope this finds you well," "leverage," "synergy").
Cold email
"75-word cold email. From: [your role + 1-line credibility]. To: [their name, role, what you know]. Trigger: [funding, hire, blog post]. Lead with the trigger, one value sentence, 15-minute ask. Banned: leverage, synergy, just, quickly, 'hope this finds you well.'"
Follow-up sequence
"4-email follow-up after no reply to email 1. Each under 60 words. Email 2: new angle (not 'just bumping'). Email 3: social proof with a specific number. Email 4: breakup that closes the loop. [paste email 1]"
Objection handling
"Prospect said: '[paste]'. Three response approaches: (1) acknowledge and reframe, (2) diagnostic question, (3) concrete proof point. For each, write the actual words I should say, not advice."
Proposal draft
"1-page proposal for [client], scope [X]. Sections: situation (3 sentences), deliverables (5 bullets), timeline (3 milestones), price ($X with one-line justification), next step. No filler, no 'about us.'"
LinkedIn message
"LinkedIn connection request, 280 chars max, to [role at company]. Hook: specific thing they said in [post / podcast / interview]. No 'love to connect,' no 'great content.' One genuine reason it resonated, no ask."
Productivity prompts
The productivity prompts that earn their place turn a 40-minute thinking task into a 5-minute one. Anything more ambitious — full GTD systems, "be my COO" — falls apart on contact with reality.
Weekly review
"Run a weekly review on my notes: (1) top 3 wins, (2) top 3 things that did not happen and why, (3) one decision I have been avoiding, (4) the single most important task next week. Be honest, not encouraging. [paste notes]"
Prioritize a task list
"My tasks below. Constraint: [hours, deadlines]. Goal this week: [goal]. Score each 1–10 on (a) impact, (b) urgency, (c) effort. Sort by impact/effort. Identify the 2–3 to do first and which to drop. [paste list]"
Brainstorm pros and cons
"Deciding whether to [decision]. Context: [3–4 sentences]. Give me 3 strongest pros, 3 strongest cons, one consideration I would not think of. End with the question I should answer before deciding, not a recommendation."
Decision matrix
"Choosing between [A], [B], [C]. Criteria: [list]. Build a weighted matrix: criterion, weight, score 1–10 per option, weighted total. Tell me which wins and which criterion is doing most of the work."
Meeting agenda
"30-minute agenda for [topic] with [attendees and roles]. Include: 3-sentence pre-read, 4 timed items each with a decision and owner, parking lot for off-topics. End with the one decision the meeting must produce."
Job search prompts
ChatGPT is the best job-search tool that exists, and most people use 5% of it. The prompts below cover the moments where it actually moves the needle: tailoring, cover letters, prep, negotiation.
Tailor resume to a JD
"Resume and JD below. Rewrite my bullets to mirror the JD's vocabulary without lying or inventing experience. Flag any key requirement I lack. Output: original → rewritten → reason. [paste both]"
Cover letter
"250-word cover letter for [role] at [company]. Open with one sentence naming a specific thing about the company (product, news, value) — no template hook. Middle: connect 2 resume bullets to 2 JD requirements. Close: one sentence on why I want this specific role, not 'this opportunity.'"
Interview prep
"Interview for [role] at [company] next [day]. 12 questions ranked by likelihood: 4 behavioral, 4 role-specific, 4 about the company. Per question: what they are really testing, a 60-second STAR skeleton I can fill in, one likely follow-up."
Salary negotiation script
"Offered $[X], target $[Y], BATNA [other offer / current job]. Counteroffer script: opening line, the number, one-sentence justification, my response if they (a) accept, (b) split, (c) say final. Tone: collaborative, not adversarial."
Behavioral question answers
"For each experience below, generate 3 STAR stories under 90 seconds spoken. Tag each with the questions it answers ('a conflict,' 'a time you failed'). Flag where I need a stronger result. [paste experiences]"
Creative prompts
Creative tasks default to the average when prompts are vague. Constrain hard. Specify what you do not want, not just what you do.
Brainstorm names
"30 names for [thing]. Constraints: [length, language, .com if relevant]. Mix: 10 descriptive, 10 invented, 10 metaphor-based. Ban: -ly, -ify, -hub, AI, anything starting with 'Get'. Pick your top 3 with one-line reasoning."
Story outline
"Outline a [length] short story, genre [X]. Constraint: the protagonist is wrong about something fundamental, and the reader figures it out before they do. Give me setup, three escalating turns, the reveal, the final image. No epilogue."
Image prompt for Midjourney
"Midjourney v6 prompt for [subject and mood]. Format: subject, environment, lighting, composition, palette, lens/medium, style refs (named photographers/artists), aspect ratio, --stylize. No purple prose, no 'masterpiece, 4K' tokens."
Headline variations
"12 headlines for [post / product]. Four formats: how-to, list, contrarian, question. Each under 60 chars. No clickbait, no 'You Won't Believe,' no lying curiosity gaps. Pick the strongest, explain why."
Naming a brand
"Naming a brand for [what + who]. 20 names in 4 directions: founder-name, abstract noun, compound, invented. Per name: meaning, .com likelihood, one trademark concern, one-line tagline that proves it works."
Custom Instructions vs prompts
If you are still pasting "I run a B2B SaaS, my tone is direct, my audience is engineering managers" at the top of every prompt, you are doing it wrong. Custom Instructions and Memory carry that context across every conversation. The split worth knowing:
| What goes where | Custom Instructions / Memory | The prompt |
|---|---|---|
| Stable facts about you | Role, industry, audience, tone preferences, banned words | Skip — it is already in CI |
| Task-specific context | Skip — it changes per task | The actual brief, document, code, constraints |
| Output preferences | "Always answer concisely, no preamble, code in TypeScript" | Specific format for this task ("3 bullets, 12 words each") |
| One-off info | Skip | "This week I'm pitching to a fintech in Berlin" |
Rule of thumb: anything you would say in 9 of 10 prompts belongs in Custom Instructions. Anything that changes belongs in the prompt. Most people invert this and end up with bloated prompts and empty CI fields.
Mistakes that ruin prompts
Most "ChatGPT gave me garbage" complaints trace back to the same five mistakes. None are about the model — they are about the prompt.
Vague request
"Write me a marketing email." For what? To whom? About what? The model averages over every marketing email it has seen — and that average is exactly what generic output looks like.
No role
Without a role, the model defaults to generic helpful-assistant tone. "You are a senior security engineer reviewing a junior's code" produces different output than "review this code." The role narrows vocabulary and priorities.
No examples
For tone and structure, examples beat instructions. Paste one or two pieces of "good" output if you can. "Write in my voice" without showing your voice produces a model averaging your description, not your voice.
No format spec
"Summarize this." Into what — bullets, paragraph, table? The model picks for you, usually wrong. Always specify shape and length. "Three bullets, max 12 words each" is a 4-second addition that doubles output quality.
"Be creative"
The worst instruction in prompt engineering. The model interprets "creative" as "more adjectives, more analogies, longer sentences" — AI slop turned up. Replace with constraints. Constraints produce creativity; "be creative" produces purple prose.
FAQ
What is the difference between a prompt and a Custom GPT?
A prompt is one-off — you write it, send it, get an answer. A Custom GPT bundles instructions, knowledge files, and behaviors into something reusable. If you paste the same setup more than 5 times, build a Custom GPT instead.
Do I need GPT-5 for these prompts?
Most work on GPT-4-class models too. GPT-5 helps most for long-context tasks (40+ page documents), code reasoning, and research where web search grounding matters. For everyday writing the difference is smaller than the marketing suggests.
Can I just copy other people's prompts?
You can, and they will produce mediocre output, because the prompt was written for someone else's audience, voice, and stack. Copy them as scaffolds, then fill in the specifics. The shape is reusable — the contents almost never are.
What model is best for what task?
Coding and reasoning: GPT-5 with thinking on. Writing and brainstorming: default GPT-5 or Claude Sonnet — taste over capability here. Research with citations: GPT-5 + web search. Images: Midjourney or GPT-5's image tool. Pick by task, not brand loyalty.
When should I use chain-of-thought prompting?
For multi-step reasoning (math, logic, complex code), tell the model to "think step by step before answering." For simple tasks it adds noise. Rule: if a smart human would think more than 30 seconds before answering, use CoT.
What is the difference between system prompt and user prompt?
The system prompt sets persistent behavior ("you are a senior copyeditor, never use 'just'"). The user prompt is the specific task. In the ChatGPT app, Custom Instructions act as the system prompt. System prompts are higher leverage — invest there first.
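If you ever drop down from the ChatGPT app to the API, the split is explicit in the request itself. A minimal sketch using the OpenAI Node SDK; the model name is a placeholder and the instruction text is illustrative, not a recommendation:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await client.chat.completions.create({
  model: "gpt-5", // placeholder; use whatever model you actually run
  messages: [
    // System prompt: persistent behavior, the API counterpart of Custom Instructions
    { role: "system", content: "You are a senior copyeditor. Never use the word 'just'." },
    // User prompt: the specific task for this one request
    { role: "user", content: "Tighten this paragraph to 60 words: [paste]" },
  ],
});

console.log(response.choices[0].message.content);
```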
The Bottom Line
The best ChatGPT prompts in 2026 are not clever — they are specific. Role, context, task, format. The viral grandmother-prompt era is over, and the prompts that survive are the ones a senior employee would write as a brief. Spend 80% of your effort on Custom Instructions and Memory, treat each prompt as a focused brief, and stop chasing the trick that does not exist.
- Most viral prompt lists are theater. Specificity beats clever phrasing every single time.
- The real formula is Role + Context + Task + Format/Examples — anything more is decoration.
- Custom Instructions and Memory now carry more weight than the prompt itself in 2026.
- GPT-5 follows instructions on the first pass, so most jailbreak-era tricks are obsolete.
- Examples beat instructions for tone, structure, and length — paste one whenever stakes are high.
- "Be creative" is the worst instruction in prompt engineering. Constraints produce creativity.
- Match model to task, not brand loyalty — coding, writing, and research each have a best fit.
- If you reuse a prompt more than five times, turn it into a Custom GPT.
Building a personal page for your prompts, projects, or AI workflow? Spin one up on UniLink in under 5 minutes — one URL, all your work.
