⚡ 50 Curated Prompts

The Best Claude Code Prompts, Tested Across Real Projects

Generic prompts get generic output. These prompts are specific, contextual, and designed to unlock Claude Code's full capabilities — across development, writing, analysis, and strategy.

Why Most Prompts Underperform

A Good Prompt + Poor Context = Mediocre Output

The prompts below work well on their own. But they work dramatically better when Claude already knows your project — via a well-configured CLAUDE.md. Here's why.

The context multiplier effect

When you type review this function for edge cases, Claude doesn't know: what your error handling convention is, what downstream systems consume this function, what edge cases you've already handled, or what a "correct" result looks like for your team.

When CLAUDE.md includes your conventions, architecture, and standards — that same prompt produces output that's 3-5x more useful and directly actionable. The prompt doesn't change. The context does.

50 curated prompts · 8 categories · 3–5× output improvement with CLAUDE.md · 60+ prompts in Brainfile Pro packages
💻
Development & Code
Building, implementing, and extending codebases
10 prompts
Prompt #1 — Feature Implementation
Implement [feature description]. Before writing any code, list the files you'll touch, what you'll add/change/remove, and flag any concerns. Then implement with full test coverage.
Why it works: Forces Claude to plan before acting. The pre-implementation review catches architectural issues before they're baked in. It consistently produces better output than a bare "implement X."
With CLAUDE.md: Claude already knows your tech stack, test framework, and file structure — so "files I'll touch" is actually accurate, not generic.
Prompt #2 — Refactoring
Refactor [function/module] to improve [readability/performance/testability]. Do NOT change behavior. Show me the before/after diff and explain each change.
Why it works: "Do NOT change behavior" is critical — it prevents Claude from over-engineering while refactoring. The diff format catches unintended changes.
Prompt #3 — TypeScript Types
Add strict TypeScript types to this file. Use exact types, not 'any'. If you're uncertain about a type, add a comment explaining why and use 'unknown' instead of 'any'.
Why it works: "No 'any'" is explicit. "Use 'unknown' if uncertain" gives Claude a safe fallback that doesn't silently break type safety.
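The safety gap between 'any' and 'unknown' is easy to see in code. A minimal sketch — the getPort function, its field names, and the fallback value are all hypothetical:

```typescript
// With `any`, a wrong assumption compiles and fails silently at runtime.
// With `unknown`, the compiler refuses to let you use the value until
// you have narrowed it to a concrete type.

function getPort(raw: unknown): number {
  // `raw` cannot be dereferenced until we prove what it is.
  if (
    typeof raw === "object" &&
    raw !== null &&
    "port" in raw &&
    typeof (raw as { port: unknown }).port === "number"
  ) {
    return (raw as { port: number }).port;
  }
  // Fall back explicitly instead of silently propagating bad data.
  return 3000;
}

// The `any` version compiles even when the data is wrong:
// const port: number = (JSON.parse(text) as any).port; // no error, wrong at runtime

console.log(getPort(JSON.parse('{"port": 8080}'))); // 8080
console.log(getPort(JSON.parse('"not an object"'))); // 3000
```

The comment asking Claude to explain each 'unknown' gives you a record of exactly which types it could not infer, which is usually a short list worth reviewing by hand.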
Prompt #4 — API Endpoint
Build a REST endpoint for [description]. Include: input validation, error handling with appropriate HTTP status codes, logging, and a test for the happy path and at least 2 error cases.
Why it works: Specifies all four layers (validation, errors, logging, tests) upfront. Claude won't skip them if you enumerate them explicitly.
Prompt #5 — Database Migration
Write a database migration for [change]. Include the up migration, down migration, and a comment explaining why this change is safe to run in production (or flag if it's not).
Why it works: The "explain why it's safe" requirement forces Claude to reason about production impact — often surfacing risks you hadn't considered.
Prompt #6 — Test Writing
Write tests for [function/module]. Cover: happy path, invalid inputs, boundary conditions, and any edge cases specific to the business logic. Do not mock the database unless absolutely necessary.
Why it works: The "no mocking unless necessary" instruction prevents the common mistake of tests that pass because the mock is wrong, not because the code is right.
Prompt #7 — Error Handling Audit
Read [file/module] and audit its error handling. List every error that could occur, whether it's currently handled, how it's handled, and whether that's correct. Don't fix anything yet — just audit.
Why it works: "Don't fix yet" forces a complete audit before action. You'd be surprised what surfaces when Claude focuses only on finding problems, not solving them.
Prompt #8 — Performance Profiling
Analyze this function for performance. Estimate its time and space complexity. Identify the bottleneck. Propose an optimization. Show the expected improvement with rough numbers.
Why it works: "Rough numbers" keeps Claude from being vague. "Estimate" gives it permission to approximate rather than refuse when exact metrics aren't available.
Prompt #9 — Documentation
Document this module as if you're explaining it to a senior engineer joining the team next week. Cover: what it does, why it exists, gotchas, and how to test changes to it.
Why it works: "Senior engineer joining next week" is the perfect persona — knowledgeable enough to skip basics, but unfamiliar enough to need the gotchas documented.
Prompt #10 — Dependency Audit
Review our package.json dependencies. Flag any: outdated major versions, packages with known vulnerabilities, packages we might be able to remove, and packages where a lighter alternative exists.
Why it works: Four specific questions get four focused answers. "Packages we might remove" often surfaces technical debt that nobody has noticed.
🔍
Code Review & Debugging
Finding problems, fixing bugs, hardening code
8 prompts
Prompt #11 — Security Review
Review this code for security vulnerabilities. Focus on: injection attacks, authentication bypass, insecure data handling, and any OWASP Top 10 issues. Rate each finding by severity (Critical/High/Medium/Low).
Why it works: OWASP Top 10 is a specific checklist Claude knows deeply. Severity ratings force prioritization, not just a list of issues.
Prompt #12 — Bug Diagnosis
I'm seeing [describe bug]. Here's the relevant code and the error message. Walk me through your diagnosis: what's happening, why, and what the fix is. Don't give me the fix until you've explained the root cause.
Why it works: "Don't give me the fix until you've explained the root cause" forces diagnosis before prescription — which also helps you understand the bug and prevent similar ones in the future.
Prompt #13 — Pre-Commit Review
Review this diff before I commit. Look for: logic errors, missing error handling, TODO comments I forgot to remove, hardcoded values that should be config, and anything that might break in production.
Why it works: The specific list targets the most common pre-commit mistakes. Run this as a slash command (/review) before every non-trivial commit.
Prompt #14 — Race Condition Check
Analyze this async code for race conditions and timing issues. Draw me the sequence of events that would need to happen for a bug to occur, even if it's unlikely.
Why it works: "Even if it's unlikely" gives Claude permission to surface theoretical races that only happen under load. These are exactly the bugs that slip through testing.
Prompt #15 — Code Smell Detection
Read this module and list every code smell you find. Explain why each is a problem and what the refactor would look like — but don't do the refactor yet.
Why it works: Separating detection from action produces a comprehensive list instead of a partial refactor that misses the less obvious problems.
Prompt #16 — Memory Leak Hunt
Look at this code for memory leaks. Pay special attention to: event listeners not being cleaned up, closures holding references, unclosed connections, and growing caches without eviction.
Why it works: "Memory leak" is a vague category on its own. The four specific patterns give Claude concrete things to look for.
Prompt #17 — Test Coverage Gap
Read this module and its test file. List every behavior that is NOT currently tested. Focus on error cases, boundary conditions, and integration points with external systems.
Why it works: "What's NOT tested" is harder to answer than "what IS tested" — it forces Claude to map the full behavior space, not just describe existing tests.
Prompt #18 — Production Readiness Check
Is this code production-ready? Check for: missing logging, missing metrics, missing circuit breakers, insufficient error handling, missing rate limiting, and any behavior that could cause a production incident.
Why it works: "Production incident" is the right framing — it focuses Claude on things that would page someone at 3am, not just things that are aesthetically wrong.
🏗️
Architecture & Decisions
System design, tradeoffs, and technical decisions
7 prompts
Prompt #19 — Architecture Decision Record
Write an ADR (Architecture Decision Record) for [decision]. Include: context, options considered, decision made, consequences (positive and negative), and what would make us revisit this decision.
Why it works: "What would make us revisit this" is the most important field — it forces explicit criteria for future reconsideration instead of a decision that calcifies forever.
Prompt #20 — Tech Stack Evaluation
Evaluate [library/tool/framework] for our use case: [describe what you're building]. I want a balanced view — tell me what it's genuinely bad at, not just where it shines.
Why it works: "Tell me what it's genuinely bad at" counteracts Claude's tendency to be overly positive. This produces the warnings you actually need to make an informed decision.
Prompt #21 — Scaling Bottleneck
At what point will [this system/feature] break under load? Walk me through the bottleneck sequence — what fails first, what fails next, and what the mitigation options are at each stage.
Why it works: "Bottleneck sequence" forces Claude to think in phases, not just say "it won't scale." The mitigation options at each stage make it actionable.
Prompt #22 — API Design Review
Review this API design from the perspective of a developer who will use it for the next 5 years. What will they love? What will they hate? What will they work around? What breaking changes will we regret in 2 years?
Why it works: The "5 years" and "regret in 2 years" framing surfaces long-term design debt that looks fine in the short-term sprint review.
Prompt #23 — Data Model Review
Review this data model. Flag any: normalization issues, missing indexes (based on the queries we'll run), fields that will need to change when we add [likely next feature], and anything that will cause pain at 100x current data volume.
Why it works: The "100x data volume" test catches designs that work at prototype scale but fail at production scale — often where the most expensive tech debt accumulates.
Prompt #24 — Monolith vs Microservices
Given our current scale [describe team size, traffic, complexity], make the case for staying with a monolith AND for moving to microservices. Then tell me which you'd recommend and why.
Why it works: Requiring both cases prevents Claude from just validating whatever you seem to be leaning toward. The steel-man of each position produces a more useful recommendation.
Prompt #25 — Migration Plan
Write a phased migration plan to move from [current state] to [target state]. Each phase should be independently shippable with no rollback needed. Flag any phase that has no safe rollback and explain why.
Why it works: "Independently shippable" prevents the "big bang" migration that risks everything. "Flag any phase with no safe rollback" surfaces the real risk points explicitly.
✍️
Writing & Content
Blog posts, marketing copy, emails, and documentation
8 prompts
Prompt #26 — Blog Post Brief to Draft
Write a blog post on [topic] targeting [audience]. Goal: [what action should reader take]. Do NOT start with "In today's world" or any throat-clearing. Open with the most interesting thing you have to say. Under 1,200 words.
Why it works: The explicit ban on "In today's world" and "throat-clearing" eliminates the generic AI opener that signals low-quality content to readers and search engines.
With CLAUDE.md: Claude knows your brand voice, target audience, and competitor positioning — output is on-brand, not generic.
Prompt #27 — Email Subject Lines
Write 10 subject line variants for an email about [topic] to [audience]. Vary the approach: curiosity gap, direct benefit, urgency, social proof, controversy, question. Flag which 3 you'd A/B test first and why.
Why it works: 10 variants with varied approaches gives you a real testing set. The "flag which to A/B test first" requires Claude to apply strategic judgment, not just list options.
Prompt #28 — Landing Page Copy
Write landing page copy for [product] targeting [persona]. Use the framework: Problem → Agitate → Solution → Proof → CTA. Every sentence must earn its place. No filler. Under 400 words for above the fold.
Why it works: PAS (Problem-Agitate-Solution) is a proven framework. "Every sentence must earn its place" stops Claude from padding. The word limit prevents bloat.
Prompt #29 — LinkedIn Post
Write a LinkedIn post about [topic]. Start with a one-sentence hook that creates a pattern interrupt. No hashtags in the body. End with a question that invites genuine engagement (not a fake "what do you think?").
Why it works: "Pattern interrupt" is specific. The anti-hashtag rule and "genuine question" instruction both counteract the most common LinkedIn post mistakes.
Prompt #30 — Newsletter Section
Write a [section type: insight/analysis/recommendation] for my newsletter on [topic]. Readers are [audience description]. One concrete takeaway. No generic advice they've heard before. Aim for something they'll forward.
Why it works: "Something they'll forward" is the right test for newsletter quality — it forces Claude to aim for genuine value, not content that reads well but says nothing.
Prompt #31 — Competitive Comparison
Write a comparison of [our product] vs [competitor]. Be honest about where [competitor] is better — it makes our strengths more credible. Aim for something a skeptical buyer would actually find useful.
Why it works: "Be honest about where the competitor is better" produces the credibility that makes comparison pages actually convert instead of reading like propaganda.
Prompt #32 — Cold Outreach Email
Write a cold email to [target persona] about [topic/offer]. First sentence must reference something specific about them. Under 100 words. One ask only. No "I hope this email finds you well."
Why it works: The 100-word limit and "one ask" constraint forces the ruthless prioritization that separates emails that get responses from ones that get deleted.
Prompt #33 — Case Study
Structure a case study for [customer/outcome]. Follow: Before (the problem, quantified), Journey (what changed and why), After (results, quantified), Quote (what they'd say if asked). Make the Before as vivid as the After.
Why it works: "Make the Before as vivid as the After" produces the emotional contrast that makes case studies compelling. Most case studies write a vivid After but a vague Before.
📊
Data & Analysis
Research synthesis, pattern finding, and reporting
7 prompts
Prompt #34 — Data Synthesis
Here are [N] data points / reports / findings on [topic]. Synthesize them into: (1) what we know with high confidence, (2) what we think but aren't sure, (3) what the data can't tell us. Do not overstate confidence.
Why it works: The three-tier confidence framework prevents Claude from presenting uncertain findings with false confidence — a common problem with AI analysis.
Prompt #35 — Metric Diagnostic
[Metric] dropped from [X] to [Y] between [date range]. Generate a list of hypotheses for why this happened, ordered by likelihood. For each, tell me what data would confirm or rule it out.
Why it works: "What data would confirm or rule it out" turns hypotheses into an investigation checklist — much more actionable than just a list of guesses.
Prompt #36 — Competitor Analysis
Analyze [competitor] based on publicly available information. Focus on: their apparent positioning strategy, who they're NOT targeting (and why), and what they're doing that we should copy (honestly).
Why it works: "Who they're NOT targeting" reveals positioning gaps. The "(honestly)" qualifier gets past defensiveness to extract genuine competitive intelligence.
Prompt #37 — Survey Data Analysis
Analyze these survey responses for [topic]. Look for: the most surprising finding, the most common concern, what people want but didn't say explicitly, and the insight that changes what we should build next.
Why it works: "What people want but didn't say explicitly" is the highest-value question — it requires inference, not just counting responses, and often surfaces the most important insight.
Prompt #38 — Executive Report
Write an executive summary of [topic/data] for an audience that has 90 seconds to read it. Lead with the most important thing. Use numbers. End with one recommended action. No background section.
Why it works: "90 seconds" is a forcing function for radical prioritization. "No background section" eliminates the most common executive summary mistake.
Prompt #39 — Trend Identification
Looking at [data/observations], what trend is forming that most people haven't noticed yet? What's the leading indicator? What does it look like in 12 months if this trend continues?
Why it works: "Most people haven't noticed yet" gives Claude permission to surface non-obvious patterns. The 12-month projection forces commitment to an actual view.
Prompt #40 — Pre-Mortem
It's [date 1 year from now]. [Project/decision] failed badly. What happened? Generate the 5 most likely failure scenarios, in order of probability. For each, what's the early warning sign we should watch for now?
Why it works: The pre-mortem framing makes failure scenarios concrete and vivid instead of abstract risks. "Early warning sign" converts each failure into a monitoring task.
🎯
Strategy & Planning
Business decisions, roadmapping, and goal-setting
5 prompts
Prompt #41 — Prioritization Framework
I have these [N] projects/features/initiatives: [list]. Help me prioritize them using: impact (revenue/customer value), effort (engineering weeks), strategic alignment, and reversibility. Tell me what to do first and what to cut entirely.
Why it works: Adding "reversibility" to the standard impact/effort matrix surfaces the risk dimension. "What to cut entirely" is more useful than a ranked list where everything stays.
Prompt #42 — Pricing Strategy
Review our pricing: [current structure]. Who is this pricing optimized for? Who does it accidentally exclude? What behavior does it incentivize that we didn't intend? What's the next pricing test we should run?
Why it works: "Who does it accidentally exclude" and "what behavior does it incentivize unintentionally" surface second-order effects that standard pricing reviews miss.
Prompt #43 — OKR Review
Review our Q[N] OKRs: [list]. For each: is this actually measurable, does it lead to the outcome we want (or just look good), and what's the gaming risk (how could a team technically hit this while missing the point)?
Why it works: "Gaming risk" is the most important question in OKR review and the one most organizations skip — it's how you prevent Goodhart's Law from corrupting your metrics.
Prompt #44 — Make the Case Against
I'm planning to [decision]. Make the strongest possible case against doing this. Don't agree with me. Be the devil's advocate who genuinely wants to prevent a mistake.
Why it works: By default Claude agrees with you. Explicitly requesting the opposition gives you the critique you need before committing to a significant decision.
Prompt #45 — Quarterly Planning
Given our goals [list] and constraints [list], design our Q[N] plan. Include what we're NOT doing this quarter and why — the stop-doing list is as important as the to-do list.
Why it works: The "stop-doing list" is consistently the most valuable output of any planning exercise. Asking for it explicitly forces the prioritization that "add X" planning never achieves.
⚙️
Automation & Workflow
Building recurring systems and self-improving workflows
3 prompts
Prompt #46 — Slash Command Design
I want to create a /[name] slash command that does [task]. Design the command: what context it needs to run, what parameters it should accept, what it should output, and what it should never do without asking me first.
Why it works: The "what it should never do without asking" constraint is critical for automation safety — it forces explicit boundaries before you wire up a recurring command.
Prompt #47 — CLAUDE.md Improvement
Read my current CLAUDE.md. Tell me: what context is missing that would make your outputs better, what instructions are ambiguous, what rules contradict each other, and what you keep having to ask me that should be in here.
Why it works: Claude can identify its own knowledge gaps about your project. "What you keep having to ask me" surfaces the most impactful missing context.
Prompt #48 — Session Handoff
Summarize what we accomplished this session. List: verified outputs (with file paths), decisions made, open questions, blockers discovered, and the top 3 things I should do in the next session. Write this as if you're briefing a colleague who will continue this work.
Why it works: "Verified outputs" (not just "completed tasks") ensures the handoff reflects actual done work. "Briefing a colleague" produces a handoff that's actually useful, not a list of activities.
🧠
Meta-Prompts
Prompts that improve your prompting
2 prompts
Prompt #49 — Prompt Improvement
I'm going to give you a prompt I wrote. Tell me: what's ambiguous, what would make you give a worse answer, how you'd rewrite it to get better output, and what a novice would ask vs what an expert would ask for the same goal.
Why it works: "What would make you give a worse answer" is the most actionable question — it reveals the failure modes in your prompting before you deploy it in production workflows.
Prompt #50 — Calibration Check
On a scale of 1-10, how confident are you in that answer? What would change your view? What are you most uncertain about? Where should I independently verify what you told me?
Why it works: Claude's default mode is confident regardless of actual certainty. Directly asking for calibration and "where to independently verify" surfaces the limits of what you can trust — which is essential before acting on AI output.
The Context Multiplier

How CLAUDE.md Makes Every Prompt Better

Every prompt on this page works. But with a properly configured CLAUDE.md, they work 3-5x better. Here's the difference in practice.

Five scenarios, with and without a Brainfile CLAUDE.md:

Prompt #1: "Implement user authentication"
Without CLAUDE.md: a generic JWT implementation in whatever language it guesses.
With Brainfile CLAUDE.md: implements in your stack (Next.js + Prisma + PostgreSQL), follows your naming conventions, wires into your error handling pattern, and matches your existing auth structure.

Prompt #13: "Review this diff"
Without CLAUDE.md: a generic review that looks for obvious errors and suggests generic improvements.
With Brainfile CLAUDE.md: reviews against your actual standards, flagging violations of YOUR coding conventions, checking for YOUR team's known failure patterns, and applying YOUR security requirements.

Prompt #26: "Write a blog post on [topic]"
Without CLAUDE.md: a generic informational article in a neutral tone.
With Brainfile CLAUDE.md: written in your voice, targeting your specific audience, avoiding your brand's prohibited phrases, with your preferred CTA style, appropriate for your content calendar.

Prompt #35: "Why did [metric] drop?"
Without CLAUDE.md: generic hypotheses (seasonality, technical issues, user behavior changes).
With Brainfile CLAUDE.md: hypotheses specific to YOUR product, referencing your recent releases, your known data quality issues, your seasonal patterns, and your user segments.

Prompt #44: "Make the case against this decision"
Without CLAUDE.md: generic risks (cost, time, team capacity).
With Brainfile CLAUDE.md: risks specific to YOUR situation, referencing your current runway, your team's known skill gaps, your competitors' positioning, and your existing technical debt.
The insight:

Claude's reasoning capability isn't the limitation. Context is. The model can handle any of these tasks brilliantly — if it knows your project, your conventions, your priorities, and your history. CLAUDE.md solves the context problem. Brainfile ships a pre-built CLAUDE.md for your profession, so you don't have to write it from scratch.

Common Questions

FAQ

What makes a good Claude Code prompt?
The best Claude Code prompts are specific about the output format, provide clear success criteria, give Claude permission to ask clarifying questions if needed, and reference relevant context that's in your CLAUDE.md. Generic prompts like "review my code" produce generic output. Specific prompts like "review this function for edge cases in the payment flow — flag any that could cause double-charges" produce actionable output.
How do prompts interact with CLAUDE.md?
CLAUDE.md is read at the start of every session and provides the base context. Your prompts then work on top of that context. If your CLAUDE.md says "we use TypeScript with strict mode and follow Google style guide," then a prompt like "add error handling to this function" will automatically produce TypeScript output following your style guide — without you repeating those instructions in every prompt.
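As a concrete sketch, a minimal CLAUDE.md carrying that kind of context might look like the following. The stack, rules, and conventions here are invented for illustration — yours will differ:

```markdown
# Project context
- Stack: TypeScript (strict mode), Next.js, PostgreSQL via Prisma
- Style: Google TypeScript style guide; no `any`, prefer `unknown`
- Errors: throw typed errors; API routes return `{ error, code }` on failure
- Tests: Vitest; no database mocks in integration tests

# Conventions
- File names: kebab-case; one exported symbol per file
- Never commit directly to `main`; all changes go through PRs
```

Every prompt in a session then inherits these rules without you restating them.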
Should I store prompts in CLAUDE.md?
Frequently-used prompts belong in .claude/commands/ as slash commands, not in CLAUDE.md. CLAUDE.md is for context (who you are, how you work). Slash commands are for workflows (common tasks you run repeatedly). Your most-used prompts become /commands that trigger with a single keystroke.
What's the difference between a prompt and a slash command?
A prompt is a one-time instruction. A slash command (/review, /deploy, /analyze) is a saved prompt stored in .claude/commands/ that you invoke with a shortcut. For any prompt you use more than twice a week, convert it into a slash command. This removes the friction of typing and ensures consistency.
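As a sketch of what that looks like on disk, a saved review command could live at .claude/commands/review.md with contents like this (the wording is illustrative, based on Prompt #13 above):

```markdown
Review the staged diff before I commit. Look for: logic errors,
missing error handling, leftover TODO comments, hardcoded values
that should be config, and anything that might break in production.
Rate each finding by severity. Do not modify any files.
```

Typing /review in a session then runs this prompt verbatim against the current context.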
Can I automate prompts to run on a schedule?
Yes, via the Claude Code SDK's Agent tool or by running Claude Code with --dangerously-skip-permissions in an automated script. You can wire prompts to run nightly, on git hooks, or on file change events. Configure AGENTS.md to govern autonomous execution.
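For example, a nightly dependency audit could be scheduled with a crontab entry like the one below. This is a sketch: verify the headless-mode flag (`-p`) and any permission flags against `claude --help` for your installed version, and note that unattended runs may additionally need the permission-skipping flag mentioned above.

```shell
# crontab entry: run a dependency audit at 02:00 every night and append to a log
0 2 * * * cd /path/to/project && claude -p "Review package.json dependencies for outdated majors and known vulnerabilities" >> logs/dep-audit.log 2>&1
```

Treat the log as a review queue, not as a source of automatic changes.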
Do prompts work differently in Claude Code vs Claude.ai?
Yes, significantly. Claude Code runs in your terminal with access to your actual files, git history, and system commands. This means your prompts can reference real code, run commands, and interact with your environment. Claude.ai (the web interface) only sees what you paste into the chat.
How many prompts should I have in my CLAUDE.md?
CLAUDE.md shouldn't contain prompts at all — it contains context. Your prompts should live in .claude/commands/ (as slash commands) or in brain/prompts.md (as a reference library). Keep CLAUDE.md focused on who you are and how you work, not on specific tasks you want Claude to perform.
Where can I get more Claude Code prompts?
Brainfile's role-specific packages (Developer OS, Marketer OS, Trader OS, etc.) each include 50-60 curated prompts for that profession, plus slash command files ready to drop into .claude/commands/. The free starter kit includes 10 high-value prompts to get started.
Get 60+ More Role-Specific Prompts

Brainfile Pro includes 60+ curated prompts per role — developer, marketer, trader, executive, founder — plus the CLAUDE.md context that makes every prompt dramatically more effective.

$149/mo
or $119/mo billed annually · 30-day money-back guarantee
Get Brainfile Pro →
Try Free Starter Kit First
60+ prompts per role · CLAUDE.md context included · Monthly updates · 30-day guarantee
Related Guides
CLAUDE.md Templates
Ready-to-use CLAUDE.md templates for every role
Rules Files Guide
How to use .claude/rules/ for modular context
AGENTS.md Guide
Configure Claude for autonomous agent mode
Getting Started Guide
Complete Claude Code setup from zero