Most .cursorrules files are 10 lines of vague instructions that Cursor mostly ignores. These examples are different — structured around what actually shapes AI output in your specific role.
A .cursorrules file sits in your project root and Cursor reads it as persistent context for every interaction in that project.
Unlike a one-off chat prompt, your .cursorrules shapes every autocomplete suggestion, every inline chat response, every Ctrl+K edit, and every Composer task. It's the difference between Cursor that knows nothing about your project and Cursor that behaves like a senior engineer who's been on the codebase for months.
Cursor treats your .cursorrules like a system prompt — read at the start of every context window. This means it has real influence on what the AI does, but also means vague or generic instructions produce vague, generic results. The engineers who get the most from Cursor are the ones who treat their .cursorrules file as a first-class engineering artifact.
A great .cursorrules answers the questions Cursor would ask if it could: Who am I? What am I building? What standards must I follow? What am I absolutely not allowed to do? What does the product actually do? The examples below show exactly how to answer each of those.
Copy-paste ready. Each example follows the six-section anatomy broken down below — structured around what actually shapes AI output, not what feels obvious to write.
# Role
You are a senior frontend engineer working on a React 18 + TypeScript SPA. This is a financial dashboard for portfolio managers. Performance is critical.

# Stack
- React 18 with hooks (no class components)
- TypeScript strict mode — no `any` types allowed, use `unknown` and narrow
- Tailwind CSS for styling (no inline styles, no CSS modules unless legacy)
- React Query for all server state — no useEffect for data fetching
- Zustand for client state — no Redux, no Context for global state
- Vitest + React Testing Library for all tests

# Code Standards
- All components are function components with explicit return types
- Props interfaces named `ComponentNameProps`, exported from the component file
- Custom hooks prefixed with `use`, exported from /hooks/ directory
- Files: PascalCase for components (Button.tsx), camelCase for hooks/utils
- Max 200 lines per file — extract logic into hooks or utils if exceeded
- No barrel files (index.ts re-exports) in /components — import directly

# Testing Requirements
- Unit tests for all custom hooks using Vitest
- Integration tests for all user-facing flows using React Testing Library
- Test file co-located: Button.test.tsx next to Button.tsx
- Never mock implementation details — test behavior, not internals
- Test IDs: use data-testid attribute, never rely on CSS class selectors

# Forbidden Patterns
- No useEffect for data fetching — always use React Query
- No prop drilling more than 2 levels deep — use Zustand or context
- No console.log in committed code — use debug utilities
- No `any` type — if you don't know the type, use `unknown` and narrow it
- No direct DOM manipulation — React controls the DOM
- No string-based event handlers in JSX (onClick="..." is HTML, not JSX)

# Project Structure
/src
  /components — UI components (dumb, presentational)
  /features — Feature modules (smart, connected to state)
  /hooks — Custom React hooks
  /lib — API clients, utilities, helpers
  /store — Zustand stores
  /types — Shared TypeScript interfaces and types

# Domain Context
This is a real-time financial dashboard. Data arrives via WebSocket. Users are portfolio managers — precision and data integrity matter more than speed. Never round numbers without a comment explaining the precision choice. All currency values are in USD cents (integer) in the API — display layer converts.
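The last Domain Context rule — integer USD cents from the API, conversion at the display layer only — is the kind of rule worth pairing with a single utility so the conversion happens in exactly one place. A minimal sketch (the `formatUsdCents` name is illustrative, not part of the example file):

```typescript
// Hypothetical display-layer helper: the API sends integer USD cents,
// so the cents-to-dollars conversion lives in one audited function.
export function formatUsdCents(cents: number): string {
  if (!Number.isInteger(cents)) {
    throw new TypeError(`Expected integer cents, got ${cents}`);
  }
  // Dividing by 100 is safe here because the input is guaranteed integral;
  // Intl.NumberFormat handles grouping and the two-decimal display.
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: "USD",
  }).format(cents / 100);
}
```

Rejecting non-integer input up front is what makes the "never round without a comment" rule enforceable: any rounding decision has to happen before this function is called.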
# Role
Senior backend engineer. Building a Node.js 20 + Express + TypeScript REST API. This API serves a B2C mobile app with 50k+ daily active users.

# Stack
- Node.js 20 LTS with TypeScript strict mode
- Express 4 for routing — no framework migration without discussion
- PostgreSQL (Postgres 15) with Prisma ORM — no raw SQL unless Prisma can't do it
- Zod for all input validation — validate at the route level before business logic
- JWT for auth (access token 15m, refresh token 7d) — see /lib/auth.ts
- Winston for structured logging — never console.log in production code
- Jest + Supertest for all tests

# Code Standards
- All route handlers are async — always use try/catch or centralized error middleware
- Request validation: Zod schema defined in same file as route, validated before handler
- Service layer: business logic lives in /services/, never in route handlers
- Repository pattern: all DB access through /repositories/ — never import Prisma directly in routes
- TypeScript: no implicit any, no non-null assertions (!) unless absolutely necessary with comment

# Testing Requirements
- Integration tests with Supertest for all API endpoints (happy path + error cases)
- Unit tests for all service layer functions
- Test database: use dedicated test Postgres DB, never the dev DB
- Jest test file convention: user.service.test.ts next to user.service.ts
- Mock external services (email, SMS, payment) — never call real APIs in tests

# Forbidden Patterns
- No raw SQL queries — use Prisma query builder or Prisma $queryRaw with parameterized queries only
- No sync I/O in route handlers — fs.readFileSync, etc. are forbidden in request paths
- No secrets in code — use process.env, never hardcode API keys, DB credentials, or tokens
- No unhandled promise rejections — every async call must be handled
- No returning raw Prisma models to clients — always map to response DTOs
- No bypassing the auth middleware on protected routes

# Project Structure
/src
  /routes — Express route definitions (thin — call services)
  /services — Business logic
  /repositories — Database access layer (Prisma wrappers)
  /middleware — Auth, error handling, request logging
  /lib — Shared utilities (JWT, logger, validators)
  /types — TypeScript interfaces and Zod schemas

# Domain Context
This API powers a consumer finance app. PII data (SSN, bank account numbers) exists. Never log PII — mask it in Winston before it hits the log stream. Rate limiting is enforced per-user-ID, not just per-IP (see /middleware/rateLimit.ts). All financial calculations must be done server-side — never trust client-sent amounts.
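The "never log PII" rule from the Domain Context is easiest to enforce as a single masking function applied to every message before it reaches the logger. A hypothetical sketch — pattern-based masking of SSN-shaped and account-number-shaped values; a real implementation would cover more PII shapes and structured log fields:

```typescript
// Hypothetical PII masker of the kind the domain rules require.
// Keeps the last four digits for traceability, masks the rest.
export function maskPii(message: string): string {
  return message
    // SSNs like 123-45-6789 → ***-**-6789
    .replace(/\b\d{3}-\d{2}-(\d{4})\b/g, "***-**-$1")
    // Account-number-like digit runs (8+ digits) → ****<last four>
    .replace(/\b\d{4,}(\d{4})\b/g, "****$1");
}
```

Wiring this in as a Winston format (e.g. a custom `format` that rewrites `info.message`) means no individual log call can forget to mask.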
# Role & Context
You are building a B2B SaaS product. Next.js 14 App Router + Supabase + Stripe. These are paying customers — never break production paths. Revenue depends on it.

# Stack
- Next.js 14 App Router (not Pages Router — never mix them)
- TypeScript strict mode throughout
- Supabase for database + auth + storage
- Stripe for billing (subscriptions + usage-based)
- Tailwind CSS + shadcn/ui component library
- Zustand for client state, React Query for server state

# Architecture Rules
- Server Components by default — add "use client" only when interaction or browser APIs needed
- Database access ONLY through /lib/db.ts — never import supabaseClient directly in components
- All Stripe webhooks handled in /app/api/webhooks/stripe/route.ts — never elsewhere
- Multitenant: every DB query must include org_id filter — RLS enforces this but always explicit
- Feature access gated on Organization.plan — never check User.plan for feature flags

# Code Standards
- Every new page gets error.tsx + loading.tsx by default — no exceptions
- API routes in /app/api/ — all validated with Zod before business logic
- Metadata exported from every page for SEO (title, description minimum)
- No `any` — TypeScript strict mode is enforced in CI and will fail the build

# Testing Requirements
- Vitest for unit and integration tests
- Playwright for E2E tests on critical paths (auth, billing, core feature)
- Test file co-located with source file
- Never call real Stripe or external APIs in tests — use mocks

# Forbidden Patterns
- Never hardcode org IDs, user IDs, or price IDs — use env vars or DB lookups
- Never use client components to fetch sensitive data — use Server Components + server actions
- Never skip RLS policies — always add corresponding RLS when adding a new table
- Never call Stripe from client components — always through /app/api/ routes
- Never edit database migration files after they've been applied — create new ones
- No `console.log` in production code — use structured logging

# Project Structure
/app — Next.js App Router pages and API routes
/components — Shared UI components
/lib — db.ts, stripe.ts, auth.ts, utils
/hooks — Client-side React hooks
/types — TypeScript interfaces

# Domain Context
B2B SaaS with organization-based multi-tenancy. Users belong to Organizations. Free plan: 1 seat, 100 records. Pro: 10 seats, unlimited. Enterprise: custom. Billing state lives in subscriptions table (synced from Stripe webhooks). Trial period is 14 days — trial_ends_at on Organization record.
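The plan limits in the Domain Context translate naturally into one lookup table, so feature gates read from a single source of truth instead of hardcoding numbers (which the Forbidden Patterns rule out). A sketch under the stated limits — `canAddRecord` and the `Plan` type are illustrative names, not the project's actual API:

```typescript
// Hypothetical plan-limits table mirroring the Domain Context numbers.
// null means unlimited (Pro records) or custom/negotiated (Enterprise).
type Plan = "free" | "pro" | "enterprise";

interface PlanLimits {
  seats: number | null;
  records: number | null;
}

const PLAN_LIMITS: Record<Plan, PlanLimits> = {
  free: { seats: 1, records: 100 },
  pro: { seats: 10, records: null },
  enterprise: { seats: null, records: null },
};

// Per the Architecture Rules, gating reads the Organization's plan,
// never the User's.
export function canAddRecord(plan: Plan, currentCount: number): boolean {
  const limit = PLAN_LIMITS[plan].records;
  return limit === null || currentCount < limit;
}
```

Keeping the table in one module means a plan change is a one-line diff rather than a hunt for scattered magic numbers.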
# Role
Senior Python engineer. FastAPI + SQLAlchemy 2.0 + PostgreSQL API backend. Async everywhere — this is a high-throughput data ingestion service.

# Stack
- Python 3.12 with strict type hints on every function signature
- FastAPI for HTTP API layer — async handlers only
- SQLAlchemy 2.0 (async) with Alembic for migrations
- Pydantic v2 for all request/response models — never use dataclasses for API models
- PostgreSQL 15 as primary database
- pytest with pytest-asyncio for tests
- black for formatting, ruff for linting — run both on every change

# Code Standards
- All type hints required — mypy strict mode enforced in CI
- Pydantic v2 models: use model_config = ConfigDict(from_attributes=True) for ORM models
- Async SQLAlchemy: use AsyncSession, never sync Session in async context
- Dependency injection: use FastAPI Depends() for DB sessions, auth, shared services
- Constants and config in /app/core/config.py using pydantic-settings
- Never use global variables for state — use FastAPI app.state or proper DI

# Testing Requirements
- pytest for all tests — no unittest
- pytest-asyncio for async test functions
- Fixtures in conftest.py — test DB, client, auth token
- Use a dedicated test database — never the dev DB
- Every API endpoint gets at least one integration test (happy path + 422 validation error)
- Mock external HTTP calls with httpx.MockTransport or respx

# Forbidden Patterns
- No sync I/O in async route handlers — use asyncio.to_thread() if needed
- No raw SQL strings — use SQLAlchemy Core or ORM
- No hardcoded credentials — all secrets via environment variables (pydantic-settings)
- No bare except: clauses — always catch specific exception types
- No mutable default arguments in function signatures
- No print() in production code — use Python logging module

# Project Structure
/app
  /api — FastAPI route handlers (thin)
  /services — Business logic layer
  /models — SQLAlchemy ORM models
  /schemas — Pydantic request/response models
  /core — Config, security, dependencies
  /db — Database session, migrations

# Domain Context
High-throughput data ingestion — up to 10k events/second at peak. Idempotency is critical: all write endpoints must handle duplicate requests safely. All timestamps stored as UTC in the DB — convert to user timezone at display layer only.
# Role
Infrastructure engineer. Terraform + AWS + Docker + GitHub Actions. Managing production infrastructure for a 50-person startup. Changes affect real users.

# Stack
- Terraform (latest stable) for all infrastructure as code
- AWS as the cloud provider — no GCP/Azure resources without discussion
- Docker for containerization — multi-stage builds required for production images
- GitHub Actions for CI/CD pipelines
- Helm charts for Kubernetes deployments (EKS cluster)
- AWS Secrets Manager for all secrets — never SSM Parameter Store for sensitive values

# Infrastructure Principles
- Immutable infrastructure: never SSH into production. Deploy new, destroy old.
- Everything in Terraform — no manual console changes, ever
- Use Terraform modules for all repeated patterns (VPC, ECS service, RDS instance)
- Tag all AWS resources: Environment, Project, Owner, CostCenter
- State: remote in S3 with DynamoDB locking — never local state

# Security Rules
- Never hardcode credentials anywhere — AWS credentials via IAM roles, never access keys in code
- Least privilege IAM: every service gets its own role with minimum required permissions
- Security groups: default deny, explicit allow only. Never 0.0.0.0/0 on inbound unless documented
- All data at rest encrypted (KMS). All data in transit TLS 1.2+.
- Public subnets for load balancers only — application tier in private subnets always

# Code Standards (Terraform)
- Modules in /modules/ directory, root modules in /environments/
- Variables: always include description and type constraint
- Outputs: export all resource IDs and ARNs that downstream modules might need
- terraform fmt before every commit — enforced in CI
- Use for_each over count for resource iteration when possible
- No hardcoded AMI IDs — use data sources to look up latest

# GitHub Actions Standards
- Pin action versions by commit SHA, not tag (e.g., actions/checkout@abc1234)
- Secrets via GitHub Secrets — never in workflow YAML
- All production deploys require manual approval gate
- Always run terraform plan in CI, terraform apply only on merge to main

# Forbidden Patterns
- No manual console changes to production infrastructure — Terraform only
- No hardcoded region strings — use var.aws_region
- No wildcard IAM permissions (Action: "*") without documented justification
- No unencrypted S3 buckets, RDS instances, or EBS volumes
- No storing secrets in environment variables in Docker — use Secrets Manager

# Domain Context
Production runs in us-east-1 with failover to us-west-2. Cost optimization matters: right-size instances, use Spot where appropriate (non-prod). On-call rotation: production incidents wake people up. Caution is appropriate.
# Role
ML engineer. Python + PyTorch + Hugging Face Transformers + wandb. Building and training production models — reproducibility and experiment tracking are non-negotiable.

# Stack
- Python 3.11+ with type hints on all function signatures
- PyTorch 2.x for model development and training
- Hugging Face Transformers + Datasets for NLP work
- wandb for experiment tracking — every training run logged
- pytest for tests, especially data pipeline tests
- black + ruff for formatting/linting

# Reproducibility Rules (Critical)
- Seed everything at the top of every training script:
  torch.manual_seed(config.seed)
  numpy.random.seed(config.seed)
  random.seed(config.seed)
- All hyperparameters in a config dataclass or YAML — never hardcoded in training loop
- Checkpoint every N steps — never end a long training run without checkpoints
- Log git commit hash to wandb at the start of every run
- Pin all dependency versions in requirements.txt — use exact versions (==), not ranges

# Experiment Tracking
- Every training run gets a wandb run with: config, metrics, model architecture summary
- Log loss, LR, gradient norms at every step; eval metrics at every epoch
- Group related experiments with wandb tags and project names
- Never delete wandb runs — mark as "failed" or "archived" instead

# Data Validation
- Validate data before training: check shapes, dtypes, null values, class balance
- Log dataset statistics to wandb before training starts
- Use Hugging Face Datasets .map() with num_proc for preprocessing — never custom DataLoader loops for standard transforms
- Assert expected schema at data loading time — fail fast on bad data

# Code Standards
- Model classes inherit from nn.Module — always implement forward() with type hints
- No global variables for model state — use class attributes
- Device handling: use device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- Mixed precision: use torch.autocast where appropriate — document why when not used

# Testing Requirements
- Unit tests for all data preprocessing functions
- Shape tests for all model forward passes (input → expected output shape)
- pytest fixtures for model instances and sample batches
- CI runs tests on CPU — never require GPU in CI

# Model Documentation
- Every trained model gets a model card in /models/cards/ before deployment
- Model card must include: task, architecture, training data, eval metrics, limitations
- Export ONNX version alongside PyTorch checkpoint for production serving

# Forbidden Patterns
- No hardcoded hyperparameters in training loops — use config objects
- No training without wandb logging — even quick experiments
- No loading data inside the training loop — preprocess and cache first
- No ignoring data validation failures — fix data, don't suppress errors
The difference between a useful .cursorrules and a useless one is specificity. Vague instructions produce vague behavior. Here's the pattern.
Every high-quality .cursorrules file covers these six areas. Each section shapes a different aspect of how Cursor generates code in your project.
Who you are, what you're building, what languages and frameworks are in use. 3–5 lines max. Precision beats length — "React 18 + TypeScript strict + Tailwind" tells Cursor more than a paragraph of context.
Naming conventions, file structure, max lengths, preferred patterns. The rules you'd put in a code review comment: "PascalCase for components, camelCase for hooks, max 200 lines per file."
Framework, coverage targets, file location, what to mock vs not mock. This section shapes every function Cursor generates — it will write test-friendly code if it knows testing is required.
Explicit "never do this" rules. These are the highest-ROI lines in your .cursorrules — they directly prevent the mistakes Cursor makes repeatedly. Add one every time Cursor does something wrong.
Where files go, how modules are organized, import conventions. Cursor needs to know your architecture to respect it. Without this, it will make reasonable guesses that conflict with your actual structure.
What the product does, who uses it, constraints not obvious from the code. A Brainfile for your codebase. "These are paying enterprise customers" changes how Cursor handles error messages and edge cases.
Three tools, three context files — but the same underlying principles. Write one well and you can adapt it to all three in under 10 minutes.
| File | Tool | Scope | Format |
|---|---|---|---|
| .cursorrules | Cursor | Project-level (project root) | Markdown or plain text |
| CLAUDE.md | Claude Code | Project-level or user-level (~/.claude/CLAUDE.md) | Markdown (can be much longer — higher context limit) |
| .windsurfrules | Windsurf | Project-level (project root) | Markdown or plain text |
These examples show the structure. Pro templates take it further — domain-specific context, role-specific guardrails, and monthly updates as Cursor evolves. 10+ developer roles.
Frontend, backend, ML, DevOps, SaaS founder, mobile engineer, security engineer. Each template is built around the actual context that matters for that role and stack.
Cursor evolves fast. The .cursorrules patterns that worked six months ago may not be optimal today. Templates are updated every month — your subscription keeps you current automatically.
Drop it in your project root, update the stack section, start using it immediately. No configuration, no learning curve. Production-quality context from minute one.
Expert-engineered templates for 10+ dev roles. Copy-paste ready. Updated monthly. Works with Cursor, Claude Code, and Windsurf.
30-day money-back guarantee · Instant access · Cancel anytime
Need team setup? Enterprise at $299/mo →
Cursor tips, .cursorrules updates, and AI coding best practices. Free weekly.
No spam. Unsubscribe anytime.