🤖 CLAUDE CODE GUIDE

Claude Code Subagents: Build Multi-Agent Workflows That Actually Scale

Subagents let Claude spawn specialized sub-instances that work in parallel — a researcher, a code reviewer, a test runner — each with its own context and task. Here's how to build workflows that hold up in production.


What Are Claude Code Subagents?

Claude Code subagents are independent Claude instances spawned by a parent agent during a session. The parent delegates a discrete task to the subagent, waits for the result, and then continues its own work. Each subagent runs with its own context window — completely isolated from the parent's conversation history and in-memory state.

Subagents are created via the Agent tool — a first-class Claude Code tool alongside Bash, Read, Write, and Edit. When Claude calls the Agent tool, it's essentially running a fresh Claude session with a specific goal, then returning that session's output as a string the parent can use.

Why this matters

A single Claude context window is finite. With subagents, you can decompose work into specialized roles — each with its own full context budget — and run them in parallel or sequence. A research subagent can crawl documentation while a code-review subagent audits your codebase, with neither crowding the other out of context.

The mental model is a team of contractors. The parent Claude is the project manager. It breaks work into chunks, delegates each chunk to a specialist, collects results, and assembles the final output. The specialists don't know about each other. They just complete their task and report back.

This architecture is especially powerful when tasks are:

- Independent: they don't need each other's intermediate results
- Context-heavy: each benefits from its own full context budget
- Clearly scoped: a discrete goal with a well-defined output format

How the Agent Tool Works

The Agent tool is invoked like any other Claude Code tool — by Claude including a tool call in its response. You can also describe subagent behaviors in your CLAUDE.md to guide when and how Claude uses the Agent tool.

The tool accepts the following parameters:

description (string, required)
Short label for the task — Claude uses this to identify the subagent in its reasoning.

prompt (string, required)
The full instruction sent to the subagent — its entire context. Be specific about the expected output format.

subagent_type (string, optional)
A hint for the subagent's role — values like general, researcher, or a custom string. Not a hard constraint; the prompt drives behavior.

Here's what an Agent tool call looks like in practice — this is what Claude emits when you've instructed it to use a research subagent:

Agent tool call — research subagent
// Claude's internal tool call — you configure this behavior via CLAUDE.md
{
  "tool": "Agent",
  "input": {
    "description": "Research best practices for Redis cache invalidation",
    "subagent_type": "researcher",
    "prompt": "You are a senior systems engineer. Research Redis cache invalidation strategies. Focus on: TTL-based expiry, event-driven invalidation, cache-aside pattern. Output format: structured JSON with keys: strategy_name, pros, cons, when_to_use. Return ONLY the JSON array, no prose."
  }
}

The subagent's response comes back as a string in the parent's tool result. The parent can then parse it, pass it to another subagent, or use it directly. To wire this behavior, add agent instructions to your CLAUDE.md:

CLAUDE.md — instructing Claude to use subagents
## Subagent Usage Policy

When a task requires deep research (>1 page of docs) OR code review of a file you haven't read yet, spawn a subagent using the Agent tool.

### Research Subagent
- subagent_type: "researcher"
- Always specify output format in the prompt (JSON preferred)
- Include: topic, depth required, constraints

### Code Review Subagent
- subagent_type: "code-reviewer"
- Pass the full file content in the prompt
- Request structured output: issues[], severity, recommendations[]

### Test Runner Subagent
- subagent_type: "test-runner"
- Pass test file + source file content
- Request: pass/fail per test, coverage gaps, suggested additions

## Subagent Output Handling

Always validate subagent JSON output before consuming. If parsing fails, re-prompt the same subagent with explicit format correction.
Context isolation is not optional

Subagents do NOT share the parent's context. Any information the subagent needs — files, prior decisions, constraints — must be explicitly included in the prompt parameter. If the parent read a file and made changes, the subagent won't know unless you pass that content in. See Common Mistakes for the full list of context-sharing pitfalls.
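Because the prompt parameter is the subagent's entire world, the parent has to package file contents and prior decisions into it explicitly. Here's a minimal Python sketch of that packaging step (`build_review_prompt` is a hypothetical helper for illustration, not part of Claude Code):

```python
from pathlib import Path


def build_review_prompt(file_path: str, constraints: list[str]) -> str:
    """Package everything the subagent needs into one prompt string.

    The subagent starts with a blank context, so the file content and
    any parent-side decisions must travel inside the prompt itself.
    """
    content = Path(file_path).read_text()
    constraint_block = "\n".join(f"- {c}" for c in constraints)
    return (
        "Review this code for bugs and security issues.\n"
        f"File: {file_path}\n"
        f"Constraints from the parent session:\n{constraint_block}\n"
        f"Content:\n{content}\n"
        'Output ONLY JSON: {"issues": [{"line": int, "severity": str, "fix": str}]}'
    )
```

Note that the helper reads from disk: if the parent has unsaved in-memory edits, they won't appear in `content`, which is exactly why changes must be written out before spawning.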

The Agent tool is available in Claude Code sessions by default — you don't need to install anything. It's part of Claude's built-in tool set, alongside Bash, Read, Write, Edit, Glob, and Grep. What you DO need is a clear instruction architecture in your CLAUDE.md so Claude knows when and how to use it.

5 Subagent Patterns That Work in Production

These are the patterns that consistently produce reliable results across real Claude Code deployments. Each card shows the subagent type, the trigger condition, and a minimal prompt template.

researcher

Research Agent

Delegates all deep research tasks — library comparisons, API documentation, best practices, competitive analysis. The parent stays focused on the task while the research agent does the reading.

Trigger: Any task requiring reading external docs, comparing approaches, or surveying options
Output format: Structured JSON or numbered list with sources
subagent_type: "researcher"
prompt: "Research [TOPIC]. Return JSON: { findings: [...], sources: [...], recommendation: '...' }"
code-reviewer

Code Review Agent

Audits a specific file or function for bugs, security issues, performance problems, and style violations — without needing the parent to explain the entire codebase context.

Trigger: Before committing new code, after a significant refactor, or when reviewing a PR diff
Output format: Array of issues with severity, line numbers, and suggested fixes
subagent_type: "code-reviewer"
prompt: "Review this code for bugs and security issues. File: [FILENAME] Content: [FILE_CONTENT] Output: JSON { issues: [{ line, severity, description, fix }] }"
test-runner

Test Runner Agent

Generates, runs, and interprets tests for a given module. Can write missing tests, identify coverage gaps, and flag test cases that are testing the wrong thing.

Trigger: After writing new functions, before marking a feature complete, or on coverage drops below threshold
Output format: Pass/fail per test, coverage percentage, list of missing test cases
subagent_type: "test-runner"
prompt: "Write unit tests for this module. Source: [SOURCE_CODE] Framework: [pytest/jest/etc] Output: complete test file + coverage summary JSON"
deploy-validator

Deploy Validation Agent

Checks all pre-deploy conditions before code ships: environment variables present, config files valid, dependencies installed, schema migrations applied, health endpoints reachable.

Trigger: Before any production deploy, as a hard gate in CI/CD
Output format: Pass/fail with blockers list and remediation steps
subagent_type: "deploy-validator"
prompt: "Validate deploy readiness for [SERVICE]. Check: env vars, config, deps, migrations. Output: { ready: bool, blockers: [], warnings: [] }"
doc-writer

Documentation Agent

Generates comprehensive documentation for a module, API, or system — without requiring the parent to context-switch into writing mode. Pass it the source code, get back production-ready docs.

Trigger: After a feature ships, when onboarding a new team member, or on a documentation coverage gap
Output format: Markdown with sections: Overview, Parameters, Examples, Edge Cases, Related
subagent_type: "doc-writer"
prompt: "Write API documentation for [MODULE_NAME]. Source: [CODE] Output format: Markdown with: Overview, Params table, 3 examples, Edge cases"
Combining patterns: the audit pipeline

A powerful real-world pattern is the audit pipeline: spawn a code-review agent and a test-runner agent in sequence, feed the code-review output into the test-runner prompt (so it writes tests that specifically cover the flagged issues), then feed both outputs into a doc-writer agent. Three subagents, one cohesive workflow. Each runs with full context. Total parent context used: minimal.
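The handoffs in the audit pipeline can be sketched in Python. This is a sketch only: `run_subagent` is a hypothetical stand-in for the Agent tool call that the parent Claude would actually make inside a session.

```python
import json


def run_subagent(subagent_type: str, prompt: str) -> str:
    # Hypothetical stand-in: in a real session the parent Claude issues
    # an Agent tool call here and receives the subagent's output string.
    raise NotImplementedError


def audit_pipeline(source: str, run=run_subagent) -> dict:
    """Review -> targeted tests -> docs, each step fed the prior output."""
    review = run(
        "code-reviewer",
        'Review this code. Output ONLY JSON {"issues": [...]}.\n' + source,
    )
    issues = json.loads(review)["issues"]
    # The test-runner sees the flagged issues, so it writes tests
    # that specifically cover them rather than generic coverage.
    tests = run(
        "test-runner",
        "Write tests covering these flagged issues:\n"
        + json.dumps(issues) + "\nSource:\n" + source,
    )
    docs = run(
        "doc-writer",
        "Document this module. Note known issues and test coverage.\n"
        + source + "\nIssues:\n" + json.dumps(issues),
    )
    return {"issues": issues, "tests": tests, "docs": docs}
```

The key property is that only the small structured outputs flow between steps; the parent never holds the subagents' full working context.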

Sequential vs. Parallel vs. Hierarchical Agents

The Agent tool supports all three topologies. The right choice depends on whether your tasks have dependencies, and whether you need specialization at multiple levels.

Sequential
How it works: Parent spawns subagent A, waits, passes A's output to subagent B, waits, continues.
Best for: Pipelines where step 2 depends on step 1's output. Audit → fix → verify.
Main tradeoff: Slower — each subagent blocks the next. Output errors cascade downstream.

Parallel
How it works: Parent spawns multiple subagents simultaneously (Claude batches Agent tool calls), collects all results, then continues.
Best for: Independent tasks: research + code review + test generation running at once.
Main tradeoff: Harder to orchestrate. Parent must merge conflicting outputs. Context cost multiplies.

Hierarchical
How it works: A subagent itself spawns further subagents (sub-subagents) to break its own task down.
Best for: Very large codebases where a single "review" task needs to branch into per-file reviewers.
Main tradeoff: Deep nesting becomes hard to debug. Cost and latency multiply at each level. Use sparingly.

Hybrid
How it works: Parallel spawn for independent tasks, sequential handoff for dependent tasks within each branch.
Best for: Most production workflows. Research and review in parallel, then sequential fix → validate → document.
Main tradeoff: Requires careful orchestration logic in CLAUDE.md to keep the parent from getting confused.

Teams new to subagents should start with sequential patterns. They're easier to debug, easier to audit in logs, and less prone to the output-merging complexity that parallel topologies introduce. Once your sequential pipelines run reliably, add parallelism to the independent branches.
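The difference between the two basic topologies is easy to model in Python. Note the hedge: real subagent calls happen inside a Claude session, not in your own code; `run` below is a hypothetical stand-in, and the thread pool simply models the batched-call behavior.

```python
from concurrent.futures import ThreadPoolExecutor


def sequential(tasks, run):
    """Each result is appended to the next prompt; errors cascade downstream."""
    results, prior = [], ""
    for agent_type, prompt in tasks:
        if prior:
            prompt = prompt + "\nPrior output:\n" + prior
        prior = run(agent_type, prompt)
        results.append(prior)
    return results


def parallel(tasks, run):
    """Independent tasks run at once; the parent merges results afterwards."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run, a, p) for a, p in tasks]
        return [f.result() for f in futures]
```

The sketch makes the tradeoff table concrete: `sequential` threads state through every prompt (so a bad step 1 poisons step 2), while `parallel` keeps prompts independent but leaves the merge problem to the caller.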

For orchestrating complex agent topologies, consider pairing subagents with Claude Code hooks — hooks can checkpoint state between subagent calls, log every agent's output to disk, and trigger notifications when a subagent fails.

Common Mistakes With Claude Subagents

Most subagent failures fall into five predictable categories. Understanding them before you build saves significant debugging time.

Mistake 01

Assuming subagents share context with the parent

The single most common mistake. Subagents start with a completely blank context. They don't see the parent's conversation, the files the parent has read, or the in-memory changes the parent has made. If your parent Claude just edited auth.py, a subagent asked to "review auth.py" will read the original file from disk — not the parent's version.

Fix: Write changes to disk (or commit) before spawning subagents that need them
Mistake 02

Not specifying output format

A subagent that returns freeform prose is nearly useless to a parent agent that needs to parse and act on results. If you ask for a code review and get three paragraphs of narrative, the parent has to either pass that prose to yet another agent for parsing, or hallucinate a structured interpretation. Always specify the exact output format — preferably JSON with a schema — in the prompt.

Fix: End every subagent prompt with "Output ONLY: [format spec]" — no prose, no preamble
Mistake 03

Spawning too many subagents in sequence

It's tempting to decompose every step into a subagent. But each Agent tool call adds latency and consumes tokens at both the call and return points. A chain of 8 sequential subagents for a task that could be done in 3 compounds errors — a failure at step 5 means 4 wasted subagent runs. Design subagents to be substantial units of work, not thin wrappers around a single tool call.

Fix: Each subagent should do 5–10 minutes of human-equivalent work. If it's under 30 seconds, keep it in the parent.
Mistake 04

No error handling on subagent output

Subagents can return malformed JSON, empty responses, or error messages instead of results — especially when the task is ambiguous or the model hits a content boundary. Parents that blindly pass subagent output downstream cause cascading failures. Always validate before consuming: check that expected keys exist, values are non-empty, and format matches the spec.

Fix: Add explicit validation in CLAUDE.md — "If subagent output fails JSON.parse(), re-prompt once with a format correction instruction"
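The validate-then-re-prompt-once rule can be sketched as a small Python guard. `reprompt` here is a hypothetical callback standing in for re-invoking the same subagent with a correction instruction:

```python
import json


def consume_subagent_output(raw: str, required_keys: set, reprompt) -> dict:
    """Validate subagent JSON; on failure, re-prompt exactly once."""
    reason = ""
    for attempt in range(2):
        try:
            data = json.loads(raw)
            missing = required_keys - data.keys()
            if not missing:
                return data
            reason = f"missing keys: {sorted(missing)}"
        except json.JSONDecodeError as exc:
            reason = f"invalid JSON: {exc}"
        if attempt == 0:
            # Single retry with an explicit format correction, then give up.
            raw = reprompt(
                f"Your previous output was rejected ({reason}). "
                f"Return ONLY a JSON object with keys {sorted(required_keys)}."
            )
    raise ValueError(f"subagent output still invalid after re-prompt: {reason}")
```

Capping the retry at one attempt matters: an ambiguous task that fails twice should surface as a visible error, not burn tokens in a retry loop.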
Mistake 05

Infinite delegation loops

A hierarchical agent setup where subagents themselves spawn subagents can create runaway delegation. If the orchestration logic isn't explicit about termination conditions, Claude can enter delegation chains that consume enormous amounts of context and tokens before completing any real work. Always set a maximum depth in your CLAUDE.md instructions.

Fix: Explicitly state "Subagents MUST NOT spawn further subagents unless [specific condition]. Max depth: 2."
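One way to make the depth cap mechanical rather than purely prose-based is to track depth at each spawn and embed the remaining budget in the prompt. A hypothetical sketch, with `run` again standing in for the Agent tool call:

```python
def spawn_with_depth_guard(run, agent_type, prompt, depth=0, max_depth=2):
    """Refuse to delegate past max_depth; tell the subagent its budget."""
    if depth >= max_depth:
        raise RuntimeError(f"delegation depth {depth} exceeds max {max_depth}")
    budget = max_depth - depth - 1
    if budget > 0:
        note = f"\nYou may spawn at most {budget} further level(s) of subagents."
    else:
        # Leaf level: forbid any further delegation outright.
        note = "\nYou MUST NOT spawn further subagents."
    return run(agent_type, prompt + note)
```

Stating the budget inside the prompt mirrors the CLAUDE.md fix above: the termination condition travels with the delegation instead of living only in the parent's head.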

Pre-Built Subagent Workflows

Configuring subagents from scratch means writing prompt templates, output schemas, error handling logic, and orchestration rules — across every agent type you need. Most teams spend 2–3 days getting a reliable 3-agent pipeline working before they can focus on the actual product work.

Brainfile Claude Code OS

12 specialized agents, pre-configured and ready to deploy

Brainfile ships a complete Claude Code OS with 12 pre-built subagent configurations — each with battle-tested prompt templates, output schemas, error handling, and CLAUDE.md orchestration rules. Drop it into your project and start delegating immediately.

researcher, quality-auditor, pipeline-runner, algo-researcher, code-reviewer, test-runner, deploy-validator, doc-writer, security-scanner, performance-profiler, dependency-auditor, migration-planner

Each agent includes: role-specific system prompt, expected input schema, output format spec, error-handling fallback, and CLAUDE.md integration rules. The free starter kit includes 3 agents (researcher, code-reviewer, test-runner) to trial before upgrading.

See All 12 Agents → Free Starter Kit (3 agents)

What's included in the Brainfile agent configs

Each agent in the Brainfile Claude Code OS is documented with four components:

1

Role prompt template

The full system prompt for the subagent, with placeholders for your specific inputs. Engineered to minimize hallucination and maximize output structure consistency.

2

Input schema

What the parent agent needs to pass in — file paths, code content, prior outputs, constraints. Defined as a JSON schema so your CLAUDE.md can validate inputs before spawning.

3

Output contract

The exact JSON structure the subagent returns. Versioned. If the schema changes, old orchestration logic won't silently break — it'll fail visibly and tell you why.

4

CLAUDE.md orchestration block

A ready-to-paste CLAUDE.md section that tells the parent Claude when to spawn this agent, how to pass inputs, how to validate outputs, and what to do on failure. Copy, paste, customize.

Works with your existing CLAUDE.md

Brainfile agent configs are modular — each is a self-contained CLAUDE.md block you can add to an existing configuration. You don't need to replace your current setup. Add only the agents you need. If you're starting from scratch, the free starter kit includes a base CLAUDE.md with the 3-agent workflow already wired.

For teams using MCP servers, Brainfile's agent configs include MCP-aware variants — the research agent can pull live data from your MCP filesystem or GitHub server, and the deploy validator integrates with MCP database tools to check migration status before deploying.

Start Building Multi-Agent Workflows

Pre-built subagent workflows.
Deploy in minutes.

12 specialized Claude agents pre-configured with prompt templates, output schemas, and CLAUDE.md orchestration rules. Free trial starts with 3 agents.

$149 /month

30-day money-back guarantee · Instant access · Cancel anytime

Need team setup? Enterprise at $299/mo →

Free: Claude Code Multi-Agent Recipes

New subagent patterns, orchestration examples, and CLAUDE.md templates. Free weekly digest.

No spam. Unsubscribe anytime.