What Are Claude Code Subagents?
Claude Code subagents are independent Claude instances spawned by a parent agent during a session. The parent delegates a discrete task to the subagent, waits for the result, and then continues its own work. Each subagent runs with its own context window — completely isolated from the parent's conversation history and in-memory state.
Subagents are created via the Agent tool — a first-class Claude Code tool alongside Bash, Read, Write, and Edit. When Claude calls the Agent tool, it's essentially running a fresh Claude session with a specific goal, then returning that session's output as a string the parent can use.
A single Claude context window is finite. With subagents, you can decompose work into specialized roles — each with its own full context budget — and run them in parallel or sequence. A research subagent can crawl documentation while a code-review subagent audits your codebase, with neither crowding the other out of context.
The mental model is a team of contractors. The parent Claude is the project manager. It breaks work into chunks, delegates each chunk to a specialist, collects results, and assembles the final output. The specialists don't know about each other. They just complete their task and report back.
This architecture is especially powerful when tasks are:
- Long and context-hungry — each subagent gets a full context budget without competing with other work
- Parallelizable — the parent can spawn multiple subagents and aggregate their results
- Role-specific — a code reviewer needs different instructions than a documentation writer
- Repetitive — run the same auditor agent across 20 files without re-prompting each time
How the Agent Tool Works
The Agent tool is invoked like any other Claude Code tool — by Claude including a tool call in its response. You can also describe subagent behaviors in your CLAUDE.md to guide when and how Claude uses the Agent tool.
The tool accepts the following parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| description | string | yes | Short label for the task — Claude uses this to identify the subagent in its reasoning |
| prompt | string | yes | The full instruction sent to the subagent — its entire context. Be specific about expected output format |
| subagent_type | string | optional | Hint for the subagent's role. Values like general, researcher, or a custom string. Not a hard constraint — the prompt drives behavior |
Here's what an Agent tool call looks like in practice — this is what Claude emits when you've instructed it to use a research subagent:
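An illustrative sketch of such a call — field names follow the parameter table above, and the task shown (comparing auth libraries) is a hypothetical example; the exact wire format Claude emits may differ by version:

```json
{
  "name": "Agent",
  "input": {
    "description": "Research auth libraries",
    "prompt": "Compare passport.js, Auth0, and Clerk for a Next.js app. For each: maturity, pricing, session handling. Output ONLY a JSON array of objects with keys: name, maturity, pricing, session_model, recommendation.",
    "subagent_type": "researcher"
  }
}
```

Note that the prompt ends with an explicit output format — the subagent's reply is the only thing the parent gets back, so the prompt has to pin down its shape.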
The subagent's response comes back as a string in the parent's tool result. The parent can then parse it, pass it to another subagent, or use it directly. To wire this behavior, add agent instructions to your CLAUDE.md:
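A minimal CLAUDE.md block along these lines — the wording is a sketch, not a canonical format; adapt the triggers and thresholds to your project:

```markdown
## Subagent delegation

- For any research task expected to need more than ~3 file reads or web
  lookups, spawn a subagent via the Agent tool instead of doing it inline.
- Every subagent prompt MUST include: the goal, all relevant file contents
  or prior outputs (subagents cannot see this conversation), and an exact
  output format.
- Validate subagent output before using it. If it does not match the
  requested format, re-prompt the subagent once with a correction.
```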
Subagents do NOT share the parent's context. Any information the subagent needs — files, prior decisions, constraints — must be explicitly included in the prompt parameter. If the parent read a file and made changes, the subagent won't know unless you pass that content in. See Common Mistakes for the full list of context-sharing pitfalls.
The Agent tool is available in Claude Code sessions by default — you don't need to install anything. It's part of Claude's built-in tool set, alongside Bash, Read, Write, Edit, Glob, and Grep. What you DO need is a clear instruction architecture in your CLAUDE.md so Claude knows when and how to use it.
5 Subagent Patterns That Work in Production
These are the patterns that consistently produce reliable results across real Claude Code deployments. Each covers the subagent's role and the situations where delegating to it pays off.
Research Agent
Delegates all deep research tasks — library comparisons, API documentation, best practices, competitive analysis. The parent stays focused on the task while the research agent does the reading.
Code Review Agent
Audits a specific file or function for bugs, security issues, performance problems, and style violations — without needing the parent to explain the entire codebase context.
Test Runner Agent
Generates, runs, and interprets tests for a given module. Can write missing tests, identify coverage gaps, and flag test cases that are testing the wrong thing.
Deploy Validation Agent
Checks all pre-deploy conditions before code ships: environment variables present, config files valid, dependencies installed, schema migrations applied, health endpoints reachable.
Documentation Agent
Generates comprehensive documentation for a module, API, or system — without requiring the parent to context-switch into writing mode. Pass it the source code, get back production-ready docs.
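As a concrete example, a minimal prompt template for the code-review agent might look like the following — the placeholders in braces are yours to fill, and the JSON shape is an illustrative choice, not a required format:

```markdown
Review the following file for bugs, security issues, and performance
problems. Do not suggest style changes.

File: {path}
---
{file contents}
---

Output ONLY a JSON array. Each element:
{"line": <int>, "severity": "high"|"medium"|"low",
 "issue": <string>, "fix": <string>}.
Return [] if nothing is found.
```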
A powerful real-world pattern is the audit pipeline: spawn a code-review agent and a test-runner agent in sequence, feed the code-review output into the test-runner prompt (so it writes tests that specifically cover the flagged issues), then feed both outputs into a doc-writer agent. Three subagents, one cohesive workflow. Each runs with full context. Total parent context used: minimal.
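The audit pipeline can be sketched as CLAUDE.md orchestration rules — the agent names and step order here are assumptions to illustrate the handoffs, not a fixed recipe:

```markdown
## Audit pipeline

When asked to "audit" a module:
1. Spawn a code-review subagent on the module. Expect a JSON array of issues.
2. Spawn a test-runner subagent. Pass it the module source AND the issue
   list from step 1, so new tests target the flagged problems.
3. Spawn a doc-writer subagent. Pass it the module source, the issue list,
   and the test results. Expect markdown documentation.
4. If any step returns malformed output, re-prompt that subagent once,
   then stop and report the failure.
```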
Sequential vs. Parallel vs. Hierarchical Agents
The Agent tool supports all three topologies. The right choice depends on whether your tasks have dependencies, and whether you need specialization at multiple levels.
| Topology | How it works | Best for | Main tradeoff |
|---|---|---|---|
| Sequential | Parent spawns subagent A, waits, passes A's output to subagent B, waits, continues | Pipelines where step 2 depends on step 1 output. Audit → fix → verify. | Slower — each subagent blocks the next. Output errors cascade downstream. |
| Parallel | Parent spawns multiple subagents simultaneously (Claude batches Agent tool calls), collects all results, then continues | Independent tasks: research + code review + test generation running at once | Harder to orchestrate. Parent must merge conflicting outputs. Context cost multiplies. |
| Hierarchical | A subagent itself spawns further subagents (sub-subagents) to break its own task down | Very large codebases where a single "review" task needs to branch into per-file reviewers | Deep nesting becomes hard to debug. Cost and latency multiply at each level. Use sparingly. |
| Hybrid | Parallel spawn for independent tasks, sequential handoff for dependent tasks within each branch | Most production workflows. Research and review in parallel, then sequential fix → validate → document. | Requires careful orchestration logic in CLAUDE.md to keep parent from getting confused. |
For most teams starting with subagents, sequential patterns are recommended first. They're easier to debug, easier to audit in logs, and less prone to the output-merging complexity that parallel topologies introduce. Once you have sequential pipelines working reliably, add parallelism to the independent branches.
For orchestrating complex agent topologies, consider pairing subagents with Claude Code hooks — hooks can checkpoint state between subagent calls, log every agent's output to disk, and trigger notifications when a subagent fails.
Common Mistakes With Claude Subagents
Most subagent failures fall into five predictable categories. Understanding them before you build saves significant debugging time.
Assuming subagents share context with the parent
The single most common mistake. Subagents start with a completely blank context. They don't see the parent's conversation, the files the parent has read, or the in-memory changes the parent has made. If your parent Claude just edited auth.py, a subagent asked to "review auth.py" will read the original file from disk — not the parent's version.
Not specifying output format
A subagent that returns freeform prose is nearly useless to a parent agent that needs to parse and act on results. If you ask for a code review and get three paragraphs of narrative, the parent has to either pass that prose to yet another agent for parsing, or hallucinate a structured interpretation. Always specify the exact output format — preferably JSON with a schema — in the prompt.
Fix: End every subagent prompt with "Output ONLY: [format spec]" — no prose, no preamble
Spawning too many subagents in sequence
It's tempting to decompose every step into a subagent, but each Agent tool call adds latency and consumes tokens at both the call and the return. A chain of eight sequential subagents for a task that three could handle also compounds errors: a failure at step 5 wastes the four runs before it. Design subagents as substantial units of work, not thin wrappers around a single tool call.
Fix: Each subagent should do 5–10 minutes of human-equivalent work. If it's under 30 seconds, keep it in the parent.
No error handling on subagent output
Subagents can return malformed JSON, empty responses, or error messages instead of results — especially when the task is ambiguous or the model hits a content boundary. Parents that blindly pass subagent output downstream cause cascading failures. Always validate before consuming: check that expected keys exist, values are non-empty, and format matches the spec.
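A minimal validation sketch in Python shows the idea. The `issues` and `summary` keys are a hypothetical contract — substitute whatever schema your subagent prompt actually requests:

```python
import json

REQUIRED_KEYS = {"issues", "summary"}

def validate_review(raw: str) -> dict:
    """Validate a subagent's code-review output before passing it downstream.

    Returns the parsed dict on success; raises ValueError with a reason
    the parent can use to re-prompt the subagent once.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"output is not valid JSON: {e}")
    if not isinstance(data, dict):
        raise ValueError("output must be a JSON object")
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not data["issues"] and not data["summary"]:
        raise ValueError("output is empty")
    return data
```

The same checks can be expressed as plain-English rules in CLAUDE.md; the point is that some gate exists between a subagent's return value and the next consumer.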
Fix: Add explicit validation in CLAUDE.md — "If subagent output fails JSON.parse(), re-prompt once with a format correction instruction"
Infinite delegation loops
A hierarchical agent setup where subagents themselves spawn subagents can create runaway delegation. If the orchestration logic isn't explicit about termination conditions, Claude can enter delegation chains that consume enormous amounts of context and tokens before completing any real work. Always set a maximum depth in your CLAUDE.md instructions.
Fix: Explicitly state "Subagents MUST NOT spawn further subagents unless [specific condition]. Max depth: 2."
Pre-Built Subagent Workflows
Configuring subagents from scratch means writing prompt templates, output schemas, error handling logic, and orchestration rules — across every agent type you need. Most teams spend 2–3 days getting a reliable 3-agent pipeline working before they can focus on the actual product work.
12 specialized agents, pre-configured and ready to deploy
Brainfile ships a complete Claude Code OS with 12 pre-built subagent configurations — each with battle-tested prompt templates, output schemas, error handling, and CLAUDE.md orchestration rules. Drop it into your project and start delegating immediately.
Each agent includes: role-specific system prompt, expected input schema, output format spec, error-handling fallback, and CLAUDE.md integration rules. The free starter kit includes 3 agents (researcher, code-reviewer, test-runner) to trial before upgrading.
What's included in the Brainfile agent configs
Each agent in the Brainfile Claude Code OS is documented with four components:
Role prompt template
The full system prompt for the subagent, with placeholders for your specific inputs. Engineered to minimize hallucination and maximize output structure consistency.
Input schema
What the parent agent needs to pass in — file paths, code content, prior outputs, constraints. Defined as a JSON schema so your CLAUDE.md can validate inputs before spawning.
Output contract
The exact JSON structure the subagent returns. Versioned. If the schema changes, old orchestration logic won't silently break — it'll fail visibly and tell you why.
CLAUDE.md orchestration block
A ready-to-paste CLAUDE.md section that tells the parent Claude when to spawn this agent, how to pass inputs, how to validate outputs, and what to do on failure. Copy, paste, customize.
Brainfile agent configs are modular — each is a self-contained CLAUDE.md block you can add to an existing configuration. You don't need to replace your current setup. Add only the agents you need. If you're starting from scratch, the free starter kit includes a base CLAUDE.md with the 3-agent workflow already wired.
For teams using MCP servers, Brainfile's agent configs include MCP-aware variants — the research agent can pull live data from your MCP filesystem or GitHub server, and the deploy validator integrates with MCP database tools to check migration status before deploying.
Pre-built subagent workflows.
Deploy in minutes.
12 specialized Claude agents pre-configured with prompt templates, output schemas, and CLAUDE.md orchestration rules. Free trial starts with 3 agents.
30-day money-back guarantee · Instant access · Cancel anytime
Need team setup? Enterprise at $299/mo →