One AI session trying to do everything is like one person trying to be the whole company. It burns out the same way — just faster.


What Are Subagents?

In Claude Code, a subagent is a specialized AI assistant that runs in its own isolated context window. You define what it knows, what tools it has access to, and what task it should accomplish. It runs independently, completes its work, and returns results to your main conversation.

Your main Claude Code session is the project manager. Subagents are the specialists you bring in for focused tasks. The project manager understands the whole project; each specialist focuses deeply on one thing.

Why One Session Isn’t Enough

Two reasons, both about context:

Context pollution. When you ask your main session to lint CSS, write API tests, and update documentation all in the same conversation, every task’s context bleeds into the others. The CSS linting output sits in memory while Claude writes tests, consuming space and potentially confusing its reasoning about unrelated code.

Context limits. A single session has a finite context window. Three substantial tasks might fill it up, degrading quality across all three. Three subagents each get their own window. Quality stays high across the board.

There’s also the obvious practical win: parallelism. Subagents run simultaneously: while one writes tests, another can lint and a third can update docs. In a single session, these tasks would run one after the other.
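The timing difference is easy to demonstrate with ordinary concurrency primitives. This is a toy sketch in plain Python, not Claude Code's internals; the task names and durations are made up:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_task(name: str, seconds: float) -> str:
    """Stand-in for a subagent working on an independent task."""
    time.sleep(seconds)
    return f"{name}: done"

tasks = [("write tests", 0.2), ("lint", 0.2), ("update docs", 0.2)]

# Sequential: total time is the sum of all task durations.
start = time.perf_counter()
sequential = [run_task(name, secs) for name, secs in tasks]
seq_time = time.perf_counter() - start

# Parallel: total time is roughly the slowest single task.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(lambda t: run_task(*t), tasks))
par_time = time.perf_counter() - start

print(f"sequential: {seq_time:.2f}s, parallel: {par_time:.2f}s")
```

Three independent tasks finish in roughly the time of the slowest one instead of the sum of all three; the same shape of speedup applies to parallel subagent dispatch.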


Defining Subagents

Subagents are defined as markdown files in your project’s .claude/agents/ directory. Each file describes a specialist with a specific role, knowledge, and set of constraints.

Basic Structure

# Test Writer

## Role
You are a test specialist. Your job is to write comprehensive tests
for the code you're given.

## Context
- This project uses Vitest for unit tests and Playwright for E2E tests
- Test files mirror the source structure: src/api/users.ts → tests/unit/api/users.test.ts
- We prefer arrange-act-assert structure
- Mock external dependencies, don't mock internal modules

## Constraints
- Only modify files in the tests/ directory
- Never modify source code — if you find a bug, report it, don't fix it
- Run tests after writing them to verify they pass

## Output
When finished, provide:
1. List of test files created or modified
2. Test results (pass/fail counts)
3. Any bugs discovered in source code
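Because definitions are just markdown, you can sanity-check them mechanically before relying on them. A small sketch (the four section names follow the template above; nothing here is a Claude Code API):

```python
import re

# The four sections every definition should carry, per the template above.
REQUIRED_SECTIONS = ["Role", "Context", "Constraints", "Output"]

def missing_sections(text: str) -> list[str]:
    """Return the required '## ...' sections absent from an agent file."""
    found = set(re.findall(r"^## (\w+)", text, flags=re.MULTILINE))
    return [s for s in REQUIRED_SECTIONS if s not in found]

definition = """\
# Test Writer

## Role
You are a test specialist.

## Context
- Vitest for unit tests

## Constraints
- Only modify files in tests/
"""

print(missing_sections(definition))  # → ['Output']
```

Running a check like this over `.claude/agents/` catches half-finished definitions before a subagent executes with no constraints or no output spec.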

Anatomy of a Good Subagent Definition

Role — One sentence. What is this agent? “Test writer,” “Code reviewer,” “Documentation updater.” Clear and narrow.

Context — What does this agent need to know about your project? Include conventions, tools, and patterns specific to its task. Don’t repeat your entire CLAUDE.md — include only what’s relevant.

Constraints — What should this agent not do? This is critical. Without constraints, a test-writing agent might “helpfully” fix bugs it finds, introducing changes you didn’t ask for. Constraints keep specialists in their lane.

Output — What should the agent return when it’s done? Specify the format so you can evaluate results consistently.
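Depending on your Claude Code version, agent files may also support YAML frontmatter for metadata such as the agent's name, a description used for automatic delegation, and a tool allowlist. Check the official documentation for the current field names; this is an illustrative sketch, not a guaranteed schema:

```markdown
---
name: test-writer
description: Test specialist. Use after implementing a feature that needs coverage.
tools: Read, Write, Bash
---

# Test Writer

## Role
You are a test specialist. Your job is to write comprehensive tests
for the code you're given.
```

Restricting the tool list in frontmatter is another way to enforce the Constraints section: an agent without an edit tool cannot "helpfully" modify source code, whatever its prose instructions say.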

More Subagent Examples

Code Reviewer:

# Code Reviewer

## Role
You review code changes for quality, consistency, and potential issues.

## Context
- Review against the project conventions in CLAUDE.md
- Focus on: correctness, error handling, edge cases, naming, consistency
- This project's critical path is auth and payment processing

## Constraints
- Do NOT modify any files
- Do NOT run any commands that change state
- Read-only analysis only

## Output
Provide a review with:
1. Critical issues (must fix before merge)
2. Suggestions (should fix, but not blocking)
3. Nits (style/preference, optional)
For each item: file, line, issue, suggested fix.

Documentation Updater:

# Documentation Updater

## Role
You update project documentation to reflect current code.

## Context
- README.md is the main user-facing doc
- docs/ contains developer documentation
- API docs should match actual route implementations in src/api/

## Constraints
- Only modify files in docs/ and README.md
- Never modify source code
- Keep documentation concise — no filler text

## Output
List of files updated with a one-line summary of each change.

Three Ways to Dispatch Work

Fire and Forget (Parallel Dispatch)

When to use: Multiple independent tasks with no shared state. The output of one doesn’t affect the others.

Example: You’ve just finished a feature. You want to write tests, update docs, and lint/format the code. These three tasks don’t depend on each other.

In practice:

Run these three tasks in parallel:
1. @test-writer — Write unit tests for src/api/invoices.ts
2. @doc-updater — Update API docs for the new invoice endpoints
3. Lint and format all changed files in src/api/

Claude Code dispatches tasks 1 and 2 to their respective subagents and handles task 3 directly (or via another subagent). All three run simultaneously, each in its own context.

Why this saves money: Each subagent’s context contains only what it needs, not the accumulated history of the other tasks. Faster and cheaper.
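Back-of-the-envelope arithmetic shows the shape of the saving. Assume three tasks that each genuinely need about 10k tokens of task-specific context (the numbers are illustrative, not measured):

```python
TASK_TOKENS = 10_000  # context each task genuinely needs (illustrative)
TASKS = 3

# Three subagents: each context window holds only its own task.
subagent_total = TASKS * TASK_TOKENS

# One session, tasks in sequence: each later task reprocesses the
# accumulated history of the earlier ones on every turn.
single_session_total = sum(TASK_TOKENS * n for n in range(1, TASKS + 1))

print(subagent_total)        # → 30000
print(single_session_total)  # → 60000
```

Twice the tokens for the same work, and the gap widens as tasks get larger or more numerous, because accumulation grows quadratically while isolation grows linearly.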

Assembly Line (Sequential Dispatch)

When to use: Tasks where the output of step A feeds into step B.

Example: You want to generate a database migration, then write the query functions that use the new schema, then write tests for those queries. Each step depends on the previous one.

In practice:

Step 1: Create the database migration for the invoicing tables
        (spec in specs/invoicing.md, data model section)
Step 2: After the migration is created, write query functions in
        src/db/invoices.ts that work with the new schema
Step 3: After query functions are done, @test-writer — write tests
        for src/db/invoices.ts

Claude Code runs step 1, waits for completion, feeds the result into step 2, waits, then dispatches step 3 to the test-writer subagent.

Key insight: In sequential dispatch, your main session orchestrates. It reads each step’s output to inform the next step’s input. This is where your CLAUDE.md and specs matter most — they provide the continuity that each individual step lacks.
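The orchestration pattern itself is just a fold: each step consumes the previous step's output. A minimal sketch where the functions stand in for dispatched tasks (none of this is a real Claude Code API):

```python
from typing import Callable

def create_migration(spec: str) -> str:
    """Stand-in for step 1: produce a migration from the spec."""
    return f"migration from {spec}"

def write_queries(migration: str) -> str:
    """Stand-in for step 2: query functions matching the new schema."""
    return f"queries built on ({migration})"

def write_tests(queries: str) -> str:
    """Stand-in for step 3: dispatched to the test-writer subagent."""
    return f"tests covering ({queries})"

def run_pipeline(initial: str, steps: list[Callable[[str], str]]) -> str:
    """Main-session orchestration: feed each step's output into the next."""
    result = initial
    for step in steps:
        result = step(result)  # wait for completion before continuing
    return result

output = run_pipeline("specs/invoicing.md",
                      [create_migration, write_queries, write_tests])
print(output)
```

The main session plays the role of `run_pipeline`: it holds the running result, decides when each step is done, and passes only what the next step needs.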

Scout Missions (Background Dispatch)

When to use: Research or analysis tasks that you want running while you keep working on something else.

Example: You’re about to refactor the auth module. Before diving in, you want a thorough analysis of everywhere auth is used across the codebase.

In practice:

Background task: Analyze all files that import from src/api/middleware.ts
or reference auth tokens. List every file, the specific auth-related
code in each, and flag any inconsistencies. Save results to analysis/auth-usage.md.

I'll continue working on the invoicing module in the meantime.

The background subagent does its research while you keep working. When it finishes, you have a comprehensive analysis waiting.
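Mechanically, the pattern is a background worker that writes its findings to a file while the foreground keeps going. A toy sketch in plain Python (the file path mirrors the example above; the report content is a made-up placeholder, and none of this is Claude Code's actual dispatch mechanism):

```python
import threading
from pathlib import Path

def scout(report_path: Path) -> None:
    """Stand-in for the background subagent: research, then write a report."""
    findings = "# Auth usage analysis\n- (placeholder findings go here)\n"
    report_path.parent.mkdir(parents=True, exist_ok=True)
    report_path.write_text(findings)

report = Path("analysis/auth-usage.md")
worker = threading.Thread(target=scout, args=(report,))
worker.start()

# ...foreground work on the invoicing module happens here...

worker.join()  # when you come back, the analysis is waiting on disk
print(report.read_text().splitlines()[0])  # → "# Auth usage analysis"
```

The file on disk is the hand-off point: the scout never interrupts you, and you read its report only when you are ready to act on it.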

Good scout missions are read-only, self-contained, and write their findings to a file you can review on your own schedule: pre-refactor codebase analysis, dependency audits, surveying how a pattern is used across the project. Anything that mutates state belongs in the foreground where you can watch it.

Cost Management

Subagents use the same API credits or subscription as your main session. But how you structure them affects cost significantly.

Picking the Right Model

Claude Code lets you specify which model a subagent should use. The general principle:

Use the most capable model (Opus) for: architecture decisions, tricky debugging, security-sensitive review, and anything where a wrong answer is expensive to unwind.

Use a faster/cheaper model (Sonnet) for: well-scoped mechanical work such as test writing, documentation updates, linting, and routine refactors.

To put rough numbers on it: dispatching three focused tasks to Sonnet subagents might cost $0.50-1.00 total, while running those same three tasks sequentially in an Opus session (where context accumulates and each task drags the history of the others) could run $3-5. The exact numbers depend on task size, but the pattern is consistent — focused context is cheaper context.

A practical workflow: run your main session on Opus for the hard thinking, dispatch the routine work to Sonnet subagents. Frontier-quality architecture decisions, fast and cheap execution on the mechanical stuff.

Avoiding Waste

Don’t dispatch what you can do directly. If a task takes one command (pnpm test), running it yourself is faster and cheaper than dispatching a subagent.

Don’t over-specialize. You don’t need 20 subagents. 3-5 well-defined ones cover most workflows: test writer, code reviewer, documentation updater, researcher. Add more only when you have a recurring task that genuinely benefits from isolation.

Don’t pass entire codebases to subagents. Use .claudeignore and be specific about which files a subagent needs to read. “Review src/api/invoices.ts” costs less than “review the entire API.”
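A .claudeignore file uses ignore-style patterns (illustrative: adapt the entries to your own project layout, and check the docs for the exact pattern syntax your version supports):

```gitignore
node_modules/
dist/
coverage/
*.lock
# Large fixtures the agents never need to read
tests/fixtures/large/
```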


Limitations Worth Knowing

No nested delegation. Subagents cannot spawn their own subagents. If you need multi-level delegation, your main session has to orchestrate. In practice: break the task into steps and chain subagents from your main session. It’s more explicit, and honestly, you want that visibility anyway.

No interactive mode in subagents. Subagents can’t ask you clarifying questions mid-task. They execute based on their instructions and return results. This means your subagent definitions need to be clear enough that questions aren’t necessary — yet another reason good specs matter.

No shared state between subagents. Parallel subagents can’t see each other’s work in progress. If subagent A creates a file that subagent B needs, they must run sequentially, not in parallel.

Context limits still apply per-agent. Each subagent has its own context window. A subagent working on a massive codebase can still run out of context. Scope tasks tightly.


Worked Example: Building an Invoice Feature

Here’s how all three patterns combine in a real workflow:

Session Start — CLAUDE.md loaded, specs/invoicing.md written

Step 1 (Main session): "Review the invoicing spec and create the
database migration."
→ Creates src/db/migrations/003_invoices.sql

Step 2 (Main session): "Implement the query functions in src/db/invoices.ts
based on the migration."
→ Creates src/db/invoices.ts with createInvoice, getInvoice, etc.

Step 3 (Parallel dispatch):
  → @test-writer: "Write tests for src/db/invoices.ts"
  → @doc-updater: "Add invoice query documentation to docs/database.md"
  → Main session: "Implement the API routes in src/api/invoices.ts"

Step 4 (Main session): "Review the test results and fix any failures."

Step 5 (Background): "@code-reviewer: Review all files changed in this
session against the spec and CLAUDE.md conventions."

Step 6: Review the code review, address issues, commit.

Total time with subagents: maybe 30 minutes, with tests, docs, and code review happening in parallel with your active work.

Without subagents: easily double, with each task waiting in a serial queue and all sharing a single degrading context window.


Getting Started

You don’t need to set up everything at once. Start with one subagent:

  1. Create .claude/agents/test-writer.md using the template above
  2. On your next feature, dispatch test writing to it: @test-writer write tests for [file]
  3. Evaluate: Were the tests good? Did the isolation help? Did it save time?
  4. If yes, add a second subagent (code reviewer is the natural next choice)
  5. Build up gradually based on what your workflow actually needs
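Scaffolding step 1 is a single command pair. The file body here is abbreviated; paste in the full Test Writer template from earlier in this module:

```shell
mkdir -p .claude/agents
cat > .claude/agents/test-writer.md <<'EOF'
# Test Writer

## Role
You are a test specialist. Your job is to write comprehensive tests
for the code you're given.
EOF
ls .claude/agents
```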

The 100+ pre-built subagent definitions repository on GitHub is worth browsing for inspiration, but resist the temptation to install 50 of them on day one. Start with what you need.


Further Reading

| Resource | Why You’d Read It |
| --- | --- |
| 100+ Specialized Subagents (GitHub) | Pre-built subagent definitions to browse and adapt |
| Claude Code Best Practices — Subagents (Official) | Official guidance on when and how to use subagents |
| How Claude Code Got Better by Protecting Context | Technical deep dive on why context isolation improves quality |
| Anthropic: 8 Trends Defining How Software Gets Built | Industry context for where multi-agent workflows are heading |
| MIT Missing Semester: Agentic Coding | Academic take on the same patterns — good for deeper understanding |

Previous: Module 03 — Spec-Driven Development | Next: Module 05 — OpenClaw Agent Runtime