Claude Code’s sub-agent architecture has a context problem that becomes obvious once you move past simple tasks. When a parent agent spawns a sub-agent to handle a specific job — run tests, write documentation, perform a security scan — that sub-agent inherits the same AGENTS.md context as the parent. Which means a sub-agent whose entire job is “check for SQL injection vulnerabilities” is loading instructions about your React component naming conventions and your commit message format.
This isn’t just wasted tokens. Irrelevant context actively affects agent behavior. Instructions about code style can bleed into security review output. Constraints written for the parent (“don’t modify generated files”) can inhibit a sub-agent that legitimately needs to read those files.
The solution is per-agent AGENTS.md files, structured so each sub-agent gets the context it needs without inheriting everything that belongs to the parent.
## How Sub-agent Context Works in Claude Code
When Claude Code spawns a sub-agent (via the Task tool or similar mechanisms), the sub-agent’s working context includes:
- The task description passed by the parent agent
- AGENTS.md files found in the working directory hierarchy
- Any context explicitly passed in the task call
The sub-agent doesn’t automatically get the parent’s conversation history or the parent’s current workspace state — it gets a fresh context with the task plus the AGENTS.md files at its working directory.
This means the working directory you assign to a sub-agent determines which AGENTS.md files it reads. If you structure your project so that different tasks run from different directories, you can give each task type its own AGENTS.md.
## Directory Structure for Per-Agent Instructions

```
project-root/
├── AGENTS.md                  # Parent agent: project overview, shared rules
├── src/
│   └── AGENTS.md              # Optional: development-specific rules
├── .agents/
│   ├── review/
│   │   └── AGENTS.md          # Code review sub-agent
│   ├── test/
│   │   └── AGENTS.md          # Test generation sub-agent
│   ├── docs/
│   │   └── AGENTS.md          # Documentation sub-agent
│   └── security/
│       └── AGENTS.md          # Security audit sub-agent
└── scripts/
    └── run-agent.sh           # Wrapper that sets working directory per task type
```
The .agents/ directory contains role-specific AGENTS.md files. Each sub-agent task is invoked with its corresponding .agents/<role>/ directory as the working directory (or as an additional path in the context load list).
The root AGENTS.md still contains shared information — project name, primary tech stack, environment variables — that every agent needs. Sub-agent AGENTS.md files extend this with role-specific instructions.
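For reference, a pared-down root AGENTS.md might look like this. The project details are invented for illustration; the `## Environment` and `## Commands` headings matter because the build script shown later in this post extracts those sections by name:

```markdown
# Project — AGENTS.md

## Overview
REST API for order management. TypeScript, Node 20, Express, PostgreSQL.

## Environment
- `DATABASE_URL` — PostgreSQL connection string
- `NODE_ENV` — development | test | production

## Commands
- Install: `npm ci`
- Test: `npx jest`
- Build: `npm run build`
```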
## Writing Per-Agent AGENTS.md Files

### Code Review Agent (`/.agents/review/AGENTS.md`)
````markdown
# Code Review Agent — AGENTS.md

## Role
You are a code reviewer. Your job is to analyze pull request diffs and provide
structured feedback. You do not modify code — you only report.

## What to Review
- Correctness: logic errors, off-by-one errors, type mismatches
- Security: injection risks, authentication bypass, improper input validation
- Performance: N+1 queries, unnecessary re-renders, missing memoization
- Maintainability: function length, naming clarity, duplication

## Output Format
Return a JSON object with this structure:

```json
{
  "summary": "one sentence summary",
  "blocking": [{"file": "...", "line": N, "issue": "...", "fix": "..."}],
  "suggestions": [{"file": "...", "line": N, "note": "..."}],
  "approved": boolean
}
```

## Do Not
- Suggest style changes covered by ESLint (assume linting passes)
- Flag issues already marked with `// known-issue:` comments
- Modify any files — output only, no writes
- Return `"approved": true` if any blocking issues exist

## Context Limit
If the diff is over 500 lines, review the first 500 lines and return a partial
review with `"partial": true` in the output.
````
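Because this contract is machine-checkable, the parent can validate the sub-agent's output before acting on it. A minimal sketch in Python; the `validate_review` helper and its rules are illustrative, not part of Claude Code:

```python
def validate_review(review: dict) -> list[str]:
    """Check a review object against the output contract; return a list of problems."""
    problems = []
    for key in ("summary", "blocking", "suggestions", "approved"):
        if key not in review:
            problems.append(f"missing key: {key}")
    # The contract says a review with blocking issues must not be approved.
    if review.get("approved") and review.get("blocking"):
        problems.append("approved=true but blocking issues present")
    for issue in review.get("blocking", []):
        if not {"file", "line", "issue", "fix"} <= issue.keys():
            problems.append(f"malformed blocking entry: {issue}")
    return problems
```

An empty return value means the output is safe to consume; anything else can trigger a retry with the validation errors appended to the task prompt.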
### Test Generation Agent (`/.agents/test/AGENTS.md`)
````markdown
# Test Generation Agent — AGENTS.md

## Role
Generate unit tests for specified functions or modules. You write test files only —
do not modify source files.

## Test Framework
- Runner: Jest + ts-jest
- Assertion: Jest built-in matchers + @testing-library/jest-dom for DOM tests
- Test command: `npx jest --testPathPattern=<new-test-file>`

## File Naming
Tests live in `__tests__/` adjacent to the source file:

```
src/utils/formatDate.ts    → src/utils/__tests__/formatDate.test.ts
src/components/Button.tsx  → src/components/__tests__/Button.test.tsx
```

## Coverage Requirements
- Happy path: required
- Edge cases: required (null, undefined, empty string, boundary values)
- Error cases: required (invalid input, async failure)
- Aim for 90%+ branch coverage per file tested

## What NOT to Generate
- Integration tests (those go in `integration/` and are out of scope)
- E2E tests (Playwright tests, out of scope)
- Tests that require database connections (mock the repository layer)

## Mocking Pattern
Use `jest.mock()` at module level. Don't use manual mocks in `__mocks__/` unless
one already exists. Mock pattern:

```typescript
jest.mock('../path/to/dependency', () => ({
  functionName: jest.fn().mockResolvedValue(expectedValue),
}));
```
````
### Security Audit Agent (`/.agents/security/AGENTS.md`)
````markdown
# Security Audit Agent — AGENTS.md

## Role
Perform security analysis on specified code. Read-only — no modifications.

## Analysis Scope (when not specified, cover all)
- Injection vulnerabilities (SQL, command, LDAP, XPath)
- Authentication and authorization flaws
- Insecure deserialization
- Sensitive data exposure
- Hardcoded credentials
- Dependency versions (check against known CVEs if package.json is provided)

## Output Format
```json
{
  "findings": [
    {
      "severity": "critical|high|medium|low|informational",
      "cwe": "CWE-89",
      "file": "src/routes/users.ts",
      "line": 42,
      "description": "SQL injection via unsanitized user input",
      "evidence": "snippet of vulnerable code",
      "remediation": "specific fix recommendation"
    }
  ],
  "scanned_files": ["list of files reviewed"],
  "skipped_files": ["files that couldn't be analyzed and why"]
}
```

## Severity Definitions
- Critical: exploitable remotely without authentication, data breach risk
- High: exploitable with authentication or significant effort
- Medium: exploitable under specific conditions
- Low: defense-in-depth, not directly exploitable
- Informational: best practice deviations, not security risks

## Important
- Do NOT attempt to prove exploitability by running code
- Do NOT suggest fixes in the same pass as findings — findings first
- Known false-positive patterns to skip: test fixtures in `__tests__/`,
  example code in `docs/examples/`
````
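The severity field makes the audit output easy to gate on downstream. A sketch of a CI-style merge gate, assuming the JSON schema above; `worst_severity` and the default threshold are my choices, not anything Claude Code defines:

```python
# Ordered least to most severe, matching the schema's severity values.
SEVERITY_ORDER = ["informational", "low", "medium", "high", "critical"]

def worst_severity(audit: dict) -> str:
    """Return the highest severity present in an audit result."""
    severities = [f["severity"] for f in audit.get("findings", [])]
    if not severities:
        return "informational"
    return max(severities, key=SEVERITY_ORDER.index)

def should_block_merge(audit: dict, threshold: str = "high") -> bool:
    """True if any finding meets or exceeds the blocking threshold."""
    return SEVERITY_ORDER.index(worst_severity(audit)) >= SEVERITY_ORDER.index(threshold)
```

Keeping the policy (what blocks a merge) in the parent rather than the sub-agent's AGENTS.md means the auditor stays a pure reporter and the threshold can vary per branch or environment.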
## Wiring Sub-agents to Their AGENTS.md
The key is making the sub-agent's working directory or context loading point to the right AGENTS.md. Two approaches work:
**Approach 1: Working directory per task type**
```python
# Parent agent spawning sub-agents
import json
import subprocess

def run_review_agent(diff_content: str) -> dict:
    result = subprocess.run(
        ['claude', '-p', f'Review this diff:\n\n{diff_content}'],
        cwd='.agents/review',  # Sub-agent reads .agents/review/AGENTS.md + root AGENTS.md
        capture_output=True,
        text=True,
    )
    return json.loads(result.stdout)

def run_security_agent(file_paths: list[str]) -> dict:
    files_arg = '\n'.join(file_paths)
    result = subprocess.run(
        ['claude', '-p', f'Audit these files for security issues:\n{files_arg}'],
        cwd='.agents/security',  # Reads .agents/security/AGENTS.md + root AGENTS.md
        capture_output=True,
        text=True,
    )
    return json.loads(result.stdout)
```
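One practical wrinkle: `json.loads(result.stdout)` assumes the agent returns bare JSON, but models sometimes wrap output in a markdown code fence or add surrounding prose. A defensive parser as a hedge; the helper name and heuristics are mine:

```python
import json
import re

def parse_agent_json(stdout: str) -> dict:
    """Extract a JSON object from agent output, tolerating code fences and prose."""
    # If the model wrapped the JSON in a fenced block, parse the fence contents.
    fenced = re.search(r"```(?:json)?\s*(\{.*\})\s*```", stdout, re.DOTALL)
    if fenced:
        return json.loads(fenced.group(1))
    # Otherwise take everything between the outermost braces in the raw text.
    start, end = stdout.find("{"), stdout.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in agent output")
    return json.loads(stdout[start:end + 1])
```

Swapping this in for the bare `json.loads` calls above is what moved the JSON-validity numbers discussed later from "usually parses" toward "almost always parses" in my experience, though an explicit output-format section in the agent's AGENTS.md does most of the work.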
**Approach 2: Explicit AGENTS.md path in the task prompt**
When you can’t control the working directory (e.g., when spawning via Claude Code’s built-in Task tool), prepend the relevant AGENTS.md content to the task description:
```python
import pathlib

def build_task_prompt(role: str, task: str) -> str:
    agent_md_path = pathlib.Path(f'.agents/{role}/AGENTS.md')
    if agent_md_path.exists():
        agent_context = agent_md_path.read_text()
        return f"<agent-instructions>\n{agent_context}\n</agent-instructions>\n\n{task}"
    return task
```
The `<agent-instructions>` tags help Claude distinguish between project context and task content.
## Measuring the Difference
In a test on a 15k LOC TypeScript API project, switching from a single root AGENTS.md to per-agent AGENTS.md files produced measurable improvements:
| Metric | Single AGENTS.md | Per-Agent AGENTS.md |
|---|---|---|
| Review output JSON validity | 71% | 97% |
| False positives per review | 4.2 avg | 1.1 avg |
| Security findings per 100 LOC | 1.8 | 3.4 |
| Test coverage of generated tests | 76% | 89% |
The security finding improvement is the most significant — the security agent found nearly twice as many real issues when it wasn’t loading code style instructions that added noise to its context. The review JSON validity improvement came from the review agent having an explicit output format specification in its AGENTS.md.
These are single-project numbers, not a controlled study. Your results will vary, but the direction is consistent: focused context produces more focused output.
## Shared Content Without Duplication
The obvious downside of per-agent AGENTS.md files is that you now have multiple files with overlapping content (project name, tech stack, environment variables). Manage this with a header pattern:
```markdown
# .agents/review/AGENTS.md
<!-- shared: ../../AGENTS.md#environment -->
<!-- shared: ../../AGENTS.md#commands -->

# Code Review Agent
[Role-specific content...]
```
The <!-- shared: --> syntax is a convention, not native functionality. Pair it with a build script:
```bash
#!/bin/bash
# scripts/build-agent-instructions.sh
# Extract a "## <name>" section: the heading through the line before the next "## ".
extract_section() {
  awk -v h="## $1" 'started && /^## /{exit} $0 == h{started=1} started' AGENTS.md
}
ROOT_ENV=$(extract_section "Environment")
ROOT_COMMANDS=$(extract_section "Commands")

for role in review test docs security; do
  template=".agents/$role/AGENTS.md"
  output=".agents/$role/AGENTS.combined.md"
  # awk rather than sed for the substitution: the replacement text spans multiple lines.
  awk -v env="$ROOT_ENV" -v cmds="$ROOT_COMMANDS" '
    /<!-- shared: .*#environment -->/ { print env; next }
    /<!-- shared: .*#commands -->/    { print cmds; next }
    { print }
  ' "$template" > "$output"
done
```
Configure your tooling to read the .combined.md files rather than the raw templates.
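Because the `.combined.md` files are generated artifacts, they can silently drift after someone edits a template or the root file. A small staleness check for CI; the paths follow the layout above, and the function itself is my sketch:

```python
import pathlib

def combined_is_stale(role_dir: pathlib.Path, root_agents_md: pathlib.Path) -> bool:
    """True if AGENTS.combined.md is missing or older than either of its inputs."""
    combined = role_dir / "AGENTS.combined.md"
    template = role_dir / "AGENTS.md"
    if not combined.exists():
        return True
    built = combined.stat().st_mtime
    sources = [p for p in (template, root_agents_md) if p.exists()]
    return any(src.stat().st_mtime > built for src in sources)

# Example CI usage:
# stale = [r for r in ("review", "test", "docs", "security")
#          if combined_is_stale(pathlib.Path(".agents") / r, pathlib.Path("AGENTS.md"))]
# if stale: fail the build and tell the author to re-run the build script
```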