Anthropic's Own Engineers on Using Claude Code: Sub-Agents, Hooks, and Parallel Workstreams
Anthropic published a candid account of how its own product and infrastructure engineering teams actually use Claude Code in day-to-day work — not a polished tutorial, but a description of patterns that emerged organically as teams grew comfortable delegating larger and larger tasks. The key insight is that Claude Code at scale is not a faster way to write individual functions: it is a tool for decomposing complex projects into parallel workstreams that run simultaneously while the engineer manages priorities and reviews outputs.
Pattern 1: Sub-agent parallelism
Anthropic's infrastructure team regularly opens three to five Claude Code sessions on the same repository and assigns each a distinct workstream — one refactoring tests, one updating documentation, one backfilling type annotations, one implementing a new API endpoint. The sessions share the same filesystem, so completed work by one agent is immediately visible to the engineer reviewing another. This replaces the traditional sequential code-review-commit cycle with a parallel pipeline.
# In three separate terminal tabs for the same repo:
# Tab 1:
claude "Refactor all test files in /tests/unit to use pytest fixtures. Do not change logic."
# Tab 2:
claude "Update CHANGELOG.md with entries for all commits since the last tag."
# Tab 3:
claude "Add type annotations to all public functions in /src/api/ — verify with mypy."
Parallel Claude Code sessions are most valuable when the tasks are independently scoped (different directories or different files), have clear completion criteria that Claude can self-verify, and human review time is the actual bottleneck. If the tasks touch the same files, run them sequentially to avoid merge conflicts.
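That independence rule can be made mechanical. The sketch below is a hypothetical pre-flight check (not part of Claude Code itself, and the function name is invented): it batches task prompts so that no two tasks in the same batch declare overlapping scopes. Scopes are compared as opaque labels, so nested paths such as tests/unit versus tests/unit/foo.py would not be caught by this simplification.

```python
def parallel_safe_groups(tasks):
    """Greedily batch tasks so no two tasks in a batch share a scope.

    tasks: list of (prompt, set_of_scope_paths) pairs.
    Returns a list of batches; each batch can run as simultaneous
    Claude Code sessions, and batches run sequentially.
    """
    groups = []
    for prompt, paths in tasks:
        for group in groups:
            # Safe to join this batch only if no scope overlaps.
            if all(paths.isdisjoint(existing) for _, existing in group):
                group.append((prompt, paths))
                break
        else:
            # Conflicts with every existing batch: start a new one.
            groups.append([(prompt, paths)])
    return groups
```

Running the three example prompts above plus a second test-directory task through this check would yield two batches: the first three in parallel, the conflicting fourth deferred to a second round.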
Pattern 2: Hook-driven quality gates
Several Anthropic teams use Claude Code's hooks configuration to run automated checks during every session. A PostToolUse hook triggers the test suite after each file edit; if tests fail, the output is captured and fed back as context for a follow-up session. This creates a tight feedback loop that catches regressions before they reach review — Claude Code effectively acts as its own CI system during active development sessions.
// .claude/settings.json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npm test -- --passWithNoTests 2>&1 | tail -30"
          }
        ]
      }
    ]
  }
}
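The capture-and-feed-back step can also be driven from a wrapper script outside the hook. The sketch below is an illustration under assumptions, not Anthropic's actual tooling: the helper names and the prompt wording are invented, and it relies on `claude -p` (print mode), Claude Code's one-shot non-interactive invocation. The testable part is the pure prompt-building function; the subprocess call is kept separate.

```python
import subprocess

def build_followup_prompt(test_output: str, max_lines: int = 30) -> str:
    """Turn the tail of a failing test run into context for a follow-up session."""
    tail = "\n".join(test_output.splitlines()[-max_lines:])
    return (
        "The test suite failed after your last edits. "
        "Fix the regressions without changing test expectations.\n\n"
        f"Test output (last {max_lines} lines):\n{tail}"
    )

def run_quality_gate(test_cmd: list[str]) -> None:
    """Run the gate; on failure, hand the captured output to a new session."""
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    if result.returncode != 0:
        prompt = build_followup_prompt(result.stdout + result.stderr)
        # One-shot, non-interactive Claude Code session with the failure context.
        subprocess.run(["claude", "-p", prompt])
```

Truncating to the tail keeps the follow-up prompt focused on the failure summary rather than the full runner preamble, mirroring the `tail -30` in the hook configuration above.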
Pattern 3: CLAUDE.md as shared team context
Every Anthropic engineering team maintains a CLAUDE.md file at the project root that documents team-specific conventions, non-obvious architectural decisions, common pitfalls, and the preferred way to run the project locally. New team members orient faster because Claude Code reads this file at session start, but the bigger benefit is that Claude Code's suggestions align with established team conventions from the first message — reducing the correction overhead that otherwise accumulates when Claude proposes changes that violate implicit norms. Typical entries include:
- The exact commands to run tests, lint, and build — so Claude can execute them without asking
- Which directories are generated/output-only and should never be edited
- The project's error-handling convention (exceptions vs. result types, for example)
- Any third-party APIs that are mocked in tests and require real credentials in production
- The preferred PR size and commit message format
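A minimal sketch of what such a file might look like. The specific commands, paths, and conventions below are illustrative assumptions, not Anthropic's actual ones:

```markdown
# CLAUDE.md

## Commands
- Test: `npm test`
- Lint: `npm run lint`
- Build: `npm run build`

## Do not edit
- `dist/` and `src/generated/` are build outputs; never modify them directly.

## Conventions
- Error handling: return result types; do not throw across module boundaries.
- Commits: imperative mood, subject line under 72 characters.
- PRs: one workstream per PR, roughly 400 changed lines or fewer.

## Testing notes
- The payments API is mocked in tests; real credentials exist only in production.
```

Short, factual entries like these work best: Claude reads the file at session start, so every line spent on narrative is context not spent on conventions it can act on.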