Most developers use Claude Code for writing code. Fewer use it systematically for reviewing code, and those who do often get better results than they would from purpose-built review tools, because they can configure the review behavior to match their team’s standards exactly.
Here is a practical workflow for turning Claude Code into a structured code reviewer.
The Core Problem with Ad-Hoc Review Requests
Asking Claude Code “review this code” in a conversation produces inconsistent output. You get different depth, different focus, different structure every time. It might catch the SQL injection today and miss it tomorrow. This is fine for casual review but useless for building a reliable quality gate.
The solution is to make review mode deterministic: define exactly what Claude Code should check, in what order, and in what format.
Setting Up a Review Skill
Create `.claude/skills/review/SKILL.md`:
```markdown
---
name: review
description: Systematic, checklist-driven code review of provided changes
---

# Code Review

When invoked, perform a systematic code review of the changes provided or described.

## Review Checklist

Run through each category in order. Do not skip categories because "they look fine" — check explicitly.

### 1. Security

- Input validation: is all user-supplied data validated before use?
- SQL/NoSQL injection: are queries parameterized or using an ORM properly?
- Authentication: does any new endpoint require auth? Is it enforced?
- Authorization: does the code check that the authenticated user has permission for the specific resource?
- Secrets: are any credentials, API keys, or tokens hardcoded?
- CORS: are any new CORS headers introduced? Are they appropriate?

### 2. Data Integrity

- Null/undefined: are nullable values handled before access?
- Type coercion: are there implicit conversions that could produce unexpected values?
- Boundary conditions: what happens at empty array, zero, negative numbers, max int?
- Concurrent access: can two requests racing produce corrupted state?

### 3. Performance

- N+1 queries: does any loop contain database queries?
- Missing indexes: are new WHERE clauses covered by indexes?
- Unbounded operations: can any loop or query return an arbitrary amount of data?
- Caching: should this result be cached? Is existing cache invalidated correctly?

### 4. Error Handling

- Are all failure paths handled? Can this throw unexpectedly?
- Are errors logged with enough context to debug?
- Are user-facing error messages safe (no stack traces, no internal paths)?

### 5. Code Quality

- Single responsibility: does each function do one thing?
- Naming: do variables and functions describe what they do?
- Duplication: is this logic duplicated elsewhere?
- Dead code: does this introduce unused code paths?

### 6. Tests

- Are new behaviors covered by tests?
- Are edge cases tested (empty, null, boundary values)?
- Do tests test behavior or implementation?

## Output Format

Structure the review as:

**Critical** (must fix before merge):
- [specific issue with file:line reference if possible]

**Warning** (should fix, can merge with owner discretion):
- [specific issue]

**Suggestion** (optional improvement):
- [specific improvement]

**Looks Good**:
- [one line on what was done well]

If a category has no issues, write "No issues found" — do not skip the category silently.
```
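To make the output format concrete, a review under this checklist might produce something like the following (files, line numbers, and findings are hypothetical):

```markdown
**Critical** (must fix before merge):
- `api/users.ts:47` — the `id` query parameter is interpolated directly into the SQL string; parameterize the query.

**Warning** (should fix, can merge with owner discretion):
- `api/users.ts:52` — the catch block logs the error without the request ID, making it hard to correlate with traces.

**Suggestion** (optional improvement):
- `api/users.ts:31` — `getData` could be named `fetchUserProfile` to describe what it returns.

**Looks Good**:
- The new endpoint enforces auth consistently with the rest of the router.
```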
Now invoke it with /review when looking at code changes.
PR Review Workflow
For reviewing a full pull request:
```shell
# Get the diff and pipe it into Claude Code in print mode
git diff main...HEAD | claude -p "/review"
```
Or if you want Claude Code to pull the PR details itself:
```shell
# With the gh CLI
claude -p "/review the diff for PR #123: $(gh pr diff 123)"
```
For large PRs, break it into files:
```shell
# Review changed files one at a time
for file in $(git diff main...HEAD --name-only); do
  echo "=== $file ===" && git diff main...HEAD -- "$file"
done | claude -p "/review"
```
Configuring Review Mode in CLAUDE.md
If you use Roo Code, you can create a dedicated review mode (see the Roo Code rules guide). For Claude Code, you can distinguish review behavior with a section in CLAUDE.md:
```markdown
## Code Review Mode

When asked to review code (not write it), follow these constraints:

- Do not write replacement code unless explicitly asked. Identify and describe the issue.
- If you suggest a fix, show it in a code block prefixed with "Suggested fix:" — not inline in the review.
- Be specific: "line 47" not "somewhere in the authentication logic."
- Do not mark something as Critical unless it can cause a security vulnerability, data loss, or system outage in production.
- Format output as: Critical / Warning / Suggestion / Looks Good sections.
```
This makes a difference. Without it, Claude Code tends to rewrite entire functions in the middle of the review, which obscures the actual findings.
Security-Focused Review Prompts
For security-specific reviews, use targeted prompts rather than a general review:
```text
Perform a security-only review of the following code. Focus exclusively on:
- Injection vulnerabilities (SQL, command, LDAP, XPath)
- Authentication and authorization gaps
- Insecure deserialization
- Sensitive data exposure
- Missing rate limiting or resource exhaustion vectors
- Dependency vulnerabilities (flag any library imports that have known CVEs)
Do not comment on code quality, naming, or performance.
```
This is more effective than a combined review for security-critical code paths — the security findings don’t get buried under style comments.
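To invoke the security review consistently, the prompt can be wrapped in a small script. A sketch, assuming the `claude` CLI is on PATH and using its `-p` (print) mode; the script name and prompt packaging are illustrative:

```python
#!/usr/bin/env python3
"""Sketch: run a security-only review of a branch diff via the claude CLI.

Assumes `claude` is on PATH and that -p (print mode) accepts the prompt
non-interactively. The prompt text mirrors the article's security checklist.
"""
import subprocess

SECURITY_PROMPT = """\
Perform a security-only review of the following code. Focus exclusively on:
- Injection vulnerabilities (SQL, command, LDAP, XPath)
- Authentication and authorization gaps
- Insecure deserialization
- Sensitive data exposure
- Missing rate limiting or resource exhaustion vectors
- Dependency vulnerabilities (flag any library imports that have known CVEs)
Do not comment on code quality, naming, or performance."""


def build_prompt(diff: str) -> str:
    """Put the focus constraints first so they are not buried under the diff."""
    return f"{SECURITY_PROMPT}\n---\n{diff}"


def review(base: str = "main") -> int:
    """Collect the diff against `base` and hand it to claude in print mode."""
    diff = subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return subprocess.run(["claude", "-p", build_prompt(diff)]).returncode


# As a script, this would end with:
#   if __name__ == "__main__":
#       raise SystemExit(review())
```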
Reviewing CLAUDE.md Itself
One underused pattern: use Claude Code to review your own rules files.
```text
Review my CLAUDE.md for the following:
1. Contradictions: rules that conflict with each other
2. Ambiguities: rules that could be interpreted multiple ways
3. Missing coverage: obvious scenarios not addressed
4. Token efficiency: rules that could be stated more concisely without losing meaning
5. Enforceability: rules that Claude Code cannot actually follow (requires knowledge it doesn't have)
```
Run this every quarter. Rules files accumulate cruft, develop internal contradictions, and pick up rules that made sense for a project version that no longer exists.
Automated Review Hooks
You can wire a review into Claude Code’s hook system so it runs automatically after commits. Hook matchers match on tool names (here "Bash"), so the hook command itself filters for git commit by reading the tool-input JSON from stdin:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "jq -e '.tool_input.command | startswith(\"git commit\")' >/dev/null && echo 'Run /review on staged changes before committing' || true"
          }
        ]
      }
    ]
  }
}
```
This is a reminder hook, not a blocking hook. A blocking review hook is possible (a PreToolUse hook that exits with code 2 blocks the tool call before it runs), but it requires careful scoping: you do not want Claude Code blocking every commit while it thinks.
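A minimal sketch of such a blocking hook, assuming it is registered as a PreToolUse hook on the Bash tool. The `.review-passed` marker file is a convention invented here for illustration, not a Claude Code feature:

```python
#!/usr/bin/env python3
"""Hypothetical blocking PreToolUse hook for git commit.

Claude Code passes the tool invocation as JSON on stdin; a hook that exits
with code 2 blocks the tool call, and stderr is fed back to the model.
The .review-passed marker file is an illustrative convention only.
"""
import json
import os
import sys


def should_block(payload: dict, review_done: bool) -> bool:
    """Block only `git commit` commands that run before a review."""
    command = payload.get("tool_input", {}).get("command", "")
    return command.strip().startswith("git commit") and not review_done


def run(stdin=None) -> int:
    """Return the exit code for Claude Code: 2 blocks the tool call."""
    payload = json.load(stdin or sys.stdin)
    if should_block(payload, os.path.exists(".review-passed")):
        print("Blocked: run /review on the staged changes first", file=sys.stderr)
        return 2
    return 0


# Registered as a PreToolUse hook, the script would end with:
#   if __name__ == "__main__":
#       raise SystemExit(run())
```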
For CI, a better pattern is a GitHub Actions step that runs the review in print mode (`claude -p "/review"` against the PR diff) and posts the output as a PR comment. This keeps the review in the PR thread where it is visible and actionable.
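A sketch of such a workflow, assuming the `claude` CLI is installed on the runner and `ANTHROPIC_API_KEY` is set as a repository secret (workflow and job names are illustrative):

```yaml
# Hypothetical workflow: review the PR diff and post the result as a comment.
name: claude-review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Run review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          git diff origin/${{ github.base_ref }}...HEAD | claude -p "/review" > review.md
      - name: Post comment
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: gh pr comment ${{ github.event.pull_request.number }} --body-file review.md
```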
Building a Review Culture with Claude Code
The goal is not to replace human review. Claude Code catches different things than humans do — it is more consistent at mechanical checks (null safety, input validation, missing error handling) and less reliable at systemic judgment (is this the right abstraction for the whole system?).
Use Claude Code review for:
- Security and data integrity checks
- First-pass review before requesting human review
- Checking your own code before opening a PR
- Reviewing code in areas where your team has limited domain knowledge
Reserve human review for:
- Architectural decisions
- Evaluating whether the abstraction fits the system
- Cross-cutting concerns that require knowing the whole codebase
- Anything where the reviewer needs business context the code doesn’t contain
The combination — Claude Code for the mechanical, humans for the judgmental — produces better outcomes than either alone.