Essay - Published: 2026.02.20 | 9 min read (2,261 words)
artificial-intelligence | build | claude | create | vibe-engineering
DISCLOSURE: If you buy through affiliate links, I may earn a small commission. (disclosures)
I've been diving into vibe engineering over the past several months to streamline my software engineering cycles. One of the biggest problems I see people run into is ensuring AI makes quality code changes and reviewing code at the speed AI can produce it.
My answer is basically - let AI do most of the grunt work.
A couple of months ago I shared my simple script for reviewing code with AI, but I've since gone through several more cycles of refinement to get a review that covers much more of what I'm looking for. So here I'll share how I'm leveraging Claude Code subagents to review my code.
The command runs a comprehensive code review by spinning up 9 parallel subagents, each focused on a specific aspect of code quality. Each agent analyzes the changes independently and returns findings ranked by severity. The main agent then combines all results into a prioritized summary with a final verdict: Ready to Merge, Needs Attention, or Needs Work.
I personally use this skill on a lot of the changes I write myself, and I also tell my AI agents to run it and iterate on the feedback before saying they're "done" with a task - both when I'm iterating one task at a time with them and when they're running autonomously on a long-running task. The feedback is generally pretty good: Critical and High findings are almost always useful unless they're simply out of scope, and the Medium / Low ones are usually good ideas. I'd peg the suggestions at ~75% useful, which is much better than the <50% I saw previously.
Each subagent handles a category of problems I frequently see in code I review from AI. The set also covers cases where I could use suggestions, like checking dependencies or spotting whether there's a simpler way to do something or existing code that can be deduplicated. I tweak the exact content of the agent prompts every now and then, but I really like having several of them running in parallel: they finish quickly and each gets a decent amount of context for its particular job.
The subagents:
1. Test Runner
2. Linter & Static Analysis
3. Code Reviewer
4. Security Reviewer
5. Quality & Style Reviewer
6. Test Quality Reviewer
7. Performance Reviewer
8. Dependency, Breaking Changes & Deployment Safety Reviewer
9. Simplification & Maintainability Reviewer
The full Code Review command is below. You can place it in .claude/commands to run it via a slash command: /code-review.
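If you haven't used custom slash commands before, setup is just a file drop; a minimal sketch, assuming you name the file code-review.md (the slash command name comes from the file name):

```bash
# Project-level command: lives in .claude/commands inside the repo.
# (A user-level command would go in ~/.claude/commands instead.)
mkdir -p .claude/commands
touch .claude/commands/code-review.md   # paste the command contents below into this file
```

Then, inside a Claude Code session in that repo, type /code-review to run it.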
# Code Review
Run a comprehensive code review using parallel agents, then synthesize findings.
## Scope
Determine what code to review using this priority:
1. **User specifies scope** - If the user provides a branch name, commit SHA, PR number/URL, or file paths, review that
2. **On a feature branch** - Review all changes on current branch vs main/master (`git diff main...HEAD`)
3. **On main/master with staged changes** - Review staged files (`git diff --staged`)
4. **On main/master, nothing staged** - Review the latest commit (`git show HEAD`)
Examples:
- "review my branch" → branch diff
- "review pr 123" or "review https://github.com/org/repo/pull/123" → fetch PR via gh
- "review commit abc123" → that specific commit
- "review src/auth.ts" → just that file's recent changes
- (no scope given, on feature branch) → automatic branch diff
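In plain git terms, the automatic fallback above boils down to a handful of commands; a rough sketch, assuming the default branch is named main:

```bash
# Rough shape of the automatic scope fallback (assumes the default branch is main)
branch=$(git rev-parse --abbrev-ref HEAD)

if [ "$branch" != "main" ]; then
  git diff main...HEAD        # feature branch: all changes on the branch vs main
elif ! git diff --staged --quiet; then
  git diff --staged           # on main with staged changes
else
  git show HEAD               # on main, nothing staged: the latest commit
fi
```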
## Instructions
Launch all 9 agents in parallel using a single message with multiple Task tool calls:
### Agent 1: Test Runner
Run relevant tests for the changed files. Report:
### Agent 2: Linter & Static Analysis
Run linters AND collect IDE diagnostics (using getDiagnostics) for the changed files.
Report:
### Agent 3: Code Reviewer
First, check if CLAUDE.md or a similar project style guide exists. If so, read it to understand project conventions.
Review the code changes and provide up to 5 concrete improvements, ranked by:
Only include genuinely important issues. If the code is clean, report fewer items or none.
Format each suggestion as:
Focus on non-obvious improvements - skip formatting, naming nitpicks, and things linters catch.
### Agent 4: Security Reviewer
Review the code changes for security concerns:
Also check error handling:
Report issues with severity (Critical/High/Medium/Low) and specific file:line references. If no issues found, report "No security concerns identified."
### Agent 5: Quality & Style Reviewer
First, check if CLAUDE.md or a similar project style guide exists. If so, read it to understand project conventions.
Review the code changes for quality and style issues:
Quality:
Style Guidelines:
4. Naming conventions - does naming match project patterns and style guide?
5. File/folder organization - are files in the right place?
6. Architectural patterns - does code follow established patterns in the codebase?
7. Consistency - does new code match the style of surrounding code?
8. Project conventions - does code follow rules in the project style guide (if present)?
For each issue found, provide:
If code is clean, report "No quality or style issues identified."
### Agent 6: Test Quality Reviewer
Review test coverage and quality for the changed code:
Coverage (with ROI lens):
Quality:
Test Code Quality:
Flakiness Risk:
Anti-patterns:
Report issues with specific suggestions. If tests are well-balanced, report "Test coverage is appropriate and behavior-focused."
### Agent 7: Performance Reviewer
Review the code changes for performance concerns:
For each concern, explain the impact and suggest a fix. If no concerns, report "No performance concerns identified."
### Agent 8: Dependency, Breaking Changes & Deployment Safety Reviewer
Review changes for dependency, compatibility, and deployment concerns:
Dependencies (if package files changed):
Breaking Changes (if public APIs or exports changed):
Deployment Safety:
Observability:
Report issues with specific file references. If no concerns, report "No dependency, compatibility, or deployment concerns."
### Agent 9: Simplification & Maintainability Reviewer
Review the code changes with fresh eyes, asking "could this be simpler?"
Simplification:
Maintainability ROI:
Look for:
Change Atomicity & Reviewability:
For each finding, explain:
If the code is appropriately simple and atomic, report "Code complexity is proportionate to the problem and changes are well-scoped."
## After Agents Complete: Synthesize Results
Collect all agent results and produce a prioritized summary:
1. **Categorize findings** - separate issues (should fix) from suggestions (nice to have)
2. **Rank by severity** - Critical > High > Medium > Low across all agents
3. **Collapse clean results** - agents with no findings get one-line summary
4. **Give verdict** - Ready to merge / Needs attention / Needs work
### Output Format
Tests (N passed), Linter (no issues), [other clean agents...]
[One sentence summary of what to do next]
### Verdict Guidelines
- **Ready to Merge** - All tests pass, no critical/high issues, suggestions are optional
- **Needs Attention** - Has medium issues or important suggestions worth addressing
- **Needs Work** - Has critical/high issues or failing tests that must be fixed
So that's how I'm currently doing code review with Claude Code.
The real win is telling AI agents to run this and iterate before marking tasks "done" - it catches issues before I even look at the code. Adding the extra subagents has further improved how comprehensive and useful the feedback is.
Note that I'm constantly iterating on my workflows. I kind of think of it like coding in markdown. If I see a problem or potential improvement, I can dive in and update the instructions.
HAMINIONs members get access to the HAMY LABS Example Repo which now contains my full Claude Code configuration that I use for all my projects. It includes my full changes workflow which I've used to vibe engineer games, libraries, and web apps as well as my configs for permissions, status lines, and additional commands I find useful.
The best way to support my work is to like / comment / share this post on your favorite socials.