Essay - Published: 2025.12.03 | claude-code | create | software-engineering
I previously shared how I review code with AI and have since iterated on my approach.
Here's how I've been doing AI code reviews in my terminal using Claude Code.
Most of the time I just need a simple code review: quick feedback on my direction and a sanity check that nothing has gone off the rails.
My current prompt for this is:
Review the code in this PR - Provide feedback and list the top 5 things we could do to improve it ranked by criticality and level of effort.
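You can also run this as a one-shot command instead of opening an interactive session. A minimal sketch using Claude Code's non-interactive print flag (assumes the claude CLI is installed and you're running it from the repo on the PR branch):

    # One-shot review from the repo root; prints the review and exits.
    claude -p "Review the code in this PR - Provide feedback and list the top 5 things we could do to improve it ranked by criticality and level of effort."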
Some changes from my previous prompt:
In general this prompt typically yields about two ideas worth implementing and takes about 30 seconds to run, which is pretty good ROI.
This next code review is basically the same as above, but we ask Claude Code to run tools and parse their outputs to gather data for the review. It uses subagents to protect the main agent's context and to parallelize the tasks.
This gives a more comprehensive picture of where the code stands and what needs to change.
Run the following steps in parallel with a generic subagent for each one.
1. Run tests on the files in my current commit. Summarize the broken tests in an easy to read format so I can go and debug.
2. Run lint on the files in my current commit. Summarize the failed lints in an easy to read format so I can go and debug.
3. Review the code in the commit - List the top 5 things we could do to improve it ranked by criticality and level of effort.
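If you run this often, it's worth saving as a custom slash command rather than pasting it each time. A minimal sketch, assuming Claude Code's project-level command directory (Markdown files under .claude/commands/ become slash commands; the review-commit name is just my own choice):

    # Save the prompt once; the filename becomes the command name.
    mkdir -p .claude/commands
    cat > .claude/commands/review-commit.md <<'EOF'
    Run the following steps in parallel with a generic subagent for each one.
    1. Run tests on the files in my current commit. Summarize the broken tests in an easy to read format so I can go and debug.
    2. Run lint on the files in my current commit. Summarize the failed lints in an easy to read format so I can go and debug.
    3. Review the code in the commit - List the top 5 things we could do to improve it ranked by criticality and level of effort.
    EOF

After that, typing /review-commit inside a Claude Code session kicks off the whole pipeline.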
That's how I've been getting fast, "local" code reviews with Claude Code (local to my terminal, anyway; it still talks to central servers).
I've also been playing around with cloud-first code reviews in tools like Claude Code Cloud and Codex, but we'll leave that for another time.
The best way to support my work is to like / comment / share for the algorithm and subscribe for future updates.