Date: 2025.09.03 | artificial-intelligence | build | code-review | create | software-engineering | tech | vibe-coding | vibe-engineering |
I've been using AI to help me code for the past several months and have noticed a pattern that separates those who use it effectively from those who use it haphazardly.
The pattern seems to be that those who review their AI's code, understand it, and change it accordingly build more robust systems faster than those who don't.
AI-generated code and references are incredibly good. But they're not perfect.
There are many possible reasons they're not perfect, but the upshot is the same: every time you do a generation, it might be a little bit off.
This means that even good generations are often off by some % from the ideal - maybe it's just 1% or maybe it's 10%.
The problem is that these misalignments compound. It might not show after 2 or 3 generations, but give it 10 or 20 and those small misalignments in what we hoped to achieve turn into large misalignments in how the system has grown.
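To make the compounding concrete, here's a minimal sketch. The 1% and 10% drift rates come from the numbers above; treating alignment as simple multiplicative decay is my assumption for illustration, not a claim about how any model actually behaves.

```python
# Minimal sketch: how small per-generation misalignments compound.
# Assumption: each generation preserves (1 - drift) of the intended
# alignment, so alignment decays multiplicatively. Illustration only.

def alignment_after(generations: int, drift: float) -> float:
    """Fraction of the original intent still intact after `generations` steps."""
    return (1 - drift) ** generations

for drift in (0.01, 0.10):  # the 1% and 10% cases from above
    for n in (3, 10, 20):
        print(f"drift={drift:.0%} over {n:>2} generations -> "
              f"{alignment_after(n, drift):.0%} aligned")
```

Even at 1% per generation you're roughly 18% off after 20 iterations; at 10%, only about 12% of the original intent survives.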
This is often why vibe coding projects turn into big balls of mud after a few days or weeks. They start out working, but as you layer small misalignments on top of each other, even the AI starts to get confused about what's going on.
These misalignments happen in software engineering teams too; the difference is the speed at which the system's trajectory goes off course, mostly because effective software engineering teams have processes in place to regularly reconcile these misalignments.
The way we resolve this in software engineering teams at scale is with RFCs for large pieces of work and peer code reviews for each change, reconciling those small percentage differences at implementation time.
Some people are against code reviews because they "slow people down", but I think they're one of the most effective practices we have for improving system quality and knowledge transfer. The "slowing down" framing is also short-sighted: a small short-term cost avoids a huge long-term cost if you need to fix a misalignment later.
So hopefully we're aligned that AI code is good but not great, misalignments compound, and code reviews are an effective tool for reconciling them.
Here I want to share a few practices I've found work well for turning this AI code into a production-ready package.
So yeah, review your AI's code. Your system will be better for it and you'll likely improve as an engineer from it. Plus I find the more active approach to using AI is more fulfilling - we're building together, I'm not just pressing the slot machine button over and over.
The best way to support my work is to like / comment / share for the algorithm and subscribe for future updates.