How I Actually Code with AI as a Senior Software Engineer

Date: 2025-10-22 | artificial-intelligence | build | create | tech | vibe-engineering

DISCLOSURE: If you buy through affiliate links, I may earn a small commission. (disclosures)

It's been a couple of months since I posted Stop Vibe Coding, and I've been using AI daily both at work and in my side projects.

I wanted to give an update on how I'm actually using AI in my workflows.

What I'm using AI for

I'm using AI to code daily. Not for every task, but for lots of them.

Some areas where I think AI is a great tool for coding:

  • Investigations / research around coding topics
  • Coding - faster prototyping and implementation, especially for boilerplate-heavy code
  • Reviews - Quick first passes to turn a first draft into a second draft

Investigations

AI has largely replaced Google for me for common code-related searches.

  • Give an example of a try/catch in language x
  • Error with text Y
  • How to do an exhaustiveness check in TypeScript
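
That last query is a good example of the kind of answer AI retrieves well. A minimal sketch of the standard TypeScript pattern (the `Shape` type and `area` function here are illustrative, not from any particular codebase):

```typescript
// A discriminated union: each variant is tagged by its `kind` field.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(shape: Shape): number {
  switch (shape.kind) {
    case "circle":
      return Math.PI * shape.radius ** 2;
    case "square":
      return shape.side ** 2;
    default: {
      // If a new variant is added to Shape and not handled above,
      // `shape` no longer narrows to `never` and this assignment
      // becomes a compile-time error, forcing you to update the switch.
      const _exhaustive: never = shape;
      throw new Error(`Unhandled shape: ${JSON.stringify(_exhaustive)}`);
    }
  }
}
```

The trick is the `never` assignment in the default branch: the compiler, not a runtime test, catches the missing case.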

This is something I predicted a year ago in How Software Engineers Actually Use AI to Improve Productivity and it has generally held true. AI is simply faster at information retrieval and can mold its answers to fit your exact query, which beats sifting through several ad-infested, SEO-filled, tangentially related articles to craft an answer yourself.

Of course AI still gets things wrong and is typically better at more surface-level queries. But the speed-up on first retrieval is generally worth the tradeoff, as the harder queries would require additional research anyway. For those I typically fall back to reading the docs, because if the AI is getting it wrong then it's likely many of the initial articles are wrong as well.

Tools I use:

  • Claude primarily
  • Fall back to ChatGPT and Gemini depending on what's available (work licenses different tools than I use personally)

Coding

I use AI regularly to help code features. It's still not great at building large features IMO but it is getting quite good at well-scoped tasks / parts of features where examples exist for it to follow.

In general, my process follows my Vibe Engineering cycle:

  • Don't outsource the plan / thinking to AI, as it's still not great at creating and maintaining direction / vision long-term
  • Do outsource specific coding tasks where it can be a speed enhancer over manually typing everything out yourself
  • Checkpoint your work regularly to avoid AI trashing all the progress you've made

The Cycle:

  • Plan my project myself, use AI for research and reviews
  • In each milestone / sub milestone do a tight loop of:
    • Specific prompt - directly if small, a Markdown file if larger
    • Review its plan and make changes as necessary
    • Auto-accept all edits
    • Review that code and trash / change / accept it based on how it did
    • If it made good progress / is heading in the right direction, make a temp commit
    • Iterate to the next bit of the feature

This is pretty similar to my own coding process:

  • Observe - Make a plan
  • Create - Iterate towards the plan in small batches (I'm a fan of Atomic Commits)
  • Reflect - Review the code at each juncture and make changes as necessary

I currently stick with agentic workflows for coding. I don't like autocomplete features in general because they break my train of thought. Agentic workflows, on the other hand, can be called up when I want them and work in the background, which fits my flow better.

I'm primarily using Claude Code but am open to experimenting with other tools like Codex at some point. I previously used Roo Code and Cline but found that Claude Code was smoother and had a better billing plan for my use cases (I run Max right now).

Reviews

I've started using AI as a first-pass reviewer. It's surprisingly good, in the sense that it regularly catches issues or surfaces ideas that are useful.

I think this is less about how good the AI is and more about simply having a second pair of eyes on the work. I have my own practices for reviewing my code / writing to try and catch more of this stuff, like reviewing my code in the review tool and my writing on the live website, to see the work from an editor's perspective. But at the end of the day you wrote the thing and have preconceptions about it that someone else just won't have.

So my process here is:

  • Review my code myself and iterate
  • Ask the AI to review the code and provide 3 things that could be improved about it and iterate
  • Do a final pass before submitting it to humans to review

Related: Review your AI's Code - A Simple Process for Building More Robust Systems Faster with AI

For this I typically just use Claude Code if it needs context across multiple files or an AI chat if it's a bit smaller / self-contained. Nothing fancy although I've got my eye on things like CodeRabbit and Claude's GitHub integration and may try those in the future.

Next

So that's how I'm using AI these days for coding. Similar processes also apply to my writing, although I do all the actual writing myself, as I find AI writing to be flat and kind of soulless.

I'd estimate AI is writing around 40% of the code I submit, though that code leans heavily on boilerplate (like unit tests) and is heavily edited / iterated on before pushing. But still, a significant chunk of my code is touched by AI.

As for how much faster this makes me, I'd say ~20%. AI helps me research and code faster, but it doesn't necessarily get things over the line that much quicker. Still, that's a pretty significant speedup.

Want more like this?

The best way to support my work is to like / comment / share for the algorithm and subscribe for future updates.