I Vibe-Coded a C# Library with Claude Code - Here's 6 Things I Learned

Date: 2025-07-11 | artificial-intelligence | build | cinderblockhtml | claude | claude-code | create | csharp | vibe-coding |

DISCLOSURE: If you buy through affiliate links, I may earn a small commission. (disclosures)

I spent a few hours vibe-coding a new C# library with Claude Code last week.

Here are a few things I learned along the way.

What did I vibe-code?

I built CinderBlockHtml - a C# DSL for building HTML with composable building blocks. You can read the announcement post for more on why I built it.
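To give a flavor of the idea, here's a hypothetical sketch of a composable HTML DSL in C#. These names are invented for illustration and are not CinderBlockHtml's actual API; see the announcement post for real examples.

```csharp
// Hypothetical sketch -- invented names, NOT CinderBlockHtml's real API.
// The idea: each building block is a small function that returns markup,
// and blocks compose by nesting.
static class HtmlSketch
{
    // A generic element: wraps children in an opening and closing tag.
    public static string Tag(string name, params string[] children) =>
        $"<{name}>{string.Concat(children)}</{name}>";

    // Named convenience blocks built on top of Tag.
    public static string Div(params string[] children) => Tag("div", children);
    public static string P(string text) => Tag("p", text);
}
```

Composed as `HtmlSketch.Div(HtmlSketch.P("hello"))`, this renders `<div><p>hello</p></div>` - small functions stacking like blocks.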

I built it in a few hours over the course of 3 days. I just had a baby so my free time has been fleeting and sporadic (newborns wake up every 2-3 hours to eat). Without interruptions this could've easily been done in a single sitting.

I used Claude Code on the Pro plan and only ran into rate limits once, though with more uninterrupted time I probably would've hit them sooner.

The CLI didn't bother me as much as I thought

I'm used to using Cline in VS Code which offers a lot of GUI integrations.

The CLI seemed weird to me - it's 2025 why not have a GUI?

But I found the CLI to be pretty straightforward and there are a few integrations with VS Code for showing diffs in the editor window. Plus I can see how the CLI is easier to integrate as it works anywhere there's a terminal (so basically everywhere).

I did find myself using Claude in the browser for most general Q&A. It felt more natural as it didn't pollute my CC session with random tangents. But targeted Q&A related to my project still felt alright in the terminal.

Note: A common myth is that CC, Web, and Desktop have separate budgets but the official answer is that they share the same limits / usage.

Claude Code is the best agent I've used so far

Claude Code is by far the best coding agent I've used so far.

Caveat: I have only used Cline and Copilot for agentic coding tasks, and I know there are plenty of other solutions out there, so I may be missing some. But in general CC is the best at taking my typed instructions, creating a reasonable plan, and actually one-shotting that plan. Other agents can do it but often need a LOT more direction and iterations.

One thing that stood out is that CC is really good at finding the files / code I'm talking about just from descriptions. Other ones can do this but by default seem to prematurely limit their search, presumably to limit token spend. CC doesn't seem to have this restriction and I care less about token spend due to the subscription model which I think is why it outperforms.

That said, linking files when you know them is probably best practice as it saves time and tokens which can be important over a long session and larger codebases.

Claude Code feels fast

CC feels like a power tool - it can code far faster than I can myself.

When it gets it right this is a huge improvement for me. For example I asked it to add XML documentation on all public members to remove an annoying build warning and to add tests for my benchmarks to ensure we were building similar HTML with each framework. These are both things I could do myself, but they're relatively low impact for a lot of typing.
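For context, the build warning comes from C#'s documentation-file feature: with `<GenerateDocumentationFile>true</GenerateDocumentationFile>` in the .csproj, every public member missing XML doc comments raises warning CS1591. A minimal sketch of the fix (names are invented for illustration, not CinderBlockHtml's API):

```csharp
// Invented example class, not CinderBlockHtml's API. The /// comments are
// what CC added across the library; they silence warning CS1591 and feed
// IntelliSense tooltips.
namespace Sketch;

/// <summary>Helpers for escaping text before embedding it in HTML.</summary>
public static class HtmlText
{
    /// <summary>Escapes the characters HTML treats specially.</summary>
    /// <param name="raw">The untrusted text to escape.</param>
    /// <returns>The escaped text, safe to embed in markup.</returns>
    public static string Escape(string raw) =>
        raw.Replace("&", "&amp;").Replace("<", "&lt;").Replace(">", "&gt;");
}
```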

By asking CC to do this, I saved myself ~30 minutes each and freed myself up to think more about the next tasks I wanted to accomplish.

I want to caveat that while CC can code faster than I can in terms of raw typing speed, there's a balance here.

  • CC is faster when it gets it right. But if I have to redirect it ~3 times then it often becomes slower. Checkpointing and small, atomic tasks help here.
  • While CC is faster than me, it may not be faster than other AI agents. I think its ability to one-shot tasks successfully gives it high effective throughput, but I could see other agents being technically faster in raw output.
  • CC's ability to do things in the background helps it feel faster. It may not be the fastest in raw output but the ability for it to go munch on a problem while I do other things means less active hand-holding for me.

You still need to review Claude Code's code

Claude Code is a fast, decent coder but that doesn't mean it always gets what you want.

In fact I'd argue that there's always something I want changed in every change it makes. This is normal - basically any PR by a human software engineer will have at least one thing it could improve. Some of this will be objective (this thing is broken) and a lot will be subjective (don't mutate the variable).

But I think the act of reviewing is very important for a few reasons - all similar to why we do code reviews in typical Software Engineering teams:

  • Making sure it's doing what you wanted - CC does an admirable job of following instructions but it still gets confused. For example: I asked it to add Razor to my project and it thought RazorLight was the same thing. Honest mistake but still one I had to course correct it on. In a code review this is basically the same as checking the PR against the assigned ticket to make sure the code change is accomplishing what the business asked for.
  • Making sure the code is reasonable - In a code review this is the same as giving suggestions for improvements or pointing out bugs when you see them. CC sometimes will add superfluous checks or write an operation 5 times instead of using a loop or pull in Moq for testing. These can be fine and workable in some situations but may not be consistent with your codebase so we want to clean up the code while we're here.
  • Making sure you understand the code - One of the biggest benefits of code review is making sure someone else understands what's going on. This helps prevent knowledge silos and single points of failure - if that engineer is gone, is there anyone who could step in to fix / build in this area? This is useful in production software but also on side projects so that when the AI gets stuck / makes a mistake (and it will!) you know enough about the project to get it unstuck.

A cycle I've found useful is to get Claude to code an atomic task fully (accept all edits) and then do the review in my editor by looking at the git change set.

  • Easy to rollback - git commit provides natural checkpointing.
  • Can see all relevant changes at once - Reviewing the full change often makes more sense than reviewing bits of it in isolation. Plus you can use whatever editor / display you like for code review.
  • Uses time efficiently - Claude can work async while you come in and check when it's done, meaning a bit less hand-holding along the way.
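Concretely, the checkpoint-and-review loop is just standard git. A sketch in a throwaway repo (the file name is a placeholder):

```shell
# Checkpoint-and-review loop sketched in a temporary repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "baseline"

# 1. Let the agent make its edits (simulated here by writing a file).
echo "generated code" > feature.cs

# 2. Review the full change set at once before committing.
git add -A
git diff --cached --stat

# 3. Happy? Checkpoint it. Not happy? Roll back with:
#    git reset --hard HEAD   (discards the uncommitted changes)
git -c user.email=a@b -c user.name=demo commit -q -m "feat: atomic task done by the agent"
git log --oneline
```

Each atomic task becomes one commit, so `git reset` gives you a free undo button whenever a change goes sideways.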

If I were using this on a team, I would do the local code review first before pushing up for remote review by teammates. Local review should not replace review from others.

I didn't have an AGENTS.md or CLAUDE.md file, but I bet one would cut down on the number of subjective code review changes I ask for, since I could encode best practices for it to follow.
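Claude Code reads a CLAUDE.md from the project root at the start of a session. A sketch of the kind of rules that might have headed off my subjective review comments - the contents below are illustrative, not from the actual repo:

```markdown
# CLAUDE.md (illustrative sketch, not this repo's real conventions)

## Build & test
- Build with `dotnet build`; run `dotnet test` before declaring a task done.

## Code style
- Prefer a loop or LINQ over repeating an operation inline.
- Don't add new test dependencies (e.g. mocking libraries) without asking.
- Add XML documentation to all public members.

## Workflow
- Keep changes small and atomic; one logical change per task.
```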

Claude gets overwhelmed so you should break down the project into smaller steps

While Claude is good at planning and execution, it breaks down when tasks get larger.

In many ways it feels like a junior engineer. It can code but it doesn't have great project management skills. So, similar to a junior, it works best if the tasks it's given are pre-scoped and described. Give it too big of a task and it's going to spin its wheels trying to make progress - often getting blocked on broken builds while boiling the ocean.

(Claude will tend to make a huge todo list, but the ordering is a little off and there are no clear milestones to stop at. That often means 2-3 half-completed features, one of which broke something, leaving Claude stuck and unable to continue work on any of them.)

A better approach I've found is to make a plan and break it down into chunks - just like you'd break down a real world software project or epic into tasks at work. Then I start feeding Claude the context and tasks one by one - similar to an engineer working on a single task at a time.

I find this allows it to one shot its tasks better (where most of its productivity comes from) and gives natural points for checkpointing and reviewing via atomic commits. Moreover it helps me stay on track with what we're trying to accomplish so I can better prompt, review, and project plan next steps.

As an aside: I think the atomic commits and PR stacking workflows work REALLY well for AI (and also for humans). They allow you to make progress on discrete items and have logical checkpoints to review and roll back to in case it goes off the rails. It's a little more time per change but overall improves throughput by reducing the amount of rework.

I've seen some people say that getting Claude to track its own work and progress in a markdown file has been useful for keeping it on track. I haven't tried this yet but I bet it does keep it on track for bigger asks. In my next experiment I'll try to do this in a way where it keeps context but also only does one logical change at a time for checkpointing.

Claude still needs a driver

Claude is a good coder. It feels like magic to give it instructions and see it churn out code.

But code, like words, doesn't really matter on its own. It's not the output but the outcome that matters.

Claude is great at output but the outcomes are still highly variable.

  • It gets bogged down in large asks
  • It gets confused and does the wrong thing
  • It codes in some non-optimal fashions

In this regard it reminds me of a junior engineer - super useful but requiring significant direction to make forward progress.

I think the power tool metaphor also fits - a large productivity improvement that removes manual work but still requires strategy and preparation to use effectively. Like going from a hand saw to a table saw: it cuts much faster, but you still need to know how to use it, pick the projects you want to build, and do everything else required to finish them.

Next

I had a good time building this C# library with Claude and was so impressed that I've upgraded myself from Pro to Max. I rarely ran into limits with Sonnet 4 but wanted to try out Opus and Max lets me do that.

I'll be doing more coding with Claude in the future in addition to how I currently use it - as a better Google for technical references. Overall I'm way more positive about AI coding agents than I was previously - it was a useful pair programmer and will help me build more things in my limited free time while raising my kid.

If you use C# and are looking to build composable HTML server-side, check out CinderBlockHtml. It's vibe coded but I promise I reviewed most of the code!

Want more like this?

The best way to support my work is to like / comment / share for the algorithm and subscribe for future updates.