Introducing 1000 Checkboxes - a Real-Time, Globally Synced Display of 1000 Checkboxes

Date: 2024-07-29 | create | tech | projects | 1000-checkboxes | htmx | fsharp | cloudseed |

A few weeks ago I stumbled upon One Million Checkboxes - a website that syncs 1 million checkboxes across the internet.

It was fun and inspired me to build something of my own because:

  • I could
  • It seemed fun
  • I was bored
  • I wanted to try to build it with HTMX
  • I wanted to see if I could do it in a few hours

So I built it. Here we'll dive into what it is and how it's built.

Try it now:

What is 1000 Checkboxes

The primary experience is a big list of 1000 checkboxes.

  • You check a box -> it gets sent to the server
  • Your web page pulls the latest list every few seconds

This syncing only happens every few seconds via polling, so it's really near-real-time. But truly all real-time systems are near-real-time systems because of physics, so it's close enough.
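In HTMX terms, the checkbox half of that flow can be sketched with a single attribute. Here's a minimal Giraffe.ViewEngine version - note the `/toggle` route name is illustrative, not necessarily the real one:

```fsharp
open Giraffe.ViewEngine

// A single checkbox. Checking or unchecking it fires an hx-post
// with its index; the server flips that bit in its in-memory state.
// "/toggle/{index}" is an illustrative route name.
let checkboxView (index: int) (isChecked: bool) =
    input ([
        _type "checkbox"
        attr "hx-post" (sprintf "/toggle/%d" index)
    ] @ (if isChecked then [ _checked ] else []))
```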

How it's built

I had two ideas I wanted to explore with this build:

  • How would I build this with HTMX?
  • Can I build this in a few hours?

In the end I was able to build and deploy this with HTMX in ~6 hours so I think it was largely a success.

I improved my odds by sticking with technologies I am familiar with and loosening requirements to fit my parameters. This is the whole idea around Simple Scalable Systems - to find solutions that balance the values that matter in an attractive compromise.

Here I chose to use The HAM Stack and got a head start with CloudSeed, which gave me a ready-to-build full-stack web app.

Tech stack:

  • Frontend: SSR HTML via F# + Giraffe.ViewEngine (with HTMX sprinkled in)
  • Backend: F# + Giraffe
  • DB: None - all data is in memory
  • Hosting: Serverless containers on Railway

Want to see the full source code? HAMINIONs members get access to 1000 checkboxes' full source code along with dozens of other projects.

You can learn more about how I build with this stack in:

Interesting Challenges

This build had a few interesting challenges, many of which pushed me toward suboptimal solutions due to my choice of HTMX - both its feature set and my skill with it.

I like building side projects with HTMX, but one of the downsides I've noted is that it's often not the right choice for things that need constant client updates, like video games. This is because HTMX sends its frontend from the server, so constantly updating the client means lots of network traffic - often one of the slowest and most expensive hops in a webapp.

Global Replication and Syncing

If you look at what One Million Checkboxes does, it's basically a SPA that loads on the page, opens a socket, and seems to get updates from the server when a checkbox is changed. This is how many real-time apps work, as sockets are both fast and lightweight (you only need to send the data that changed).

HTMX might be able to do this streaming-diff thing (idk, am a noob) but it seems rather complicated: you would need a way for it to both receive streamed HTML and identify the single checkbox that was updated. This is probably possible with sockets, ids, and a streaming change pipeline, but it was outside the scope of what I've done before and would probably have killed my "build in a few hours" goal.

So instead I opted for a simpler polling approach where HTMX periodically polls for the updated checkbox grid, giving near-real-time sync while working within very simple and common request <> response patterns.
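With Giraffe.ViewEngine, the polling wrapper comes down to a few HTMX attributes - roughly this (the `/grid` route and 2-second interval are illustrative):

```fsharp
open Giraffe.ViewEngine

// Wrap the grid in a div that re-fetches itself every few seconds.
// HTMX swaps the fresh server-rendered grid in place of the old one.
let gridView (checkboxes: XmlNode list) =
    div [
        _id "grid"
        attr "hx-get" "/grid"          // illustrative route name
        attr "hx-trigger" "every 2s"   // poll interval
        attr "hx-swap" "outerHTML"     // replace the whole grid
    ] checkboxes
```

Because the response replaces the entire grid, the server stays the single source of truth and the client needs no diffing logic at all.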

Of course this polling approach does mean sending the whole checkbox grid on each poll which leads to potential bandwidth problems at scale.

Dealing with HTMX Network Bandwidth

One of the benefits of the SPA socket approach is that it can send just the diffed data, which is probably pretty small (if I were doing this I'd send something like an array of indexes to check / uncheck, which is probably a few bytes).

HTMX sends the whole frontend though, so at minimum it's sending the markup for an individual checkbox (if we were streaming diffs) - but in my case it's sending all 1000 checkboxes each time! On investigation this came out to about 45kb uncompressed, which is reasonable for a periodic update but not for one that happens every few seconds for every single user.

This is a quick way to explode bandwidth usage and incur runaway cloud costs which I would like to avoid.

Related: How this Developer’s Side Project racked up a $100k Cloud Bill on Netlify - and 5 ways to avoid the same fate.

To deal with this I implemented ASP.NET's standard response compression, which was surprisingly easy and effective. This reduced my payload to about 5kb even with all the Tailwind and HTMX tags - a pretty nice win (~90% reduction) for the work.
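For reference, wiring up ASP.NET's standard response compression is roughly two lines - a sketch of the usual setup in F#, not necessarily my exact config:

```fsharp
open Microsoft.AspNetCore.Builder
open Microsoft.Extensions.DependencyInjection

let configureServices (services: IServiceCollection) =
    // Registers the compression providers (gzip/brotli by default).
    services.AddResponseCompression(fun opts ->
        opts.EnableForHttps <- true) |> ignore

let configureApp (app: IApplicationBuilder) =
    // Add early in the pipeline so downstream responses get compressed.
    app.UseResponseCompression() |> ignore
```

Repetitive markup like 1000 near-identical checkboxes compresses extremely well, which is why the ~90% reduction is cheap to get.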

It's still larger than it could be, but small enough that I don't need to worry too much about melting bandwidth. (Please be nice to my servers, ty.)

Handling Server Load

Now most of my projects get zero server load because no one uses them. But I always like to think about this because it's fun, and I want to avoid ridiculous runaway cloud costs so this hobby remains fun instead of leading to financial ruin and a lifetime of regret.

To handle this I decided to set up the simplest, most robust thing I could that would get close to the UX I wanted.

Basically this thing runs on a single server that has a max capacity set and goes to sleep if no one is using it. This means that if it gets too much load it will just die instead of costing me money.

This single-server approach brings with it some nice benefits that are both simple and scalable:

  • In-memory DB: I use a single array of 1000 items to store the checkbox state, which is very fast and memory efficient.
  • Caching: I use a single cache and don't need to worry about replication. It caches the most expensive thing (the output grid HTML) so it saves as much compute as it can.
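A sketch of what that single-server state can look like in F# - the names are mine, not from the actual source, and locking is omitted for brevity:

```fsharp
// All state lives in one mutable array - no database needed.
let checkboxes : bool array = Array.zeroCreate 1000

// Cache the rendered grid HTML (the most expensive thing to produce)
// and rebuild it only when a box actually changes.
let mutable cachedGridHtml : string option = None

let toggle (index: int) =
    checkboxes.[index] <- not checkboxes.[index]
    cachedGridHtml <- None   // invalidate the cache on change

let renderGrid (render: bool array -> string) =
    match cachedGridHtml with
    | Some html -> html       // cache hit: no re-render
    | None ->
        let html = render checkboxes
        cachedGridHtml <- Some html
        html
```

Since every poll hits `renderGrid` but toggles are comparatively rare, the cache means most requests serve a pre-rendered string rather than re-rendering 1000 nodes.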

I don't expect this thing to really need these optimizations but it was fun to work through and it has them if it needs them.


I enjoyed building this for fun. Sometimes I get caught up in the indie hacker / capitalist / NY mindset of everything needing a purpose - and that purpose being money and impact - which often robs building of its joy. So sometimes it's nice to take a step back and build stuff just because you want to - a lesson I am continually relearning.

If you liked this project, I have a few others that are similar - totally useless but doing an oddly specific thing:

If you liked this post you might also like:

Want more like this?

The best / easiest way to support my work is by subscribing for future updates and sharing with your network.