My site tech stack
Date: 2018-08-11
By design, my site is simple. In the past I'd used other services, some paid and some free, some self-managed and some fully hosted (Blogger, Ghost, WordPress), but I decided I wanted something I could build and maintain end-to-end, both to gain the experience of doing so and for the peace of mind that if something ever broke, I'd know how to fix it.
I'm currently hosting my system on server space I rent from DigitalOcean, but there's nothing special about this machine. On the contrary, I could just as easily port this entire setup to another provider (AWS, Heroku, Google Cloud) if I so chose. I went with DO because it's easy to get up and running with, has a bunch of flexibility (basically zero abstractions from the machine), and is relatively cheap (at the time of writing, I pay $5/month to host all my sites).
The components
1. Human - That's you, visiting my site via your browser. There's some DNS resolution that happens before you actually hit my server, but I'll leave that as an exercise for you to investigate if you're curious.
2. ReverseProxy (Nginx) - You can think of this as a router within my server. It takes incoming requests, matches them against rules I've encoded in its config, and forwards those requests to the appropriate destination. Right now, all of these rules map domains (hamy.xyz and its subdomains) to different ports on the same machine, behind which another app server listens for requests and serves up the corresponding content. Note that these rules could just as easily forward requests to different servers instead of different ports if I wanted to spread the load across machines.
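To make that concrete, here's roughly what one of those rules looks like in Nginx config. This is a sketch rather than my actual file: the ports are placeholders and the second server_name is a made-up subdomain.

    server {
        listen 80;
        server_name hamy.xyz;
        location / {
            proxy_pass http://localhost:8081;  # app server for this domain
            proxy_set_header Host $host;       # pass the original hostname along
        }
    }

    server {
        listen 80;
        server_name blog.hamy.xyz;             # hypothetical subdomain
        location / {
            proxy_pass http://localhost:8082;  # same machine, different port
        }
    }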
3. hamy.xyz (Nginx) - hamy.xyz is pure, vanilla HTML/CSS served up via Nginx - a fast, lightweight, open-source server application that is super easy to use. I wanted to play around with Docker containers because I'd heard all the hype and think the best way to learn is to do, so I created each of these apps (including the ReverseProxy) as one. It took a bit to wrap my head around and implement, but I eventually got it working and it's turned out to be super useful, particularly the docker-compose functionality, which lets me manage the build/deploy of multiple containers as part of a larger whole.
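To give a feel for what that buys me, here's a trimmed-down sketch of a docker-compose file for this kind of setup. The service names and directory layout are illustrative, not my actual config.

    version: "3"
    services:
      reverseproxy:
        build: ./reverseproxy    # the Nginx router described above
        ports:
          - "80:80"              # the only port exposed to the internet
        depends_on:
          - hamy-xyz
      hamy-xyz:
        build: ./hamy-xyz        # static site behind its own Nginx
        expose:
          - "80"                 # reachable by the proxy on the compose
                                 # network (as http://hamy-xyz:80), but not
                                 # from the outside world

A single docker-compose up -d --build then rebuilds and (re)starts the whole fleet.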
4. / 5. hamy.xyz, hamy.xyz (Nginx, Hugo) - Both of these leverage Hugo (an open-source, static-site generator) to build pretty pages with a cohesive (not-built-by-me) theme. Hugo spits out regular HTML/CSS files, so I just serve those up with another Nginx server via container. I went with Hugo rather than heavier, feature-rich blogging software like WP because I wanted to maintain simplicity all the way down and figured it would be easier to scale should the need arise (serving static files is super cheap processing-wise). As for why I didn't just build all the static pages myself - that seems like a hassle, and I've recently observed that I just don't get that much satisfaction from nit-picking at site styles like I used to.
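The whole pipeline fits in a couple of lines. As a sketch (assuming Hugo's default output directory; my real Dockerfile may differ): run hugo in the project root, which renders the site into ./public, then bake that folder into a stock Nginx image.

    # Assumes `hugo` has already been run, writing the generated
    # HTML/CSS into ./public (Hugo's default output directory).
    FROM nginx:alpine
    COPY public/ /usr/share/nginx/html/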
6. *.hamy.xyz (???) - These are the properties I have yet to build, but I wanted to call them out to highlight the flexibility of my setup. If I want, I can spin up as many sites, apps, whatevs as I want and link to them by adding a rule to my ReverseProxy without having to pay another dime. Now, obviously, you probably don't want thousands of apps running on the same machine at the same time, but I could easily route to other machines if that ever became a problem.
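And if one of those future apps ever outgrows this machine, pointing its rule at a different box really is a one-line change. Another hedged sketch - the subdomain and IP here are made up:

    server {
        listen 80;
        server_name newthing.hamy.xyz;          # hypothetical future property
        location / {
            proxy_pass http://203.0.113.7:80;   # a different machine entirely
        }
    }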
That's the stack.
As I've said, it's super simple and I'm sure there's a lot of room for improvement - this is only my second personal site hosted this way and I'm painfully new to the whole container/machine-running thing. But I'm learning.
I try not to engineer for problems that don't yet exist (depending on perceived likelihood and potential cost, of course), but I believe the next step for my setup is to figure out how to scale to more users. Right now, I only get ~100 users/day and my tiny Droplet (that's what DO calls its server instances) has been able to easily accommodate this. I will, however, want to think about ways to scale should the need arise (like getting to the front page of Product Hunt/Hacker News), with the hope that the likelihood of such an event increases with each passing day.
But we'll see. I've never load tested an Nginx server, but if it can handle > 10K views/day (which I'm pretty sure it can), then this likely won't be an issue for a long, long time.
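When I do get around to it, a first pass could be as simple as pointing ApacheBench at the homepage - the numbers here are arbitrary:

    # 10,000 requests, 100 at a time, against the live site
    ab -n 10000 -c 100 http://hamy.xyz/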
HAMY.OUT