The Hamniverse now runs on Google Kubernetes Engine
Date: 2018-12-17 | gke | google-cloud-engine | google-kubernetes-engine | k8s
Over the past 6-12 months, I've been working on rebuilding my online properties into a form that is both efficient and sustainable. I had been using WordPress to host my many properties for some time, but eventually decided that the system was too "crufty" for my tastes. Thus began my multi-step journey towards crafting my own online identity.
The first move in this campaign was to tear down my long-lived WordPress sites and move to something less "crufty". For that, I chose Hugo, an open-source static-site generator, for its simplicity and vision.
The next was to host that site somewhere. In keeping with the theme of simplicity, I chose Digital Ocean, known for its very competitive pricing and bare-bones attitude.
Once that was in place, the next infrastructural improvement was to build out a CI/CD pipeline - the machinery that automatically propagates my new code out to what you see as my website when you hit my domain. Without this improvement, I had to manually log in to my server every time I wanted it to receive updated code (ew, inefficiency!).
Why I moved to Google Kubernetes Engine from Digital Ocean
I love Digital Ocean, but their bare-bones attitude has recently begun to feel a little too bare-bones for my tastes. As much as I like making the hard decisions and doing the extra work for better/cheaper outcomes (e.g. brewing my own cold brew), there are some things where the payoff just doesn't make sense. For me, that line was adminning my own servers (and doing my own laundry). There are companies out there whose whole job is to make deployments and hosting as easy as possible. Why would I spend an extra 5-10 hours a month on maintenance of my properties when I could just pay one of them an extra $10-20 each month and get that time back for things I actually like doing, like building dope shit (same logic applies to my laundry)?
When I phrased it like that, it didn't make sense to me either.
The discrepancy first occurred to me as I attempted to build out my CI/CD pipeline using GitLab's built-in CI. Part of the process involves setting up a CI Runner on your server so that it can run deploy jobs when new code lands. However, in the process of setting up this Runner, I found that it didn't work with my existing deploy process because I'd set up permissions on my server incorrectly by running everything as root. This meant I had to reconfigure my server if I wanted to make progress down this route.
Note: Having finally finished my newest iteration of The Hamniverse's CI/CD pipeline, I now understand that I didn't actually have to install that Runner. But mistakes are part of the journey and, ultimately, this one got me to a better outcome.
But the thought didn't excite me. Sure, I could figure out how to reconfigure it so it'd work, but it would take some time and there were no guarantees that I wouldn't run into other configuration issues down the road. So, yeah, I could figure it out, but the question then became: should I figure it out?
Server configurations don't excite me. What they enable does. So I thought about how I could optimize for what I like and my solution was to delegate things below my application layer to someone else.
How do we accomplish this? By moving our apps to a more managed hosting provider.
This worked for me in a few ways:
- Less time spent on server maintenance, more on building dope shit
- Experience with a mainstream cloud provider. I feel the need to sidecar that comment by saying that I really do love DO. They're a great choice depending on where your values lie, they just weren't the right choice for me right now. That being said, I will absolutely be vetting them once their Kubernetes offering reaches public release.
- Experience with Kubernetes, a huge buzzword in the industry today
After getting lost in AWS' endless bland menus for a few hours, I decided to give Google Cloud Platform (and specifically Google Kubernetes Engine) a try. I'll do a deep dive at some point once I feel I have enough experience with the platform to provide useful insights, but so far the process has been pretty painless. The prices are steeper, but they come with a lot more bells and whistles, which is exactly what I wanted.
How iamhamy is deployed
My site's architecture is almost exactly the same as it was a few months ago:
- front-iamhamy - the frontend for hamy.xyz, The Hamniverse's landing page. Vanilla HTML/JS for now.
- blog-iamhamy - the code for my primary blog, blog.hamy.xyz. Built with Hugo.
- labs-iamhamy - the code for my creations blog, labs.hamy.xyz. Also built with Hugo.
- iamhamy - the code for configuring each of these sites and building out the service that links them together
- everything still runs inside Docker containers
The only real difference is that now we've got some added Kubernetes features to help glue things together. I am by no means an expert, but I believe these are the most notable changes:
- each site has a Deployment (which configures how it gets deployed) and a Service (which tells k8s how to talk to it)
- We use a k8s-specific nginx-ingress and nginx-ingress-controller to allow external traffic to hit my containers. We can still configure these just as we would a normal nginx server, so it's still the agent that handles reverse proxying for my subdomains (see the sketch after this list)
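To make that a little more concrete, here's a minimal sketch of what the manifests for one of these sites might look like. The names, image path, ports, and hostname are illustrative assumptions, not my exact config:

```yaml
# Hypothetical manifests for one site (blog-iamhamy used as the example).
# Deployment: tells k8s what to run and how many replicas to keep alive.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-iamhamy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-iamhamy
  template:
    metadata:
      labels:
        app: blog-iamhamy
    spec:
      containers:
        - name: blog
          image: registry.gitlab.com/hamy/blog-iamhamy:latest  # assumed image path
          ports:
            - containerPort: 80
---
# Service: gives the pods a stable in-cluster address.
apiVersion: v1
kind: Service
metadata:
  name: blog-iamhamy
spec:
  selector:
    app: blog-iamhamy  # matches the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 80
---
# Ingress: maps a subdomain to the Service, much like an nginx server block.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: iamhamy-ingress
  annotations:
    kubernetes.io/ingress.class: nginx  # handled by the nginx-ingress-controller
spec:
  rules:
    - host: blog.hamy.xyz
      http:
        paths:
          - backend:
              serviceName: blog-iamhamy
              servicePort: 80
```

The Ingress ends up playing the same role my old nginx server blocks did: one host rule per subdomain, each pointing at its matching Service.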
With all that infrastructure set up, I was able to build out the CI/CD process that triggered this cloud migration in the first place. It more or less works like this (a config sketch follows the list):
- I write code locally and push to the respective repo
- GitLab CI (configured within each individual repo) triggers on commit and starts the build/deploy pipeline, which
    - pulls the code
    - builds the container image
    - publishes the image to GitLab's image registry
    - logs into GKE using service account auth and deploys the new image to k8s
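Here's a rough sketch of what a .gitlab-ci.yml for this could look like. The cluster name, zone, project, and the GCP_SERVICE_ACCOUNT_KEY variable are assumptions for illustration, and the deploy step assumes the blog-iamhamy Deployment sketched earlier:

```yaml
# Hypothetical .gitlab-ci.yml -- a sketch, not my exact pipeline.
stages:
  - build
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind  # docker-in-docker so the job can build images
  script:
    # build the container image and push it to GitLab's registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

deploy:
  stage: deploy
  image: google/cloud-sdk:latest  # ships with gcloud and kubectl
  script:
    # authenticate to GKE with a service account key stored as a CI variable
    - echo "$GCP_SERVICE_ACCOUNT_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud container clusters get-credentials my-cluster --zone us-east1-b --project my-project
    # roll the Deployment over to the freshly built image
    - kubectl set image deployment/blog-iamhamy blog="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```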
Because Hugo generates everything statically, it doesn't have a concept of "hidden" pages. Instead, it only builds the things that should be seen, leaving posts that are still in draft mode or aren't supposed to be published yet (discerned via post timestamp) out of the public/ folder. This means, however, that builds must happen periodically, lest a post that should go out today doesn't just because no commit has happened since targetPublishTime (no, I don't code all the time). Luckily, GitLab has built-in support for build pipeline schedules, so I created a conservative schedule that balances the desire to have posts out as soon as possible with limiting Runner usage to the free tier (and otherwise not being a resource hog).
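For illustration, here's roughly how that plays out in a post's front matter (the file name and dates here are made up):

```yaml
# Front matter for a hypothetical future-dated post,
# e.g. content/posts/my-future-post.md (YAML variant).
---
title: "A post from the future"
date: 2018-12-25T09:00:00Z  # hugo leaves this post out of public/ until the date passes
draft: false                # drafts are also left out unless built with --buildDrafts
---
```

The scheduled pipeline just re-runs the same build/deploy jobs on a cron, so any post whose date has passed since the last run gets picked up automatically.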
Moving forward
So that's how my site works. If you've got questions, feel free to reach out and I'd be happy to answer.
In the coming months, I hope to focus my efforts on other projects but there are some things I'm interested in tackling in the not-so-distant future:
- Regular posts - I have a system to make deploys easy, now I just need a reason to use it
- HTTPS - it's more secure
- Experiments - Hosting non-blog-style content here, like interactive experiments and such
- Migration of old WP posts - There's a lot of good content just sitting in my archives. Would be nice to get it out.
Want more like this?
The best / easiest way to support my work is by subscribing for future updates and sharing with your network.