
LK Weekly: In the Clouds


Technical post ahead!

As the public project / sharing link feature nears completion, I've been looking for a painless way of deploying it to ☁️ tHe cLoUd ☁️. This public-facing, mobile-compatible, read-only (at first) interface for LegendKeeper will run as an entirely new app, as described here. It lays the foundation for the next generation of LegendKeeper and cleans up technical debt that's been slowing me down lately.

History

When I first started working on LegendKeeper as a side project, LK ran on Heroku. My employer at the time planned to introduce Google Kubernetes Engine for its infrastructure. As a learning opportunity, and because it was cheaper, I decided to move LK to GKE. I definitely learned a ton, but I'd be lying if I said it wasn't a pain in the ass sometimes.

The basics of k8s aren't that bad if you're already familiar with Docker. I'd say 95% of the time, things work as expected and go off without a hitch. The other 5% is random internal DNS resolution errors, keeping clusters updated, cost vs. capacity optimization and provisioning, and managing disk volumes. (Luckily I was smort and didn't run my database in kubernetes.) Interesting tech, but too crunchy when I'd rather be focusing on building a cool product and business. It made sense at the time when my main focus was evolving as an engineer, before I knew LK was gonna be this big thing.

On the cluster, LK runs its web server, a websocket server for real-time features, and a sync worker that ensures your documents get saved to long-term storage in a timely manner. This worker automatically scales into a fleet of workers during spikes of chunky traffic. Other than that, there are a couple of Redis caches. All in all, it's pretty neat having a "personal Heroku", but as LK grows and my attention is needed elsewhere, I'm less and less willing to spend time tinkering with infrastructure.
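For the curious, that "one worker becomes a fleet" behavior is what a Kubernetes HorizontalPodAutoscaler provides. A minimal sketch, assuming a Deployment named `sync-worker` and made-up replica counts and thresholds (not LK's actual config):

```yaml
# Hypothetical autoscaler: grow the sync worker fleet when average CPU
# crosses the target, shrink it back once the traffic spike passes.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sync-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sync-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Neat when it works; it's exactly this kind of YAML, though, that you end up babysitting.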

Another point is cost: LK's Redis caches on GKE cost me maybe 10 bucks a month, while running those same caches on Google Cloud Memorystore or Redis Cloud would be $200-400 a month. I'm sure they're more reliable than the ones I'm running, but I haven't had enough issues to justify that price.

Anyways, all that to say: Kubernetes is cool but I'm looking to simplify things. Nowadays there are many services that do much of this for you, and they've become cheaper over the years as competition has heated up.

Serverless?

Since the new LK runs on Next.js, the most obvious choice was Vercel. Vercel is a front-end, serverless-focused web platform that makes it easy to build and host Next.js projects, among others. It handles builds and deploys automatically, and turns your Next.js API routes into serverless functions.
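To make that concrete: each file under `pages/api/` is deployed as its own serverless function. The route and handler below are a hypothetical example, not LK's actual code:

```typescript
// pages/api/status.ts (hypothetical file)
// On Vercel, this default export becomes an independent serverless
// function reachable at /api/status.
export default function handler(req: any, res: any) {
  res.status(200).json({ status: "ok" });
}
```

Zero servers to manage, which is the appeal; the catch is that every function invocation has its own lifecycle, which matters a lot in a minute.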

This was pretty cool, but had some downsides. So far I'm not a huge fan of serverless. While testing it, every time an API call was slow, I had to ask myself: Is this a cold start? Or is something wrong with my client code? Or my server code? Or my Redis instance? Or my connection pooler? Or my database? Or Cloudflare? I value consistency; consistency saves you a lot of cognitive capacity. Vercel API calls varied wildly in performance even with good caching habits, enough to be too distracting.

Working with Vercel + a traditional database was also not fun. First off, Prisma, the DB access library I'm using, is chunky and takes a while to boot up in a serverless environment. Vercel will eagerly spin up thousands of serverless functions to serve requests, which can immediately overwhelm the database with connections if it's not ready for them. If you're not using a DB service that already has connection pooling, the solution is Prisma Data Proxy or pgBouncer. Prisma Data Proxy is proprietary and seemed kinda half-baked. I spun up a pgBouncer instance on my GKE cluster and that worked, but... kinda goes against the mission of doing less work and getting off of GKE, right?
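A common mitigation (alongside a pooler like pgBouncer) is caching a single database client on `globalThis` so warm function invocations reuse it instead of each opening fresh connections. A self-contained sketch of the pattern; `FakeDbClient` stands in for `PrismaClient` here, and the names are illustrative, not Prisma's API:

```typescript
// Stub client: each new instance represents a fresh connection pool
// being opened against the database.
class FakeDbClient {
  static connections = 0;
  constructor() {
    FakeDbClient.connections++;
  }
}

// Cache the client on globalThis so warm invocations of the same
// function instance reuse it rather than reconnecting per request.
const g = globalThis as unknown as { __db?: FakeDbClient };

function getClient(): FakeDbClient {
  if (!g.__db) g.__db = new FakeDbClient();
  return g.__db;
}
```

This only helps within a single warm function instance, though; when Vercel fans out to thousands of cold instances at once, each still brings its own connections, which is why a shared pooler in front of Postgres ends up necessary anyway.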

Railway

My next trial was Railway, an ops and hosting platform for any web service that can run in a Docker container. I think it's just a wrapper around GKE, but that's exactly what I'm looking for. It took a while to get my monorepo configured correctly to run in Railway services, but once I did, it was smooth sailing. I really like it so far. It's simple, fairly inexpensive, and it just works once you get things going. It's also easy to spin up Redis instances, and it looks like they charge by VM usage rather than wild per-command pricing like some hosted Redis services.

Things are looking pretty good with Railway so far! If it continues to impress, I'll probably move the old stuff to Railway too if I can. It'll take a while to migrate everything to "New LK", so Old LK will run in parallel for a while. I'm hoping Railway proves to be the infrastructure provider that takes work off my plate. I'd rather focus on improving LegendKeeper itself, rather than the servers it runs on. 👍

That's all for this week; I'm in the home stretch for LK public sharing features. Once that's out, I'll move my attention to making "New LK" capable of editing as well, so everyone can hop on and start using the new editor.

Written by Braden Herndon
