Rebuilding Cricalix.Net – Part 1

I’ve been hosting cricalix.net and other domains on Linode for years. There have been a few hiccups over that time, but it’s generally been smooth sailing. The current VPS runs NGINX for web hosting (it was Apache until earlier this year), Postfix for SMTP, Courier for IMAP, and OpenVPN for VPN. Everything runs on one host, “bare metal” – there are no containers in play. Configuration changes are tracked with a Mercurial repository rooted at /etc/.

Every so often, I look around at the hosting world to see what’s going on with pricing; I pay Linode for the VPS and backups of the VPS, and that currently costs me about 360 USD a year. Not a terrible price, but in my last round of looking, I noticed that Hetzner were offering the same storage and vCPU level for notably less than Linode charge: about 50% less for equivalent resources, and about 30% less for double the storage and one more vCPU.

The one “gotcha” is that Hetzner block ports 25 and 465 outbound for new shared hosting clients for the first month; however, I already route all outbound mail via an authenticated relay listening on 587, so this won’t be a problem.
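For the curious, that relay setup amounts to a handful of Postfix settings. A minimal sketch using postconf, where mail.example.com is a placeholder for the real relay host:

    # Route all outbound mail via an authenticated relay on port 587
    # (mail.example.com is a placeholder for the actual relay host)
    postconf -e 'relayhost = [mail.example.com]:587'
    postconf -e 'smtp_sasl_auth_enable = yes'
    postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
    postconf -e 'smtp_sasl_security_options = noanonymous'
    postconf -e 'smtp_tls_security_level = encrypt'
    # Compile the credentials file into the hash map Postfix reads
    postmap /etc/postfix/sasl_passwd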

Containers versus monolithic

I’ve been away from the pure operations side of life for a while now; the past 5+ years have been more about being a Python programmer with a dash of ops on the side (my employer works on a “you build it, you deploy it, you own it” model). That means I’ve largely missed the industry’s shift towards containers and container orchestration (work has containers, but I use them as a user; I don’t have to worry about how they work). So that’s one reason in favour of trying a container-based approach to my service hosting.

The other is that I like the idea of the container model for separating out services – I can spin up a new container to try a new version of software, and I can base the initial configuration on the existing configuration.
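As a rough sketch of that workflow (using the LXD CLI I eventually settled on; container and snapshot names are illustrative):

    # Snapshot the current web container, clone from the snapshot, and
    # trial the new software version in the clone
    lxc snapshot www pre-upgrade
    lxc copy www/pre-upgrade www-next
    lxc start www-next
    lxc exec www-next -- apt install -y nginx
    # If it all goes wrong, the clone is disposable
    lxc delete --force www-next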

So, what are my options for containers in 2022? (This list is not exhaustive).

  • LXD natively, using the CLIs to manage it
  • LXD via Nuber, though I really dislike curl <address> | bash as an installation method
  • Proxmox VE as the base OS, and use their web UI to manage things
  • Docker

I tested both Proxmox VE and Nuber lightly, to see how they behaved. Proxmox VE required me to spin up a QEMU VM, which wasn’t a major issue. The UI isn’t bad, and I can see where having a whole cluster of container hosts would make Proxmox VE a useful tool. However, I won’t need that level of complexity, and I don’t mind using CLIs to manage systems. Installation on a Hetzner host looked like it might be slightly involved, but nothing overly complex or taxing (it’s not a default offering in their machine imaging process).

Nuber is a nice, light-weight web UI, and far simpler than Proxmox VE. However, I have a philosophical objection to tools that install via curl thing | bash; I prefer packages that have signing keys, and a malicious or compromised server can even detect that its payload is being piped straight into a shell and serve different content than a browser would see. I also felt I wouldn’t learn as much about the underlying container technology if I used it to manage my small group of containers.

Docker. Well, I’ve tried Docker before, and I don’t like the compose-and-build approach it takes. It does have a really neat option for managing NGINX containers though: Docker exposes container lifecycle events on its API socket, and an nginx-proxy container can monitor that socket and spin up reverse proxies to the web server installations in other containers. That centralises the certificate setup in one container. LXD also exposes a socket in the containers, but I don’t think anyone has written an nginx-proxy for LXD yet.
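For illustration, the nginx-proxy pattern boils down to something like this (the image and hostname come from the project’s documentation; my-web-image is a placeholder):

    # nginx-proxy watches the Docker socket and generates reverse-proxy
    # configuration for any container that declares a VIRTUAL_HOST
    docker run -d -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      nginxproxy/nginx-proxy
    # A backend container just states which hostname it serves
    # (my-web-image is a placeholder for an actual web server image)
    docker run -d -e VIRTUAL_HOST=www.example.com my-web-image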

So, I ended up on native LXD. I’ll be managing the storage volumes, proxy devices, et cetera from the CLI, which should leave me with a better understanding of how the tooling and the system work.

Preparation

When I’ve done hosting migrations in the past, it’s been a case of creating a new VPS on the new hosting (or the same host), installing all the software, copying the data across, and copying the old configurations across. This has worked out most of the time, but there are drawbacks. The first is that blindly copying a configuration file across means I miss out on new features or syntax; sometimes that means the new version of the software won’t work, or works in a different manner from the older version. The second is that sometimes I’ve changed the OS distribution (say, CentOS to Debian), and different distributions keep files in different places – just copying the config to the same location on the new host doesn’t mean it gets read by the software. The third is that I pretty much did them live – pick a weekend, start copying everything and hacking on the configs, and then update DNS to flip the services over.

For this rebuild, before I even reserve a VPS on Hetzner’s cloud service, I’m working with everything locally on my home PC. It’s got plenty of grunt – a Ryzen 5900X with 32 GB of RAM and a few TB of storage – so building out my entire stack on it is easy, and it only costs me the electricity to run the computer (cheaper than paying both my electricity bill and a Hetzner bill just for testing).

This is where LXD and containers in general become valuable: the cost of setting up a container and testing software in it is virtually free compared to building out a monolithic host. Containers are also much, much faster to deploy than a VM installed from an ISO (I suppose I could have used a base image and iterated, so no re-install from ISO every time). I’m not affecting the installed packages on my personal computer, nor am I messing with packages on the hosting provider – I’ve got my own little bridged network for the containers to use, and I can create and destroy containers all day long until things work how I want them to. At the end of the day, I turn the computer off, and I don’t get billed for consuming resources on a cloud provider; with no “this is costing me (more) money” thoughts in the back of my head, learning and experimentation are much more free-flowing.
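The feedback loop really is that short. Something like the following, with a throwaway container name:

    # Spin up a disposable Ubuntu container, poke at it, throw it away
    lxc launch ubuntu:22.04 scratch
    lxc shell scratch
    lxc delete --force scratch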

Approximate system layout

With containers, my goal is to split out services and give each one an isolated storage pool (ZFS-backed for production, the dir driver for testing) for the data it processes. Following this approach, I get the following containers, with a sketch of the storage commands after the list:

  • One mx instance running Postfix
  • One mail instance running Dovecot
  • One www instance running NGINX
  • One db instance running MariaDB
  • One vpn instance running WireGuard
  • One bbs instance running Synchronet (maybe)
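A rough sketch of the storage side of that layout, where the pool names and the ZFS dataset are placeholders:

    # Production: a ZFS-backed pool; testing: the simple dir driver
    lxc storage create tank zfs source=rpool/lxd
    lxc storage create scratch dir
    # Launch a container onto a specific pool
    lxc launch ubuntu:22.04 mx --storage tank
    # Give the Dovecot container its own volume for the mail store
    lxc storage volume create tank mailstore
    lxc config device add mail mailstore disk pool=tank source=mailstore path=/var/mail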

All of the containers will be bound to an LXD bridge network device, with NAT for IPv4 packets that originate from the containers, LXD proxies for inbound IPv4, and routed IPv6. For the IPv6 connectivity, I should be able to bind the first address (::1/128) of the Hetzner-supplied range to the VPS NIC, and set up the bridge with the remainder of the pool (assuming it’s a /64).
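In LXD terms, that bridge looks roughly like this; the IPv6 prefix is the documentation range, standing in for whatever Hetzner actually assigns:

    # NAT'd IPv4 outbound, routed (non-NAT) IPv6 on the bridge
    lxc network create lxdbr0 \
      ipv4.address=10.10.10.1/24 ipv4.nat=true \
      ipv6.address=2001:db8:0:1::1/64 ipv6.nat=false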

The LXD proxy approach (or nftables port forwards) means the containers only expose the ports they need to expose, and I can rely on lxc shell <container> for management work; no need for SSH on a variety of ports.
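Exposing a service is then one proxy device per port; a sketch for the www container:

    # Forward ports 80 and 443 on the host to the www container
    lxc config device add www http proxy \
      listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
    lxc config device add www https proxy \
      listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:443
    # Management access, no SSH daemon required
    lxc shell www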