Self-hosting at home

This website runs on a Kubernetes cluster at home, which I built from old thin clients and mini PCs. This is the story of how I did it.

You can check the status of the services at any time on the status page.

All the configuration is stored in Git and publicly available on GitHub.

Backstory or Why?

I love watching YouTube, and seeing all the tech bloggers build their own home labs made me realize that I want the same: a cluster at home, a digital playground where I can play around with all that stuff, but to my own taste.

As an ex-DevOps lead at one of the largest investment banks, I have a certain bias towards availability, disaster recovery, and all that boring controls stuff.

On top of that, I didn’t want to spend a ton of money on hardware that I would barely use (there’s a decent chance of that!), and I really wanted independent bare-metal nodes without any VM virtualization: I want to be able to physically turn off any one of them, replace it with a newer or different machine, and do any kind of maintenance without disruption.

Because of electricity prices and noise, buying a used HP server and putting it in a server rack at home was not an option: each node would consume at least 200 W of power, and for high availability I would need at least two of them.

So this inevitably led to the weirdest solution of all: a multi-master Kubernetes cluster on thin clients and mini PCs!

Network

As with every project, the first step was to write down a detailed plan of what it should look like within my home network.

Currently I have a router from my ISP, a Fritzbox. It is a simple home network router with a coaxial cable connection (yes, that is still a thing in Germany in 2024), a single 2.5 Gb port and four 1 Gb ports. This router is not very configurable, so my current home network is 192.168.178.0/24 with a single gateway to the internet at 192.168.178.1.

I knew that I needed at least three nodes with static IPs for a proper multi-master, highly available Kubernetes setup. Plus, I might need some virtual IPs for load balancing of services running inside the K8s cluster.

So this led me to the following network setup:

| IP | Description |
| --- | --- |
| 192.168.178.0/23 | Subnet coming from the router |
| 192.168.178.1 | Fritzbox router |
| 192.168.178.2 – 192.168.178.9 | Reserved for unknown future usage |
| 192.168.178.10 – 192.168.178.20 | Kubernetes nodes. I doubt I’ll ever have more than 10 physical K8s nodes at home |
| 192.168.178.100 – 192.168.178.255 | DHCP for devices. When I tried to count my smart bulbs and other IoT devices that love to connect to Wi-Fi, I got to about 50, so roughly 150 addresses should be future-proof enough |
| 192.168.179.1 | Virtual IP for the K8s API |
| 192.168.179.2 – 192.168.179.10 | Reserved for unknown future usage |
| 192.168.179.11 – 192.168.179.255 | Virtual IPs for K8s services |

Hardware

The next step was getting the actual hardware. In an era when the latest Raspberry Pi costs almost $100 and provides very limited performance per buck, the obvious choice is a used thin client or mini PC (with upgradable RAM and far more expansion options!).

So the hardware behind my home lab looks like this:

| Node name | Device | CPU | RAM | Storage | Price | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Alfheimr | HP T620 | AMD GX-415, 4 × 1.5 GHz cores | 16 GB | 256 GB SSD | $45 + SSD + RAM | Bought used from some website |
| Midgard | ThinkCentre M710q | i3-6100T, 4 × 3.2 GHz cores | 16 GB | 512 GB SSD | $100 + SSD + RAM | Bought used from a friend |
| Niflheimr | Firebat AM02 | N100, 4 × 3.4 GHz cores | 16 GB | 512 GB SSD | $160 | Bought new from AliExpress |
| Yotunheimr | SOYO M2PLUS | N100, 4 × 3.4 GHz cores | 16 GB | 512 GB SSD | $115 | Bought new from AliExpress |

Also, I had a spare old network router (Zyxel Armor Z2), which is now used as a 1 Gbps switch: I certainly didn’t want to build the cluster over Wi-Fi!

Operating system setup and bootstrap

Once I had all the hardware, I had to set up an operating system and basic remote access, because the devices were going to live in the network rack without any keyboard or display to configure them.

I decided to use Debian Testing (the pre-release of the next Debian release, Trixie as of writing, with all the new and shiny stuff).

Why not Ubuntu (or insert another distro name here), you may ask? Simply because Debian isn’t as bloated as Ubuntu, I still hate snaps, and I’m tired of all the advertising for Ubuntu Advantage. Also, I wanted all the nice and shiny things, so I had to switch from Bookworm to Testing.

So I made a bootable USB drive with Rufus, installed the OS on each of the nodes, configured the same user and password everywhere, enabled OpenSSH, set up static IPs, connected everything to the switch, and moved the machines into the rack.
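For reference, a static address on Debian with the classic ifupdown setup looks roughly like this (a sketch: the interface name is a placeholder, and each node gets its own address from the plan above):

```
# /etc/network/interfaces.d/lan – sketch for the first node
auto enp1s0
iface enp1s0 inet static
    address 192.168.178.10/23
    gateway 192.168.178.1
```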

Configuration

All further actions are to be automated, so I don’t have to repeat them on each of the nodes.

For OS configuration I used Ansible: I wrote a simple playbook with multiple roles that set everything up. To make this easier, I also wrote a small shell script that selectively triggers playbook roles by tags (see the sketch below).

Lame, but it works.
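Roughly, the playbook layout looks like this (the role names and inventory group here are illustrative, not necessarily the exact ones in the repo):

```yaml
# site.yml – a rough sketch of the playbook structure
- hosts: k8s_nodes
  become: true
  roles:
    - { role: common,     tags: ["common"] }       # users, SSH, packages, sysctl
    - { role: keepalived, tags: ["keepalived"] }   # virtual IP for the K8s API
    - { role: k3s,        tags: ["k3s"] }          # install and join k3s
```

The shell script then essentially just runs `ansible-playbook site.yml --tags <tag>` for whichever tags I pass in.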

As the Kubernetes distribution, I decided to use k3s. It ships as a single binary, with a few components cut out and a few others that can be disabled.

Originally I thought about using k3s + kine + a MySQL Galera cluster instead of etcd, but that turned out to be a terrible idea: kine can’t work with clustered databases (or, more generally, when the id does not increase monotonically).
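With the embedded etcd that k3s ships for multi-server setups, the server configuration could look roughly like the sketch below (not the exact file from the repo; disabling the built-in ServiceLB is shown only as an illustration, since MetalLB comes up later):

```yaml
# /etc/rancher/k3s/config.yaml on the first server – a minimal sketch
cluster-init: true   # bootstrap a new embedded-etcd cluster
tls-san:
  - 192.168.179.1    # the API virtual IP from the network plan above
disable:
  - servicelb        # illustration only: the bundled load balancer can be replaced by MetalLB

# The remaining servers get roughly the same file, but join the first node instead:
# server: https://192.168.179.1:6443
# token: <cluster token from the first node>
```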

Besides k3s, for a highly available K8s API endpoint I’ll use keepalived. This is a Linux daemon that advertises an additional (virtual) IP on the network, as long as no other node is currently holding it. It doesn’t do any load balancing, so if k3s is stopped but the node and keepalived are still alive, the endpoint won’t work. If we really want HA access to the K8s API, we can run a load balancer with every API server as a backend on several nodes, put keepalived on each of those load balancer nodes, and end up with a single load-balanced IP. A minimal keepalived configuration and the k3s installation tasks are sketched below.
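The keepalived configuration can be as small as this (a sketch: the interface name is a placeholder, and the priority differs per node so that one of them wins the VRRP election):

```
# /etc/keepalived/keepalived.conf – minimal sketch
vrrp_instance k8s_api {
    state BACKUP
    interface enp1s0
    virtual_router_id 51
    priority 100            # different on every node
    advert_int 1
    virtual_ipaddress {
        192.168.179.1       # the K8s API VIP from the network plan
    }
}
```

Installing k3s itself from Ansible can be sketched as a couple of tasks in the k3s role (the file and role names are hypothetical):

```yaml
# roles/k3s/tasks/main.yml – hypothetical sketch
- name: Ensure the k3s config directory exists
  ansible.builtin.file:
    path: /etc/rancher/k3s
    state: directory
    mode: "0755"

- name: Put the k3s server config in place
  ansible.builtin.template:
    src: config.yaml.j2
    dest: /etc/rancher/k3s/config.yaml
    mode: "0600"

- name: Install k3s via the official install script
  ansible.builtin.shell: curl -sfL https://get.k3s.io | sh -s - server
  args:
    creates: /usr/local/bin/k3s
```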

The Ansible playbook is stored in Git and publicly accessible, which means that all passwords, credentials and other secrets have to be encrypted inside the playbook as well. Ansible Vault is used to encrypt the secrets, and JetBrains PyCharm offers a really nice integration that makes encrypting a value as simple as pressing Alt+Enter -> Encrypt.
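The same thing can be done from the command line; the resulting `!vault`-tagged snippet is then pasted into the vars file as-is (the variable name here is just an example):

```bash
ansible-vault encrypt_string 's3cr3t-value' --name 'grafana_admin_password'
```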

The only thing we need to install on the K8s cluster initially is ArgoCD. This is a GitOps tool that adds CRDs for applications, allowing us to store in Git all the objects we need to create in the cluster. This way a single repo contains the whole cluster configuration, covering both the operating system setup and the K8s objects, which makes it easy to recreate the cluster from scratch if required. So, to bootstrap the cluster, we also create a first meta-app Application that references a subfolder of YAML files in the repo and installs them into the cluster. That subfolder holds the references to all the other K8s apps we’ll install in the next steps.
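Such a meta-app could look roughly like this (the repo URL and path are placeholders for the real repo layout):

```yaml
# The bootstrap "app of apps" – a sketch
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: meta-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/<homelab-repo>.git
    targetRevision: HEAD
    path: k8s/apps            # subfolder with Application manifests for everything else
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```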

Storage

TBD. Longhorn, Backups

DNS and HTTPS

TBD. DNS setup, cert-manager, external-dns, traefik, ExternalDNS + PiHole (PR+DR+OrbitalSync)

Tunnel to the internet, internal and external connectivity

TBD. Cloudflared. MetalLB.

Monitoring

TBD. Prometheus stack

Operators

Backup and DR

TBD.

Actual services I am self-hosting

TBD.

*arr media stack

TBD.

Personal finance

TBD. ActualBudget

Private cloud

TBD. Nextcloud for filesync

Status page

TBD. gethomepage.dev

Bonsai demo-site

TBD. For a friend of mine.

RSS to telegram bot

TBD.

Telegram chat summarizer

TBD.

This website

TBD. Hugo. Same mount as nextcloud.