code for my homelab
most machines in my lab are ones i've saved from the dumpster or recycler: mostly laptops, plus a few desktops used for network storage. the only machine i have purchased is the raspberry pi.
all servers are named after characters from lost, except for my storage server, which alternates between the names "zira" and "cornelius" (the chimpanzees from the original planet of the apes) with every rebuild. we are currently on zira iv.
i historically used ubuntu server for all my servers and kde neon for desktops/laptops. i am now moving everything to nixos and xfce, with only my storage server and lab cluster nodes left to migrate.
my main network is a tp-link router and switch mounted in a custom 3d printed rack that hangs from the underside of my sit/stand desk. the switch has four poe ports, which are routed to:
- tp-link access point in my living room
- amcrest camera in my lab
- amcrest camera mounted on my roof covering the front yard
- poe pass-through switch in my garage
the custom rack also has custom mounts for the following devices:
- amplifier for my desktop speakers
- hurley [pi 4 | debian]: networking entrypoint (caddy), grafana instance, random other lightweight services
- ben [gmktec g3 | nixos]: k3s single node server
- hdhomerun: for recording local over-the-air tv
i also run ethernet to a switch on my laundry room storage shelves, where i have a few more machines:
- zira [custom dell optiplex | ubuntu]: zfs storage server
  - pools
    - bucket - media + long term storage
      - x2 18tb mirror
      - x2 24tb mirror
      - 42tb usable
    - scratch - download cache, misc. files
      - x2 2tb zfs raid zero
      - 4tb usable
- charlie [dell precision t1700 | ubuntu]: k3s "lab" master
  - runs home assistant using a zigbee usb receiver
- jack [dell precision 7520 | ubuntu]: k3s "lab" worker
  - runs frigate with a coral tpu usb (see the detector sketch after this list)
the poe pass-through switch in my garage provides internet to three thinkpad laptops mounted to the pegboard next to my 3d printer in my workspace:
- t480-0 [thinkpad t480 | nixos]: i do all my development remotely on this machine (vscode server); it also runs my central prometheus server that gathers metrics on all my machines/clusters (a scrape config sketch follows the inventory below)
- claire [thinkpad t470p | ubuntu]: k3s "lab" worker
- plex [thinkpad t480 | ubuntu]: runs plex and tautulli, accesses media via nfs shares on the storage server
the garage switch also has a free port, which i use when i work on machines out there.
- zaius [dell precision t1700 | ubuntu]: backup server, k3s single node server
  - pools
    - pale - offsite backup
      - x1 10tb
      - 10tb usable
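t480-0's central prometheus scrapes all of the machines above; a rough sketch of the static scrape config, assuming node_exporter running on its default port 9100 on each box (the target list is illustrative, not exhaustive):

```yaml
scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          # hostnames from the inventory above; 9100 is node_exporter's default
          - hurley:9100
          - zira:9100
          - charlie:9100
          - jack:9100
```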
i am starting to run newer services in kubernetes because i ran docker compose for many years and needed a new challenge. most mission critical things are still run using plain docker or systemd.
my arr stack runs using the docker compose file located at `playbooks/templates/zira/docker-compose.yaml`.
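that file isn't reproduced here, but a single service entry in it looks roughly like the following sketch (image, ids, and paths are illustrative, not copied from the real file):

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000   # illustrative uid/gid
      - PGID=1000
    volumes:
      - ./sonarr/config:/config
      - /bucket/media:/media   # assumes the "bucket" pool is mounted at /bucket
    ports:
      - "8989:8989"            # sonarr's default web port
    restart: unless-stopped
```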
- applications are (usually) launched as a deployment
- when a service is created of type `LoadBalancer`, metallb provisions the service an ip address on my local network (see the manifest sketch after this list)
  - optionally the service is added to the tailnet using the tailscale operator and/or given a local dns entry (usually `<service>.r.ss`) using external-dns
- if external public access is needed, an ingress record is created with a `<service>.k8s.rileysnyder.dev` domain
  - routed from a caddy reverse proxy acting as the entrypoint to my local network (the pi 4)
- longhorn for storage within the cluster, nfs for critical items (using main storage server)
- manifests are under `infra/k8s`, applied either with kubectl, the k3s manifests directory, or harness (both regular deployments and gitops), because i need to try everything
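to make the loadbalancer/dns flow above concrete, here is a sketch of such a service manifest (the app name and ports are made up; the hostname annotation is the standard external-dns one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: some-app
  annotations:
    # external-dns watches for this annotation and creates the local record
    external-dns.alpha.kubernetes.io/hostname: some-app.r.ss
spec:
  type: LoadBalancer   # metallb assigns an ip from its local address pool
  selector:
    app: some-app
  ports:
    - port: 80
      targetPort: 8080
```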
applications are deployed using `kubectl apply`, whereas cluster baselines and core services are created by running an ansible playbook that places manifests in the master node's manifests directory. i strive to be simple.
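the playbook step amounts to copying files into k3s's auto-applied manifests directory (`/var/lib/rancher/k3s/server/manifests/` by default); a minimal sketch, with the inventory group and source path assumed:

```yaml
- name: place core service manifests on the master
  hosts: lab_master   # illustrative inventory group
  become: true
  tasks:
    - name: copy manifests into the k3s manifests directory
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: /var/lib/rancher/k3s/server/manifests/
      with_fileglob:
        - files/k8s/core/*.yaml   # illustrative source path
```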
a few apps like frigate and home assistant are deployed using harness (my current employer).
my "lab" cluster is my main cluster, i have another running in oracle cloud using their free tier where i run some paid services, and one at my parents which serves as my offsite backup. all clusters remote-write their metrics to my main prometheus server which I use with grafana to create dashboards and alerts.
secrets are encrypted using ansible vault, with the password kept in a local file:

```
ansible-vault encrypt_string --vault-password-file .vault_password 'bar' --name 'foo'
```
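the command prints a block that pastes straight into a vars file; the result looks roughly like this (ciphertext abbreviated):

```yaml
# everything indented under !vault is the encrypted payload (abbreviated here)
foo: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...
```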
| network | cidr | notes |
| --- | --- | --- |
| home | 192.168.2.0/24 | |
| lab | 192.168.254.0/24 | |
| micro | 192.168.253.0/24 | |
| tailscale | 100.64.0.0/10 | |
| lab cluster | 10.42.0.0/16 | |
| lab svc | 10.43.0.0/16 | |
| oc cluster | 10.42.0.0/16 | need to migrate to 10.44.0.0/16 |
| oc svc | 10.43.0.0/16 | need to migrate to 10.45.0.0/16 |
| ocdr cluster | 10.46.0.0/16 | |
| ocdr svc | 10.47.0.0/16 | |
| oc2 cluster | 10.48.0.0/16 | |
| oc2 svc | 10.49.0.0/16 | |