Home-Lab Setup
This guide walks through a typical home-lab setup: one mini-PC as the main host, a Raspberry Pi as an edge node, and a NAS for backups. Everything managed from a single dockmesh server.
The setup
- `pluto` — Intel NUC or similar (4 cores, 16 GB RAM). Main workhorse.
- `pi` — Raspberry Pi 4 (4 GB). Runs low-power 24/7 services (Pi-hole, Home Assistant).
- `nas` — Synology / TrueNAS / Unraid. Not running Docker — used only as an SMB target for backups.
Total home-lab budget: ~€600 one-time, ~€5/month electricity (the NAS is the hungriest).
Topology
```
[Router] ─── [pluto] ── dockmesh server + agent (self-hosted apps)
    │
    ├────── [pi] ──── dockmesh agent (lightweight services)
    │
    └────── [nas] ─── SMB share for backups (no dockmesh)
```

The dockmesh server runs on pluto. The agent on pluto manages its local Docker; the agent on pi dials outbound to pluto. The NAS is a backup target only.
Step 1 — Install dockmesh on pluto
```sh
# On pluto
curl -fsSL https://get.dockmesh.dev | bash
```

The default binding is 0.0.0.0:8080. Behind a home router that is only reachable from the LAN — which is what you want for a home lab.
No inbound port forwarding from the internet. You’ll reach the UI via local DNS or Tailscale.
Step 2 — Tag hosts
In Hosts → pluto → Edit, add tags:
`home-lab`, `x86`, `always-on`
When you add the Pi via Add host, tag it:
`home-lab`, `arm`, `low-power`
Tags drive RBAC scoping and stack placement logic.
Step 3 — Enroll the Pi
In the UI: Hosts → Add host → pi. Copy the install command.
On the Pi:
```sh
curl -fsSL https://get.dockmesh.dev/agent | bash -s -- \
  --server http://pluto.lan:8443 \
  --token <enrollment-token>
```

Within ~5 seconds the Pi shows up online. dockmesh handles the architecture difference — it pulls linux/arm64 images for the Pi and linux/amd64 for pluto.
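The multi-arch handling boils down to matching the host's architecture to a Docker platform string. A rough sketch of that mapping (the function name is illustrative, not dockmesh's actual code):

```shell
# docker_platform: map `uname -m` output to the Docker platform label
# that image pulls resolve to on this host.
docker_platform() {
  case "$1" in
    x86_64)        echo "linux/amd64" ;;
    aarch64|arm64) echo "linux/arm64" ;;
    armv7l)        echo "linux/arm/v7" ;;
    *)             echo "unknown" ;;
  esac
}

docker_platform "$(uname -m)"
```

Note that a Pi 4 running 32-bit Raspberry Pi OS reports `armv7l`, not `aarch64` — worth checking which image variants your apps actually publish.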
Step 4 — Pick your services
A typical home-lab stack list:
On pluto (x86, needs RAM):
- Nextcloud (see deploy guide)
- Jellyfin / Plex
- Gitea (see deploy guide)
- Vaultwarden (see deploy guide)
- n8n
- Monitoring stack (see deploy guide)
On pi (ARM, low-power 24/7):
- Pi-hole (DNS + ad-blocking)
- Home Assistant
- Mosquitto (MQTT broker for smart home)
- Uptime Kuma (monitors the other hosts)
Use host tags to constrain deploys: pi-tagged stacks go to the Pi, x86-tagged to pluto.
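If dockmesh reads placement constraints from stack metadata, a compose fragment pinning Pi-hole to low-power hosts might look like the sketch below — the label key is a guess, so check the placement docs for the real one:

```yaml
# Hypothetical placement label — the actual key dockmesh uses may differ
services:
  pihole:
    image: pihole/pihole:latest
    labels:
      dockmesh.placement: "tag:low-power"  # deploy only to hosts tagged low-power
```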
Step 5 — Local DNS
For nice URLs like cloud.home and jellyfin.home:
- On Pi-hole (running on the Pi): Local DNS → Records
- Add: `cloud.home` → `192.168.1.10` (pluto's LAN IP)
- Repeat for each service
- Point your router’s DNS at the Pi
Now any device on your LAN can browse to https://cloud.home and hit Nextcloud.
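Pi-hole's local records behave like hosts-file entries: one IP, one or more names. For a quick test from a single machine before repointing the router, the equivalent /etc/hosts lines would be:

```
# /etc/hosts — same effect as the Pi-hole records, but only for this machine
192.168.1.10  cloud.home
192.168.1.10  jellyfin.home
```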
Step 6 — LAN TLS
For *.home domains, Let’s Encrypt won’t work (you don’t own the .home TLD). Options:
- Self-signed cert — valid but browsers nag; fine for tech folks
- A real domain like `local.example.com` with a DNS-01 challenge — get a valid cert via the Cloudflare DNS API even if the host is only reachable on the LAN
- Tailscale HTTPS — Tailscale provides real certs for your tailnet
The second option is cleanest. Example Caddyfile block:
```
cloud.local.example.com {
	tls you@example.com {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy nextcloud_app_1:80
}
```

Step 7 — Backups to the NAS
No agent is needed on the NAS:

- On the NAS, create an SMB share `dockmesh-backups`
- In dockmesh: Backups → Targets → New target → SMB
- Host: `nas.lan`, credentials, share: `dockmesh-backups`
- Test connection — it should list the share contents
- Create a backup job for each stack, targeting this SMB target
Schedule daily backups at off-hours (02:00 or 03:00). Retention 30 days.
For off-site redundancy, set up a second backup target on cloud storage (Backblaze B2 or Wasabi run roughly €5–7/TB/month — a great secondary) and create parallel jobs with weekly retention.
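If the backup target ever needs a manual cleanup, the 30-day retention rule is a one-line `find`. A self-contained demo against a scratch directory (GNU coreutils assumed for `touch -d`):

```shell
# Create a scratch dir with one "old" and one "fresh" backup file
backups=$(mktemp -d)
touch -d '40 days ago' "$backups/nextcloud-old.tar.gz"
touch "$backups/nextcloud-new.tar.gz"

# Delete anything last modified more than 30 days ago —
# the same rule a 30-day retention setting applies
find "$backups" -type f -mtime +30 -delete

ls "$backups"   # only nextcloud-new.tar.gz remains
```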
Step 8 — External access (optional)
You probably don’t want to forward ports on your home router. Two clean options:
Tailscale — install on every device that needs to reach the lab. Create a tailnet. Access https://pluto.tailXXXX.ts.net:8080 from anywhere.
Cloudflare Tunnel — expose specific services (not the dockmesh UI) to the internet without opening any ports. See Cloudflare Tunnel guide.
Don’t expose the dockmesh UI to the public internet unless you’ve locked it behind SSO + 2FA + IP allowlist.
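For the Cloudflare Tunnel route, cloudflared's configuration is a small YAML file. A minimal sketch exposing only Nextcloud — the tunnel ID and hostname are placeholders:

```yaml
# /etc/cloudflared/config.yml
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: cloud.example.com
    service: http://localhost:80    # or the container's published port
  - service: http_status:404        # catch-all: refuse everything else
```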
Monitoring
Deploy the monitoring stack on pluto. Point it at both hosts via node-exporter. Now you have:
- Per-host resource graphs
- Per-container metrics
- Alert on: Pi unreachable for > 2m, NAS backup failed, disk > 85%
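The first and third of those alerts translate almost directly into Prometheus alerting rules — the instance label below assumes node-exporter on its default port 9100, so adjust to your scrape config. The "NAS backup failed" alert depends on whatever metric dockmesh exposes for job status, so it's omitted here:

```yaml
groups:
  - name: home-lab
    rules:
      - alert: PiUnreachable
        expr: up{instance="pi.lan:9100"} == 0
        for: 2m
        annotations:
          summary: "pi has been unreachable for more than 2 minutes"
      - alert: DiskAlmostFull
        expr: |
          (node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}
            / node_filesystem_size_bytes) < 0.15
        for: 10m
        annotations:
          summary: "{{ $labels.instance }} filesystem over 85% full"
```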
Connect alerts to a ntfy.sh topic (free, self-hostable) — phone notifications for home-lab events.
Home-lab servers run 24/7. A UPS is cheap insurance — a CyberPower 650VA for ~€80 protects against the blackouts and brownouts that can corrupt your SQLite and PostgreSQL databases.
What I’d NOT do
- Expose dockmesh UI publicly — use Tailscale or a bastion instead
- Run databases on SD cards (they die in months) — use SSDs on the Pi
- Skip backups because “I’ll set them up later”
- Run dockmesh on the Pi itself and the NUC as agent — put the server where the resources are
See also
- Multi-Host — how the agent model works
- Backup & Restore — full reference
- Deploy Tutorials — each app gets its own walkthrough