Add SD image pipeline, documentation overhaul, and fix module issues

- Add automatic SD image builds for Raspberry Pi via Forgejo Actions
- Enable binfmt emulation on cryodev-main for aarch64 cross-builds
- Add sd-image.nix module to cryodev-pi configuration
- Create comprehensive docs/ structure with installation guides
- Split installation docs into: first-install (server), reinstall, new-client (Pi)
- Add lib/utils.nix and apps/rebuild from synix
- Fix headplane module for new upstream API (tale/headplane)
- Fix various module issues (mailserver stateVersion, option conflicts)
- Add placeholder secrets.yaml files for both hosts
- Remove old INSTRUCTIONS.md (content moved to docs/)
steffen 2026-03-11 08:41:58 +01:00
parent a5261d8ff0
commit 5ba78886d2
44 changed files with 3570 additions and 609 deletions

docs/deployment/cd.md Normal file
@@ -0,0 +1,174 @@
# Continuous Deployment
The cryodev infrastructure uses two deployment strategies optimized for different host types.
## Overview
| Host | Strategy | Tool | Trigger |
|------|----------|------|---------|
| `cryodev-main` | Push-based | deploy-rs | Git push via Forgejo Actions |
| `cryodev-pi` | Pull-based | Comin | Periodic polling |
## Push-based Deployment (cryodev-main)
### How It Works
1. Developer pushes to `main` branch
2. Forgejo Actions workflow triggers
3. `deploy-rs` connects via SSH and deploys
### Setup
#### 1. Generate Deploy Key
```bash
ssh-keygen -t ed25519 -f deploy_key -C "forgejo-actions"
```
#### 2. Add Public Key to Server
On `cryodev-main`:
```bash
echo "PUBLIC_KEY_CONTENT" >> /root/.ssh/authorized_keys
```
#### 3. Add Private Key to Forgejo
1. Go to Repository Settings > Secrets
2. Add secret named `DEPLOY_SSH_KEY`
3. Paste the private key content
#### 4. Workflow Configuration
`.forgejo/workflows/deploy.yaml`:
```yaml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cachix/install-nix-action@v24
      - run: nix flake check
  deploy:
    needs: check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cachix/install-nix-action@v24
      - name: Setup SSH
        env:
          SSH_PRIVATE_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
        run: |
          mkdir -p ~/.ssh
          echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan cryodev-main >> ~/.ssh/known_hosts
      - name: Deploy
        run: nix run github:serokell/deploy-rs -- .#cryodev-main
```
### Rollback
deploy-rs automatically rolls back if the new configuration fails health checks.
Manual rollback:
```bash
# List generations
sudo nix-env -p /nix/var/nix/profiles/system --list-generations
# Rollback to previous
sudo nixos-rebuild switch --rollback
```
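The automatic rollback is configured on the flake side of deploy-rs. As a sketch, a node definition might look like the following (attribute names follow the deploy-rs flake schema; it assumes flake inputs named `deploy-rs` and `self`, and `magicRollback` is already the default):

```nix
{
  deploy.nodes.cryodev-main = {
    hostname = "cryodev-main";
    profiles.system = {
      user = "root";
      path = deploy-rs.lib.x86_64-linux.activate.nixos
        self.nixosConfigurations.cryodev-main;
      # Revert automatically if deploy-rs cannot reconfirm the
      # SSH connection after activating the new configuration.
      magicRollback = true;
    };
  };
}
```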
## Pull-based Deployment (cryodev-pi)
### How It Works
1. Comin periodically polls the Git repository
2. On changes, it builds and activates the new configuration
3. Works through NAT without incoming connections
### Configuration
```nix
# hosts/cryodev-pi/services/comin.nix
{
  services.comin = {
    enable = true;
    remotes = [{
      name = "origin";
      url = "https://git.cryodev.xyz/steffen/cryodev-server.git";
      branches.main.name = "main";
    }];
  };
}
```
### Monitoring
Check Comin status:
```bash
sudo systemctl status comin
sudo journalctl -u comin -f
```
Force immediate update:
```bash
sudo systemctl restart comin
```
### Troubleshooting
If Comin fails to build:
```bash
# Check logs
sudo journalctl -u comin --since "1 hour ago"
# Manual build test
cd /var/lib/comin/repo
nix build .#nixosConfigurations.cryodev-pi.config.system.build.toplevel
```
## Manual Deployment
For hosts not using automated deployment:
```bash
# Build locally
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel
# Deploy with nixos-rebuild
nixos-rebuild switch --flake .#<hostname> --target-host root@<hostname>
# Or using deploy-rs
nix run github:serokell/deploy-rs -- .#<hostname>
```
## Testing Changes
Before pushing, always verify:
```bash
# Check flake validity
nix flake check
# Build configuration (dry-run)
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel --dry-run
# Full build
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel
```

docs/deployment/dns.md Normal file
@@ -0,0 +1,93 @@
# DNS Configuration
Required DNS records for the cryodev infrastructure.
## Primary Domain (cryodev.xyz)
### A/AAAA Records
| Hostname | Type | Value | Purpose |
|----------|------|-------|---------|
| `@` | A | `<SERVER_IP>` | Main server |
| `@` | AAAA | `<SERVER_IPV6>` | Main server (IPv6) |
| `mail` | A | `<SERVER_IP>` | Mail server |
| `mail` | AAAA | `<SERVER_IPV6>` | Mail server (IPv6) |
### CNAME Records
| Hostname | Type | Value | Purpose |
|----------|------|-------|---------|
| `git` | CNAME | `@` | Forgejo |
| `headscale` | CNAME | `@` | Headscale |
| `headplane` | CNAME | `@` | Headplane |
| `netdata` | CNAME | `@` | Netdata Monitoring |
### Mail Records
| Hostname | Type | Value | Purpose |
|----------|------|-------|---------|
| `@` | MX | `10 mail.cryodev.xyz.` | Mail delivery |
| `@` | TXT | `"v=spf1 mx ~all"` | SPF |
| `_dmarc` | TXT | `"v=DMARC1; p=none"` | DMARC |
| `mail._domainkey` | TXT | `"v=DKIM1; k=rsa; p=..."` | DKIM |
## Getting the DKIM Key
After deploying the mailserver, retrieve the DKIM public key:
```bash
sudo cat /var/dkim/cryodev.xyz.mail.txt
```
Add this as a TXT record for `mail._domainkey.cryodev.xyz`.
## Verification
### Check DNS Propagation
```bash
# A record
dig A cryodev.xyz
# MX record
dig MX cryodev.xyz
# SPF
dig TXT cryodev.xyz
# DKIM
dig TXT mail._domainkey.cryodev.xyz
# DMARC
dig TXT _dmarc.cryodev.xyz
```
### Online Tools
- [MXToolbox](https://mxtoolbox.com/) - Comprehensive DNS/mail testing
- [Mail-tester](https://www.mail-tester.com/) - Email deliverability testing
- [DMARC Analyzer](https://dmarcanalyzer.com/) - DMARC record validation
## TTL Recommendations
For initial setup, use low TTLs (300 seconds) to allow quick changes.
After verification, increase to:
- A/AAAA records: 3600 (1 hour)
- CNAME records: 3600 (1 hour)
- MX records: 3600 (1 hour)
- TXT records: 3600 (1 hour)
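As a sketch, the raised TTLs in BIND zone-file syntax (record values illustrative, matching the tables above):

```
; after verification, raise TTLs to one hour
@      3600  IN  A     <SERVER_IP>
git    3600  IN  CNAME cryodev.xyz.
@      3600  IN  MX    10 mail.cryodev.xyz.
@      3600  IN  TXT   "v=spf1 mx ~all"
```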
## Firewall Requirements
Ensure these ports are open on `cryodev-main`:
| Port | Protocol | Service |
|------|----------|---------|
| 22 | TCP | SSH |
| 80 | TCP | HTTP (ACME/redirect) |
| 443 | TCP | HTTPS |
| 25 | TCP | SMTP |
| 465 | TCP | SMTPS |
| 587 | TCP | SMTP Submission |
| 993 | TCP | IMAPS |

docs/getting-started/first-install.md Normal file
@@ -0,0 +1,179 @@
# First Installation (x86_64 Server)
This guide describes the **manual installation** of a new x86_64 server (e.g. cryodev-main).
> **For Raspberry Pi:** See [Adding a new Raspberry Pi](new-client.md) - an SD image is built automatically there.
## Overview
The first installation has a chicken-and-egg problem:
- SOPS secrets are encrypted with the SSH host key
- The SSH host key is only generated during installation
- Therefore: install first, then configure secrets
## Prerequisites
- Bootable NixOS ISO ([Minimal ISO](https://nixos.org/download/#nixos-iso))
- Network connectivity
- Host configuration in `hosts/<hostname>/` (without secrets.yaml)
## Step 1: Prepare the Host Configuration
### 1.1 Copy the template
```bash
cp -r templates/generic-server hosts/neuer-server
```
### 1.2 Set the hostname
`hosts/neuer-server/networking.nix`:
```nix
{
  networking.hostName = "neuer-server";
}
```
### 1.3 Register in flake.nix
```nix
nixosConfigurations = {
  neuer-server = mkNixosConfiguration "x86_64-linux" [ ./hosts/neuer-server ];
};
```
### 1.4 Create a placeholder secrets.yaml
```bash
touch hosts/neuer-server/secrets.yaml
```
### 1.5 Temporarily disable the SOPS secrets
In `hosts/neuer-server/default.nix`, comment out all `sops.secrets.*` references or wrap them in `lib.mkIf false` until the real secrets exist.
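A minimal way to gate the secrets behind a flag (the `lib.mkIf false` approach mentioned above) might look like this; the secret name is illustrative:

```nix
{ lib, ... }:
{
  # Disabled until this host's age key is registered in .sops.yaml;
  # remove the mkIf once the real secrets.yaml exists.
  sops.secrets = lib.mkIf false {
    "tailscale/auth-key" = { };
  };
}
```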
## Step 2: Prepare the Target Machine
### 2.1 Boot the NixOS ISO
Boot from USB/CD.
### 2.2 Set a root password (for SSH)
```bash
passwd
```
### 2.3 Determine the IP address
```bash
ip a
```
### 2.4 Connect via SSH (optional)
```bash
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no nixos@<IP>
sudo -i
```
## Step 3: Run the Installation
### 3.1 Clone the repository
```bash
nix-shell -p git
git clone <GIT_REPO_URL> /tmp/nixos
cd /tmp/nixos
```
### 3.2 Adjust the disk configuration
**Important:** The disk ID must match the hardware!
```bash
# List available disks
lsblk -o NAME,SIZE,MODEL,SERIAL
ls -la /dev/disk/by-id/
```
Enter the correct disk ID in `hosts/neuer-server/disks.sh` or `disks.nix`.
### 3.3 Run the install script
```bash
bash scripts/install.sh -n neuer-server
```
The script:
1. Partitions the disk (via disko or disks.sh)
2. Generates hardware.nix (if not already present)
3. Installs NixOS
### 3.4 Reboot
```bash
umount -Rl /mnt
reboot
```
## Step 4: After the First Boot
### 4.1 Log in
Default password: `changeme`
```bash
passwd  # Change it immediately!
```
### 4.2 Convert the SSH host key to an age key
```bash
nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
```
**Note down the output!** (e.g. `age1abc123...`)
### 4.3 On the development machine: configure SOPS
Edit `.sops.yaml`:
```yaml
keys:
  - &admin_key age1e8p35795htf7twrejyugpzw0qja2v33awcw76y4gp6acnxnkzq0s935t4t
  - &neuer_server_key age1abc123... # key from above
creation_rules:
  - path_regex: hosts/neuer-server/secrets.yaml$
    key_groups:
      - age:
          - *admin_key
          - *neuer_server_key
```
### 4.4 Create the secrets
```bash
sops hosts/neuer-server/secrets.yaml
```
At minimum, enter the Tailscale auth key (see the next step).
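A minimal sketch of the file's contents, following the key layout used by the Pi hosts in this repository:

```yaml
tailscale:
  auth-key: "tskey-preauth-..." # pre-auth key created on cryodev-main
```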
### 4.5 Re-enable the SOPS references
Re-enable the `sops.secrets.*` references that were commented out in step 1.5.
### 4.6 Deploy the configuration
```bash
# Build locally and deploy via SSH
nixos-rebuild switch --flake .#neuer-server --target-host root@<IP>
```
## Next Steps
- [Set up Tailscale](../services/tailscale.md) - VPN connectivity
- [Configure Netdata](../services/netdata.md) - monitoring
- [Set up CD](../deployment/cd.md) - automated deployment

docs/getting-started/new-client.md Normal file
@@ -0,0 +1,282 @@
# Adding a New Raspberry Pi Client
This guide describes how to add a **new Raspberry Pi client** to the infrastructure.
## Overview: The Workflow
```
1. Create the configuration    ──► copy and adapt the template
2. Add to the image pipeline   ──► extend the workflow matrix
3. Push to main                ──► Forgejo builds the SD image automatically
4. Flash the image & boot      ──► write the SD card, start the Pi
5. Configure SOPS              ──► fetch the age key, create secrets
6. Final deployment            ──► enable Tailscale etc.
```
## Prerequisites
- SSH access to cryodev-main (for the Tailscale auth key)
- Development machine with repository access
- SD card (at least 8 GB)
---
## Step 1: Generate a Tailscale Auth Key
**On cryodev-main** (via SSH):
```bash
sudo headscale preauthkeys create --expiration 99y --reusable --user default
```
**Note down the output!** (e.g. `tskey-preauth-abc123...`)
---
## Step 2: Create the Host Configuration
### 2.1 Copy the template
```bash
cp -r templates/raspberry-pi hosts/neuer-pi
```
### 2.2 Set the hostname
`hosts/neuer-pi/networking.nix`:
```nix
{
  networking.hostName = "neuer-pi";
}
```
### 2.3 Register in flake.nix
```nix
nixosConfigurations = {
  # ... existing hosts ...
  neuer-pi = mkNixosConfiguration "aarch64-linux" [ ./hosts/neuer-pi ];
};
```
### 2.4 Add to constants.nix
```nix
{
  hosts = {
    # ... existing hosts ...
    neuer-pi = {
      ip = "100.64.0.X"; # assigned by Headscale
    };
  };
}
```
### 2.5 Create a placeholder secrets.yaml
```bash
touch hosts/neuer-pi/secrets.yaml
```
### 2.6 Temporarily disable SOPS
Comment out the `sops.secrets.*` references in `hosts/neuer-pi/default.nix` so that the image can be built without secrets.
---
## Step 3: Add to the Image Pipeline
Edit `.forgejo/workflows/build-pi-image.yml`:
```yaml
jobs:
  build-pi-images:
    strategy:
      matrix:
        # Add the new host here:
        host: [cryodev-pi, neuer-pi]
```
---
## Step 4: Push and Let the Image Build
```bash
git add .
git commit -m "Add neuer-pi host configuration"
git push
```
The Forgejo workflow now automatically builds an SD image for `neuer-pi`.
**Wait** until the workflow finishes (30-60 minutes). Check the status at:
`https://git.cryodev.xyz/steffen/cryodev-server/actions`
---
## Step 5: Flash the Image
### 5.1 Download the image
After a successful build, under **Releases**:
```bash
wget https://git.cryodev.xyz/steffen/cryodev-server/releases/latest/download/neuer-pi-sd-image.img.zst
```
### 5.2 Decompress
```bash
zstd -d neuer-pi-sd-image.img.zst -o neuer-pi.img
```
### 5.3 Write to the SD card
**Warning:** Replace `/dev/sdX` with the correct device!
```bash
lsblk  # find the correct device
sudo dd if=neuer-pi.img of=/dev/sdX bs=4M conv=fsync status=progress
```
### 5.4 Boot
1. Insert the SD card into the Raspberry Pi
2. Connect Ethernet
3. Connect power
4. Wait until it has booted (about 2 minutes)
---
## Step 6: Configure SOPS
### 6.1 Find the IP address
The Pi should get an IP via DHCP. Check your router or scan the network:
```bash
nmap -sn 192.168.1.0/24 | grep -B2 "Raspberry"
```
### 6.2 Connect via SSH
```bash
ssh steffen@<IP>  # or the configured user
```
For the default password, see `hosts/neuer-pi/users.nix`.
### 6.3 Determine the age key
On the Pi:
```bash
nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
```
**Note down the output!** (e.g. `age1xyz...`)
### 6.4 Update .sops.yaml
On the development machine:
```yaml
keys:
  - &admin_key age1e8p35795htf7twrejyugpzw0qja2v33awcw76y4gp6acnxnkzq0s935t4t
  - &neuer_pi_key age1xyz... # the new key
creation_rules:
  # ... existing rules ...
  - path_regex: hosts/neuer-pi/secrets.yaml$
    key_groups:
      - age:
          - *admin_key
          - *neuer_pi_key
```
### 6.5 Create the secrets
```bash
sops hosts/neuer-pi/secrets.yaml
```
Contents:
```yaml
tailscale:
  auth-key: "tskey-preauth-abc123..." # key from step 1
netdata:
  stream:
    child-uuid: "..." # uuidgen
```
### 6.6 Enable the SOPS references
Re-enable the `sops.secrets.*` references that were commented out in step 2.6.
---
## Step 7: Final Deployment
```bash
git add .
git commit -m "Configure SOPS secrets for neuer-pi"
git push
```
Since Comin runs on the Pi, it will pull the new configuration automatically.
Alternatively, deploy manually:
```bash
nixos-rebuild switch --flake .#neuer-pi --target-host root@<IP>
```
---
## Step 8: Verify
### Tailscale connection
```bash
# On the Pi
tailscale status
# On cryodev-main
sudo headscale nodes list
```
### Netdata streaming
Check whether the new client appears in the Netdata dashboard:
`https://netdata.cryodev.xyz`
---
## Checklist
- [ ] Tailscale auth key generated on cryodev-main
- [ ] Host configuration created (template, flake.nix, constants.nix)
- [ ] Host added to the workflow matrix
- [ ] Pushed and waited for the image build
- [ ] SD card flashed and Pi booted
- [ ] Age key determined and added to .sops.yaml
- [ ] secrets.yaml created (Tailscale key, Netdata UUID)
- [ ] SOPS references enabled and deployed
- [ ] Tailscale connection works
- [ ] Netdata streaming works

docs/getting-started/prerequisites.md Normal file
@@ -0,0 +1,63 @@
# Prerequisites
## Required Tools
Ensure you have the following tools installed on your local machine:
| Tool | Purpose |
|------|---------|
| `nix` | Package manager with flakes enabled |
| `sops` | Secret encryption/decryption |
| `age` | Encryption backend for sops |
| `ssh` | Remote access |
### Installing Nix
Follow the [official Nix installation guide](https://nixos.org/download/).
Enable flakes by adding to `~/.config/nix/nix.conf`:
```
experimental-features = nix-command flakes
```
### Installing Other Tools
With Nix:
```bash
nix-shell -p sops age
```
Or install globally via home-manager or system configuration.
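If you prefer declarative installs over ad-hoc `nix-shell`, a home-manager sketch (assuming a standard home-manager setup; package names as in nixpkgs):

```nix
{ pkgs, ... }:
{
  # Tools needed for working with this repository.
  home.packages = with pkgs; [
    sops
    age
    ssh-to-age # convert SSH host keys to age keys
  ];
}
```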
## Repository Access
Clone the repository:
```bash
git clone https://git.cryodev.xyz/steffen/cryodev-server.git
cd cryodev-server
```
## Development Shell
Enter the development shell with all required tools:
```bash
nix develop
```
## Verifying Setup
Check that the flake is valid:
```bash
nix flake check
```
Build a host configuration (dry run):
```bash
nix build .#nixosConfigurations.cryodev-main.config.system.build.toplevel --dry-run
```

docs/getting-started/reinstall.md Normal file
@@ -0,0 +1,183 @@
# Reinstallation
This guide describes **reinstalling** an existing host, e.g. after a hardware change or when recovering from problems.
## Difference from the First Installation
| Aspect | First installation | Reinstallation |
|--------|--------------------|----------------|
| SOPS secrets | Do not exist yet | Already configured |
| SSH host key | Newly generated | **Must be restored!** |
| Disk IDs | Determined fresh | Often changed (new hardware) |
| secrets.yaml | Will be created | Already present |
## Important: The SSH Host Key Problem
A reinstallation generates a **new SSH host key**, which no longer matches the age key in `.sops.yaml`!
### Options
**Option A: Back up and restore the old host key** (recommended)
**Option B: Generate a new key and update SOPS**
## Prerequisites
- Backup of the old SSH host key (for option A)
- Access to `.sops.yaml` and the admin age keys
- Bootable NixOS ISO
## Step 1: Preparation (before the installation)
### 1.1 Back up the old SSH host key (option A)
If the old host is still running:
```bash
# On the old host
sudo cat /etc/ssh/ssh_host_ed25519_key > ~/ssh_host_ed25519_key.backup
sudo cat /etc/ssh/ssh_host_ed25519_key.pub > ~/ssh_host_ed25519_key.pub.backup
```
Copy the files securely to the development machine.
### 1.2 Determine the disk IDs
**With new hardware**, the disk IDs change!
```bash
# In the NixOS live system
lsblk -o NAME,SIZE,MODEL,SERIAL
ls -la /dev/disk/by-id/
```
Enter the new disk ID in `hosts/<hostname>/disks.sh` or `disks.nix`:
```bash
# Example disks.sh
DISK="/dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_XXXXX"
```
## Step 2: Run the Installation
### 2.1 Boot the NixOS ISO
Boot from USB/CD, set a root password, connect via SSH.
### 2.2 Clone the repository
```bash
sudo -i
nix-shell -p git
git clone <GIT_REPO_URL> /tmp/nixos
cd /tmp/nixos
```
### 2.3 Check the disk configuration
```bash
# Show the current disk IDs
ls -la /dev/disk/by-id/
# Compare with the configuration
grep DISK hosts/<hostname>/disks.sh
```
**If necessary:** adjust the disk ID in the configuration.
### 2.4 Run the install script
```bash
bash scripts/install.sh -n <hostname>
```
### 2.5 Restore the SSH host key (option A)
**Before rebooting!**
```bash
# Restore the host key from the backup
cp /path/to/ssh_host_ed25519_key.backup /mnt/etc/ssh/ssh_host_ed25519_key
cp /path/to/ssh_host_ed25519_key.pub.backup /mnt/etc/ssh/ssh_host_ed25519_key.pub
chmod 600 /mnt/etc/ssh/ssh_host_ed25519_key
chmod 644 /mnt/etc/ssh/ssh_host_ed25519_key.pub
```
### 2.6 Reboot
```bash
umount -Rl /mnt
reboot
```
## Step 3: After the Reboot
### With option A (key restored)
The SOPS secrets should work automatically. Test:
```bash
sudo cat /run/secrets/tailscale/auth-key
```
### With option B (new key)
The host cannot decrypt the secrets yet. Configure the new key:
```bash
# Determine the new age key
nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
```
On the development machine:
```bash
# Update .sops.yaml with the new key
vim .sops.yaml
# Re-encrypt the secrets with the new key
sops updatekeys hosts/<hostname>/secrets.yaml
```
Then redeploy the configuration:
```bash
nixos-rebuild switch --flake .#<hostname> --target-host root@<IP>
```
## Common Problems
### "No secret key available"
SOPS cannot decrypt the secrets. Cause:
- The SSH host key does not match the age key in `.sops.yaml`
Solution: follow option B (configure a new key).
### "Device not found" during partitioning
The disk ID in `disks.sh`/`disks.nix` is wrong.
```bash
# Find the correct ID
ls -la /dev/disk/by-id/
```
### Outdated hardware config
With new hardware, `hardware.nix` must be regenerated:
```bash
# The install script regenerates it automatically if the file is missing
rm hosts/<hostname>/hardware.nix
bash scripts/install.sh -n <hostname>
```
## Checklist
- [ ] Old SSH host key backed up (if possible)
- [ ] Disk IDs in the configuration checked/updated
- [ ] Installation completed
- [ ] SSH host key restored OR new key configured in SOPS
- [ ] Secrets work (`sudo cat /run/secrets/...`)
- [ ] Tailscale connected (`tailscale status`)

docs/getting-started/sd-image.md Normal file
@@ -0,0 +1,116 @@
# SD Card Images for Raspberry Pi
The repository automatically builds SD card images for all configured Raspberry Pi hosts.
## Automatic Build
On changes to `main`, images for all Pi hosts are built automatically and published as a release.
**Download:** [Releases on Forgejo](https://git.cryodev.xyz/steffen/cryodev-server/releases)
## Available Images
| Host | Image name |
|------|------------|
| `cryodev-pi` | `cryodev-pi-sd-image.img.zst` |
New hosts are built automatically once they are added to the workflow matrix.
## Flashing an Image
### 1. Download
```bash
wget https://git.cryodev.xyz/.../releases/latest/download/<hostname>-sd-image.img.zst
wget https://git.cryodev.xyz/.../releases/latest/download/<hostname>-sd-image.img.zst.sha256
# Verify the checksum
sha256sum -c <hostname>-sd-image.img.zst.sha256
```
### 2. Decompress
```bash
zstd -d <hostname>-sd-image.img.zst -o <hostname>.img
```
### 3. Write to the SD card
```bash
# Find the correct device
lsblk
# Write (WARNING: pick the correct device!)
sudo dd if=<hostname>.img of=/dev/sdX bs=4M conv=fsync status=progress
```
Alternatively, use `balenaEtcher` or `Raspberry Pi Imager`.
## What Is in the Image?
- A complete NixOS installation for the specific host
- All configured services (except secrets)
- SSH server enabled
- Automatic root partition expansion on first boot
- Comin for automatic updates
## What Is Missing?
**SOPS secrets** cannot be included in the image (chicken-and-egg problem with the SSH host key).
After the first boot:
1. Fetch the age key from the Pi
2. Update `.sops.yaml`
3. Create `secrets.yaml`
4. Deploy the configuration
See [Adding a new client](new-client.md) for the full guide.
## Adding a New Host to the Pipeline
1. Create the host configuration in `hosts/<hostname>/`
2. Add it to the matrix in `.forgejo/workflows/build-pi-image.yml`:
```yaml
matrix:
  host: [cryodev-pi, neuer-host] # <- add it here
```
3. Push to `main` → the image is built automatically
## Building Manually
```bash
# On aarch64 (e.g. another Pi)
nix build .#nixosConfigurations.<hostname>.config.system.build.sdImage
# On x86_64 with QEMU emulation (slow)
nix build .#nixosConfigurations.<hostname>.config.system.build.sdImage \
  --extra-platforms aarch64-linux
```
Prerequisite on x86_64:
```nix
{
  boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
}
```
## Troubleshooting
### The workflow fails
- Check whether `sd-image.nix` is imported in the host configuration
- Check whether binfmt is enabled on cryodev-main
### The image does not boot
- Was the SD card written correctly?
- Try a different SD card
- Check the power supply (at least 3 A for a Pi 4)
### No network
- Check the Ethernet cable
- Is there a DHCP server on the network?
docs/index.md Normal file
@@ -0,0 +1,94 @@
# Cryodev NixOS Configuration Documentation
Welcome to the documentation for the **cryodev** NixOS infrastructure.
## Quick Links
### Getting Started
- [Prerequisites](getting-started/prerequisites.md) - required tools
- [Adding a new Raspberry Pi](getting-started/new-client.md) - complete workflow for new clients
- [SD image reference](getting-started/sd-image.md) - details on the image build
- [First installation (server)](getting-started/first-install.md) - bootstrap for x86_64 hosts
- [Reinstallation](getting-started/reinstall.md) - reinstall after hardware changes
### Services
- [SOPS Secrets](services/sops.md) - secret management with sops-nix
- [Headscale](services/headscale.md) - self-hosted Tailscale server
- [Headplane](services/headplane.md) - web UI for Headscale
- [Tailscale](services/tailscale.md) - mesh VPN client
- [Mailserver](services/mailserver.md) - email stack (Postfix/Dovecot)
- [Forgejo](services/forgejo.md) - Git hosting with CI/CD
- [Netdata](services/netdata.md) - monitoring and alerting
### Deployment
- [Continuous Deployment](deployment/cd.md) - push- and pull-based deployment
- [DNS configuration](deployment/dns.md) - required DNS records
## Architecture
```
        Internet
            |
       cryodev.xyz
            |
  +-------------------+
  |   cryodev-main    |
  |  (x86_64 Server)  |
  +-------------------+
  | - Headscale       |
  | - Headplane       |
  | - Forgejo         |
  | - Mailserver      |
  | - Netdata Parent  |
  +-------------------+
            |
   Tailscale Mesh VPN
            |
  +-------------------+
  |    cryodev-pi     |
  |  (Raspberry Pi 4) |
  +-------------------+
  | - Tailscale       |
  | - Netdata Child   |
  | - Comin (GitOps)  |
  +-------------------+
```
## Installation Scenarios
| Scenario | Description | Guide |
|----------|-------------|-------|
| **New Raspberry Pi** | Create config → build image → flash | [new-client.md](getting-started/new-client.md) |
| **First installation (server)** | x86_64 host, manual installation | [first-install.md](getting-started/first-install.md) |
| **Reinstallation** | Existing host, new hardware | [reinstall.md](getting-started/reinstall.md) |
For Raspberry Pi: [SD image reference](getting-started/sd-image.md)
## Directory Structure
```
.
├── flake.nix        # Entry point, inputs and outputs
├── constants.nix    # Central config (domains, IPs, ports)
├── hosts/           # Host-specific configurations
│   ├── cryodev-main/
│   └── cryodev-pi/
├── modules/         # Reusable NixOS modules
│   └── nixos/
├── pkgs/            # Custom packages
├── overlays/        # Nixpkgs overlays
├── templates/       # Templates for new hosts
├── scripts/         # Helper scripts (install.sh)
├── apps/            # Nix apps (rebuild)
└── lib/             # Helper functions (utils.nix)
```
## Deployment Strategies
| Host | Strategy | Tool | Description |
|------|----------|------|-------------|
| `cryodev-main` | Push-based | deploy-rs via Forgejo Actions | Immediate updates on push |
| `cryodev-pi` | Pull-based | Comin | Polls the repository for changes |

docs/services/forgejo.md Normal file
@@ -0,0 +1,149 @@
# Forgejo
Forgejo is a self-hosted Git service (fork of Gitea) with built-in CI/CD Actions.
## References
- [Forgejo Documentation](https://forgejo.org/docs/)
- [Forgejo Actions](https://forgejo.org/docs/latest/user/actions/)
## Setup
### DNS
Set a CNAME record for `git.cryodev.xyz` pointing to your main domain.
### Configuration
```nix
# hosts/cryodev-main/services/forgejo.nix
{ config, ... }:
{
  services.forgejo = {
    enable = true;
    settings = {
      server = {
        DOMAIN = "git.cryodev.xyz";
        ROOT_URL = "https://git.cryodev.xyz";
      };
      mailer = {
        ENABLED = true;
        FROM = "forgejo@cryodev.xyz";
      };
    };
  };
}
```
## Forgejo Runner
The runner executes CI/CD pipelines defined in `.forgejo/workflows/`.
### Get Runner Token
1. Go to Forgejo Admin Panel
2. Navigate to Actions > Runners
3. Create a new runner and copy the token
### Add to Secrets
```bash
sops hosts/cryodev-main/secrets.yaml
```
```yaml
forgejo-runner:
  token: "your-runner-token"
```
### Configuration
```nix
{ config, ... }:
{
  sops.secrets."forgejo-runner/token" = { };
  services.gitea-actions-runner = {
    instances.default = {
      enable = true;
      url = "https://git.cryodev.xyz";
      tokenFile = config.sops.secrets."forgejo-runner/token".path;
      labels = [ "ubuntu-latest:docker://node:20" ];
    };
  };
}
```
## CI/CD Workflows
### deploy-rs Workflow
`.forgejo/workflows/deploy.yaml`:
```yaml
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Nix
        uses: cachix/install-nix-action@v24
      - name: Deploy
        env:
          SSH_PRIVATE_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
        run: |
          mkdir -p ~/.ssh
          echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          nix run .#deploy
```
## Administration
### Create Admin User
```bash
sudo -u forgejo forgejo admin user create \
--username admin \
--password changeme \
--email admin@cryodev.xyz \
--admin
```
### Reset User Password
```bash
sudo -u forgejo forgejo admin user change-password \
--username USER \
--password NEWPASS
```
## Troubleshooting
### Check Service Status
```bash
sudo systemctl status forgejo
sudo systemctl status gitea-runner-default
```
### View Logs
```bash
sudo journalctl -u forgejo -f
sudo journalctl -u gitea-runner-default -f
```
### Database Issues
Forgejo uses SQLite by default. Database location:
```bash
ls -la /var/lib/forgejo/data/
```

docs/services/headplane.md Normal file
@@ -0,0 +1,107 @@
# Headplane
Headplane is a web-based admin interface for Headscale.
## References
- [GitHub](https://github.com/tale/headplane)
## Setup
### DNS
Set a CNAME record for `headplane.cryodev.xyz` pointing to your main domain.
### Generate Secrets
**Cookie Secret** (for session management):
```bash
nix-shell -p openssl --run 'openssl rand -hex 16'
```
**Agent Pre-Auth Key** (for Headplane's built-in agent):
```bash
# First, create a dedicated user
sudo headscale users create headplane-agent
# Then create a reusable pre-auth key
sudo headscale preauthkeys create --expiration 99y --reusable --user headplane-agent
```
### Add to Secrets
Edit `hosts/cryodev-main/secrets.yaml`:
```bash
sops hosts/cryodev-main/secrets.yaml
```
```yaml
headplane:
  cookie_secret: "your-generated-hex-string"
  agent_pre_authkey: "your-preauth-key"
```
### Configuration
```nix
# hosts/cryodev-main/services/headplane.nix
{ config, ... }:
{
  sops.secrets."headplane/cookie_secret" = { };
  sops.secrets."headplane/agent_pre_authkey" = { };
  services.headplane = {
    enable = true;
    settings = {
      server = {
        cookie_secret_file = config.sops.secrets."headplane/cookie_secret".path;
      };
      headscale = {
        url = "https://headscale.cryodev.xyz";
      };
      agent = {
        enable = true;
        authkey_file = config.sops.secrets."headplane/agent_pre_authkey".path;
      };
    };
  };
}
```
## Usage
Access Headplane at `https://headplane.cryodev.xyz`.
### Features
- View and manage users
- View connected nodes
- Manage routes and exit nodes
- View pre-auth keys
## Troubleshooting
### Check Service Status
```bash
sudo systemctl status headplane
```
### View Logs
```bash
sudo journalctl -u headplane -f
```
### Agent Not Connecting
Verify the agent pre-auth key is valid:
```bash
sudo headscale preauthkeys list --user headplane-agent
```
If expired, create a new one and update the secrets file.

docs/services/headscale.md Normal file
@@ -0,0 +1,116 @@
# Headscale
Headscale is an open-source, self-hosted implementation of the Tailscale control server.
## References
- [Website](https://headscale.net/stable/)
- [GitHub](https://github.com/juanfont/headscale)
- [Example configuration](https://github.com/juanfont/headscale/blob/main/config-example.yaml)
## Setup
### DNS
Set a CNAME record for `headscale.cryodev.xyz` pointing to your main domain.
### Configuration
```nix
# hosts/cryodev-main/services/headscale.nix
{
  services.headscale = {
    enable = true;
    openFirewall = true;
  };
}
```
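The module above relies on defaults; in practice you will usually pin at least the public URL. A sketch (option names follow the NixOS headscale module's `settings` passthrough to headscale's own config; verify against your nixpkgs revision):

```nix
{
  services.headscale.settings = {
    # URL clients use to reach the control server.
    server_url = "https://headscale.cryodev.xyz";
    dns.base_domain = "ts.cryodev.xyz"; # illustrative MagicDNS domain
  };
}
```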
## Usage
### Create a User
```bash
sudo headscale users create <USERNAME>
```
### List Users
```bash
sudo headscale users list
```
### Create Pre-Auth Key
```bash
sudo headscale preauthkeys create --expiration 99y --reusable --user <USER_ID>
```
The pre-auth key is used by clients to automatically authenticate and join the tailnet.
### List Nodes
```bash
sudo headscale nodes list
```
### Delete a Node
```bash
sudo headscale nodes delete -i <NODE_ID>
```
### Rename a Node
```bash
sudo headscale nodes rename -i <NODE_ID> new-name
```
## ACL Configuration
Access Control Lists define which nodes can communicate with each other.
### Validate ACL File
```bash
sudo headscale policy check --file /path/to/acl.hujson
```
### Example ACL
```json
{
"acls": [
{
"action": "accept",
"src": ["*"],
"dst": ["*:*"]
}
]
}
```
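To actually load a policy file, it can be wired into the NixOS module along these lines (a sketch; `settings.policy.path` mirrors Headscale's `policy.path` config key and should be checked against your module version):

```nix
# hosts/cryodev-main/services/headscale.nix (sketch)
{
  services.headscale = {
    enable = true;
    # Assumption: the module maps `settings` onto headscale's config.yaml
    settings.policy.path = ./acl.hujson;
  };
}
```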
## Troubleshooting
### Check Service Status
```bash
sudo systemctl status headscale
```
### View Logs
```bash
sudo journalctl -u headscale -f
```
### Test DERP Connectivity
```bash
curl -I https://headscale.cryodev.xyz/derp
```
## Integration
- [Headplane](headplane.md) - Web UI for managing Headscale
- [Tailscale Client](tailscale.md) - Connect clients to Headscale

docs/services/mailserver.md Normal file
# Mailserver
NixOS mailserver module providing a complete email stack with Postfix and Dovecot.
## References
- [Simple NixOS Mailserver](https://gitlab.com/simple-nixos-mailserver/nixos-mailserver)
## Setup
### DNS Records
| Type | Hostname | Value |
|------|----------|-------|
| A | `mail` | `<SERVER_IP>` |
| AAAA | `mail` | `<SERVER_IPV6>` |
| MX | `@` | `10 mail.cryodev.xyz.` |
| TXT | `@` | `"v=spf1 mx ~all"` |
| TXT | `_dmarc` | `"v=DMARC1; p=none"` |
DKIM records are generated automatically after first deployment.
### Generate Password Hashes
```bash
nix-shell -p mkpasswd --run 'mkpasswd -sm bcrypt'
```
### Add to Secrets
```bash
sops hosts/cryodev-main/secrets.yaml
```
```yaml
mailserver:
accounts:
admin: "$2y$05$..."
forgejo: "$2y$05$..."
```
### Configuration
```nix
# hosts/cryodev-main/services/mailserver.nix
{ config, ... }:
{
sops.secrets."mailserver/accounts/admin" = { };
sops.secrets."mailserver/accounts/forgejo" = { };
mailserver = {
enable = true;
fqdn = "mail.cryodev.xyz";
domains = [ "cryodev.xyz" ];
loginAccounts = {
"admin@cryodev.xyz" = {
hashedPasswordFile = config.sops.secrets."mailserver/accounts/admin".path;
};
"forgejo@cryodev.xyz" = {
hashedPasswordFile = config.sops.secrets."mailserver/accounts/forgejo".path;
sendOnly = true;
};
};
};
}
```
## DKIM Setup
After first deployment, get the DKIM public key:
```bash
sudo cat /var/dkim/cryodev.xyz.mail.txt
```
Add this as a TXT record:
| Type | Hostname | Value |
|------|----------|-------|
| TXT | `mail._domainkey` | `v=DKIM1; k=rsa; p=...` |
## Testing
### Send Test Email
```bash
echo "Test" | mail -s "Test Subject" recipient@example.com
```
### Check Mail Queue
```bash
sudo postqueue -p
```
### View Logs
```bash
sudo journalctl -u postfix -f
sudo journalctl -u dovecot2 -f
```
### Test SMTP
```bash
openssl s_client -connect mail.cryodev.xyz:587 -starttls smtp
```
### Verify DNS Records
- [MXToolbox](https://mxtoolbox.com/)
- [Mail-tester](https://www.mail-tester.com/)
## Troubleshooting
### Emails Not Sending
Check Postfix status:
```bash
sudo systemctl status postfix
```
Check firewall (ports 25, 465, 587 must be open):
```bash
sudo iptables -L -n | grep -E 'dpt:(25|465|587)'
```
### DKIM Failing
Verify the DNS record matches the generated key:
```bash
dig TXT mail._domainkey.cryodev.xyz
```
### SPF Failing
Verify SPF record:
```bash
dig TXT cryodev.xyz
```
Should return: `"v=spf1 mx ~all"`

docs/services/netdata.md Normal file
# Netdata Monitoring
Netdata provides real-time performance monitoring with parent/child streaming.
## Architecture
```
┌─────────────────┐   Stream over    ┌─────────────────┐
│   cryodev-pi    │ ───────────────> │  cryodev-main   │
│  (Child Node)   │  Tailscale VPN   │  (Parent Node)  │
└─────────────────┘                  └─────────────────┘
                                             │
                                             v
                              https://netdata.cryodev.xyz
```
## References
- [Netdata Documentation](https://learn.netdata.cloud/)
- [Streaming Configuration](https://learn.netdata.cloud/docs/streaming/streaming-configuration-reference)
## Parent Node (cryodev-main)
### DNS
Set a CNAME record for `netdata.cryodev.xyz` pointing to your main domain.
### Generate Stream API Key
```bash
uuidgen
```
### Configuration
```nix
# hosts/cryodev-main/services/netdata.nix
{ config, ... }:
{
sops.secrets."netdata/stream-api-key" = { };
sops.templates."netdata-stream.conf" = {
content = ''
[${config.sops.placeholder."netdata/stream-api-key"}]
enabled = yes
default history = 3600
default memory mode = ram
health enabled by default = auto
allow from = *
'';
owner = "netdata";
};
services.netdata = {
enable = true;
configDir."stream.conf" = config.sops.templates."netdata-stream.conf".path;
};
}
```
## Child Node (cryodev-pi)
### Generate Child UUID
```bash
uuidgen
```
### Add to Secrets
```bash
sops hosts/cryodev-pi/secrets.yaml
```
```yaml
netdata:
stream:
child-uuid: "your-generated-uuid"
```
Note: The stream API key must match the parent's key. You can either:
1. Share the same secret between hosts (complex with SOPS)
2. Hardcode a known API key in both configurations
### Configuration
```nix
# hosts/cryodev-pi/services/netdata.nix
{ config, constants, ... }:
{
sops.secrets."netdata/stream/child-uuid" = { };
sops.templates."netdata-stream.conf" = {
content = ''
[stream]
enabled = yes
destination = ${constants.hosts.cryodev-main.ip}:19999
api key = YOUR_STREAM_API_KEY
send charts matching = *
'';
owner = "netdata";
};
services.netdata = {
enable = true;
configDir."stream.conf" = config.sops.templates."netdata-stream.conf".path;
};
}
```
## Email Alerts
Configure Netdata to send alerts via the mailserver:
```nix
{ pkgs, ... }:
{
services.netdata.configDir."health_alarm_notify.conf" = pkgs.writeText "notify.conf" ''
SEND_EMAIL="YES"
EMAIL_SENDER="netdata@cryodev.xyz"
DEFAULT_RECIPIENT_EMAIL="admin@cryodev.xyz"
'';
}
```
## Usage
### Access Dashboard
Open `https://netdata.cryodev.xyz` in your browser.
### View Child Nodes
Child nodes appear in the left sidebar under "Nodes".
### Check Streaming Status
On parent:
```bash
curl -s http://localhost:19999/api/v1/info | jq '.hosts'
```
On child:
```bash
curl -s http://localhost:19999/api/v1/info | jq '.streaming'
```
## Troubleshooting
### Check Service Status
```bash
sudo systemctl status netdata
```
### View Logs
```bash
sudo journalctl -u netdata -f
```
### Child Not Streaming
1. Verify network connectivity:
```bash
tailscale ping cryodev-main
nc -zv <parent-ip> 19999
```
2. Check API key matches between parent and child
3. Verify firewall allows port 19999 on parent
### High Memory Usage
Adjust history settings in `netdata.conf`:
```ini
[global]
history = 1800 # seconds to retain
memory mode = ram
```

docs/services/sops.md Normal file
# SOPS Secret Management
Atomic secret provisioning for NixOS using [sops-nix](https://github.com/Mic92/sops-nix).
## Overview
Secrets are encrypted with `age` using SSH host keys, ensuring:
- No plaintext secrets in the repository
- Secrets are decrypted at activation time
- Each host can only decrypt its own secrets
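For illustration, an encrypted secrets file keeps its YAML structure readable while every value becomes ciphertext, with a `sops` metadata block appended (values abbreviated here):

```yaml
tailscale:
    auth-key: ENC[AES256_GCM,data:...,iv:...,tag:...,type:str]
sops:
    age:
        - recipient: age1...
          enc: |
            -----BEGIN AGE ENCRYPTED FILE-----
            ...
    lastmodified: "2026-03-11T00:00:00Z"
```

This is why the files are safe to commit: keys and structure are visible for review, values are not.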
## Setup
### 1. Get Host's Age Public Key
After a host is installed, extract its age key from the SSH host key:
```bash
nix-shell -p ssh-to-age --run 'ssh-keyscan -t ed25519 <HOST_IP> | ssh-to-age'
```
Or locally on the host:
```bash
nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
```
### 2. Configure .sops.yaml
Add the host key to `.sops.yaml`:
```yaml
keys:
- &admin_key age1e8p35795htf7twrejyugpzw0qja2v33awcw76y4gp6acnxnkzq0s935t4t
- &main_key age1... # cryodev-main
- &pi_key age1... # cryodev-pi
creation_rules:
- path_regex: hosts/cryodev-main/secrets.yaml$
key_groups:
- age:
- *admin_key
- *main_key
- path_regex: hosts/cryodev-pi/secrets.yaml$
key_groups:
- age:
- *admin_key
- *pi_key
```
### 3. Create Secrets File
```bash
sops hosts/<hostname>/secrets.yaml
```
This opens your editor. Add secrets in YAML format:
```yaml
tailscale:
auth-key: "tskey-..."
some-service:
password: "secret123"
```
## Usage in Modules
### Declaring Secrets
```nix
{ config, ... }:
{
sops.secrets.my-secret = {
# Optional: set owner/group
owner = "myservice";
group = "myservice";
};
}
```
### Using Secrets
Reference the secret path in service configuration:
```nix
{
services.myservice = {
passwordFile = config.sops.secrets.my-secret.path;
};
}
```
### Using Templates
For secrets that need to be embedded in config files:
```nix
{
sops.secrets."netdata/stream-api-key" = { };
sops.templates."netdata-stream.conf" = {
content = ''
[stream]
enabled = yes
api key = ${config.sops.placeholder."netdata/stream-api-key"}
'';
owner = "netdata";
};
services.netdata.configDir."stream.conf" =
config.sops.templates."netdata-stream.conf".path;
}
```
## Common Secrets
### cryodev-main
```yaml
mailserver:
accounts:
forgejo: "$2y$05$..." # bcrypt hash
admin: "$2y$05$..."
forgejo-runner:
token: "..."
headplane:
cookie_secret: "..." # openssl rand -hex 16
agent_pre_authkey: "..." # headscale preauthkey
tailscale:
auth-key: "tskey-..."
```
### cryodev-pi
```yaml
tailscale:
auth-key: "tskey-..."
netdata:
stream:
child-uuid: "..." # uuidgen
```
## Generating Secret Values
| Secret | Command |
|--------|---------|
| Mailserver password | `nix-shell -p mkpasswd --run 'mkpasswd -sm bcrypt'` |
| Random hex token | `nix-shell -p openssl --run 'openssl rand -hex 16'` |
| UUID | `uuidgen` |
| Tailscale preauth | `sudo headscale preauthkeys create --expiration 99y --reusable --user default` |
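As a quick sanity check of the random generators above (using only `openssl`, which the table already assumes): a 16-byte hex token is always 32 characters long.

```bash
# Generate a token the same way as the table's "random hex token" row
token=$(openssl rand -hex 16)
echo "$token"        # random, e.g. 9f2c...
echo "${#token}"     # prints 32: 16 bytes -> 32 hex characters
```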
## Updating Keys
After modifying `.sops.yaml`, update existing secrets files:
```bash
sops --config .sops.yaml updatekeys hosts/<hostname>/secrets.yaml
```
## Troubleshooting
### "No matching keys found"
Ensure the host's age key is in `.sops.yaml` and you've run `updatekeys`.
### Secret not decrypting on host
Check that `/etc/ssh/ssh_host_ed25519_key` exists and matches the public key in `.sops.yaml`.
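A quick way to compare the two on the affected host (the derived key must appear verbatim in `.sops.yaml`):

```bash
# Derive the age key from the host's SSH key, then look for it in .sops.yaml
nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
grep 'age1' .sops.yaml
```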

docs/services/tailscale.md Normal file
# Tailscale Client
Tailscale clients connect to the self-hosted Headscale server to join the mesh VPN.
## References
- [Tailscale Documentation](https://tailscale.com/kb)
- [Headscale: Running headscale on Linux](https://headscale.net/running-headscale-linux/)
## Setup
### Generate Auth Key
On the Headscale server (cryodev-main):
```bash
sudo headscale preauthkeys create --expiration 99y --reusable --user default
```
### Add to Secrets
```bash
sops hosts/<hostname>/secrets.yaml
```
```yaml
tailscale:
auth-key: "your-preauth-key"
```
### Configuration
```nix
# In your host configuration
{ config, ... }:
{
sops.secrets."tailscale/auth-key" = { };
services.tailscale = {
enable = true;
authKeyFile = config.sops.secrets."tailscale/auth-key".path;
extraUpFlags = [
"--login-server=https://headscale.cryodev.xyz"
];
};
}
```
## Usage
### Check Status
```bash
tailscale status
```
### View IP Address
```bash
tailscale ip
```
### Ping Another Node
```bash
tailscale ping <hostname>
```
### SSH to Another Node
```bash
ssh user@<hostname>
# or using Tailscale IP
ssh user@100.64.0.X
```
## MagicDNS
With Headscale's MagicDNS enabled, you can reach nodes by hostname:
```bash
ping cryodev-pi
ssh steffen@cryodev-main
```
## Troubleshooting
### Check Service Status
```bash
sudo systemctl status tailscaled
```
### View Logs
```bash
sudo journalctl -u tailscaled -f
```
### Re-authenticate
If the node is not connecting:
```bash
sudo tailscale up --login-server=https://headscale.cryodev.xyz --force-reauth
```
### Node Not Appearing in Headscale
Check the auth key is valid:
```bash
# On Headscale server
sudo headscale preauthkeys list --user default
```
Verify the login server URL is correct in the client configuration.
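One way to inspect which control server the running client is actually using (assumes `jq` is installed; `tailscale debug prefs` is an unofficial debug subcommand, so treat it as an assumption):

```bash
tailscale debug prefs | jq .ControlURL
```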