Directory structure:

```
└── cryodev-server/
    ├── README.md
    ├── AGENTS.md
    ├── constants.nix
    ├── deploy.json
    ├── flake.lock
    ├── flake.nix
    ├── .sops.yaml
    ├── apps/
    │   ├── create/
    │   │   ├── create.sh
    │   │   └── default.nix
    │   ├── deploy/
    │   │   ├── default.nix
    │   │   └── deploy.sh
    │   ├── install/
    │   │   ├── default.nix
    │   │   └── install.sh
    │   └── rebuild/
    │       ├── default.nix
    │       └── rebuild.sh
    ├── docs/
    │   ├── index.md
    │   ├── deployment/
    │   │   ├── cd.md
    │   │   └── dns.md
    │   ├── getting-started/
    │   │   ├── first-install.md
    │   │   ├── new-client.md
    │   │   ├── prerequisites.md
    │   │   ├── reinstall.md
    │   │   └── sd-image.md
    │   └── services/
    │       ├── forgejo.md
    │       ├── headplane.md
    │       ├── headscale.md
    │       ├── mailserver.md
    │       ├── netdata.md
    │       ├── sops.md
    │       └── tailscale.md
    ├── hosts/
    │   ├── cryodev-main/
    │   │   ├── binfmt.nix
    │   │   ├── boot.nix
    │   │   ├── default.nix
    │   │   ├── disks.sh
    │   │   ├── hardware.nix
    │   │   ├── networking.nix
    │   │   ├── packages.nix
    │   │   ├── secrets.yaml
    │   │   ├── users.nix
    │   │   └── services/
    │   │       ├── comin.nix
    │   │       ├── default.nix
    │   │       ├── forgejo-runner.nix
    │   │       ├── forgejo.nix
    │   │       ├── headplane.nix
    │   │       ├── headscale.nix
    │   │       ├── mailserver.nix
    │   │       ├── netdata.nix
    │   │       ├── nginx.nix
    │   │       ├── openssh.nix
    │   │       ├── sops.nix
    │   │       └── tailscale.nix
    │   └── cryodev-pi/
    │       ├── boot.nix
    │       ├── default.nix
    │       ├── disks.sh
    │       ├── hardware.nix
    │       ├── networking.nix
    │       ├── packages.nix
    │       ├── sd-image.nix
    │       ├── secrets.yaml
    │       ├── users.nix
    │       └── services/
    │           ├── comin.nix
    │           ├── default.nix
    │           ├── netdata.nix
    │           ├── nginx.nix
    │           ├── openssh.nix
    │           └── tailscale.nix
    ├── lib/
    │   └── utils.nix
    ├── modules/
    │   └── nixos/
    │       ├── default.nix
    │       ├── comin/
    │       │   └── default.nix
    │       ├── common/
    │       │   ├── default.nix
    │       │   ├── environment.nix
    │       │   ├── htop.nix
    │       │   ├── nationalization.nix
    │       │   ├── networking.nix
    │       │   ├── nix.nix
    │       │   ├── overlays.nix
    │       │   ├── sudo.nix
    │       │   ├── well-known.nix
    │       │   ├── zsh.nix
    │       │   └── shared/
    │       │       ├── default.nix
    │       │       └── nix.nix
    │       ├── forgejo/
    │       │   └── default.nix
    │       ├── forgejo-runner/
    │       │   └── default.nix
    │       ├── headplane/
    │       │   └── default.nix
    │       ├── headscale/
    │       │   ├── acl.hujson
    │       │   └── default.nix
    │       ├── mailserver/
    │       │   └── default.nix
    │       ├── nginx/
    │       │   └── default.nix
    │       ├── nixvim/
    │       │   ├── default.nix
    │       │   ├── keymaps.nix
    │       │   ├── spellfiles.nix
    │       │   └── plugins/
    │       │       ├── cmp.nix
    │       │       ├── default.nix
    │       │       ├── lsp.nix
    │       │       ├── lualine.nix
    │       │       ├── telescope.nix
    │       │       ├── treesitter.nix
    │       │       └── trouble.nix
    │       ├── normalUsers/
    │       │   └── default.nix
    │       ├── openssh/
    │       │   └── default.nix
    │       ├── sops/
    │       │   └── default.nix
    │       └── tailscale/
    │           └── default.nix
    ├── overlays/
    │   └── default.nix
    ├── pkgs/
    │   └── default.nix
    ├── scripts/
    │   └── install.sh
    ├── templates/
    │   ├── generic-server/
    │   │   ├── boot.nix
    │   │   ├── default.nix
    │   │   ├── disks.sh
    │   │   ├── flake.nix
    │   │   ├── hardware.nix
    │   │   ├── networking.nix
    │   │   ├── packages.nix
    │   │   ├── users.nix
    │   │   └── services/
    │   │       ├── comin.nix
    │   │       ├── default.nix
    │   │       ├── netdata.nix
    │   │       ├── nginx.nix
    │   │       ├── openssh.nix
    │   │       └── tailscale.nix
    │   └── raspberry-pi/
    │       ├── boot.nix
    │       ├── default.nix
    │       ├── disks.sh
    │       ├── flake.nix
    │       ├── hardware.nix
    │       ├── networking.nix
    │       ├── packages.nix
    │       ├── users.nix
    │       └── services/
    │           ├── comin.nix
    │           ├── default.nix
    │           ├── netdata.nix
    │           ├── nginx.nix
    │           ├── openssh.nix
    │           └── tailscale.nix
    ├── users/
    │   ├── benjamin/
    │   │   └── default.nix
    │   ├── ralph/
    │   │   └── default.nix
    │   └── steffen/
    │       ├── default.nix
    │       └── pubkeys/
    │           └── X670E.pub
    └── .forgejo/
        └── workflows/
            ├── ci.yml
            └── deploy.yml
```

================================================
FILE: README.md
================================================

# cryodev NixOS Configuration

Declarative NixOS infrastructure for the **cryodev** environment, managed with Nix Flakes.
## Quick Start

```bash
# Clone repository
git clone https://git.cryodev.xyz/steffen/cryodev.git
cd cryodev

# Check configuration
nix flake check

# Build a host
nix build .#nixosConfigurations.cryodev-main.config.system.build.toplevel
```

## Hosts

| Host | Architecture | Deployment | Description |
|------|--------------|------------|-------------|
| `cryodev-main` | x86_64 | Pull (Comin) | Main server |
| `cryodev-pi` | aarch64 | Pull (Comin) | Raspberry Pi client |

## Services

| Service | Domain | Description |
|---------|--------|-------------|
| Headscale | `headscale.cryodev.xyz` | Self-hosted Tailscale server |
| Headplane | `headplane.cryodev.xyz` | Headscale web UI |
| Forgejo | `git.cryodev.xyz` | Git hosting with CI/CD |
| Netdata | `netdata.cryodev.xyz` | Monitoring dashboard |
| Mail | `mail.cryodev.xyz` | Email (Postfix/Dovecot) |

## Raspberry Pi SD Images

SD card images for Raspberry Pi clients are **built automatically** on every push to `main`. Download from: [Releases](https://git.cryodev.xyz/steffen/cryodev/releases)

```bash
# Flash to SD card
zstd -d cryodev-pi-sd-image.img.zst
sudo dd if=cryodev-pi-sd-image.img of=/dev/sdX bs=4M status=progress
```

See [Adding a new Raspberry Pi](docs/getting-started/new-client.md) for the full workflow.

## Documentation

Full documentation is available in the [`docs/`](docs/index.md) directory:

- [Prerequisites](docs/getting-started/prerequisites.md)
- [New Raspberry Pi Client](docs/getting-started/new-client.md)
- [SD Image Reference](docs/getting-started/sd-image.md)
- [Server Installation](docs/getting-started/first-install.md)
- [Reinstallation](docs/getting-started/reinstall.md)
- [Services](docs/services/)
- [Deployment](docs/deployment/cd.md)

## Directory Structure

```
.
├── flake.nix       # Flake entry point
├── constants.nix   # Central configuration
├── hosts/          # Host configurations
├── modules/        # Reusable NixOS modules
├── pkgs/           # Custom packages
├── overlays/       # Nixpkgs overlays
├── templates/      # Host templates
├── scripts/        # Helper scripts
├── apps/           # Nix apps (create, deploy, install, rebuild)
├── lib/            # Helper functions
└── docs/           # Documentation
```

## Commands

```bash
# Format code
nix fmt

# Run checks
nix flake check

# Update dependencies
nix flake update

# Enter dev shell
nix develop

# Build Pi SD image locally
nix build .#nixosConfigurations.cryodev-pi.config.system.build.sdImage
```

## License

Private repository.

================================================
FILE: AGENTS.md
================================================

# Agent Guidelines for NixOS Configuration

## Project Overview

NixOS infrastructure managed with Nix Flakes. Two hosts, reusable modules, SOPS secrets, Comin auto-deployment.

- **Hosts**: `cryodev-main` (x86_64 server), `cryodev-pi` (aarch64 Raspberry Pi 4)
- **Modules**: Reusable NixOS modules in `modules/nixos/`
- **Apps**: `create`, `deploy`, `install`, `rebuild` in `apps/`
- **Templates**: `raspberry-pi`, `generic-server` for bootstrapping new hosts

## Build & Development Commands

```bash
# Format code (required before committing, runs nixfmt via pre-commit)
nix fmt

# Run all checks (formatting, package builds, overlay builds)
nix flake check

# Quick evaluation test (faster than a full build, use to validate changes)
nix eval .#nixosConfigurations.cryodev-main.config.system.build.toplevel.name
nix eval .#nixosConfigurations.cryodev-pi.config.system.build.toplevel.name

# Full build
nix build .#nixosConfigurations.cryodev-main.config.system.build.toplevel
nix build .#nixosConfigurations.cryodev-pi.config.system.build.toplevel

# Build Raspberry Pi SD image (requires binfmt on x86_64)
nix build .#nixosConfigurations.cryodev-pi.config.system.build.sdImage

# Update flake inputs
nix flake update

# Enter development shell
nix develop
```
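The SD-image command above cross-builds an aarch64 system, which on an x86_64 builder relies on binfmt-based QEMU user-mode emulation (hence `binfmt.nix` in `hosts/cryodev-main/`). A minimal sketch of such a module, using the standard NixOS option; that the repo's `binfmt.nix` contains exactly this is an assumption:

```nix
# Sketch only: boot.binfmt.emulatedSystems is the standard NixOS option;
# the exact contents of hosts/cryodev-main/binfmt.nix are an assumption.
{ ... }:
{
  # Register QEMU user-mode emulation so this x86_64 host can execute
  # aarch64-linux binaries, e.g. while building the Pi's sdImage.
  boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
}
```

With this enabled on the builder, the `sdImage` build for `cryodev-pi` can run on an x86_64 machine without a native aarch64 remote builder.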
### Deployment

Both hosts use **Comin** for automatic pull-based deployment (it polls the git repo). Manual deployment is only needed for initial setup or emergencies:

```bash
# Deploy via deploy app (uses deploy.json, SSH port 2299, asks sudo password)
nix run .#deploy -- -n cryodev-main

# Manual deployment via nixos-rebuild
NIX_SSHOPTS="-p 2299" nixos-rebuild switch --flake .#<hostname> \
  --target-host <user>@<host> --sudo --ask-sudo-password
```

### Apps

```bash
nix run .#create -- -t generic-server -n <hostname>   # Scaffold new host
nix run .#install -- -n <hostname> -r <repo-url>      # Install from NixOS ISO
nix run .#deploy -- -n <hostname>                     # Deploy to host
nix run .#rebuild -- nixos                            # Rebuild locally
```

## Code Style & Conventions

### Formatting

- **Tool**: `nixfmt` via `git-hooks.nix` (pre-commit)
- **Run**: `nix fmt` before every commit
- **Indentation**: 2 spaces (enforced by formatter)

### Naming Conventions

| Type | Convention | Example |
|------|------------|---------|
| Files | kebab-case | `hardware-configuration.nix` |
| NixOS options | camelCase | `services.myService.enable` |
| Let bindings | camelCase | `let myValue = ...;` |
| Hosts | kebab-case | `cryodev-main`, `cryodev-pi` |
| Secret paths | kebab-case with `/` | `forgejo-runner/token` |

### Module Pattern

```nix
{ config, lib, ... }:
let
  cfg = config.services.myService;
  inherit (lib) mkDefault mkEnableOption mkIf mkOption types;
in
{
  options.services.myService = {
    enable = mkEnableOption "My service";
    port = mkOption {
      type = types.port;
      default = 8080;
      description = "Port to listen on";
    };
  };

  config = mkIf cfg.enable {
    assertions = [
      {
        assertion = cfg.port > 1024;
        message = "Port must be > 1024";
      }
    ];
    # Implementation
  };
}
```

### Key Rules

- **Use `lib.mkDefault`** for all module defaults (allows host-level overrides)
- **Use `constants.nix`** for domains, IPs, ports -- never hardcode these
- **Use `lib.utils`** helpers: `mkReverseProxyOption`, `mkVirtualHost`, `mkUrl`
- **Secrets via SOPS** only, never plaintext. Reference: `config.sops.secrets."path".path`
- **Imports**: relative paths for local files, `outputs.nixosModules.*` for shared modules, `inputs.*` for external
- **Assertions** for invalid configurations, `warnings` for non-critical issues
- **`inherit (lib)`** pattern: extract needed functions in the `let` block

### Host Service Files

Each service gets its own file in `hosts/<hostname>/services/`:

```nix
# hosts/cryodev-main/services/myservice.nix
{ outputs, constants, ... }:
{
  imports = [ outputs.nixosModules.myservice ];

  services.myservice = {
    enable = true;
    port = constants.services.myservice.port;
  };

  services.nginx.virtualHosts."${constants.services.myservice.fqdn}" = {
    forceSSL = true;
    enableACME = true;
    locations."/" = {
      proxyPass = "http://127.0.0.1:${toString constants.services.myservice.port}";
    };
  };
}
```

### Special Args Available in Modules

- `inputs` -- flake inputs (nixpkgs, sops-nix, comin, headplane, etc.)
- `outputs` -- this flake's outputs (nixosModules, packages)
- `constants` -- values from `constants.nix` (domain, hosts, services)
- `lib` -- nixpkgs.lib extended with `lib.utils`

## Directory Structure

```
.
├── flake.nix            # Entry point, inputs/outputs, mkNixosConfiguration
├── constants.nix        # Central config (domains, IPs, ports)
├── hosts/
│   ├── cryodev-main/    # x86_64 server (services/, secrets.yaml, binfmt.nix)
│   └── cryodev-pi/      # aarch64 RPi (services/, secrets.yaml, sd-image.nix)
├── modules/nixos/       # Reusable modules (common, forgejo, headscale, ...)
├── users/               # User definitions (steffen, ralph, benjamin)
├── apps/                # Nix apps (create, deploy, install, rebuild)
├── lib/utils.nix        # Helper functions (mkUrl, mkVirtualHost, ...)
├── pkgs/                # Custom packages
├── overlays/            # Nixpkgs overlays
├── templates/           # Host templates (generic-server, raspberry-pi)
├── deploy.json          # Deploy app config (hosts, SSH port)
├── .sops.yaml           # SOPS encryption rules (age keys per host)
├── .forgejo/workflows/  # CI pipelines (ci.yml, deploy.yml)
└── docs/                # Documentation (English)
```

## Verification Checklist

Before committing:

- [ ] `nix fmt` passes
- [ ] `nix flake check` passes (or at least `nix eval` works for both hosts)
- [ ] New hosts added to `flake.nix` nixosConfigurations
- [ ] Constants in `constants.nix`, not hardcoded
- [ ] Secrets use SOPS, not plaintext
- [ ] New services have their own file in `hosts/<hostname>/services/`
- [ ] New modules registered in `modules/nixos/default.nix`
- [ ] Documentation in English

================================================
FILE: constants.nix
================================================

{
  # Domain
  domain = "cryodev.xyz";

  # Hosts
  hosts = {
    cryodev-main = {
      ip = "100.64.0.1"; # Tailscale IP example
    };
    cryodev-pi = {
      ip = "100.64.0.2"; # Tailscale IP example
    };
  };

  # Services
  services = {
    forgejo = {
      fqdn = "git.cryodev.xyz";
      port = 3000;
    };
    headscale = {
      fqdn = "headscale.cryodev.xyz";
      port = 8080;
    };
    headplane = {
      fqdn = "headplane.cryodev.xyz";
      port = 3001;
    };
    netdata = {
      fqdn = "netdata.cryodev.xyz";
      port = 19999;
    };
    mail = {
      fqdn = "mail.cryodev.xyz";
      port = 587;
    };
  };
}

================================================
FILE: deploy.json
================================================

{
  "sshPort": "2299",
  "buildHost": "localhost",
  "hosts": [
    { "name": "cryodev-main", "address": "steffen@cryodev.xyz" }
  ]
}

================================================
FILE: flake.lock
================================================

{ "nodes": { "blobs": { "flake": false, "locked": { "lastModified": 1604995301, "narHash": "sha256-wcLzgLec6SGJA8fx1OEN1yV/Py5b+U5iyYpksUY/yLw=", "owner": "simple-nixos-mailserver", "repo": "blobs", "rev": "2cccdf1ca48316f2cfd1c9a0017e8de5a7156265",
"type": "gitlab" }, "original": { "owner": "simple-nixos-mailserver", "repo": "blobs", "type": "gitlab" } }, "comin": { "inputs": { "flake-compat": "flake-compat", "nixpkgs": [ "nixpkgs" ], "treefmt-nix": "treefmt-nix" }, "locked": { "lastModified": 1772962094, "narHash": "sha256-9+/PHrDNDUy9iiN7seOhcxq3KoVlCAmCim6HXuKTI24=", "owner": "nlewo", "repo": "comin", "rev": "269ef4334f202b226eef804c0be0201891fb9c5d", "type": "github" }, "original": { "owner": "nlewo", "repo": "comin", "type": "github" } }, "devshell": { "inputs": { "nixpkgs": [ "headplane", "nixpkgs" ] }, "locked": { "lastModified": 1768818222, "narHash": "sha256-460jc0+CZfyaO8+w8JNtlClB2n4ui1RbHfPTLkpwhU8=", "owner": "numtide", "repo": "devshell", "rev": "255a2b1725a20d060f566e4755dbf571bbbb5f76", "type": "github" }, "original": { "owner": "numtide", "repo": "devshell", "type": "github" } }, "flake-compat": { "flake": false, "locked": { "lastModified": 1765121682, "narHash": "sha256-4VBOP18BFeiPkyhy9o4ssBNQEvfvv1kXkasAYd0+rrA=", "owner": "NixOS", "repo": "flake-compat", "rev": "65f23138d8d09a92e30f1e5c87611b23ef451bf3", "type": "github" }, "original": { "owner": "NixOS", "repo": "flake-compat", "type": "github" } }, "flake-compat_2": { "flake": false, "locked": { "lastModified": 1767039857, "narHash": "sha256-vNpUSpF5Nuw8xvDLj2KCwwksIbjua2LZCqhV1LNRDns=", "owner": "NixOS", "repo": "flake-compat", "rev": "5edf11c44bc78a0d334f6334cdaf7d60d732daab", "type": "github" }, "original": { "owner": "NixOS", "repo": "flake-compat", "type": "github" } }, "flake-compat_3": { "flake": false, "locked": { "lastModified": 1767039857, "narHash": "sha256-vNpUSpF5Nuw8xvDLj2KCwwksIbjua2LZCqhV1LNRDns=", "owner": "NixOS", "repo": "flake-compat", "rev": "5edf11c44bc78a0d334f6334cdaf7d60d732daab", "type": "github" }, "original": { "owner": "NixOS", "repo": "flake-compat", "type": "github" } }, "flake-parts": { "inputs": { "nixpkgs-lib": [ "nixvim", "nixpkgs" ] }, "locked": { "lastModified": 1768135262, "narHash": 
"sha256-PVvu7OqHBGWN16zSi6tEmPwwHQ4rLPU9Plvs8/1TUBY=", "owner": "hercules-ci", "repo": "flake-parts", "rev": "80daad04eddbbf5a4d883996a73f3f542fa437ac", "type": "github" }, "original": { "owner": "hercules-ci", "repo": "flake-parts", "type": "github" } }, "flake-utils": { "inputs": { "systems": "systems" }, "locked": { "lastModified": 1731533236, "narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=", "owner": "numtide", "repo": "flake-utils", "rev": "11707dc2f618dd54ca8739b309ec4fc024de578b", "type": "github" }, "original": { "owner": "numtide", "repo": "flake-utils", "type": "github" } }, "flake-utils_2": { "inputs": { "systems": "systems_2" }, "locked": { "lastModified": 1731533236, "narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=", "owner": "numtide", "repo": "flake-utils", "rev": "11707dc2f618dd54ca8739b309ec4fc024de578b", "type": "github" }, "original": { "owner": "numtide", "repo": "flake-utils", "type": "github" } }, "git-hooks": { "inputs": { "flake-compat": "flake-compat_2", "gitignore": "gitignore", "nixpkgs": [ "nixpkgs" ] }, "locked": { "lastModified": 1772893680, "narHash": "sha256-JDqZMgxUTCq85ObSaFw0HhE+lvdOre1lx9iI6vYyOEs=", "owner": "cachix", "repo": "git-hooks.nix", "rev": "8baab586afc9c9b57645a734c820e4ac0a604af9", "type": "github" }, "original": { "owner": "cachix", "repo": "git-hooks.nix", "type": "github" } }, "git-hooks_2": { "inputs": { "flake-compat": [ "nixos-mailserver", "flake-compat" ], "gitignore": "gitignore_2", "nixpkgs": [ "nixos-mailserver", "nixpkgs" ] }, "locked": { "lastModified": 1772893680, "narHash": "sha256-JDqZMgxUTCq85ObSaFw0HhE+lvdOre1lx9iI6vYyOEs=", "owner": "cachix", "repo": "git-hooks.nix", "rev": "8baab586afc9c9b57645a734c820e4ac0a604af9", "type": "github" }, "original": { "owner": "cachix", "repo": "git-hooks.nix", "type": "github" } }, "gitignore": { "inputs": { "nixpkgs": [ "git-hooks", "nixpkgs" ] }, "locked": { "lastModified": 1709087332, "narHash": 
"sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=", "owner": "hercules-ci", "repo": "gitignore.nix", "rev": "637db329424fd7e46cf4185293b9cc8c88c95394", "type": "github" }, "original": { "owner": "hercules-ci", "repo": "gitignore.nix", "type": "github" } }, "gitignore_2": { "inputs": { "nixpkgs": [ "nixos-mailserver", "git-hooks", "nixpkgs" ] }, "locked": { "lastModified": 1709087332, "narHash": "sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=", "owner": "hercules-ci", "repo": "gitignore.nix", "rev": "637db329424fd7e46cf4185293b9cc8c88c95394", "type": "github" }, "original": { "owner": "hercules-ci", "repo": "gitignore.nix", "type": "github" } }, "headplane": { "inputs": { "devshell": "devshell", "flake-utils": "flake-utils", "nixpkgs": "nixpkgs_2" }, "locked": { "lastModified": 1773108598, "narHash": "sha256-y80AABZv5n1vQua8mn1T79QB4pRnBTo+hPdmPa+J0yA=", "owner": "tale", "repo": "headplane", "rev": "6470f5a821e3ee5b4937a858bf13fb294bd38a7c", "type": "github" }, "original": { "owner": "tale", "repo": "headplane", "type": "github" } }, "ixx": { "inputs": { "flake-utils": [ "nixvim", "nuschtosSearch", "flake-utils" ], "nixpkgs": [ "nixvim", "nuschtosSearch", "nixpkgs" ] }, "locked": { "lastModified": 1754860581, "narHash": "sha256-EM0IE63OHxXCOpDHXaTyHIOk2cNvMCGPqLt/IdtVxgk=", "owner": "NuschtOS", "repo": "ixx", "rev": "babfe85a876162c4acc9ab6fb4483df88fa1f281", "type": "github" }, "original": { "owner": "NuschtOS", "ref": "v0.1.1", "repo": "ixx", "type": "github" } }, "nixos-mailserver": { "inputs": { "blobs": "blobs", "flake-compat": "flake-compat_3", "git-hooks": "git-hooks_2", "nixpkgs": [ "nixpkgs" ] }, "locked": { "lastModified": 1773194666, "narHash": "sha256-YbsbqtTB3q0JjP7/G7GO58ea49cps1+8sb95/Bt7oVs=", "owner": "simple-nixos-mailserver", "repo": "nixos-mailserver", "rev": "489fbc4e0ef987cfdce700476abafe3269ebf3e5", "type": "gitlab" }, "original": { "owner": "simple-nixos-mailserver", "repo": "nixos-mailserver", "type": "gitlab" } }, "nixpkgs": { 
"locked": { "lastModified": 1770107345, "narHash": "sha256-tbS0Ebx2PiA1FRW8mt8oejR0qMXmziJmPaU1d4kYY9g=", "owner": "nixos", "repo": "nixpkgs", "rev": "4533d9293756b63904b7238acb84ac8fe4c8c2c4", "type": "github" }, "original": { "owner": "nixos", "ref": "nixpkgs-unstable", "repo": "nixpkgs", "type": "github" } }, "nixpkgs-old-stable": { "locked": { "lastModified": 1767313136, "narHash": "sha256-16KkgfdYqjaeRGBaYsNrhPRRENs0qzkQVUooNHtoy2w=", "owner": "nixos", "repo": "nixpkgs", "rev": "ac62194c3917d5f474c1a844b6fd6da2db95077d", "type": "github" }, "original": { "owner": "nixos", "ref": "nixos-25.05", "repo": "nixpkgs", "type": "github" } }, "nixpkgs-unstable": { "locked": { "lastModified": 1772963539, "narHash": "sha256-9jVDGZnvCckTGdYT53d/EfznygLskyLQXYwJLKMPsZs=", "owner": "nixos", "repo": "nixpkgs", "rev": "9dcb002ca1690658be4a04645215baea8b95f31d", "type": "github" }, "original": { "owner": "nixos", "ref": "nixos-unstable", "repo": "nixpkgs", "type": "github" } }, "nixpkgs_2": { "locked": { "lastModified": 1772736753, "narHash": "sha256-au/m3+EuBLoSzWUCb64a/MZq6QUtOV8oC0D9tY2scPQ=", "owner": "nixos", "repo": "nixpkgs", "rev": "917fec990948658ef1ccd07cef2a1ef060786846", "type": "github" }, "original": { "owner": "nixos", "ref": "nixpkgs-unstable", "repo": "nixpkgs", "type": "github" } }, "nixpkgs_3": { "locked": { "lastModified": 1773068389, "narHash": "sha256-vMrm7Pk2hjBRPnCSjhq1pH0bg350Z+pXhqZ9ICiqqCs=", "owner": "nixos", "repo": "nixpkgs", "rev": "44bae273f9f82d480273bab26f5c50de3724f52f", "type": "github" }, "original": { "owner": "nixos", "ref": "nixos-25.11", "repo": "nixpkgs", "type": "github" } }, "nixvim": { "inputs": { "flake-parts": "flake-parts", "nixpkgs": [ "nixpkgs" ], "nuschtosSearch": "nuschtosSearch", "systems": "systems_3" }, "locked": { "lastModified": 1769049374, "narHash": "sha256-h0Os2qqNyycDY1FyZgtbn28VF1ySP74/n0f+LDd8j+w=", "owner": "nix-community", "repo": "nixvim", "rev": "b8f76bf5751835647538ef8784e4e6ee8deb8f95", "type": "github" }, 
"original": { "owner": "nix-community", "ref": "nixos-25.11", "repo": "nixvim", "type": "github" } }, "nuschtosSearch": { "inputs": { "flake-utils": "flake-utils_2", "ixx": "ixx", "nixpkgs": [ "nixvim", "nixpkgs" ] }, "locked": { "lastModified": 1768249818, "narHash": "sha256-ANfn5OqIxq3HONPIXZ6zuI5sLzX1sS+2qcf/Pa0kQEc=", "owner": "NuschtOS", "repo": "search", "rev": "b6f77b88e9009bfde28e2130e218e5123dc66796", "type": "github" }, "original": { "owner": "NuschtOS", "repo": "search", "type": "github" } }, "root": { "inputs": { "comin": "comin", "git-hooks": "git-hooks", "headplane": "headplane", "nixos-mailserver": "nixos-mailserver", "nixpkgs": "nixpkgs_3", "nixpkgs-old-stable": "nixpkgs-old-stable", "nixpkgs-unstable": "nixpkgs-unstable", "nixvim": "nixvim", "sops-nix": "sops-nix" } }, "sops-nix": { "inputs": { "nixpkgs": [ "nixpkgs" ] }, "locked": { "lastModified": 1773096132, "narHash": "sha256-M3zEnq9OElB7zqc+mjgPlByPm1O5t2fbUrH3t/Hm5Ag=", "owner": "Mic92", "repo": "sops-nix", "rev": "d1ff3b1034d5bab5d7d8086a7803c5a5968cd784", "type": "github" }, "original": { "owner": "Mic92", "repo": "sops-nix", "type": "github" } }, "systems": { "locked": { "lastModified": 1681028828, "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=", "owner": "nix-systems", "repo": "default", "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e", "type": "github" }, "original": { "owner": "nix-systems", "repo": "default", "type": "github" } }, "systems_2": { "locked": { "lastModified": 1681028828, "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=", "owner": "nix-systems", "repo": "default", "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e", "type": "github" }, "original": { "owner": "nix-systems", "repo": "default", "type": "github" } }, "systems_3": { "locked": { "lastModified": 1681028828, "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=", "owner": "nix-systems", "repo": "default", "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e", "type": "github" 
}, "original": { "owner": "nix-systems", "repo": "default", "type": "github" } }, "treefmt-nix": { "inputs": { "nixpkgs": "nixpkgs" }, "locked": { "lastModified": 1770228511, "narHash": "sha256-wQ6NJSuFqAEmIg2VMnLdCnUc0b7vslUohqqGGD+Fyxk=", "owner": "numtide", "repo": "treefmt-nix", "rev": "337a4fe074be1042a35086f15481d763b8ddc0e7", "type": "github" }, "original": { "owner": "numtide", "repo": "treefmt-nix", "type": "github" } } }, "root": "root", "version": 7 } ================================================ FILE: flake.nix ================================================ { inputs = { nixpkgs.url = "github:nixos/nixpkgs/nixos-25.11"; nixpkgs-unstable.url = "github:nixos/nixpkgs/nixos-unstable"; nixpkgs-old-stable.url = "github:nixos/nixpkgs/nixos-25.05"; sops-nix.url = "github:Mic92/sops-nix"; sops-nix.inputs.nixpkgs.follows = "nixpkgs"; nixos-mailserver.url = "gitlab:simple-nixos-mailserver/nixos-mailserver"; nixos-mailserver.inputs.nixpkgs.follows = "nixpkgs"; headplane.url = "github:tale/headplane"; comin.url = "github:nlewo/comin"; comin.inputs.nixpkgs.follows = "nixpkgs"; nixvim.url = "github:nix-community/nixvim/nixos-25.11"; nixvim.inputs.nixpkgs.follows = "nixpkgs"; git-hooks.url = "github:cachix/git-hooks.nix"; git-hooks.inputs.nixpkgs.follows = "nixpkgs"; }; outputs = { self, nixpkgs, ... 
}@inputs: let inherit (self) outputs; supportedSystems = [ "x86_64-linux" "aarch64-linux" ]; forAllSystems = nixpkgs.lib.genAttrs supportedSystems; # Extend nixpkgs.lib with our custom utils lib = nixpkgs.lib.extend (final: prev: self.lib or { }); constants = import ./constants.nix; mkNixosConfiguration = system: modules: nixpkgs.lib.nixosSystem { inherit system modules; specialArgs = { inherit inputs outputs lib constants ; }; }; in { # Custom library functions lib = { utils = import ./lib/utils.nix { lib = nixpkgs.lib; }; }; # Apps apps = forAllSystems ( system: let pkgs = nixpkgs.legacyPackages.${system}; mkApp = name: { type = "app"; program = pkgs.lib.getExe (pkgs.callPackage ./apps/${name} { }); }; in { create = mkApp "create"; deploy = mkApp "deploy"; install = mkApp "install"; rebuild = mkApp "rebuild"; } ); packages = forAllSystems (system: import ./pkgs nixpkgs.legacyPackages.${system}); overlays = import ./overlays { inherit inputs; }; nixosModules = import ./modules/nixos; nixosConfigurations = { cryodev-main = mkNixosConfiguration "x86_64-linux" [ ./hosts/cryodev-main ]; cryodev-pi = mkNixosConfiguration "aarch64-linux" [ ./hosts/cryodev-pi ]; }; templates = { raspberry-pi = { path = ./templates/raspberry-pi; description = "Raspberry Pi 4 Client"; }; generic-server = { path = ./templates/generic-server; description = "Generic x86_64 Customer Server"; }; }; formatter = forAllSystems ( system: let pkgs = nixpkgs.legacyPackages.${system}; config = self.checks.${system}.pre-commit-check.config; inherit (config) package configFile; script = '' ${pkgs.lib.getExe package} run --all-files --config ${configFile} ''; in pkgs.writeShellScriptBin "pre-commit-run" script ); checks = forAllSystems ( system: let pkgs = nixpkgs.legacyPackages.${system}; flakePkgs = self.packages.${system}; overlaidPkgs = import nixpkgs { inherit system; overlays = [ self.overlays.modifications ]; }; in { pre-commit-check = inputs.git-hooks.lib.${system}.run { src = ./.; hooks = { 
              nixfmt.enable = true;
            };
          };
          build-packages = pkgs.linkFarm "flake-packages-${system}" flakePkgs;
          build-overlays = pkgs.linkFarm "flake-overlays-${system}" {
            # package = overlaidPkgs.package;
          };
        }
      );
    };
}

================================================
FILE: .sops.yaml
================================================

keys:
  - &steffen_key age1e8p35795htf7twrejyugpzw0qja2v33awcw76y4gp6acnxnkzq0s935t4t # steffen (local)
  - &cryodev-main_key age1y6hushuapy0k04mrvvpev0t8lq44w904r596jus44nhkflky0yhqgq2xx6
creation_rules:
  - path_regex: hosts/cryodev-main/secrets.yaml$
    key_groups:
      - age:
          - *steffen_key
          - *cryodev-main_key
  - path_regex: hosts/cryodev-pi/secrets.yaml$
    key_groups:
      - age:
          - *steffen_key
          # - *cryodev-pi_key # Add after Pi installation

================================================
FILE: apps/create/create.sh
================================================

#!/usr/bin/env bash
# Create a new host from a template

FLAKE_DIR="."
TEMPLATE=""
HOSTNAME=""
SYSTEM=""
SEPARATOR="________________________________________"

# Note: the body of this heredoc was lost in extraction; reconstructed
# from the options parsed below.
usage() {
  cat <<EOF
Usage: create -t <template> -n <hostname> [-s <system>] [-f <flake-dir>]

Options:
  -t, --template  Template to use (generic-server, raspberry-pi)
  -n, --hostname  Name of the new host
  -s, --system    System architecture (derived from template if omitted)
  -f, --flake     Flake directory (default: .)
  -h, --help      Show this help message and exit
EOF
}

error() {
  echo "Error: $1" >&2
  exit 1
}

while [[ $# -gt 0 ]]; do
  case "$1" in
    -t|--template) TEMPLATE="$2"; shift 2 ;;
    -n|--hostname) HOSTNAME="$2"; shift 2 ;;
    -s|--system) SYSTEM="$2"; shift 2 ;;
    -f|--flake) FLAKE_DIR="$2"; shift 2 ;;
    -h|--help) usage; exit 0 ;;
    *) error "Unknown option: $1" ;;
  esac
done

# Validate
[[ -z "$TEMPLATE" ]] && error "Template is required (-t)"
[[ -z "$HOSTNAME" ]] && error "Hostname is required (-n)"

TEMPLATE_DIR="$FLAKE_DIR/templates/$TEMPLATE"
HOST_DIR="$FLAKE_DIR/hosts/$HOSTNAME"

[[ ! -d "$TEMPLATE_DIR" ]] && error "Template '$TEMPLATE' not found in $TEMPLATE_DIR"
[[ -d "$HOST_DIR" ]] && error "Host '$HOSTNAME' already exists in $HOST_DIR"

# Derive system from template if not specified
if [[ -z "$SYSTEM" ]]; then
  case "$TEMPLATE" in
    generic-server) SYSTEM="x86_64-linux" ;;
    raspberry-pi) SYSTEM="aarch64-linux" ;;
    *) error "Cannot derive system for template '$TEMPLATE'. Use -s to specify." ;;
  esac
fi

echo "$SEPARATOR"
echo "Creating host '$HOSTNAME' from template '$TEMPLATE'"
echo "  System: $SYSTEM"
echo "  Target: $HOST_DIR"
echo "$SEPARATOR"

# Copy template
cp -r "$TEMPLATE_DIR" "$HOST_DIR"

# Remove template flake.nix (not needed in host dir)
rm -f "$HOST_DIR/flake.nix"

# Replace hostname in networking.nix
sed -i "s/networking.hostName = \".*\"/networking.hostName = \"$HOSTNAME\"/" "$HOST_DIR/networking.nix"

# Create empty secrets.yaml placeholder
touch "$HOST_DIR/secrets.yaml"

# Add to git
git -C "$FLAKE_DIR" add "$HOST_DIR"

echo "$SEPARATOR"
echo "Host '$HOSTNAME' created successfully."
echo ""
echo "Next steps:"
echo "  1. Add to flake.nix:"
echo ""
echo "     $HOSTNAME = mkNixosConfiguration \"$SYSTEM\" [ ./hosts/$HOSTNAME ];"
echo ""
echo "  2. Update hardware.nix and disks.sh for your hardware"
echo "  3. Update .sops.yaml with creation rules for hosts/$HOSTNAME/secrets.yaml"
echo "  4. Follow the first-install guide: docs/getting-started/first-install.md"

================================================
FILE: apps/create/default.nix
================================================

{
  writeShellApplication,
  git,
  gnused,
  ...
}:
let
  name = "create";
  text = builtins.readFile ./${name}.sh;
in
writeShellApplication {
  inherit name text;
  meta.mainProgram = name;
  runtimeInputs = [
    git
    gnused
  ];
}

================================================
FILE: apps/deploy/default.nix
================================================

{ writeShellApplication, jq, ... }:
let
  name = "deploy";
  text = builtins.readFile ./${name}.sh;
in
writeShellApplication {
  inherit name text;
  meta.mainProgram = name;
  runtimeInputs = [ jq ];
}

================================================
FILE: apps/deploy/deploy.sh
================================================

#!/usr/bin/env bash

# defaults
FLAKE_URI="."
CONFIG_FILE="./deploy.json" ACTION="switch" USE_SUDO=true DO_BUILD=true FILTER_HOSTS=() usage() { cat < $1\033[0m"; } success() { echo -e "\033[0;32m$1\033[0m"; } error() { echo -e "\033[0;31mError: $1\033[0m" >&2; exit 1; } while [[ $# -gt 0 ]]; do case "$1" in switch|boot|test) ACTION="$1"; shift ;; -n|--host) FILTER_HOSTS+=("$2"); shift 2 ;; -f|--flake) FLAKE_URI="$2"; shift 2 ;; -c|--config) CONFIG_FILE="$2"; shift 2 ;; --no-sudo) USE_SUDO=false; shift ;; --skip-build) DO_BUILD=false; shift ;; -h|--help) usage; exit 0 ;; *) error "Invalid argument '$1'" ;; esac done command -v jq &> /dev/null || error "jq is not installed." [ -f "$CONFIG_FILE" ] || error "Config '$CONFIG_FILE' not found." BUILD_HOST=$(jq -r '.buildHost // "localhost"' "$CONFIG_FILE") [[ "$BUILD_HOST" =~ ^(127\.0\.0\.1|::1)$ ]] && BUILD_HOST="localhost" SSH_PORT=$(jq -r '.sshPort // "22"' "$CONFIG_FILE") export NIX_SSHOPTS="-p $SSH_PORT" mapfile -t ALL_ENTRIES < <(jq -r '.hosts[] | "\(.name) \(.address)"' "$CONFIG_FILE") [ ${#ALL_ENTRIES[@]} -eq 0 ] && error "No hosts defined in $CONFIG_FILE" # Filter hosts if -n was provided HOST_ENTRIES=() if [ ${#FILTER_HOSTS[@]} -gt 0 ]; then for entry in "${ALL_ENTRIES[@]}"; do read -r name _address <<< "$entry" for filter in "${FILTER_HOSTS[@]}"; do if [[ "$name" == "$filter" ]]; then HOST_ENTRIES+=("$entry") break fi done done # Check for unknown hosts for filter in "${FILTER_HOSTS[@]}"; do found=false for entry in "${ALL_ENTRIES[@]}"; do read -r name _ <<< "$entry" [[ "$name" == "$filter" ]] && found=true && break done [[ "$found" == false ]] && error "Host '$filter' not found in $CONFIG_FILE" done [ ${#HOST_ENTRIES[@]} -eq 0 ] && error "No matching hosts found" else HOST_ENTRIES=("${ALL_ENTRIES[@]}") fi echo "Action: $ACTION" echo "Flake: $FLAKE_URI" echo "Builder: $BUILD_HOST" echo "SSH Port: $SSH_PORT" echo "Hosts: $(printf '%s ' "${HOST_ENTRIES[@]}" | sed 's/ [^ ]*//g; s/ */, /g')" if [ "$DO_BUILD" = true ]; then _status "Building configurations..." 
  for entry in "${HOST_ENTRIES[@]}"; do
    read -r name _address <<< "$entry"
    echo "------------------------------------------------"
    echo "Building host '$name':"
    CMD=("nixos-rebuild" "build" "--flake" "${FLAKE_URI}#${name}")
    [[ "$BUILD_HOST" != "localhost" ]] && CMD+=("--build-host" "$BUILD_HOST")
    "${CMD[@]}" || error "Build failed for $name"
    success "Build for host '$name' successful."
  done
fi

_status "Deploying to targets..."
for entry in "${HOST_ENTRIES[@]}"; do
  read -r name address <<< "$entry"
  echo "------------------------------------------------"
  echo "Deploying to host '$name' ($address):"
  CMD=("nixos-rebuild" "$ACTION" "--flake" "${FLAKE_URI}#${name}" "--target-host" "$address")
  [[ "$BUILD_HOST" != "localhost" ]] && CMD+=("--build-host" "$BUILD_HOST")
  [[ "$USE_SUDO" = true ]] && CMD+=("--sudo" "--ask-sudo-password")
  "${CMD[@]}" || error "Activation failed for $name"
  success "Host '$name' updated."
done

success "Deployment complete."

================================================
FILE: apps/install/default.nix
================================================
{
  writeShellApplication,
  git,
  ...
}:
let
  name = "install";
  text = builtins.readFile ./${name}.sh;
in
writeShellApplication {
  inherit name text;
  meta.mainProgram = name;
  runtimeInputs = [ git ];
}

================================================
FILE: apps/install/install.sh
================================================
#!/usr/bin/env bash
# NixOS install script

### VARIABLES ###
ASK_VERIFICATION=1       # Default to ask for verification
CONFIG_DIR="/tmp/nixos"  # Directory to copy flake to / clone flake into
GIT_BRANCH="main"        # Default Git branch
GIT_REPO=""              # Git repository URL
HOSTNAME=""              # Hostname
MNT="/mnt"               # root mount point
SEPARATOR="________________________________________"  # line separator

### FUNCTIONS ###

# Function to display help information
Show_help() {
  echo "Usage: $0 [-r REPO] [-n HOSTNAME] [-b BRANCH] [-y] [-h]"
  echo
  echo "Options:"
  echo "  -r, --repo REPO           Your NixOS configuration Git repository URL"
  echo "  -n, --hostname HOSTNAME   Specify the hostname for the NixOS configuration"
  echo "  -b, --branch BRANCH       Specify the Git branch to use (default: $GIT_BRANCH)"
  echo "  -y, --yes                 Do not ask for user verification before proceeding"
  echo "  -h, --help                Show this help message and exit"
}

# Function to format, partition, and mount disks for $HOSTNAME using disko
Run_disko() {
  echo "$SEPARATOR"
  echo "Running disko..."
  nix --experimental-features "nix-command flakes" run github:nix-community/disko/latest -- \
    --mode disko "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.nix
}

# Function to format, partition, and mount disks for $HOSTNAME using a partitioning script
Run_script() {
  echo "$SEPARATOR"
  echo "Running partitioning script..."
  bash "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.sh
}

# Function to check mount points and partitioning
Check_partitioning() {
  echo "$SEPARATOR"
  echo "Printing mount points and partitioning..."
  mount | grep "$MNT"
  lsblk -f
  [[ "$ASK_VERIFICATION" == 1 ]] && read -rp "Verify the mount points and partitioning. Press Ctrl+c to cancel or Enter to continue..."
}

# Function to generate hardware configuration
Generate_hardware_config() {
  [[ "$ASK_VERIFICATION" == 1 ]] && read -rp "No hardware configuration found. Press Ctrl+c to cancel or Enter to generate one..."
  echo "$SEPARATOR"
  echo "Generating hardware configuration..."
  nixos-generate-config --root "$MNT" --show-hardware-config > "$CONFIG_DIR"/hosts/"$HOSTNAME"/hardware.nix
  # Check if hardware configuration has been generated
  if [[ ! -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/hardware.nix ]]; then
    echo "Error: Hardware configuration cannot be generated."
    exit 1
  fi
  # Add configuration to git
  git -C "$CONFIG_DIR" add hosts/"$HOSTNAME"/hardware.nix
  echo "Hardware configuration generated successfully."
}

# Function to install configuration for $HOSTNAME
Install() {
  # Check if hardware configuration exists
  [[ ! -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/hardware.nix ]] && Generate_hardware_config
  echo "$SEPARATOR"
  echo "Installing NixOS..."
  nixos-install --root "$MNT" --no-root-password --flake "$CONFIG_DIR"#"$HOSTNAME" &&
    echo "You can reboot the system now."
}

### PARSE ARGUMENTS ###
while [[ "$#" -gt 0 ]]; do
  case $1 in
    -r|--repo) GIT_REPO="$2"; shift ;;
    -b|--branch) GIT_BRANCH="$2"; shift ;;
    -y|--yes) ASK_VERIFICATION=0 ;;
    -h|--help) Show_help; exit 0 ;;
    -n|--hostname) HOSTNAME="$2"; shift ;;
    *) echo "Unknown option: $1"; Show_help; exit 1 ;;
  esac
  shift
done

### PREREQUISITES ###
echo "$SEPARATOR"
mkdir -p "$CONFIG_DIR"

# Clone NixOS configuration from $GIT_REPO if provided
if [[ -n "$GIT_REPO" ]]; then
  # Clone Git repo if directory is empty
  if [[ -z "$(ls -A "$CONFIG_DIR" 2>/dev/null)" ]]; then
    echo "Cloning NixOS configuration repo..."
    git clone --depth 1 -b "$GIT_BRANCH" "$GIT_REPO" "$CONFIG_DIR"
    # Check if git repository has been cloned
    if [[ ! -d "$CONFIG_DIR"/.git ]]; then
      echo "Error: Git repository could not be cloned."
      exit 1
    fi
  else
    echo "$CONFIG_DIR is not empty. Skip cloning $GIT_REPO."
  fi
fi

if [[ ! -f "$CONFIG_DIR"/flake.nix ]]; then
  echo "Error: $CONFIG_DIR does not contain 'flake.nix'."
  exit 1
fi

### CHOOSE CONFIG ###
# If hostname is not provided via options, prompt the user
if [[ -z "$HOSTNAME" ]]; then
  # Get list of available hostnames
  HOSTNAMES=$(ls "$CONFIG_DIR"/hosts)
  echo "$SEPARATOR"
  echo "Please choose a hostname to install its NixOS configuration."
  echo "$HOSTNAMES"
  read -rp "Enter hostname: " HOSTNAME
  # Check if hostname is empty
  if [[ -z "$HOSTNAME" ]]; then
    echo "Error: Hostname cannot be empty."
    exit 1
  fi
fi

### INSTALLATION ###
# Check if NixOS configuration exists
if [[ -d "$CONFIG_DIR"/hosts/"$HOSTNAME" ]]; then
  # Check for existing disko configuration
  # Note: use braces, not a subshell -- 'exit' inside '( ... )' would not stop the script
  if [[ -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.nix ]]; then
    Run_disko || { echo "Error: disko failed."; exit 1; }
  # Check for partitioning script
  elif [[ -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.sh ]]; then
    Run_script || { echo "Error: Partitioning script failed."; exit 1; }
  else
    echo "Error: No disko configuration (disks.nix) or partitioning script (disks.sh) found for host '$HOSTNAME'."
    exit 1
  fi
  Check_partitioning
  Install || { echo "Error: Installation failed."; exit 1; }
else
  echo "Error: Configuration for host '$HOSTNAME' does not exist."
  exit 1
fi

================================================
FILE: apps/rebuild/default.nix
================================================
{
  writeShellApplication,
  coreutils,
  gnugrep,
  gnused,
  home-manager,
  hostname,
  nix,
  nixos-rebuild,
  ...
}:
let
  name = "rebuild";
  text = builtins.readFile ./${name}.sh;
in
writeShellApplication {
  inherit name text;
  meta.mainProgram = name;
  runtimeInputs = [
    coreutils
    gnugrep
    gnused
    home-manager
    hostname
    nix
    nixos-rebuild
  ];
}

================================================
FILE: apps/rebuild/rebuild.sh
================================================
# NixOS and standalone Home Manager rebuild script

# Defaults
FLAKE_PATH="$HOME/.config/nixos"  # Default flake path
HOME_USER="$(whoami)"             # Default username. Used to identify the Home Manager configuration
NIXOS_HOST="$(hostname)"          # Default hostname. Used to identify the NixOS and Home Manager configuration
BUILD_HOST=""                     # Default build host. Empty means localhost
TARGET_HOST=""                    # Default target host. Empty means localhost
UPDATE=0                          # Default to not update flake repositories
UPDATE_INPUTS=""                  # Default list of inputs to update. Empty means all
ROLLBACK=0                        # Default to not rollback
SHOW_TRACE=0                      # Default to not show detailed error messages

# Function to display the help message
Help() {
  echo "Wrapper script for 'nixos-rebuild switch' and 'home-manager switch' commands."
  echo "Usage: rebuild [OPTIONS]"
  echo
  echo "Commands:"
  echo "  nixos   Rebuild NixOS configuration"
  echo "  home    Rebuild Home Manager configuration"
  echo "  all     Rebuild both NixOS and Home Manager configurations"
  echo "  help    Show this help message"
  echo
  echo "Options (for NixOS and Home Manager):"
  echo "  -H, --host             Specify the hostname (as in 'nixosConfigurations.<hostname>'). Default: $NIXOS_HOST"
  echo "  -p, --path             Set the path to the flake directory. Default: $FLAKE_PATH"
  echo "  -U, --update [inputs]  Update all flake inputs. Optionally provide comma-separated list of inputs to update instead."
  echo "  -r, --rollback         Don't build the new configuration, but use the previous generation instead"
  echo "  -t, --show-trace       Show detailed error messages"
  echo
  echo "NixOS only options:"
  echo "  -B, --build-host       Use a remote host for building the configuration via SSH"
  echo "  -T, --target-host      Deploy the configuration to a remote host via SSH. If '--host' is specified, it will be used as the target host."
  echo
  echo "Home Manager only options:"
  echo "  -u, --user             Specify the username (as in 'homeConfigurations.<user>@<host>'). Default: $HOME_USER"
}

# Function to handle errors
error() {
  echo "Error: $1"
  exit 1
}

# Function to rebuild NixOS configuration
Rebuild_nixos() {
  local FLAKE="$FLAKE_PATH#$NIXOS_HOST"
  # Construct rebuild command
  local CMD=("nixos-rebuild" "switch" "--sudo")
  [[ -n "$TARGET_HOST" || -n "$BUILD_HOST" ]] && CMD+=("--ask-sudo-password")
  CMD+=("--flake" "$FLAKE")
  [ "$ROLLBACK" = 1 ] && CMD+=("--rollback")
  [ "$SHOW_TRACE" = 1 ] && CMD+=("--show-trace")
  [ -n "$BUILD_HOST" ] && CMD+=("--build-host" "$BUILD_HOST")
  if [ "$NIXOS_HOST" != "$(hostname)" ] && [ -z "$TARGET_HOST" ]; then
    TARGET_HOST="$NIXOS_HOST"
    echo "Using '$TARGET_HOST' as target host."
  fi
  [ -n "$TARGET_HOST" ] && CMD+=("--target-host" "$TARGET_HOST")
  # Rebuild NixOS configuration
  if [ "$ROLLBACK" = 0 ]; then
    echo "Rebuilding NixOS configuration '$FLAKE'..."
  else
    echo "Rolling back to last NixOS generation..."
  fi
  echo "Executing command: ${CMD[*]}"
  "${CMD[@]}" || error "NixOS rebuild failed"
  echo "NixOS rebuild completed successfully."
}

# Function to rebuild Home Manager configuration
Rebuild_home() {
  local FLAKE="$FLAKE_PATH#$HOME_USER@$NIXOS_HOST"
  if [ -n "$BUILD_HOST" ] || [ -n "$TARGET_HOST" ]; then
    error "Remote building is not supported for Home Manager."
  fi
  # Construct rebuild command
  local CMD=()
  if [ "$ROLLBACK" = 1 ]; then
    local rollback_path
    rollback_path=$(home-manager generations | sed -n '2p' | grep -o '/nix/store[^ ]*')
    CMD+=("$rollback_path/activate")
  else
    CMD=("home-manager" "switch" "--flake" "$FLAKE")
    [ "$SHOW_TRACE" = 1 ] && CMD+=("--show-trace")
  fi
  # Rebuild Home Manager configuration
  if [ "$ROLLBACK" = 0 ]; then
    echo "Rebuilding Home Manager configuration '$FLAKE'..."
  else
    echo "Rolling back to last Home Manager generation..."
  fi
  echo "Executing command: ${CMD[*]}"
  "${CMD[@]}" || error "Home Manager rebuild failed"
  echo "Home Manager rebuild completed successfully."
}

# Function to update flake repositories
Update() {
  echo "Updating flake inputs..."
  # Construct update command as an array
  local CMD=("nix" "flake" "update" "--flake" "$FLAKE_PATH")
  if [ -n "$UPDATE_INPUTS" ]; then
    # Split comma-separated inputs and pass them to nix flake update
    IFS=',' read -ra INPUTS <<< "$UPDATE_INPUTS"
    for input in "${INPUTS[@]}"; do
      CMD+=("$input")
    done
  fi
  echo "Executing command: ${CMD[*]}"
  "${CMD[@]}" || error "Failed to update flake repositories"
  echo "Flake repositories updated successfully."
}

# Parse command-line options
if [[ -z "${1:-}" ]]; then
  echo "Error: No command specified. Printing help page."
  Help
  exit 1
fi
COMMAND=$1
shift

# Handle help command early
if [ "$COMMAND" = "help" ] || [ "$COMMAND" = "--help" ] || [ "$COMMAND" = "-h" ]; then
  Help
  exit 0
fi

while [ $# -gt 0 ]; do
  case "${1:-}" in
    -H|--host)
      if [ -n "${2:-}" ]; then
        NIXOS_HOST="$2"
        shift 2
      else
        error "-H|--host option requires an argument"
      fi
      ;;
    -u|--user)
      if [ -n "${2:-}" ]; then
        HOME_USER="$2"
        shift 2
      else
        error "-u|--user option requires an argument"
      fi
      ;;
    -p|--path)
      if [ -n "${2:-}" ]; then
        FLAKE_PATH="$2"
        shift 2
      else
        error "-p|--path option requires an argument"
      fi
      ;;
    -U|--update)
      UPDATE=1
      # Check if next argument is a non-option
      if [ $# -gt 1 ] && [ "${2#-}" = "${2:-}" ]; then
        UPDATE_INPUTS="$2"
        shift 2
      else
        shift
      fi
      ;;
    -r|--rollback)
      ROLLBACK=1
      shift
      ;;
    -t|--show-trace)
      SHOW_TRACE=1
      shift
      ;;
    -B|--build-host)
      if [ -n "${2:-}" ]; then
        BUILD_HOST="$2"
        shift 2
      else
        error "-B|--build-host option requires an argument"
      fi
      ;;
    -T|--target-host)
      if [ -n "${2:-}" ]; then
        TARGET_HOST="$2"
        shift 2
      else
        error "-T|--target-host option requires an argument"
      fi
      ;;
    *)
      echo "Error: Unknown option '$1'"
      Help
      exit 1
      ;;
  esac
done

# Check if script is run with sudo
if [ "$EUID" -eq 0 ]; then
  error "Do not run this script with sudo."
fi

# Check if flake path exists
if [ ! -d "$FLAKE_PATH" ]; then
  error "Flake path '$FLAKE_PATH' does not exist"
fi

# Ignore trailing slash in flake path
FLAKE_PATH="${FLAKE_PATH%/}"

# Check if flake.nix exists
if [ ! -f "$FLAKE_PATH/flake.nix" ]; then
  error "flake.nix does not exist in '$FLAKE_PATH'"
fi

# Execute updates and rebuilds based on the command
[ "$UPDATE" = 1 ] && Update
case "$COMMAND" in
  nixos)
    Rebuild_nixos
    ;;
  home)
    Rebuild_home
    ;;
  all)
    Rebuild_nixos
    Rebuild_home
    ;;
  *)
    echo "Error: Unknown command '$COMMAND'"
    echo "Printing help page:"
    Help
    exit 1
    ;;
esac

================================================
FILE: docs/index.md
================================================
# Cryodev NixOS Configuration Documentation

Welcome to the documentation for the **cryodev** NixOS infrastructure.

## Quick Links

### Getting Started

- [Prerequisites](getting-started/prerequisites.md) - Required tools
- [Adding a New Raspberry Pi](getting-started/new-client.md) - Complete workflow for new clients
- [SD Image Reference](getting-started/sd-image.md) - Details on image building
- [First Installation (Server)](getting-started/first-install.md) - Bootstrap for x86_64 hosts
- [Reinstallation](getting-started/reinstall.md) - Reinstall with hardware changes

### Services

- [SOPS Secrets](services/sops.md) - Secret management with sops-nix
- [Headscale](services/headscale.md) - Self-hosted Tailscale server
- [Headplane](services/headplane.md) - Web UI for Headscale
- [Tailscale](services/tailscale.md) - Mesh VPN client
- [Mailserver](services/mailserver.md) - Email stack (Postfix/Dovecot)
- [Forgejo](services/forgejo.md) - Git hosting with CI/CD
- [Netdata](services/netdata.md) - Monitoring and alerting

### Deployment

- [Continuous Deployment](deployment/cd.md) - Push- and pull-based deployment
- [DNS Configuration](deployment/dns.md) - Required DNS records

## Architecture

```
       Internet
          |
     cryodev.xyz
          |
+-------------------+
|   cryodev-main    |
|  (x86_64 Server)  |
+-------------------+
| - Headscale       |
| - Headplane       |
| - Forgejo         |
| - Mailserver      |
| - Netdata Parent  |
+-------------------+
          |
 Tailscale Mesh VPN
          |
+-------------------+
|    cryodev-pi     |
| (Raspberry Pi 4)  |
+-------------------+
| - Tailscale       |
| - Netdata Child   |
| - Comin (GitOps)  |
+-------------------+
```

## Installation Scenarios

| Scenario | Description | Guide |
|----------|-------------|-------|
| **New Raspberry Pi** | Create config, build image, flash | [new-client.md](getting-started/new-client.md) |
| **First Installation (Server)** | x86_64 host, manual installation | [first-install.md](getting-started/first-install.md) |
| **Reinstallation** | Existing host, new hardware | [reinstall.md](getting-started/reinstall.md) |

For Raspberry Pi: [SD Image Reference](getting-started/sd-image.md)

## Directory Structure

```
.
├── flake.nix        # Entry point, inputs and outputs
├── constants.nix    # Central configuration (domains, IPs, ports)
├── hosts/           # Host-specific configurations
│   ├── cryodev-main/
│   └── cryodev-pi/
├── modules/         # Reusable NixOS modules
│   └── nixos/
├── pkgs/            # Custom packages
├── overlays/        # Nixpkgs overlays
├── templates/       # Templates for new hosts
├── scripts/         # Helper scripts (install.sh)
├── apps/            # Nix apps (rebuild)
└── lib/             # Helper functions (utils.nix)
```

## Deployment Strategies

| Host | Strategy | Tool | Description |
|------|----------|------|-------------|
| `cryodev-main` | Pull-based | Comin | Polls the repository for changes |
| `cryodev-pi` | Pull-based | Comin | Polls the repository for changes |

================================================
FILE: docs/deployment/cd.md
================================================
# Continuous Deployment

All hosts use **Comin** (pull-based) for automatic deployment.

## Overview

| Host | Strategy | Tool | Trigger |
|------|----------|------|---------|
| `cryodev-main` | Pull-based | Comin | Automatic polling |
| `cryodev-pi` | Pull-based | Comin | Automatic polling |

## How It Works

1. Developer pushes to `main` branch
2. CI (Forgejo Actions) runs flake-check and builds all hosts
3. Comin on each host periodically polls the Git repository
4. On changes, Comin builds and activates the new configuration

## Configuration

```nix
# hosts/<hostname>/services/comin.nix
{
  services.comin = {
    enable = true;
    remotes = [{
      name = "origin";
      url = "https://git.cryodev.xyz/steffen/cryodev.git";
      branches.main.name = "main";
    }];
  };
}
```

## Monitoring

Check Comin status:

```bash
sudo systemctl status comin
sudo journalctl -u comin -f
```

Force immediate update:

```bash
sudo systemctl restart comin
```

## Troubleshooting

If Comin fails to build:

```bash
# Check logs
sudo journalctl -u comin --since "1 hour ago"

# Manual build test
cd /var/lib/comin/repo
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel
```

## Rollback

```bash
# List generations
sudo nix-env -p /nix/var/nix/profiles/system --list-generations

# Rollback to previous
sudo nixos-rebuild switch --rollback
```

## Manual Deployment

For initial setup or emergencies:

```bash
# Using the deploy app
nix run .#deploy -- -n <hostname>

# Or manually with nixos-rebuild
NIX_SSHOPTS="-p 2299" nixos-rebuild switch --flake .#<hostname> \
  --target-host <user>@<ip> --sudo --ask-sudo-password
```

## Testing Changes

Before pushing, always verify:

```bash
# Check flake validity
nix flake check

# Build configuration (dry-run)
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel --dry-run

# Full build
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel
```

================================================
FILE: docs/deployment/dns.md
================================================
# DNS Configuration

Required DNS records for the cryodev infrastructure.
## Primary Domain (cryodev.xyz)

### A/AAAA Records

| Hostname | Type | Value | Purpose |
|----------|------|-------|---------|
| `@` | A | `<server-ipv4>` | Main server |
| `@` | AAAA | `<server-ipv6>` | Main server (IPv6) |
| `www` | A | `<server-ipv4>` | www redirect |
| `www` | AAAA | `<server-ipv6>` | www redirect (IPv6) |
| `mail` | A | `<server-ipv4>` | Mail server |
| `mail` | AAAA | `<server-ipv6>` | Mail server (IPv6) |

### CNAME Records

| Hostname | Type | Value | Purpose |
|----------|------|-------|---------|
| `git` | CNAME | `@` | Forgejo |
| `headscale` | CNAME | `@` | Headscale |
| `headplane` | CNAME | `@` | Headplane |
| `netdata` | CNAME | `@` | Netdata Monitoring |

### Mail Records

| Hostname | Type | Value | Purpose |
|----------|------|-------|---------|
| `@` | MX | `10 mail.cryodev.xyz.` | Mail delivery |
| `@` | TXT | `"v=spf1 mx ~all"` | SPF |
| `_dmarc` | TXT | `"v=DMARC1; p=none"` | DMARC |
| `mail._domainkey` | TXT | *(see below)* | DKIM |

### Reverse DNS (PTR)

For reliable mail delivery, a **PTR record** must be configured at the hosting provider (not in the domain's DNS panel):

| IP | PTR Value |
|----|-----------|
| `<server-ipv4>` | `mail.cryodev.xyz` |
| `<server-ipv6>` | `mail.cryodev.xyz` |

#### Hetzner Robot (Dedicated Server)

1. [robot.hetzner.com](https://robot.hetzner.com) > **Server** > Select the server
2. **IPs** tab
3. Click the **pencil icon** next to the IPv4 address
4. Enter `mail.cryodev.xyz` and save
5. For IPv6: Under **Subnets**, repeat the same for the primary IPv6 address

#### Hetzner Cloud

1. [cloud.hetzner.com](https://cloud.hetzner.com) > Select the server
2. **Networking** tab
3. Under "Primary IP", click the IP > **Reverse DNS**
4. Enter `mail.cryodev.xyz` (for both IPv4 and IPv6)

## Getting the DKIM Key

After deploying the mailserver, retrieve the DKIM public key:

```bash
sudo cat /var/dkim/cryodev.xyz.mail.txt
```

Add this as a TXT record for `mail._domainkey.cryodev.xyz`.
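The key file is written in BIND zone-file syntax; a standard DKIM TXT record follows the shape below. The `p=` payload here is a truncated placeholder, not a real key -- use the value from your own key file:

```
mail._domainkey IN TXT ( "v=DKIM1; k=rsa; "
    "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC..." )
```

Most DNS panels only want the quoted value (`v=DKIM1; k=rsa; p=...`) with the quoted chunks concatenated into one string.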
## Complete Checklist

- [ ] A/AAAA for `@` (root domain)
- [ ] A/AAAA for `www`
- [ ] A/AAAA for `mail`
- [ ] CNAME for `git`, `headscale`, `headplane`, `netdata`
- [ ] MX record
- [ ] TXT for SPF (`v=spf1 mx ~all`)
- [ ] TXT for DMARC (`v=DMARC1; p=none`)
- [ ] TXT for DKIM (`mail._domainkey` -- after first deployment)
- [ ] PTR record at hosting provider (reverse DNS)

## Verification

### Check DNS Propagation

```bash
# A record
dig A cryodev.xyz

# MX record
dig MX cryodev.xyz

# SPF
dig TXT cryodev.xyz

# DKIM
dig TXT mail._domainkey.cryodev.xyz

# DMARC
dig TXT _dmarc.cryodev.xyz

# Reverse DNS
dig -x <server-ip>
```

### Online Tools

- [MXToolbox](https://mxtoolbox.com/) - Comprehensive DNS/mail testing
- [Mail-tester](https://www.mail-tester.com/) - Email deliverability testing
- [DMARC Analyzer](https://dmarcanalyzer.com/) - DMARC record validation

## TTL Recommendations

For initial setup, use low TTLs (300 seconds) to allow quick changes. After verification, increase to:

- A/AAAA records: 3600 (1 hour)
- CNAME records: 3600 (1 hour)
- MX records: 3600 (1 hour)
- TXT records: 3600 (1 hour)

## Firewall Requirements

Ensure these ports are open on `cryodev-main`:

| Port | Protocol | Service |
|------|----------|---------|
| 2299 | TCP | SSH |
| 80 | TCP | HTTP (ACME/redirect) |
| 443 | TCP | HTTPS |
| 25 | TCP | SMTP |
| 465 | TCP | SMTPS |
| 587 | TCP | SMTP Submission |
| 993 | TCP | IMAPS |

================================================
FILE: docs/getting-started/first-install.md
================================================
# Initial Installation (x86_64 Server)

This guide describes the **initial installation** of a new x86_64 server (e.g. cryodev-main).

> **For Raspberry Pi:** See [Creating an SD Image](sd-image.md).
## Overview

During initial installation there is a chicken-and-egg problem:

- SOPS secrets are encrypted with the SSH host key
- The SSH host key is only generated during installation
- Therefore: **Install without secrets first, then configure secrets**

### Process

```
1. Disable services (that require secrets)
2. Install NixOS
3. Extract SSH host key, configure SOPS, create immediately available secrets
4. Enable stage-1 services and deploy (Headscale, Forgejo, Mail, Nginx)
5. Generate remaining secrets (Tailscale, Headplane, Forgejo Runner)
6. Enable stage-2 services and perform final deployment
```

## Step 1: Prepare Host Configuration

> If the host already exists in `hosts/` and `flake.nix`, skip 1.1-1.2.

### 1.1 Create Host from Template

```bash
nix run .#create -- -t generic-server -n <hostname>
```

The script:

- Copies the template to `hosts/<hostname>/`
- Sets the hostname in `networking.nix`
- Creates an empty `secrets.yaml`
- Adds the files to Git

### 1.2 Register in flake.nix

```nix
nixosConfigurations = {
  <hostname> = mkNixosConfiguration "x86_64-linux" [ ./hosts/<hostname> ];
};
```

Also adjust `hardware.nix` and `disks.sh` for the target hardware.

### 1.4 Temporarily Disable Services

All services that reference SOPS secrets must be disabled for the initial installation. Otherwise the installation will fail because the secrets cannot yet be decrypted.
In `hosts/<hostname>/services/default.nix`, comment out the corresponding imports:

```nix
{
  imports = [
    # Disabled until SOPS secrets are configured:
    # ./forgejo.nix     # requires: forgejo-runner/token, forgejo/mail-pw
    # ./headplane.nix   # requires: headplane/cookie_secret, headplane/agent_pre_authkey
    # ./mailserver.nix  # requires: mailserver/accounts/*
    # ./tailscale.nix   # requires: tailscale/auth-key

    # These services do not require secrets:
    ./headscale.nix
    ./netdata.nix
    ./nginx.nix
    ./openssh.nix
    ./sops.nix
  ];
}
```

Additionally, in `hosts/<hostname>/services/sops.nix`, comment out the secret definitions:

```nix
sops = {
  defaultSopsFile = ../secrets.yaml;
  # secrets = {
  #   "forgejo-runner/token" = { };
  #   "tailscale/auth-key" = { };
  # };
};
```

### 1.5 Test the Configuration

```bash
nix eval .#nixosConfigurations.<hostname>.config.system.build.toplevel.name
```

## Step 2: Perform Installation

### 2.1 Boot NixOS ISO

Boot from the [NixOS Minimal ISO](https://nixos.org/download/#nixos-iso) (USB/CD).

### 2.2 Set Up Network and SSH

```bash
passwd  # Set root password for SSH access
ip a    # Determine IP address
```

Optionally connect via SSH (more convenient):

```bash
ssh -o StrictHostKeyChecking=no root@<ip>
```

### 2.3 Install

```bash
nix --experimental-features "nix-command flakes" run \
  git+<repo-url>#apps.x86_64-linux.install -- \
  -n <hostname> \
  -r <repo-url>
```

Alternatively, if the repository has already been cloned to `/tmp/nixos`:

```bash
nix --experimental-features "nix-command flakes" run /tmp/nixos#install -- -n <hostname>
```

> **Note:** The disk ID in `hosts/<hostname>/disks.sh` must match the hardware.
> Verify with `ls -la /dev/disk/by-id/`.

The script:

1. Clones the repository (when using `-r`)
2. Partitions the disk (via `disks.nix` or `disks.sh`)
3. Generates `hardware.nix` (if not present)
4. Installs NixOS

### 2.4 Reboot

```bash
reboot
```

## Step 3: Configure SOPS Secrets

After the first boot, log in (password: `changeme`, change immediately with `passwd`).
### 3.1 Convert SSH Host Key to Age Key

On the **new server**:

```bash
nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
```

Note the output (e.g. `age1abc123...`).

Alternatively, remotely:

```bash
nix-shell -p ssh-to-age --run 'ssh-keyscan -p 2299 -t ed25519 <ip> | ssh-to-age'
```

### 3.2 Update .sops.yaml

On the **development machine**, add the new host key to `.sops.yaml`:

```yaml
keys:
  - &steffen_key age1e8p... # steffen (local)
  - &hostname_key age1abc... # Key from step 3.1
creation_rules:
  - path_regex: hosts/<hostname>/secrets.yaml$
    key_groups:
      - age:
          - *steffen_key
          - *hostname_key
```

### 3.3 Create Secrets

Open the secrets file:

```bash
sops hosts/<hostname>/secrets.yaml
```

The following table shows all secrets for **cryodev-main** and how they are generated:

#### Immediately Available Secrets

These secrets have no dependencies and can be generated directly:

| Secret | Command |
|--------|---------|
| `headplane/cookie_secret` | `openssl rand -hex 16` |
| `mailserver/accounts/admin` | `mkpasswd -sm bcrypt` (remember the password!) |
| `mailserver/accounts/forgejo` | `mkpasswd -sm bcrypt` (remember the password!) |
| `forgejo/mail-pw` | Plaintext password matching the bcrypt hash of `mailserver/accounts/forgejo` |

#### Secrets That Require Running Services

These secrets can only be created after step 4. **Do not add them yet** -- they will be added later.

| Secret | Command | Prerequisite |
|--------|---------|--------------|
| `tailscale/auth-key` | See steps 4.1-4.2 | Headscale is running |
| `headplane/agent_pre_authkey` | See steps 4.1-4.2 | Headscale is running |
| `forgejo-runner/token` | Forgejo Admin Panel > Actions > Runners > Create Runner | Forgejo is running |

#### Example secrets.yaml (Plaintext Before Encryption)

```yaml
headplane:
  cookie_secret: "a1b2c3d4e5f6..."
mailserver:
  accounts:
    admin: "$2b$05$..."
    forgejo: "$2b$05$..."
forgejo:
  mail-pw: "the-plaintext-password"
```

### 3.4 Gradually Re-enable Services -- Stage 1

> **Important:** Services that require Headscale or Forgejo secrets (Tailscale,
> Headplane, Forgejo Runner) must **not** be enabled yet, as these
> secrets can only be generated once those services are running.

On the **development machine**, in `hosts/<hostname>/services/default.nix`, enable the services **without external dependencies**:

```nix
{
  imports = [
    # Stage 1: Services without external dependencies
    ./forgejo.nix
    ./headscale.nix
    ./mailserver.nix
    ./netdata.nix
    ./nginx.nix
    ./openssh.nix
    ./sops.nix

    # Stage 2: Enable only after step 4
    # ./forgejo-runner.nix  # requires: forgejo-runner/token (Forgejo)
    # ./headplane.nix       # requires: headplane/agent_pre_authkey (Headscale)
    # ./tailscale.nix       # requires: tailscale/auth-key (Headscale)
  ];
}
```

### 3.5 Deploy (Stage 1)

```bash
nix run .#deploy -- -n <hostname>
```

This uses the configuration from `deploy.json`. Alternatively, deploy manually:

```bash
NIX_SSHOPTS="-p 2299" nixos-rebuild switch --flake .#<hostname> \
  --target-host <user>@<ip> --sudo --ask-sudo-password
```

After this deployment, Headscale, Forgejo, Mailserver, and Nginx are running.

### 3.6 Create Forgejo Admin Account

On first start, Forgejo has no users. Create an admin account via CLI (on the **server**):

```bash
forgejo admin user create \
  --username <username> \
  --email <username>@<domain> \
  --password <password> \
  --admin
```

> **Note:** The `forgejo` shell alias is provided by the module and automatically
> runs the command as the `forgejo` user with the correct config.
> If the alias is not available, start a new shell (`bash` or `zsh`).
>
> Since `DISABLE_REGISTRATION = true` is set, new accounts
> can only be created via CLI.

## Step 4: Generate Remaining Secrets and Enable All Services

After the server is running with Headscale and Forgejo:

1. **Create Headscale users** (on the server):

   ```bash
   sudo headscale users create default
   sudo headscale users create headplane-agent
   ```

2. **Determine user IDs** (needed for the preauth keys):

   ```bash
   sudo headscale users list
   ```

   The output shows the numeric IDs (e.g. `1` for default, `2` for headplane-agent).

3. **Generate preauth keys** (using the IDs from step 2):

   ```bash
   # For Tailscale (use the user ID of "default")
   sudo headscale preauthkeys create --expiration 99y --reusable --user <user-id>

   # For Headplane Agent (use the user ID of "headplane-agent")
   sudo headscale preauthkeys create --expiration 99y --user <user-id>
   ```

4. **Create the Forgejo Runner token** via the Forgejo Admin Panel:
   Administration > Actions > Runners > Create new Runner

5. **Add the remaining secrets**:

   ```bash
   sops hosts/<hostname>/secrets.yaml
   ```

   Add the missing secrets:

   ```yaml
   tailscale:
     auth-key: "tskey-..."
   forgejo-runner:
     token: "..."
   headplane:
     agent_pre_authkey: "..."
   ```

6. **Enable stage-2 services** in `hosts/<hostname>/services/default.nix`:

   ```nix
   {
     imports = [
       ./forgejo.nix
       ./forgejo-runner.nix
       ./headplane.nix
       ./headscale.nix
       ./mailserver.nix
       ./netdata.nix
       ./nginx.nix
       ./openssh.nix
       ./sops.nix
       ./tailscale.nix
     ];
   }
   ```

7. **Deploy again**:

   ```bash
   nix run .#deploy -- -n <hostname>
   ```

## Next Steps

- [SOPS Reference](../services/sops.md) -- Detailed documentation on secret management
- [Creating an SD Image](sd-image.md) -- Install Raspberry Pi
- [Set Up CD](../deployment/cd.md) -- Automatic deployment

================================================
FILE: docs/getting-started/new-client.md
================================================
# Adding a New Raspberry Pi Client

This guide describes how to add a **new Raspberry Pi client** to the infrastructure.

## Overview: The Process

```
1. Create configuration   ──►  Copy template, customize
         │
         ▼
2. Add to image pipeline  ──►  Extend workflow matrix
         │
         ▼
3. Push to main           ──►  Forgejo automatically builds SD image
         │
         ▼
4. Flash image & boot     ──►  Write SD card, start Pi
         │
         ▼
5. Configure SOPS         ──►  Retrieve age key, create secrets
         │
         ▼
6. Final deployment       ──►  Activate Tailscale etc.
```

## Prerequisites

- SSH access to cryodev-main (for Tailscale auth key)
- Development machine with repository access
- SD card (at least 8 GB)

---

## Step 1: Generate Tailscale Auth Key

**On cryodev-main** (via SSH):

```bash
# Determine user ID
sudo headscale users list

# Create preauth key (use user ID of "default")
sudo headscale preauthkeys create --expiration 99y --reusable --user <user-id>
```

**Take note of the output!** (e.g. `tskey-preauth-abc123...`)

---

## Step 2: Create Host Configuration

### 2.1 Copy Template

```bash
cp -r templates/raspberry-pi hosts/neuer-pi
```

### 2.2 Set Hostname

`hosts/neuer-pi/networking.nix`:

```nix
{
  networking.hostName = "neuer-pi";
}
```

### 2.3 Register in flake.nix

```nix
nixosConfigurations = {
  # ... existing hosts ...
  neuer-pi = mkNixosConfiguration "aarch64-linux" [ ./hosts/neuer-pi ];
};
```

### 2.4 Add to constants.nix

```nix
{
  hosts = {
    # ... existing hosts ...
    neuer-pi = {
      ip = "100.64.0.X"; # Assigned by Headscale
    };
  };
}
```

### 2.5 Create Placeholder secrets.yaml

```bash
touch hosts/neuer-pi/secrets.yaml
```

### 2.6 Temporarily Disable SOPS

In `hosts/neuer-pi/default.nix`, comment out the `sops.secrets.*` references so the image can be built without secrets.

---

## Step 3: Add to Image Pipeline

Edit `.forgejo/workflows/build-pi-image.yml`:

```yaml
jobs:
  build-pi-images:
    strategy:
      matrix:
        # Add new host here:
        host: [cryodev-pi, neuer-pi]
```

---

## Step 4: Push and Build Image

```bash
git add .
git commit -m "Add neuer-pi host configuration"
git push
```

The Forgejo workflow will now automatically build an SD image for `neuer-pi`.

**Wait** until the workflow completes (30-60 minutes).
Check the status at: `https://git.cryodev.xyz/steffen/cryodev-server/actions` --- ## Step 5: Flash Image ### 5.1 Download Image After a successful build, find the image under **Releases**: ```bash wget https://git.cryodev.xyz/steffen/cryodev-server/releases/latest/download/neuer-pi-sd-image.img.zst ``` ### 5.2 Decompress ```bash zstd -d neuer-pi-sd-image.img.zst -o neuer-pi.img ``` ### 5.3 Write to SD Card **Warning:** Replace `/dev/sdX` with the correct device! ```bash lsblk # Identify the correct device sudo dd if=neuer-pi.img of=/dev/sdX bs=4M conv=fsync status=progress ``` ### 5.4 Boot 1. Insert the SD card into the Raspberry Pi 2. Connect Ethernet 3. Connect power 4. Wait until booted (approximately 2 minutes) --- ## Step 6: Configure SOPS ### 6.1 Find IP Address The Pi should receive an IP address via DHCP. Check your router or scan the network: ```bash nmap -sn 192.168.1.0/24 | grep -B2 "Raspberry" ``` ### 6.2 Connect via SSH ```bash ssh steffen@<pi-ip> # or the configured user ``` For the default password, see `hosts/neuer-pi/users.nix`. ### 6.3 Determine Age Key On the Pi: ```bash nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age' ``` **Take note of the output!** (e.g. `age1xyz...`) ### 6.4 Update .sops.yaml On the development machine: ```yaml keys: - &steffen_key age1e8p35795htf7twrejyugpzw0qja2v33awcw76y4gp6acnxnkzq0s935t4t # steffen (local) - &neuer_pi_key age1xyz... # The new key creation_rules: # ... existing rules ... - path_regex: hosts/neuer-pi/secrets.yaml$ key_groups: - age: - *steffen_key - *neuer_pi_key ``` ### 6.5 Create Secrets ```bash sops hosts/neuer-pi/secrets.yaml ``` Contents: ```yaml tailscale: auth-key: "tskey-preauth-abc123..." # Key from Step 1 netdata: stream: child-uuid: "..." # uuidgen ``` ### 6.6 Activate SOPS References Re-enable the `sops.secrets.*` references that were commented out in Step 2.6. --- ## Step 7: Final Deployment ```bash git add .
git commit -m "Configure SOPS secrets for neuer-pi" git push ``` Since Comin is running on the Pi, it will automatically pull the new configuration. Alternatively, deploy manually: ```bash NIX_SSHOPTS="-p 2299" nixos-rebuild switch --flake .#neuer-pi \ --target-host <user>@<pi-ip> --sudo --ask-sudo-password ``` --- ## Step 8: Verify ### Tailscale Connection ```bash # On the Pi tailscale status # On cryodev-main sudo headscale nodes list ``` ### Netdata Streaming Check whether the new client appears in the Netdata dashboard: `https://netdata.cryodev.xyz` --- ## Checklist - [ ] Tailscale auth key generated on cryodev-main - [ ] Host configuration created (template, flake.nix, constants.nix) - [ ] Host added to workflow matrix - [ ] Pushed and waited for image build - [ ] SD card flashed and Pi booted - [ ] Age key determined and added to .sops.yaml - [ ] secrets.yaml created (Tailscale key, Netdata UUID) - [ ] SOPS references activated and deployed - [ ] Tailscale connection working - [ ] Netdata streaming working ================================================ FILE: docs/getting-started/prerequisites.md ================================================ # Prerequisites ## Required Tools Ensure you have the following tools installed on your local machine: | Tool | Purpose | |------|---------| | `nix` | Package manager with flakes enabled | | `sops` | Secret encryption/decryption | | `age` | Encryption backend for sops | | `ssh` | Remote access | ### Installing Nix Follow the [official Nix installation guide](https://nixos.org/download/). Enable flakes by adding to `~/.config/nix/nix.conf`: ``` experimental-features = nix-command flakes ``` ### Installing Other Tools With Nix: ```bash nix-shell -p sops age ``` Or install globally via home-manager or system configuration.
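Before continuing, it can help to confirm each tool is actually on `PATH`. A small sketch (the tool list simply mirrors the table above; `check_tool` is a hypothetical helper, not part of the repository):

```shell
#!/usr/bin/env bash
# Report whether each required tool from the table above is available on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

for tool in nix sops age ssh; do
  check_tool "$tool"
done
```

Any `missing:` line means the corresponding tool still needs to be installed.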
## Repository Access Clone the repository: ```bash git clone https://git.cryodev.xyz/steffen/cryodev-server.git cd cryodev-server ``` ## Development Shell Enter the development shell with all required tools: ```bash nix develop ``` ## Verifying Setup Check that the flake is valid: ```bash nix flake check ``` Build a host configuration (dry run): ```bash nix build .#nixosConfigurations.cryodev-main.config.system.build.toplevel --dry-run ``` ================================================ FILE: docs/getting-started/reinstall.md ================================================ # Reinstallation This guide describes the **reinstallation** of an existing host, e.g. after a hardware change or in case of issues. ## Difference from Initial Installation | Aspect | Initial Installation | Reinstallation | |--------|----------------------|----------------| | SOPS Secrets | Not yet present | Already configured | | SSH Host Key | Newly generated | **Must be restored!** | | Disk IDs | Newly determined | Often changed (new hardware) | | secrets.yaml | Will be created | Already exists | ## Important: SSH Host Key Issue During a reinstallation, a **new SSH host key** is generated. This key will no longer match the age key in `.sops.yaml`! ### Possible Solutions **Option A: Back up and restore the old host key** (recommended) **Option B: Generate a new key and update SOPS** ## Prerequisites - Backup of the old SSH host key (if using Option A) - Access to `.sops.yaml` and the admin age keys - Bootable NixOS ISO ## Step 1: Preparation (before the installation) ### 1.1 Back Up the Old SSH Host Key (Option A) If the old host is still running: ```bash # On the old host sudo cat /etc/ssh/ssh_host_ed25519_key > ~/ssh_host_ed25519_key.backup sudo cat /etc/ssh/ssh_host_ed25519_key.pub > ~/ssh_host_ed25519_key.pub.backup ``` Copy the files securely to the development machine. ### 1.2 Determine Disk IDs **With new hardware**, the disk IDs will change! 
```bash # In the NixOS live system lsblk -o NAME,SIZE,MODEL,SERIAL ls -la /dev/disk/by-id/ ``` Enter the new disk ID in `hosts/<hostname>/disks.sh` or `disks.nix`: ```bash # Example disks.sh DISK="/dev/disk/by-id/nvme-Samsung_SSD_970_EVO_Plus_XXXXX" ``` ## Step 2: Perform the Installation ### 2.1 Boot the NixOS ISO Boot from USB/CD, set a root password, and connect via SSH. ### 2.2 Clone the Repository ```bash sudo -i nix-shell -p git git clone https://git.cryodev.xyz/steffen/cryodev-server.git /tmp/nixos cd /tmp/nixos ``` ### 2.3 Verify the Disk Configuration ```bash # Display current disk IDs ls -la /dev/disk/by-id/ # Compare with the configuration cat hosts/<hostname>/disks.sh | grep DISK ``` **If necessary:** Update the disk ID in the configuration. ### 2.4 Run the Install Script ```bash bash scripts/install.sh -n ``` ### 2.5 Restore the SSH Host Key (Option A) **Before rebooting!** ```bash # Restore the host key from backup cp /path/to/ssh_host_ed25519_key.backup /mnt/etc/ssh/ssh_host_ed25519_key cp /path/to/ssh_host_ed25519_key.pub.backup /mnt/etc/ssh/ssh_host_ed25519_key.pub chmod 600 /mnt/etc/ssh/ssh_host_ed25519_key chmod 644 /mnt/etc/ssh/ssh_host_ed25519_key.pub ``` ### 2.6 Reboot ```bash umount -Rl /mnt reboot ``` ## Step 3: After the Reboot ### Option A (Key Restored) SOPS secrets should work automatically. Verify: ```bash sudo cat /run/secrets/tailscale/auth-key ``` ### Option B (New Key) The host cannot decrypt the secrets. Configure the new key: ```bash # Determine the new age key nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age' ``` On the development machine: ```bash # Update .sops.yaml with the new key vim .sops.yaml # Re-encrypt secrets with the new key sops updatekeys hosts/<hostname>/secrets.yaml ``` Then redeploy the configuration: ```bash NIX_SSHOPTS="-p 2299" nixos-rebuild switch --flake .#<hostname> \ --target-host <user>@<host-ip> --sudo --ask-sudo-password ``` ## Common Issues ### "No secret key available" SOPS cannot decrypt the secrets.
Cause: - SSH host key does not match the age key in `.sops.yaml` Solution: Follow Option B (configure the new key). ### "Device not found" during partitioning The disk ID in `disks.sh`/`disks.nix` is incorrect. ```bash # Find the correct ID ls -la /dev/disk/by-id/ ``` ### Outdated Hardware Config With new hardware, `hardware.nix` must be regenerated: ```bash # The install script regenerates automatically if the file is missing rm hosts/<hostname>/hardware.nix bash scripts/install.sh -n ``` ## Checklist - [ ] Old SSH host key backed up (if possible) - [ ] Disk IDs in configuration verified/updated - [ ] Installation completed - [ ] SSH host key restored OR new key configured in SOPS - [ ] Secrets are functional (`sudo cat /run/secrets/...`) - [ ] Tailscale connected (`tailscale status`) ================================================ FILE: docs/getting-started/sd-image.md ================================================ # SD Card Images for Raspberry Pi The repository automatically builds SD card images for all configured Raspberry Pi hosts. ## Automatic Build When changes are pushed to `main`, images are automatically built for all Pi hosts and published as a release. **Download:** [Releases on Forgejo](https://git.cryodev.xyz/steffen/cryodev-server/releases) ## Available Images | Host | Image Name | |------|------------| | `cryodev-pi` | `cryodev-pi-sd-image.img.zst` | New hosts are built automatically once they are added to the workflow matrix. ## Flashing the Image ### 1. Download ```bash wget https://git.cryodev.xyz/.../releases/latest/download/<host>-sd-image.img.zst wget https://git.cryodev.xyz/.../releases/latest/download/<host>-sd-image.img.zst.sha256 # Verify checksum sha256sum -c <host>-sd-image.img.zst.sha256 ``` ### 2. Decompress ```bash zstd -d <host>-sd-image.img.zst -o <host>.img ``` ### 3. Write to SD Card ```bash # Identify the correct device lsblk # Write (WARNING: make sure to select the correct device!)
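# Optional safety guard (a sketch, not part of the repo's scripts): refuse to
# write unless the target really is a block device.
check_blockdev() {
  [[ -b "$1" ]] || { echo "refusing: $1 is not a block device" >&2; return 1; }
}
# Usage: check_blockdev /dev/sdX || exit 1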
sudo dd if=<host>.img of=/dev/sdX bs=4M conv=fsync status=progress ``` Alternatively, use `balenaEtcher` or `Raspberry Pi Imager`. ## What Is Included in the Image? - Complete NixOS installation for the specific host - All configured services (except secrets) - SSH server enabled - Automatic root partition expansion on first boot - Comin for automatic updates ## What Is Missing? **SOPS secrets** cannot be included in the image (chicken-and-egg problem with the SSH host key). After the first boot: 1. Retrieve the age key from the Pi 2. Update `.sops.yaml` 3. Create `secrets.yaml` 4. Deploy the configuration See [Adding a New Client](new-client.md) for the complete guide. ## Adding a New Host to the Pipeline 1. Create the host configuration in `hosts/<host>/` 2. Add it to the matrix in `.forgejo/workflows/build-pi-image.yml`: ```yaml matrix: host: [cryodev-pi, new-host] # <- add here ``` 3. Push to `main` -- the image will be built automatically ## Building Manually ```bash # On aarch64 (e.g., another Pi) nix build .#nixosConfigurations.<host>.config.system.build.sdImage # On x86_64 with QEMU emulation (slow) nix build .#nixosConfigurations.<host>.config.system.build.sdImage \ --extra-platforms aarch64-linux ``` Prerequisite on x86_64: ```nix { boot.binfmt.emulatedSystems = [ "aarch64-linux" ]; } ``` ## Troubleshooting ### Workflow Fails - Check whether `sd-image.nix` is imported in the host configuration - Check whether binfmt is enabled on cryodev-main ### Image Does Not Boot - Was the SD card written correctly? - Try a different SD card - Check the power supply (minimum 3A for Pi 4) ### No Network - Check the Ethernet cable - Is there a DHCP server on the network? ================================================ FILE: docs/services/forgejo.md ================================================ # Forgejo Forgejo is a self-hosted Git service (fork of Gitea) with built-in CI/CD Actions.
## References - [Forgejo Documentation](https://forgejo.org/docs/) - [Forgejo Actions](https://forgejo.org/docs/latest/user/actions/) ## Setup ### DNS Set a CNAME record for `git.cryodev.xyz` pointing to your main domain. ### Configuration ```nix # hosts/cryodev-main/services/forgejo.nix { config, ... }: { services.forgejo = { enable = true; settings = { server = { DOMAIN = "git.cryodev.xyz"; ROOT_URL = "https://git.cryodev.xyz"; }; mailer = { ENABLED = true; FROM = "forgejo@cryodev.xyz"; }; }; }; } ``` ## Forgejo Runner The runner executes CI/CD pipelines defined in `.forgejo/workflows/`. ### Get Runner Token 1. Go to Forgejo Admin Panel 2. Navigate to Actions > Runners 3. Create a new runner and copy the token ### Add to Secrets ```bash sops hosts/cryodev-main/secrets.yaml ``` ```yaml forgejo-runner: token: "your-runner-token" ``` ### Configuration ```nix { sops.secrets."forgejo-runner/token" = { }; services.gitea-actions-runner = { instances.default = { enable = true; url = "https://git.cryodev.xyz"; tokenFile = config.sops.secrets."forgejo-runner/token".path; labels = [ "ubuntu-latest:docker://node:20" ]; }; }; } ``` ## CI/CD Workflows CI runs on every push to `main` via Forgejo Actions: 1. **flake-check** -- validates the flake 2. **build-hosts** -- builds all host configurations Deployment is handled by **Comin** (pull-based), not by CI. See [CD documentation](../deployment/cd.md) for details. ## Administration ### Create Admin User ```bash forgejo admin user create \ --username <username> \ --email <name>@<domain> \ --password <password> \ --admin ``` ### Reset User Password ```bash sudo -u forgejo forgejo admin user change-password \ --username USER \ --password NEWPASS ``` ## Troubleshooting ### Check Service Status ```bash sudo systemctl status forgejo sudo systemctl status gitea-runner-default ``` ### View Logs ```bash sudo journalctl -u forgejo -f sudo journalctl -u gitea-runner-default -f ``` ### Database Issues Forgejo uses SQLite by default.
Database location: ```bash ls -la /var/lib/forgejo/data/ ``` ================================================ FILE: docs/services/headplane.md ================================================ # Headplane Headplane is a web-based admin interface for Headscale. ## References - [GitHub](https://github.com/tale/headplane) ## Setup ### DNS Set a CNAME record for `headplane.cryodev.xyz` pointing to your main domain. ### Generate Secrets **Cookie Secret** (for session management): ```bash nix-shell -p openssl --run 'openssl rand -hex 16' ``` **Agent Pre-Auth Key** (for Headplane's built-in agent): ```bash # First, create a dedicated user sudo headscale users create headplane-agent # Find the user ID sudo headscale users list # Then create a reusable pre-auth key (use the ID of headplane-agent) sudo headscale preauthkeys create --expiration 99y --reusable --user <user-id> ``` ### Add to Secrets Edit `hosts/cryodev-main/secrets.yaml`: ```bash sops hosts/cryodev-main/secrets.yaml ``` ```yaml headplane: cookie_secret: "your-generated-hex-string" agent_pre_authkey: "your-preauth-key" ``` ### Configuration ```nix # hosts/cryodev-main/services/headplane.nix { config, ... }: { sops.secrets."headplane/cookie_secret" = { }; sops.secrets."headplane/agent_pre_authkey" = { }; services.headplane = { enable = true; settings = { server = { cookie_secret_file = config.sops.secrets."headplane/cookie_secret".path; }; headscale = { url = "https://headscale.cryodev.xyz"; }; agent = { enable = true; authkey_file = config.sops.secrets."headplane/agent_pre_authkey".path; }; }; }; } ``` ## Usage Access Headplane at `https://headplane.cryodev.xyz`.
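For that URL to be reachable, the repo's nginx module has to proxy requests to the local Headplane process; the real wiring lives in `hosts/cryodev-main/services/nginx.nix`. A minimal sketch of such a virtual host — the upstream port (3000) is an assumption, check your Headplane version's listen address:

```nix
# Hypothetical fragment; the repository's actual proxy config lives in
# hosts/cryodev-main/services/nginx.nix. The upstream port is an assumption.
{
  services.nginx.virtualHosts."headplane.cryodev.xyz" = {
    forceSSL = true;
    enableACME = true;
    locations."/" = {
      proxyPass = "http://127.0.0.1:3000";
      proxyWebsockets = true; # pass websocket upgrades through, if the UI uses them
    };
  };
}
```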
### Features - View and manage users - View connected nodes - Manage routes and exit nodes - View pre-auth keys ## Troubleshooting ### Check Service Status ```bash sudo systemctl status headplane ``` ### View Logs ```bash sudo journalctl -u headplane -f ``` ### Agent Not Connecting Verify the agent pre-auth key is valid: ```bash sudo headscale preauthkeys list --user <user-id> ``` If expired, create a new one and update the secrets file. ================================================ FILE: docs/services/headscale.md ================================================ # Headscale Headscale is an open-source, self-hosted implementation of the Tailscale control server. ## References - [Website](https://headscale.net/stable/) - [GitHub](https://github.com/juanfont/headscale) - [Example configuration](https://github.com/juanfont/headscale/blob/main/config-example.yaml) ## Setup ### DNS Set a CNAME record for `headscale.cryodev.xyz` pointing to your main domain. ### Configuration ```nix # hosts/cryodev-main/services/headscale.nix { services.headscale = { enable = true; openFirewall = true; }; } ``` ## Usage ### Create a User ```bash sudo headscale users create <name> ``` ### List Users ```bash sudo headscale users list ``` ### Create Pre-Auth Key ```bash sudo headscale preauthkeys create --expiration 99y --reusable --user <user-id> ``` The pre-auth key is used by clients to automatically authenticate and join the tailnet. ### List Nodes ```bash sudo headscale nodes list ``` ### Delete a Node ```bash sudo headscale nodes delete -i <node-id> ``` ### Rename a Node ```bash sudo headscale nodes rename -i <node-id> new-name ``` ## ACL Configuration Access Control Lists define which nodes can communicate with each other.
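The repository keeps its policy file at `modules/nixos/headscale/acl.hujson`. Wiring it into the service might look like the following sketch — the `policy.path` settings key is an assumption based on recent Headscale config layouts, so verify it against your version's configuration reference:

```nix
# Sketch: point Headscale at the repo's ACL file. The settings key
# (policy.path) is an assumption; older releases used a different name.
{
  services.headscale = {
    enable = true;
    settings.policy.path = ./acl.hujson;
  };
}
```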
### Validate ACL File ```bash sudo headscale policy check --file /path/to/acl.hujson ``` ### Example ACL ```json { "acls": [ { "action": "accept", "src": ["*"], "dst": ["*:*"] } ] } ``` ## Troubleshooting ### Check Service Status ```bash sudo systemctl status headscale ``` ### View Logs ```bash sudo journalctl -u headscale -f ``` ### Test DERP Connectivity ```bash curl -I https://headscale.cryodev.xyz/derp ``` ## Integration - [Headplane](headplane.md) - Web UI for managing Headscale - [Tailscale Client](tailscale.md) - Connect clients to Headscale ================================================ FILE: docs/services/mailserver.md ================================================ # Mailserver NixOS mailserver module providing a complete email stack with Postfix and Dovecot. ## References - [Simple NixOS Mailserver](https://gitlab.com/simple-nixos-mailserver/nixos-mailserver) ## Setup ### DNS Records | Type | Hostname | Value | |------|----------|-------| | A | `mail` | `<server-ipv4>` | | AAAA | `mail` | `<server-ipv6>` | | MX | `@` | `10 mail.cryodev.xyz.` | | TXT | `@` | `"v=spf1 mx ~all"` | | TXT | `_dmarc` | `"v=DMARC1; p=none"` | DKIM records are generated automatically after first deployment. ### Generate Password Hashes ```bash nix-shell -p mkpasswd --run 'mkpasswd -sm bcrypt' ``` ### Add to Secrets ```bash sops hosts/cryodev-main/secrets.yaml ``` ```yaml mailserver: accounts: admin: "$2y$05$..." forgejo: "$2y$05$..." ``` ### Configuration ```nix # hosts/cryodev-main/services/mailserver.nix { config, ...
}: { sops.secrets."mailserver/accounts/admin" = { }; sops.secrets."mailserver/accounts/forgejo" = { }; mailserver = { enable = true; fqdn = "mail.cryodev.xyz"; domains = [ "cryodev.xyz" ]; loginAccounts = { "admin@cryodev.xyz" = { hashedPasswordFile = config.sops.secrets."mailserver/accounts/admin".path; }; "forgejo@cryodev.xyz" = { hashedPasswordFile = config.sops.secrets."mailserver/accounts/forgejo".path; sendOnly = true; }; }; }; } ``` ## DKIM Setup After first deployment, get the DKIM public key: ```bash sudo cat /var/dkim/cryodev.xyz.mail.txt ``` Add this as a TXT record: | Type | Hostname | Value | |------|----------|-------| | TXT | `mail._domainkey` | `v=DKIM1; k=rsa; p=...` | ## Testing ### Send Test Email ```bash echo "Test" | mail -s "Test Subject" recipient@example.com ``` ### Check Mail Queue ```bash sudo postqueue -p ``` ### View Logs ```bash sudo journalctl -u postfix -f sudo journalctl -u dovecot2 -f ``` ### Test SMTP ```bash openssl s_client -connect mail.cryodev.xyz:587 -starttls smtp ``` ### Verify DNS Records - [MXToolbox](https://mxtoolbox.com/) - [Mail-tester](https://www.mail-tester.com/) ## Troubleshooting ### Emails Not Sending Check Postfix status: ```bash sudo systemctl status postfix ``` Check firewall (ports 25, 465, 587 must be open): ```bash sudo iptables -L -n | grep -E '25|465|587' ``` ### DKIM Failing Verify the DNS record matches the generated key: ```bash dig TXT mail._domainkey.cryodev.xyz ``` ### SPF Failing Verify SPF record: ```bash dig TXT cryodev.xyz ``` Should return: `"v=spf1 mx ~all"` ================================================ FILE: docs/services/netdata.md ================================================ # Netdata Monitoring Netdata provides real-time performance monitoring with parent/child streaming. 
## Architecture ``` ┌─────────────────┐ Stream over ┌─────────────────┐ │ cryodev-pi │ ───────────────────>│ cryodev-main │ │ (Child Node) │ Tailscale VPN │ (Parent Node) │ └─────────────────┘ └─────────────────┘ │ v https://netdata.cryodev.xyz ``` ## References - [Netdata Documentation](https://learn.netdata.cloud/) - [Streaming Configuration](https://learn.netdata.cloud/docs/streaming/streaming-configuration-reference) ## Parent Node (cryodev-main) ### DNS Set a CNAME record for `netdata.cryodev.xyz` pointing to your main domain. ### Generate Stream API Key ```bash uuidgen ``` ### Configuration ```nix # hosts/cryodev-main/services/netdata.nix { config, ... }: { sops.secrets."netdata/stream-api-key" = { }; sops.templates."netdata-stream.conf" = { content = '' [${config.sops.placeholder."netdata/stream-api-key"}] enabled = yes default history = 3600 default memory mode = ram health enabled by default = auto allow from = * ''; owner = "netdata"; }; services.netdata = { enable = true; configDir."stream.conf" = config.sops.templates."netdata-stream.conf".path; }; } ``` ## Child Node (cryodev-pi) ### Generate Child UUID ```bash uuidgen ``` ### Add to Secrets ```bash sops hosts/cryodev-pi/secrets.yaml ``` ```yaml netdata: stream: child-uuid: "your-generated-uuid" ``` Note: The stream API key must match the parent's key. You can either: 1. Share the same secret between hosts (complex with SOPS) 2. Hardcode a known API key in both configurations ### Configuration ```nix # hosts/cryodev-pi/services/netdata.nix { config, constants, ... 
}: { sops.secrets."netdata/stream/child-uuid" = { }; sops.templates."netdata-stream.conf" = { content = '' [stream] enabled = yes destination = ${constants.hosts.cryodev-main.ip}:19999 api key = YOUR_STREAM_API_KEY send charts matching = * ''; owner = "netdata"; }; services.netdata = { enable = true; configDir."stream.conf" = config.sops.templates."netdata-stream.conf".path; }; } ``` ## Email Alerts Configure Netdata to send alerts via the mailserver: ```nix { services.netdata.configDir."health_alarm_notify.conf" = pkgs.writeText "notify.conf" '' SEND_EMAIL="YES" EMAIL_SENDER="netdata@cryodev.xyz" DEFAULT_RECIPIENT_EMAIL="admin@cryodev.xyz" ''; } ``` ## Usage ### Access Dashboard Open `https://netdata.cryodev.xyz` in your browser. ### View Child Nodes Child nodes appear in the left sidebar under "Nodes". ### Check Streaming Status On parent: ```bash curl -s http://localhost:19999/api/v1/info | jq '.hosts' ``` On child: ```bash curl -s http://localhost:19999/api/v1/info | jq '.streaming' ``` ## Troubleshooting ### Check Service Status ```bash sudo systemctl status netdata ``` ### View Logs ```bash sudo journalctl -u netdata -f ``` ### Child Not Streaming 1. Verify network connectivity: ```bash tailscale ping cryodev-main nc -zv <parent-ip> 19999 ``` 2. Check API key matches between parent and child 3. Verify firewall allows port 19999 on parent ### High Memory Usage Adjust history settings in `netdata.conf`: ```ini [global] history = 1800 # seconds to retain memory mode = ram ``` ================================================ FILE: docs/services/sops.md ================================================ # SOPS Secret Management Atomic secret provisioning for NixOS using [sops-nix](https://github.com/Mic92/sops-nix). ## Overview Secrets are encrypted with `age` using SSH host keys, ensuring: - No plaintext secrets in the repository - Secrets are decrypted at activation time - Each host can only decrypt its own secrets ## Setup ### 1.
Get Host's Age Public Key After a host is installed, extract its age key from the SSH host key: ```bash nix-shell -p ssh-to-age --run 'ssh-keyscan -t ed25519 <host> | ssh-to-age' ``` Or locally on the host: ```bash nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age' ``` ### 2. Configure .sops.yaml Add the host key to `.sops.yaml`: ```yaml keys: - &steffen_key age1e8p35795htf7twrejyugpzw0qja2v33awcw76y4gp6acnxnkzq0s935t4t # steffen (local) - &main_key age1... # cryodev-main - &pi_key age1... # cryodev-pi creation_rules: - path_regex: hosts/cryodev-main/secrets.yaml$ key_groups: - age: - *steffen_key - *main_key - path_regex: hosts/cryodev-pi/secrets.yaml$ key_groups: - age: - *steffen_key - *pi_key ``` ### 3. Create Secrets File ```bash sops hosts/<hostname>/secrets.yaml ``` This opens your editor. Add secrets in YAML format: ```yaml tailscale: auth-key: "tskey-..." some-service: password: "secret123" ``` ## Usage in Modules ### Declaring Secrets ```nix { config, ... }: { sops.secrets.my-secret = { # Optional: set owner/group owner = "myservice"; group = "myservice"; }; } ``` ### Using Secrets Reference the secret path in service configuration: ```nix { services.myservice = { passwordFile = config.sops.secrets.my-secret.path; }; } ``` ### Using Templates For secrets that need to be embedded in config files: ```nix { sops.secrets."netdata/stream-api-key" = { }; sops.templates."netdata-stream.conf" = { content = '' [stream] enabled = yes api key = ${config.sops.placeholder."netdata/stream-api-key"} ''; owner = "netdata"; }; services.netdata.configDir."stream.conf" = config.sops.templates."netdata-stream.conf".path; } ``` ## Common Secrets ### cryodev-main ```yaml mailserver: accounts: forgejo: "$2y$05$..." # bcrypt hash admin: "$2y$05$..." forgejo-runner: token: "..." headplane: cookie_secret: "..." # openssl rand -hex 16 agent_pre_authkey: "..." # headscale preauthkey tailscale: auth-key: "tskey-..."
``` ### cryodev-pi ```yaml tailscale: auth-key: "tskey-..." netdata: stream: child-uuid: "..." # uuidgen ``` ## Generating Secret Values | Secret | Command | |--------|---------| | Mailserver password | `nix-shell -p mkpasswd --run 'mkpasswd -sm bcrypt'` | | Random hex token | `nix-shell -p openssl --run 'openssl rand -hex 16'` | | UUID | `uuidgen` | | Tailscale preauth | `sudo headscale preauthkeys create --expiration 99y --reusable --user <user-id>` | ## Updating Keys After modifying `.sops.yaml`, update existing secrets files: ```bash sops --config .sops.yaml updatekeys hosts/<hostname>/secrets.yaml ``` ## Troubleshooting ### "No matching keys found" Ensure the host's age key is in `.sops.yaml` and you've run `updatekeys`. ### Secret not decrypting on host Check that `/etc/ssh/ssh_host_ed25519_key` exists and matches the public key in `.sops.yaml`. ================================================ FILE: docs/services/tailscale.md ================================================ # Tailscale Client Tailscale clients connect to the self-hosted Headscale server to join the mesh VPN. ## References - [Tailscale Documentation](https://tailscale.com/kb) - [Headscale Client Setup](https://headscale.net/running-headscale-linux/) ## Setup ### Generate Auth Key On the Headscale server (cryodev-main): ```bash # Look up user ID sudo headscale users list # Create preauth key (use the user ID for "default") sudo headscale preauthkeys create --expiration 99y --reusable --user <user-id> ``` ### Add to Secrets ```bash sops hosts/<hostname>/secrets.yaml ``` ```yaml tailscale: auth-key: "your-preauth-key" ``` ### Configuration ```nix # In your host configuration { config, ...
}: { sops.secrets."tailscale/auth-key" = { }; services.tailscale = { enable = true; authKeyFile = config.sops.secrets."tailscale/auth-key".path; extraUpFlags = [ "--login-server=https://headscale.cryodev.xyz" ]; }; } ``` ## Usage ### Check Status ```bash tailscale status ``` ### View IP Address ```bash tailscale ip ``` ### Ping Another Node ```bash tailscale ping <hostname> ``` ### SSH to Another Node ```bash ssh user@<hostname> # or using Tailscale IP ssh user@100.64.0.X ``` ## MagicDNS With Headscale's MagicDNS enabled, you can reach nodes by hostname: ```bash ping cryodev-pi ssh steffen@cryodev-main ``` ## Troubleshooting ### Check Service Status ```bash sudo systemctl status tailscaled ``` ### View Logs ```bash sudo journalctl -u tailscaled -f ``` ### Re-authenticate If the node is not connecting: ```bash sudo tailscale up --login-server=https://headscale.cryodev.xyz --force-reauth ``` ### Node Not Appearing in Headscale Check the auth key is valid: ```bash # On Headscale server sudo headscale preauthkeys list --user <user-id> ``` Verify the login server URL is correct in the client configuration. ================================================ FILE: hosts/cryodev-main/binfmt.nix ================================================ # Enable QEMU emulation for aarch64 to build Raspberry Pi images { boot.binfmt.emulatedSystems = [ "aarch64-linux" ]; } ================================================ FILE: hosts/cryodev-main/boot.nix ================================================ { boot.loader.systemd-boot = { enable = true; configurationLimit = 10; }; boot.loader.efi.canTouchEfiVariables = true; } ================================================ FILE: hosts/cryodev-main/default.nix ================================================ { inputs, lib, outputs, ...
}: { imports = [ ./binfmt.nix ./boot.nix ./hardware.nix ./networking.nix ./packages.nix ./services ./users.nix outputs.nixosModules.common outputs.nixosModules.nixvim ]; # Allow unfree packages (netdata switched from gpl3Plus to the unfree ncul1 license) nixpkgs.config.allowUnfreePredicate = pkg: builtins.elem (lib.getName pkg) [ "netdata" ]; system.stateVersion = "25.11"; } ================================================ FILE: hosts/cryodev-main/disks.sh ================================================ #!/usr/bin/env bash SSD='/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_113509103' MNT='/mnt' SWAP_GB=4 # Helper function to wait for devices wait_for_device() { local device=$1 echo "Waiting for device: $device ..." while [[ ! -e $device ]]; do sleep 1 done echo "Device $device is ready." } # Function to install a package if it's not already installed install_if_missing() { local cmd="$1" local package="$2" if ! command -v "$cmd" &> /dev/null; then echo "$cmd not found, installing $package..." nix-env -iA "nixos.$package" fi } install_if_missing "sgdisk" "gptfdisk" install_if_missing "partprobe" "parted" wait_for_device $SSD echo "Wiping filesystem on $SSD..." wipefs -a $SSD echo "Clearing partition table on $SSD..." sgdisk --zap-all $SSD echo "Partitioning $SSD..." sgdisk -n1:1M:+1G -t1:EF00 -c1:BOOT $SSD sgdisk -n2:0:+"$SWAP_GB"G -t2:8200 -c2:SWAP $SSD sgdisk -n3:0:0 -t3:8304 -c3:ROOT $SSD partprobe -s $SSD udevadm settle wait_for_device ${SSD}-part1 wait_for_device ${SSD}-part2 wait_for_device ${SSD}-part3 echo "Formatting partitions..." mkfs.vfat -F 32 -n BOOT "${SSD}-part1" mkswap -L SWAP "${SSD}-part2" mkfs.ext4 -L ROOT "${SSD}-part3" echo "Mounting partitions..." mount -o X-mount.mkdir "${SSD}-part3" "$MNT" mkdir -p "$MNT/boot" mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1 "${SSD}-part1" "$MNT/boot" echo "Enabling swap..."
swapon "${SSD}-part2" echo "Partitioning and setup complete:" lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL ================================================ FILE: hosts/cryodev-main/hardware.nix ================================================ { config, lib, pkgs, modulesPath, ... }: { imports = [ (modulesPath + "/installer/scan/not-detected.nix") ]; boot.initrd.availableKernelModules = [ "ahci" "nvme" "sd_mod" "sdhci_pci" "sr_mod" "usb_storage" "virtio_pci" "virtio_scsi" "xhci_pci" ]; boot.initrd.kernelModules = [ ]; boot.kernelModules = [ ]; boot.extraModulePackages = [ ]; fileSystems."/" = { device = "/dev/disk/by-label/ROOT"; fsType = "ext4"; }; fileSystems."/boot" = { device = "/dev/disk/by-label/BOOT"; fsType = "vfat"; options = [ "fmask=0022" "dmask=0022" ]; }; swapDevices = [ { device = "/dev/disk/by-label/SWAP"; } ]; networking.useDHCP = lib.mkDefault true; nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux"; } ================================================ FILE: hosts/cryodev-main/networking.nix ================================================ { networking.hostName = "cryodev-main"; networking.domain = "cryodev.xyz"; } ================================================ FILE: hosts/cryodev-main/packages.nix ================================================ { pkgs, ... 
}: { environment.systemPackages = with pkgs; [ ]; } ================================================ FILE: hosts/cryodev-main/secrets.yaml ================================================ tailscale: auth-key: ENC[AES256_GCM,data:v5C3DqYJsDKq6oUa/3G6WKxyKeIK4EJLNxWMbKjSbwe5MPtS4sZjFszMviKcEVGW,iv:4G8irABGuVhOYnK15EjbpNQ4B9VY/NdwCrfz+YAMzvA=,tag:0Vhq/TJgx+48frRy30yKFg==,type:str] forgejo-runner: token: ENC[AES256_GCM,data:sdnJcyRiTLxXoZDNbEzJAjpiK+iSUH0gV0XwbEQf94IE/6IZz5/zHw==,iv:py+qqp3VAwBGEpYiQwft3jnQS943JaBlrcckColv4f8=,tag:rtmRwW8rpXB6Pv+LSkp+Fw==,type:str] headplane: cookie_secret: ENC[AES256_GCM,data:HICF31i6yCLZGNeOFYTR3Bp0a7i0UKOvGAvx/pD3NB4=,iv:ZtK8r1YUWnf5Af0Ls341k0w1mZm+D5Rb0E1uS5z/Gdo=,tag:vwM9+4dpcmnjn/wR6Ty/MQ==,type:str] agent_pre_authkey: ENC[AES256_GCM,data:QvhPi2lhyP7w6HTeOSS8660NzIY9Q6AOhlOGQXnvz+qYu9vOAMQPOFMZfie5+e8g,iv:X60wVOEUIsTiMHrrd4lId0VpR7VfFDr74p8RGka3+18=,tag:kIvaHrOWIM+VQ+Qz1GiheQ==,type:str] mailserver: accounts: admin: ENC[AES256_GCM,data:gY2k3x3sA98yGNLcSWUr9aC0566MJM2UXhwLtWPUL3PRvxQt0XOzjeiC7ddgbqTAol4dBNeaV0zbFInD,iv:rxp0M9kHMgD73K+RDC562sUpXaJ067eU1CeciAke+LM=,tag:VKobduo/ZULAk17M9LD3bw==,type:str] forgejo: ENC[AES256_GCM,data:brpyVL8THAQcwt7pVYnWviX3PZg1TzfnNEZw9rO/DuFj4sbzLPSPuxxfe6Jj2pwZ4IVoWmastKV3oTnr,iv:Imo6VPA4tqC4Ta8OEniCv0M+UCqQm8YcmE3kIG7G9aY=,tag:uoQ9o2cigN4XwRFnSvC5Lg==,type:str] forgejo: mail-pw: ENC[AES256_GCM,data:ol8dGa6KydnxDR8ybEro6wOcsi6iwu3IMfgO5xTpz34=,iv:SHmku32PdtXjueIjakCTstspzTzCN+iQg7K5DUEJoYk=,tag:yW/Z84q+kUzfPhLQiwGxGA==,type:str] sops: age: - recipient: age1e8p35795htf7twrejyugpzw0qja2v33awcw76y4gp6acnxnkzq0s935t4t enc: | -----BEGIN AGE ENCRYPTED FILE----- YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSA1QytQUlNqSlNPaEd6Mlp0 UVo2WnNyamhxelBod2ZoRERaa1Z3L2NtbVFZCllHZGYxMWtqMGpxemI2bnlpMG5k MklyMFkrdjd5eTlEUWJFMDBlRk1hQkEKLS0tIDhHWG9NVnd2czdBQVJ3VmdMOWNu RVNlZVYxOGdZYnpSalF4WHo0SUVhakEKE7CyGNSk03dbSfXrw9n6fi87PYoqEAxI t74NY/MxQt5gg0fJjtRbOj/cer0gaX86MvMSMJzREPEch5Q52gqKUw== -----END AGE ENCRYPTED FILE----- 
- recipient: age1y6hushuapy0k04mrvvpev0t8lq44w904r596jus44nhkflky0yhqgq2xx6 enc: | -----BEGIN AGE ENCRYPTED FILE----- YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBMU1pUR1dxS3BHOGxiL3pj emFaWGdNWmRxTVo4dkc1VDF4Sm8xVnVFQkJrCkRNcnNWODhGNHoxVGtGZWVpc1hn N1JVbUY4c043b0JZVC84NlQzSGhnVzQKLS0tIG1EL3J1aWY0ZG95V0s4TTJmRnUy MEpGbGlQbVRsM1NxN1JxY2J1MVNTTE0KuIvuM2c1VIXKv0LGLb0NwqtSyBYcRcb1 uiIjNV0UzEt/WvnCeUTMPgIXBHk6jWcaKe13v6MHeha+/CVZ9Su/Lw== -----END AGE ENCRYPTED FILE----- lastmodified: "2026-03-14T11:38:57Z" mac: ENC[AES256_GCM,data:gmxyp3XaHeU/CT2lgo14wIbJsKs/JrZmUPhgHwo1XRN5Sf/Su6lHOpVlQS1M6R3+ZlBnS/oEur+y0gydCCqhJK1C3Y5YuUfPlOWOeQWMVxQBqxWkyemvz5KgGseDc9nG09FpoGEYa4sSeuD1J6vRsGcZiOStaA6s8NICWivdWcQ=,iv:cYILLrScr7cFiLx5INbc9z3BT7LaCjLnCH0wdn3lZ1k=,tag:IIRb/Tu8YqWNiHXH7CSOfQ==,type:str] unencrypted_suffix: _unencrypted version: 3.11.0 ================================================ FILE: hosts/cryodev-main/users.nix ================================================ { inputs, outputs, ... }: { imports = [ outputs.nixosModules.normalUsers ../../users/steffen ../../users/ralph ../../users/benjamin ]; } ================================================ FILE: hosts/cryodev-main/services/comin.nix ================================================ { config, pkgs, outputs, constants, ... 
}: { imports = [ outputs.nixosModules.comin ]; services.comin = { enable = true; remotes = [ { name = "origin"; url = "https://${constants.services.forgejo.fqdn}/steffen/cryodev.git"; branches.main.name = "main"; } ]; }; } ================================================ FILE: hosts/cryodev-main/services/default.nix ================================================ { imports = [ ./comin.nix ./forgejo.nix ./forgejo-runner.nix ./headplane.nix ./headscale.nix ./mailserver.nix ./netdata.nix ./nginx.nix ./openssh.nix ./sops.nix ./tailscale.nix ]; } ================================================ FILE: hosts/cryodev-main/services/forgejo-runner.nix ================================================ { config, outputs, constants, ... }: { imports = [ outputs.nixosModules.forgejo-runner ]; services.forgejo-runner = { enable = true; url = "http://127.0.0.1:${toString constants.services.forgejo.port}"; tokenFile = config.sops.templates."forgejo-runner-token".path; }; sops.secrets."forgejo-runner/token" = { mode = "0400"; }; sops.templates."forgejo-runner-token" = { content = '' TOKEN=${config.sops.placeholder."forgejo-runner/token"} ''; }; } ================================================ FILE: hosts/cryodev-main/services/forgejo.nix ================================================ { config, outputs, constants, ... 
}: { imports = [ outputs.nixosModules.forgejo ]; services.forgejo = { enable = true; settings = { server = { DOMAIN = constants.services.forgejo.fqdn; ROOT_URL = "https://${constants.services.forgejo.fqdn}/"; HTTP_PORT = constants.services.forgejo.port; }; service = { DISABLE_REGISTRATION = true; }; mailer = { ENABLED = true; FROM = "forgejo@${constants.domain}"; SMTP_ADDR = constants.services.mail.fqdn; SMTP_PORT = constants.services.mail.port; USER = "forgejo@${constants.domain}"; }; }; }; services.nginx.virtualHosts."${constants.services.forgejo.fqdn}" = { forceSSL = true; enableACME = true; locations."/" = { proxyPass = "http://127.0.0.1:${toString constants.services.forgejo.port}"; }; }; } ================================================ FILE: hosts/cryodev-main/services/headplane.nix ================================================ { outputs, constants, ... }: { imports = [ outputs.nixosModules.headplane ]; services.headplane = { enable = true; port = constants.services.headplane.port; settings = { headscale = { url = "http://127.0.0.1:${toString constants.services.headscale.port}"; public_url = "https://${constants.services.headscale.fqdn}"; config_strict = false; }; }; }; services.nginx.virtualHosts."${constants.services.headplane.fqdn}" = { forceSSL = true; enableACME = true; locations."/" = { proxyPass = "http://127.0.0.1:${toString constants.services.headplane.port}"; }; }; } ================================================ FILE: hosts/cryodev-main/services/headscale.nix ================================================ { outputs, constants, ... 
}: { imports = [ outputs.nixosModules.headscale ]; services.headscale = { enable = true; address = "127.0.0.1"; port = constants.services.headscale.port; settings = { server_url = "https://${constants.services.headscale.fqdn}"; # dns.base_domain must be different from the server domain # Using "tail" for internal Tailscale DNS (e.g., host.tail) dns.base_domain = "tail"; }; }; services.nginx.virtualHosts."${constants.services.headscale.fqdn}" = { forceSSL = true; enableACME = true; locations."/" = { proxyPass = "http://127.0.0.1:${toString constants.services.headscale.port}"; proxyWebsockets = true; }; }; } ================================================ FILE: hosts/cryodev-main/services/mailserver.nix ================================================ { outputs, constants, ... }: { imports = [ outputs.nixosModules.mailserver ]; mailserver = { enable = true; fqdn = constants.services.mail.fqdn; domains = [ constants.domain ]; accounts = { forgejo = { }; admin = { aliases = [ "postmaster" ]; }; }; x509.useACMEHost = constants.services.mail.fqdn; }; # ACME certificate for mail server security.acme.certs.${constants.services.mail.fqdn} = { }; } ================================================ FILE: hosts/cryodev-main/services/netdata.nix ================================================ { config, pkgs, constants, ... 
}: { services.netdata = { enable = true; package = pkgs.netdata.override { withCloudUi = true; }; config = { global = { "debug log" = "syslog"; "access log" = "syslog"; "error log" = "syslog"; "bind to" = "127.0.0.1"; }; }; }; services.nginx.virtualHosts."${constants.services.netdata.fqdn}" = { forceSSL = true; enableACME = true; locations."/" = { proxyPass = "http://127.0.0.1:${toString constants.services.netdata.port}"; proxyWebsockets = true; # Basic Auth can be added here if desired, or restrict by IP # extraConfig = "allow 100.64.0.0/10; deny all;"; # Example for Tailscale only }; }; } ================================================ FILE: hosts/cryodev-main/services/nginx.nix ================================================ { inputs, outputs, lib, config, pkgs, ... }: { imports = [ outputs.nixosModules.nginx ]; services.nginx = { enable = true; forceSSL = true; # Force SSL for all vhosts by default if configured to use this option openFirewall = true; recommendedOptimisation = true; recommendedGzipSettings = true; recommendedProxySettings = true; recommendedTlsSettings = true; }; } ================================================ FILE: hosts/cryodev-main/services/openssh.nix ================================================ { outputs, ... }: { imports = [ outputs.nixosModules.openssh ]; services.openssh.enable = true; } ================================================ FILE: hosts/cryodev-main/services/sops.nix ================================================ { config, pkgs, outputs, ... 
}: { imports = [ outputs.nixosModules.sops ]; sops = { defaultSopsFile = ../secrets.yaml; # age.keyFile is not set, sops-nix defaults to using /etc/ssh/ssh_host_ed25519_key # Secrets for stage-2 services are defined in their own files: # forgejo-runner/token -> forgejo-runner.nix # tailscale/auth-key -> tailscale.nix (via module) }; } ================================================ FILE: hosts/cryodev-main/services/tailscale.nix ================================================ { config, pkgs, outputs, constants, ... }: { imports = [ outputs.nixosModules.tailscale ]; services.tailscale = { enable = true; # Connect to our own headscale instance loginServer = "https://${constants.services.headscale.fqdn}"; # Allow SSH access over Tailscale enableSSH = true; # Use MagicDNS names acceptDNS = true; }; } ================================================ FILE: hosts/cryodev-pi/boot.nix ================================================ { boot = { loader = { grub.enable = false; generic-extlinux-compatible.enable = true; }; }; } ================================================ FILE: hosts/cryodev-pi/default.nix ================================================ { inputs, lib, outputs, ... }: { imports = [ ./boot.nix ./hardware.nix ./networking.nix ./packages.nix ./sd-image.nix ./services ./users.nix outputs.nixosModules.common outputs.nixosModules.nixvim outputs.nixosModules.sops ]; # Allow unfree packages (netdata changed from gpl3Plus to the unfree NCUL1 license) nixpkgs.config.allowUnfreePredicate = pkg: builtins.elem (lib.getName pkg) [ "netdata" ]; system.stateVersion = "25.11"; } ================================================ FILE: hosts/cryodev-pi/disks.sh ================================================ #!/usr/bin/env bash SSD='/dev/disk/by-id/FIXME' MNT='/mnt' SWAP_GB=4 # Helper function to wait for devices wait_for_device() { local device=$1 echo "Waiting for device: $device ..." while [[ ! -e $device ]]; do sleep 1 done echo "Device $device is ready." 
} # Function to install a package if it's not already installed install_if_missing() { local cmd="$1" local package="$2" if ! command -v "$cmd" &> /dev/null; then echo "$cmd not found, installing $package..." nix-env -iA "nixos.$package" fi } install_if_missing "sgdisk" "gptfdisk" install_if_missing "partprobe" "parted" wait_for_device $SSD echo "Wiping filesystem on $SSD..." wipefs -a $SSD echo "Clearing partition table on $SSD..." sgdisk --zap-all $SSD echo "Partitioning $SSD..." sgdisk -n1:1M:+1G -t1:EF00 -c1:BOOT $SSD sgdisk -n2:0:+"$SWAP_GB"G -t2:8200 -c2:SWAP $SSD sgdisk -n3:0:0 -t3:8304 -c3:ROOT $SSD partprobe -s $SSD udevadm settle wait_for_device ${SSD}-part1 wait_for_device ${SSD}-part2 wait_for_device ${SSD}-part3 echo "Formatting partitions..." mkfs.vfat -F 32 -n BOOT "${SSD}-part1" mkswap -L SWAP "${SSD}-part2" mkfs.ext4 -L ROOT "${SSD}-part3" echo "Mounting partitions..." mount -o X-mount.mkdir "${SSD}-part3" "$MNT" mkdir -p "$MNT/boot" mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1 "${SSD}-part1" "$MNT/boot" echo "Enabling swap..." swapon "${SSD}-part2" echo "Partitioning and setup complete:" lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL ================================================ FILE: hosts/cryodev-pi/hardware.nix ================================================ { pkgs, lib, ... }: { boot = { kernelPackages = pkgs.linuxKernel.packages.linux_rpi4; initrd = { availableKernelModules = [ "xhci_pci" "usbhid" "usb_storage" ]; # Disable default x86 modules that don't exist in the Pi kernel (e.g. 
dw-hdmi) includeDefaultModules = false; }; }; fileSystems = { "/" = { device = "/dev/disk/by-label/NIXOS_SD"; fsType = "ext4"; options = [ "noatime" ]; }; }; nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux"; hardware.enableRedistributableFirmware = true; } ================================================ FILE: hosts/cryodev-pi/networking.nix ================================================ { networking.hostName = "cryodev-pi"; networking.domain = "cryodev.xyz"; } ================================================ FILE: hosts/cryodev-pi/packages.nix ================================================ { pkgs, ... }: { environment.systemPackages = with pkgs; [ ]; } ================================================ FILE: hosts/cryodev-pi/sd-image.nix ================================================ # SD Card image configuration for Raspberry Pi { config, modulesPath, lib, ... }: { imports = [ (modulesPath + "/installer/sd-card/sd-image-aarch64.nix") ]; sdImage = { # Compress with zstd for smaller download compressImage = true; # Auto-expand root partition on first boot expandOnBoot = true; }; # Image filename based on hostname image.fileName = "${config.networking.hostName}-sd-image.img"; # Disable ZFS to avoid build issues on SD image boot.supportedFilesystems = lib.mkForce [ "vfat" "ext4" ]; # sd-image.nix imports all-hardware.nix which adds x86 modules like dw-hdmi # that don't exist in the RPi4 kernel. Filter them out. 
boot.initrd.availableKernelModules = lib.mkForce [ "xhci_pci" "usbhid" "usb_storage" ]; } ================================================ FILE: hosts/cryodev-pi/secrets.yaml ================================================ # SOPS encrypted secrets for cryodev-pi # This file should be encrypted with sops before committing # See INSTRUCTIONS.md for setup instructions # Placeholder - replace with actual encrypted secrets # Generate UUID with: uuidgen netdata: stream: child-uuid: ENC[AES256_GCM,data:placeholder,tag:placeholder,type:str] tailscale: auth-key: ENC[AES256_GCM,data:placeholder,tag:placeholder,type:str] ================================================ FILE: hosts/cryodev-pi/users.nix ================================================ { inputs, outputs, ... }: { imports = [ outputs.nixosModules.normalUsers ../../users/steffen ]; } ================================================ FILE: hosts/cryodev-pi/services/comin.nix ================================================ { config, pkgs, outputs, constants, ... }: { imports = [ outputs.nixosModules.comin ]; services.comin = { enable = true; remotes = [ { name = "origin"; url = "https://${constants.services.forgejo.fqdn}/steffen/cryodev.git"; branches.main.name = "main"; } ]; }; } ================================================ FILE: hosts/cryodev-pi/services/default.nix ================================================ { imports = [ # TODO: Enable after first install when SOPS secrets are configured # ./tailscale.nix # ./netdata.nix # ./comin.nix ./nginx.nix ./openssh.nix ]; } ================================================ FILE: hosts/cryodev-pi/services/netdata.nix ================================================ { config, constants, ... 
}: { services.netdata = { enable = true; config.global = { "debug log" = "syslog"; "access log" = "syslog"; "error log" = "syslog"; }; configDir = { "stream.conf" = config.sops.templates."netdata/stream.conf".path; }; }; sops = let owner = config.services.netdata.user; group = config.services.netdata.group; mode = "0400"; restartUnits = [ "netdata.service" ]; in { # generate with `uuidgen` secrets."netdata/stream/child-uuid" = { inherit owner group mode restartUnits ; }; templates."netdata/stream.conf" = { inherit owner group mode restartUnits ; # child node content = '' [stream] enabled = yes destination = ${constants.hosts.cryodev-main.ip}:${builtins.toString constants.services.netdata.port} api key = ${config.sops.placeholder."netdata/stream/child-uuid"} ''; }; }; } ================================================ FILE: hosts/cryodev-pi/services/nginx.nix ================================================ { outputs, ... }: { imports = [ outputs.nixosModules.nginx ]; services.nginx = { enable = true; forceSSL = true; openFirewall = true; }; } ================================================ FILE: hosts/cryodev-pi/services/openssh.nix ================================================ { outputs, ... }: { imports = [ outputs.nixosModules.openssh ]; services.openssh.enable = true; } ================================================ FILE: hosts/cryodev-pi/services/tailscale.nix ================================================ { config, pkgs, outputs, constants, ... 
}: { imports = [ outputs.nixosModules.tailscale ]; services.tailscale = { enable = true; # Connect to our own headscale instance loginServer = "https://${constants.services.headscale.fqdn}"; # Allow SSH access over Tailscale enableSSH = true; # Use MagicDNS names acceptDNS = true; # Auth key for automated enrollment authKeyFile = config.sops.secrets."tailscale/auth-key".path; }; sops.secrets."tailscale/auth-key" = { }; } ================================================ FILE: lib/utils.nix ================================================ { lib, ... }: let inherit (lib) mkDefault mkEnableOption mkIf mkOption types ; in { isNotEmptyStr = str: builtins.isString str && str != ""; mkMailIntegrationOption = service: { enable = mkEnableOption "Mail integration for ${service}."; smtpHost = mkOption { type = types.str; default = "localhost"; description = "SMTP host for sending emails."; }; }; mkReverseProxyOption = service: subdomain: { enable = mkEnableOption "Nginx reverse proxy for ${service}."; subdomain = mkOption { type = types.str; default = subdomain; description = "Subdomain for Nginx virtual host. Leave empty for root domain."; }; forceSSL = mkOption { type = types.bool; default = true; description = "Force SSL for Nginx virtual host."; }; }; mkUrl = { fqdn, ssl ? false, port ? null, path ? "", ... }: let protocol = if ssl then "https" else "http"; portPart = if port != null then ":${toString port}" else ""; pathPart = if path != "" then "/${path}" else ""; in "${protocol}://${fqdn}${portPart}${pathPart}"; mkVirtualHost = { address ? "127.0.0.1", port ? null, socketPath ? null, location ? "/", ssl ? false, proxyWebsockets ? true, recommendedProxySettings ? true, extraConfig ? "", ... 
}: let target = if port != null then "http://${address}:${builtins.toString port}" else if socketPath != null then "http://unix:${socketPath}" else null; in { enableACME = ssl; forceSSL = ssl; locations = mkIf (target != null) { "${location}" = { proxyPass = mkDefault target; inherit proxyWebsockets recommendedProxySettings extraConfig; }; }; }; } ================================================ FILE: modules/nixos/default.nix ================================================ { common = import ./common; comin = import ./comin; forgejo = import ./forgejo; forgejo-runner = import ./forgejo-runner; headplane = import ./headplane; headscale = import ./headscale; mailserver = import ./mailserver; nixvim = import ./nixvim; normalUsers = import ./normalUsers; nginx = import ./nginx; openssh = import ./openssh; sops = import ./sops; tailscale = import ./tailscale; } ================================================ FILE: modules/nixos/comin/default.nix ================================================ { inputs, ... }: { imports = [ inputs.comin.nixosModules.comin ]; } ================================================ FILE: modules/nixos/common/default.nix ================================================ { imports = [ ./environment.nix ./htop.nix ./nationalization.nix ./networking.nix ./nix.nix ./sudo.nix ./well-known.nix ./zsh.nix ./shared ./overlays.nix ]; } ================================================ FILE: modules/nixos/common/environment.nix ================================================ { config, lib, pkgs, ... 
}: let inherit (lib) mkDefault optionals; in { environment.systemPackages = with pkgs; [ cryptsetup curl dig dnsutils fzf gptfdisk iproute2 jq lm_sensors lsof netcat-openbsd nettools nixos-container nmap nurl p7zip pciutils psmisc rclone rsync tcpdump tmux tree unzip usbutils wget xxd zip (callPackage ../../../apps/rebuild { }) ] ++ optionals (pkgs.stdenv.hostPlatform == pkgs.stdenv.buildPlatform) [ pkgs.kitty.terminfo ]; environment.shellAliases = { l = "ls -lh"; ll = "ls -lAh"; ports = "ss -tulpn"; publicip = "curl ifconfig.me/all"; sudo = "sudo "; # make aliases work with `sudo` }; # saves one instance of nixpkgs. environment.ldso32 = null; boot.tmp.cleanOnBoot = mkDefault true; boot.initrd.systemd.enable = mkDefault (!config.boot.swraid.enable && !config.boot.isContainer); } ================================================ FILE: modules/nixos/common/htop.nix ================================================ { programs.htop = { enable = true; settings = { highlight_base_name = 1; }; }; } ================================================ FILE: modules/nixos/common/nationalization.nix ================================================ { lib, ... }: let de = "de_DE.UTF-8"; en = "en_US.UTF-8"; inherit (lib) mkDefault; in { i18n = { defaultLocale = mkDefault en; extraLocaleSettings = { LC_ADDRESS = mkDefault de; LC_IDENTIFICATION = mkDefault de; LC_MEASUREMENT = mkDefault de; LC_MONETARY = mkDefault de; LC_NAME = mkDefault de; LC_NUMERIC = mkDefault de; LC_PAPER = mkDefault de; LC_TELEPHONE = mkDefault de; LC_TIME = mkDefault en; }; }; console = { font = mkDefault "Lat2-Terminus16"; keyMap = mkDefault "de"; }; time.timeZone = mkDefault "Europe/Berlin"; } ================================================ FILE: modules/nixos/common/networking.nix ================================================ { config, lib, pkgs, ... 
}: let inherit (lib) mkDefault; inherit (lib.utils) isNotEmptyStr; in { config = { assertions = [ { assertion = isNotEmptyStr config.networking.domain; message = "cryodev/nixos/common: config.networking.domain cannot be empty."; } { assertion = isNotEmptyStr config.networking.hostName; message = "cryodev/nixos/common: config.networking.hostName cannot be empty."; } ]; networking = { domain = mkDefault "${config.networking.hostName}.local"; hostId = mkDefault "8425e349"; # same as NixOS install ISO and nixos-anywhere # NetworkManager useDHCP = false; networkmanager = { enable = true; plugins = with pkgs; [ networkmanager-openconnect networkmanager-openvpn ]; }; }; }; } ================================================ FILE: modules/nixos/common/nix.nix ================================================ { config, lib, ... }: let inherit (lib) mkDefault; in { nix = { # use flakes channel.enable = mkDefault false; # De-duplicate store paths using hardlinks except in containers # where the store is host-managed. optimise.automatic = mkDefault (!config.boot.isContainer); }; } ================================================ FILE: modules/nixos/common/overlays.nix ================================================ { outputs, ... }: { nixpkgs.overlays = [ outputs.overlays.local-packages outputs.overlays.modifications outputs.overlays.old-stable-packages outputs.overlays.unstable-packages ]; } ================================================ FILE: modules/nixos/common/sudo.nix ================================================ { config, ... 
}: { security.sudo = { enable = true; execWheelOnly = true; extraConfig = '' Defaults lecture = never ''; }; assertions = let validUsers = users: users == [ ] || users == [ "root" ]; validGroups = groups: groups == [ ] || groups == [ "wheel" ]; validUserGroups = builtins.all ( r: validUsers (r.users or [ ]) && validGroups (r.groups or [ ]) ) config.security.sudo.extraRules; in [ { assertion = config.security.sudo.execWheelOnly -> validUserGroups; message = "Some definitions in `security.sudo.extraRules` refer to users other than 'root' or groups other than 'wheel'. Disable `config.security.sudo.execWheelOnly`, or adjust the rules."; } ]; } ================================================ FILE: modules/nixos/common/well-known.nix ================================================ { # avoid TOFU MITM programs.ssh.knownHosts = { "github.com".hostNames = [ "github.com" ]; "github.com".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl"; "gitlab.com".hostNames = [ "gitlab.com" ]; "gitlab.com".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf"; "git.sr.ht".hostNames = [ "git.sr.ht" ]; "git.sr.ht".publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMZvRd4EtM7R+IHVMWmDkVU3VLQTSwQDSAvW0t2Tkj60"; }; # TODO: add synix } ================================================ FILE: modules/nixos/common/zsh.nix ================================================ { programs.zsh = { enable = true; syntaxHighlighting = { enable = true; highlighters = [ "main" "brackets" "cursor" "pattern" ]; patterns = { "rm -rf" = "fg=white,bold,bg=red"; "rm -fr" = "fg=white,bold,bg=red"; }; }; autosuggestions = { enable = true; strategy = [ "completion" "history" ]; }; enableLsColors = true; }; } ================================================ FILE: modules/nixos/common/shared/default.nix ================================================ { imports = [ ./nix.nix ]; } ================================================ FILE: 
modules/nixos/common/shared/nix.nix ================================================ { config, lib, pkgs, ... }: let inherit (lib) mkDefault optional versionOlder versions ; in { nix.package = mkDefault pkgs.nix; # for `nix run synix#foo`, `nix build synix#bar`, etc nix.registry = { synix = { from = { id = "synix"; type = "indirect"; }; to = { owner = "sid"; repo = "synix"; host = "git.sid.ovh"; type = "gitea"; }; }; }; # fallback quickly if substituters are not available. nix.settings.connect-timeout = mkDefault 5; nix.settings.fallback = true; nix.settings.experimental-features = [ "nix-command" "flakes" ] ++ optional ( config.nix.package != null && versionOlder (versions.majorMinor config.nix.package.version) "2.22" ) "repl-flake"; nix.settings.log-lines = mkDefault 25; # avoid disk full issues nix.settings.max-free = mkDefault (3000 * 1024 * 1024); nix.settings.min-free = mkDefault (512 * 1024 * 1024); # avoid copying unnecessary stuff over SSH nix.settings.builders-use-substitutes = true; # workaround for https://github.com/NixOS/nix/issues/9574 nix.settings.nix-path = config.nix.nixPath; nix.settings.download-buffer-size = 524288000; # 500 MiB # add all wheel users to the trusted-users group nix.settings.trusted-users = [ "@wheel" ]; # binary caches nix.settings.substituters = [ "https://cache.nixos.org" "https://nix-community.cachix.org" "https://cache.garnix.io" "https://numtide.cachix.org" ]; nix.settings.trusted-public-keys = [ "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=" "nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs=" "cache.garnix.io:CTFPyKSLcx5RMJKfLo5EEPUObbA78b0YQ2DTCJXqr9g=" "numtide.cachix.org-1:2ps1kLBUWjxIneOy1Ik6cQjb41X0iXVXeHigGmycPPE=" ]; nix.gc = { automatic = true; dates = "weekly"; options = "--delete-older-than 30d"; }; } ================================================ FILE: modules/nixos/forgejo/default.nix ================================================ { config, lib, ... 
}: let cfg = config.services.forgejo; inherit (cfg) settings; inherit (lib) getExe head mkDefault mkIf ; in { config = mkIf cfg.enable { services.forgejo = { database.type = mkDefault "postgres"; lfs.enable = mkDefault true; settings = { server = { DOMAIN = mkDefault "git.${config.networking.domain}"; PROTOCOL = mkDefault "http"; ROOT_URL = mkDefault "https://${settings.server.DOMAIN}/"; HTTP_ADDR = mkDefault "0.0.0.0"; HTTP_PORT = mkDefault 3456; SSH_PORT = mkDefault (head config.services.openssh.ports); }; service = { DISABLE_REGISTRATION = mkDefault true; }; ui = { DEFAULT_THEME = mkDefault "forgejo-dark"; }; actions = { ENABLED = mkDefault true; }; mailer = { ENABLED = mkDefault false; SMTP_ADDR = mkDefault "mail.${config.networking.domain}"; FROM = mkDefault "git@${settings.server.DOMAIN}"; USER = mkDefault "git@${settings.server.DOMAIN}"; }; }; secrets = { mailer.PASSWD = mkIf settings.mailer.ENABLED config.sops.secrets."forgejo/mail-pw".path; }; }; environment.shellAliases = { forgejo = "sudo -u ${cfg.user} ${getExe cfg.package} --config ${cfg.stateDir}/custom/conf/app.ini"; }; sops.secrets."forgejo/mail-pw" = mkIf settings.mailer.ENABLED { owner = cfg.user; group = cfg.group; mode = "0400"; }; }; } ================================================ FILE: modules/nixos/forgejo-runner/default.nix ================================================ { config, lib, pkgs, ... 
}: let cfg = config.services.forgejo-runner; inherit (lib) mkEnableOption mkIf mkOption types ; in { options.services.forgejo-runner = { enable = mkEnableOption "Nix-based Forgejo Runner service"; url = mkOption { type = types.str; description = "Forgejo instance URL."; }; tokenFile = mkOption { type = types.path; description = "Path to EnvironmentFile containing TOKEN=..."; }; }; config = mkIf cfg.enable { nix.settings.trusted-users = [ "gitea-runner" ]; services.gitea-actions-runner = { package = pkgs.forgejo-runner; instances.default = { enable = true; name = "${config.networking.hostName}-nix"; inherit (cfg) url tokenFile; labels = [ "host:host" ]; hostPackages = with pkgs; [ bash coreutils curl gitMinimal gnused nix nodejs openssh ]; settings = { log.level = "info"; runner = { capacity = 1; envs = { NIX_CONFIG = "extra-experimental-features = nix-command flakes"; NIX_REMOTE = "daemon"; }; }; }; }; }; }; } ================================================ FILE: modules/nixos/headplane/default.nix ================================================ { inputs, config, lib, pkgs, ... 
}: let cfg = config.services.headplane; headscale = config.services.headscale; inherit (lib) mkDefault mkIf mkOption types ; in { imports = [ inputs.headplane.nixosModules.headplane ]; options.services.headplane = { port = mkOption { type = types.port; default = 3000; description = "Port for headplane to listen on"; }; }; config = mkIf cfg.enable { nixpkgs.overlays = [ inputs.headplane.overlays.default # Fix upstream pnpm-deps hash mismatch (https://github.com/tale/headplane) (final: prev: { headplane = prev.headplane.overrideAttrs (old: { pnpmDeps = old.pnpmDeps.overrideAttrs { outputHash = "sha256-lk/ezsrW6JHh5nXPSstqHUbaMTeOARBGZcBSoG1S5ns="; }; }); }) ]; services.headplane = { settings = { server = { host = mkDefault "127.0.0.1"; port = mkDefault cfg.port; cookie_secret_path = config.sops.secrets."headplane/cookie_secret".path; }; headscale = { url = mkDefault "http://127.0.0.1:${toString headscale.port}"; public_url = mkDefault headscale.settings.server_url; config_path = mkDefault "/etc/headscale/config.yaml"; }; integration.agent = { enabled = mkDefault true; pre_authkey_path = config.sops.secrets."headplane/agent_pre_authkey".path; }; }; }; sops.secrets = let owner = headscale.user; group = headscale.group; mode = "0400"; in { "headplane/cookie_secret" = { inherit owner group mode; }; "headplane/agent_pre_authkey" = { inherit owner group mode; }; }; }; } ================================================ FILE: modules/nixos/headscale/acl.hujson ================================================ { "acls": [ { "action": "accept", "src": ["*"], "dst": ["*:*"] } ], "ssh": [ { "action": "accept", "src": ["autogroup:member"], "dst": ["autogroup:member"], "users": ["autogroup:nonroot", "root"] } ] } ================================================ FILE: modules/nixos/headscale/default.nix ================================================ { config, lib, ... 
}: let cfg = config.services.headscale; domain = config.networking.domain; subdomain = cfg.reverseProxy.subdomain; fqdn = if (cfg.reverseProxy.enable && subdomain != "") then "${subdomain}.${domain}" else domain; acl = "headscale/acl.hujson"; inherit (lib) mkDefault mkIf mkOption optional optionals types ; inherit (lib.utils) mkReverseProxyOption mkUrl mkVirtualHost ; in { options.services.headscale = { reverseProxy = mkReverseProxyOption "Headscale" "hs"; openFirewall = mkOption { type = types.bool; default = false; description = "Whether to automatically open firewall ports. TCP: 80, 443; UDP: 3478."; }; }; config = mkIf cfg.enable { assertions = [ { assertion = !cfg.settings.derp.server.enable || cfg.reverseProxy.forceSSL; message = "cryodev/nixos/headscale: DERP requires TLS"; } { assertion = fqdn != cfg.settings.dns.base_domain; message = "cryodev/nixos/headscale: `settings.server_url` must be different from `settings.dns.base_domain`"; } { assertion = !cfg.settings.dns.override_local_dns || cfg.settings.dns.nameservers.global != [ ]; message = "cryodev/nixos/headscale: `settings.dns.nameservers.global` must be set when `settings.dns.override_local_dns` is true"; } ]; environment.etc.${acl} = { inherit (config.services.headscale) user group; source = ./acl.hujson; }; environment.shellAliases = { hs = "${cfg.package}/bin/headscale"; }; services.headscale = { address = mkDefault (if cfg.reverseProxy.enable then "127.0.0.1" else "0.0.0.0"); port = mkDefault 8077; settings = { policy.path = mkDefault "/etc/${acl}"; database.type = mkDefault "sqlite"; # postgres is highly discouraged as it is only supported for legacy reasons server_url = mkDefault (mkUrl { inherit fqdn; ssl = with cfg.reverseProxy; enable && forceSSL; }); derp.server.enable = mkDefault cfg.reverseProxy.forceSSL; dns = { magic_dns = mkDefault true; base_domain = mkDefault "tail"; search_domains = mkDefault [ cfg.settings.dns.base_domain ]; override_local_dns = mkDefault true; nameservers.global = 
mkDefault ( optionals cfg.settings.dns.override_local_dns [ "1.1.1.1" "1.0.0.1" "2606:4700:4700::1111" "2606:4700:4700::1001" ] ); }; }; }; services.nginx.virtualHosts = mkIf cfg.reverseProxy.enable { "${fqdn}" = mkVirtualHost { inherit (cfg) address port; ssl = cfg.reverseProxy.forceSSL; }; }; networking.firewall = mkIf cfg.openFirewall { allowedTCPPorts = [ 80 443 ]; allowedUDPPorts = optional cfg.settings.derp.server.enable 3478; }; }; } ================================================ FILE: modules/nixos/mailserver/default.nix ================================================ { inputs, config, lib, pkgs, ... }: let cfg = config.mailserver; domain = config.networking.domain; fqdn = "${cfg.subdomain}.${domain}"; inherit (lib) mapAttrs' mkDefault mkIf mkOption nameValuePair types ; in { imports = [ inputs.nixos-mailserver.nixosModules.mailserver ]; options.mailserver = { subdomain = mkOption { type = types.str; default = "mail"; description = "Subdomain of the mail server's FQDN (also used for rDNS)"; }; accounts = mkOption { type = types.attrsOf ( types.submodule { options = { aliases = mkOption { type = types.listOf types.str; default = [ ]; description = "A list of aliases for this account. `@domain` will be appended automatically."; }; sendOnly = mkOption { type = types.bool; default = false; description = "Specifies if the account should be a send-only account."; }; }; } ); default = { }; description = '' This option wraps `loginAccounts`. `loginAccounts.<user>.name` will be automatically set to `<user>@<domain>`.
''; }; }; config = mkIf cfg.enable { assertions = [ { assertion = cfg.subdomain != ""; message = "cryodev/nixos/mailserver: config.mailserver.subdomain cannot be empty."; } ]; mailserver = { fqdn = mkDefault fqdn; domains = mkDefault [ domain ]; # stateVersion 3 requires the new mail directory structure # For new installations, this is the correct value # For existing installations, see: https://nixos-mailserver.readthedocs.io/en/latest/migrations.html stateVersion = mkDefault 3; loginAccounts = mapAttrs' ( user: accConf: nameValuePair "${user}@${domain}" { name = "${user}@${domain}"; aliases = map (alias: "${alias}@${domain}") (accConf.aliases or [ ]); sendOnly = accConf.sendOnly; quota = mkDefault "5G"; hashedPasswordFile = config.sops.secrets."mailserver/accounts/${user}".path; } ) cfg.accounts; # Use ACME for certificate x509.useACMEHost = mkDefault fqdn; }; # ACME certificate for mail server security.acme.certs.${fqdn} = { }; security.acme = { acceptTerms = true; defaults.email = mkDefault "postmaster@cryodev.xyz"; defaults.webroot = mkDefault "/var/lib/acme/acme-challenge"; }; environment.systemPackages = [ pkgs.mailutils ]; sops = { secrets = mapAttrs' ( user: _config: nameValuePair "mailserver/accounts/${user}" { restartUnits = [ "postfix.service" "dovecot.service" ]; } ) cfg.accounts; }; }; } ================================================ FILE: modules/nixos/nginx/default.nix ================================================ { config, lib, ... 
}: let cfg = config.services.nginx; inherit (lib) mkDefault mkIf mkOption optional optionals types ; in { options.services.nginx = { forceSSL = mkOption { type = types.bool; default = false; description = "Force SSL for Nginx virtual host."; }; openFirewall = mkOption { type = types.bool; default = false; description = "Whether to open the firewall for HTTP (and HTTPS if forceSSL is enabled)."; }; }; config = mkIf cfg.enable { networking.firewall.allowedTCPPorts = optionals cfg.openFirewall ( [ 80 ] ++ optional cfg.forceSSL 443 ); services.nginx = { recommendedOptimisation = mkDefault true; recommendedGzipSettings = mkDefault true; recommendedProxySettings = mkDefault true; recommendedTlsSettings = cfg.forceSSL; commonHttpConfig = "access_log syslog:server=unix:/dev/log;"; resolver.addresses = let isIPv6 = addr: builtins.match ".*:.*:.*" addr != null; escapeIPv6 = addr: if isIPv6 addr then "[${addr}]" else addr; cloudflare = [ "1.1.1.1" "2606:4700:4700::1111" ]; resolvers = if config.networking.nameservers == [ ] then cloudflare else config.networking.nameservers; in map escapeIPv6 resolvers; sslDhparam = mkIf cfg.forceSSL config.security.dhparams.params.nginx.path; }; security.acme = mkIf cfg.forceSSL { acceptTerms = true; defaults.email = mkDefault "postmaster@${config.networking.domain}"; defaults.webroot = mkDefault "/var/lib/acme/acme-challenge"; defaults.group = mkDefault "nginx"; }; security.dhparams = mkIf cfg.forceSSL { enable = true; params.nginx = { }; }; }; } ================================================ FILE: modules/nixos/nixvim/default.nix ================================================ { inputs, config, lib, ... }: let cfg = config.programs.nixvim; inherit (lib) mkDefault mkIf; in { imports = [ inputs.nixvim.nixosModules.nixvim ./plugins # TODO: spellfiles.nix uses home-manager options (home.file, xdg.dataHome) # which are not available in NixOS modules. Needs to be rewritten. 
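# A possible rewrite without home-manager (hedged sketch, untested): keep
# fetching the spell files with pkgs.fetchurl as spellfiles.nix already does,
# but ship them system-wide via environment.etc and extend Neovim's
# runtimepath so spell lookup finds them, e.g.:
#   environment.etc."nvim/spell/de.utf-8.spl".source = pkgs.fetchurl {
#     url = "http://ftp.de.vim.org/runtime/spell/de.utf-8.spl";
#     sha256 = "sha256-c8cQfqM5hWzb6SHeuSpFk5xN5uucByYdobndGfaDo9E=";
#   };
#   programs.nixvim.extraConfigLua = ''vim.opt.runtimepath:append("/etc/nvim")'';
# (the /etc/nvim location and the runtimepath approach are assumptions, not
# part of this repo)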
# ./spellfiles.nix ]; config = { programs.nixvim = { enable = true; # Enable globally on NixOS defaultEditor = mkDefault true; viAlias = mkDefault true; vimAlias = mkDefault true; # Removed home-manager-specific options like 'enableMan', which is handled differently or not needed in a system module context # Removed clipboard.providers.wl-copy as it's home-manager specific. # System-wide clipboard integration for headless servers is less critical but can be added if needed. # vim.g.* globals = { mapleader = mkDefault " "; }; # vim.opt.* opts = { # behavior cursorline = mkDefault true; # highlight the line under the cursor mouse = mkDefault "a"; # enable mouse support nu = mkDefault true; # line numbers relativenumber = mkDefault true; # relative line numbers scrolloff = mkDefault 20; # keep some context above/below the cursor signcolumn = mkDefault "yes"; # reserve space for signs (e.g., GitGutter) undofile = mkDefault true; # persistent undo updatetime = mkDefault 500; # ms of inactivity before events like CursorHold fire (default: 4000 ms) wrap = mkDefault true; # wrap text that exceeds the window width # search ignorecase = mkDefault true; # ignore case in search patterns smartcase = mkDefault true; # ...unless the pattern contains uppercase characters incsearch = mkDefault true; # incremental search hlsearch = mkDefault true; # highlight search matches # windows splitbelow = mkDefault true; # new windows are created below the current one splitright = mkDefault true; # new windows are created to the right of the current one equalalways = mkDefault true; # window sizes are automatically equalized.
# tabs expandtab = mkDefault true; # convert tabs into spaces shiftwidth = mkDefault 2; # number of spaces to use for each step of (auto)indent smartindent = mkDefault true; # smart autoindenting on new lines softtabstop = mkDefault 2; # number of spaces in tab when editing tabstop = mkDefault 2; # number of visual spaces per tab # spell checking spell = mkDefault true; spelllang = mkDefault [ "en_us" "de_20" ]; }; # vim.diagnostic.config.* diagnostic.settings = { virtual_text = { spacing = 4; prefix = "●"; severity_sort = true; }; signs = true; underline = true; update_in_insert = false; }; extraConfigLua = '' vim.cmd "set noshowmode" -- Hides "--INSERT--" mode indicator ''; keymaps = import ./keymaps.nix; }; environment = { variables = { EDITOR = mkIf cfg.enable "nvim"; VISUAL = mkIf cfg.enable "nvim"; }; shellAliases = { v = mkIf cfg.enable "nvim"; }; }; }; } ================================================ FILE: modules/nixos/nixvim/keymaps.nix ================================================ [ # cursor navigation { # scroll down, recenter key = ""; action = "zz"; mode = "n"; } { # scroll up, recenter key = ""; action = "zz"; mode = "n"; } # searching { # center cursor after search next key = "n"; action = "nzzzv"; mode = "n"; } { # center cursor after search previous key = "N"; action = "Nzzzv"; mode = "n"; } { # ex command key = "pv"; action = "Ex"; mode = "n"; } # search and replace { # search and replace word under cursor key = "s"; action = ":%s///gI"; mode = "n"; } # search and replace selected text { key = "s"; action = "y:%s/0/0/gI"; mode = "v"; } # clipboard operations { # copy to system clipboard in visual mode key = ""; action = ''"+y ''; mode = "v"; } { # paste from system clipboard in visual mode key = ""; action = ''"+p ''; mode = "v"; } { # yank to system clipboard key = "Y"; action = "+Y"; mode = "n"; } { # replace selected text with clipboard content key = "p"; action = "_dP"; mode = "x"; } { # delete without copying to clipboard key = "d"; 
action = "_d"; mode = [ "n" "v" ]; } # line operations { # move lines down in visual mode key = "J"; action = ":m '>+1gv=gv"; mode = "v"; } { # move lines up in visual mode key = "K"; action = ":m '<-2gv=gv"; mode = "v"; } { # join lines key = "J"; action = "mzJ`z"; mode = "n"; } # quickfix { # Run make command key = "m"; action = ":make"; mode = "n"; } { # previous quickfix item key = ""; action = "cprevzz"; mode = "n"; } { # next quickfix item key = ""; action = "cnextzz"; mode = "n"; } # location list navigation { # previous location list item key = "j"; action = "lprevzz"; mode = "n"; } { # next location list item key = "k"; action = "lnextzz"; mode = "n"; } # disabling keys { # disable the 'Q' key key = "Q"; action = ""; mode = "n"; } # text selection { # select whole buffer key = ""; action = "ggVG"; mode = "n"; } # window operations { # focus next window key = ""; action = ":wincmd W"; options = { noremap = true; silent = true; }; mode = "n"; } { # focus previous window key = ""; action = ":wincmd w"; options = { noremap = true; silent = true; }; mode = "n"; } # window size adjustments { # increase window width key = ""; action = ":vertical resize +5"; options = { noremap = true; silent = true; }; mode = "n"; } { # decrease window width key = ""; action = ":vertical resize -5"; options = { noremap = true; silent = true; }; mode = "n"; } # window closing and opening { # close current window key = "c"; action = ":q"; options = { noremap = true; silent = true; }; mode = "n"; } { # new vertical split at $HOME key = "n"; action = ":vsp $HOME"; options = { noremap = true; silent = true; }; mode = "n"; } # window split orientation toggling { # toggle split orientation key = "t"; action = ":wincmd T"; options = { noremap = true; silent = true; }; mode = "n"; } # spell checking { # toggle spell checking key = "ss"; action = ":setlocal spell!"; options = { noremap = true; silent = true; }; mode = "n"; } { # switch to english spell checking key = "se"; action = 
":setlocal spelllang=en_us"; options = { noremap = true; silent = true; }; mode = "n"; } { # switch to german spell checking key = "sg"; action = ":setlocal spelllang=de_20"; options = { noremap = true; silent = true; }; mode = "n"; } { # move to next misspelling key = "]s"; action = "]szz"; options = { noremap = true; silent = true; }; mode = "n"; } { # move to previous misspelling key = "[s"; action = "[szz"; options = { noremap = true; silent = true; }; mode = "n"; } { # correction suggestions for a misspelled word key = "z="; action = "z="; options = { noremap = true; silent = true; }; mode = "n"; } { # adding words to the dictionary key = "zg"; action = "zg"; options = { noremap = true; silent = true; }; mode = "n"; } # buffer navigation { # next buffer key = ""; action = ":bnext"; options = { noremap = true; silent = true; }; mode = "n"; } { # previous buffer key = ""; action = ":bprevious"; options = { noremap = true; silent = true; }; mode = "n"; } { # close current buffer key = "bd"; action = ":bdelete"; options = { noremap = true; silent = true; }; mode = "n"; } { # apply code action key = "ca"; action = ":lua vim.lsp.buf.code_action()"; options = { noremap = true; silent = true; }; mode = "n"; } ] ================================================ FILE: modules/nixos/nixvim/spellfiles.nix ================================================ { config, pkgs, ... 
}: let spellDir = config.xdg.dataHome + "/nvim/site/spell"; baseUrl = "http://ftp.de.vim.org/runtime/spell"; in { home.file = { de-spl = { enable = true; source = pkgs.fetchurl { url = baseUrl + "/de.utf-8.spl"; sha256 = "sha256-c8cQfqM5hWzb6SHeuSpFk5xN5uucByYdobndGfaDo9E="; }; target = spellDir + "/de.utf8.spl"; }; de-sug = { enable = true; source = pkgs.fetchurl { url = baseUrl + "/de.utf-8.sug"; sha256 = "sha256-E9Ds+Shj2J72DNSopesqWhOg6Pm6jRxqvkerqFcUqUg="; }; target = spellDir + "/de.utf8.sug"; }; }; } ================================================ FILE: modules/nixos/nixvim/plugins/cmp.nix ================================================ { config, lib, ... }: let cfg = config.programs.nixvim; plugin = cfg.plugins.cmp; inherit (lib) mkDefault mkIf; in { programs.nixvim = { plugins = { cmp = { enable = mkDefault true; settings = { autoEnableSources = mkDefault true; experimental.ghost_text = mkDefault true; snippet.expand = mkDefault "luasnip"; formatting.fields = mkDefault [ "kind" "abbr" "menu" ]; sources = [ { name = "git"; } { name = "nvim_lsp"; } { name = "buffer"; option.get_bufnrs.__raw = "vim.api.nvim_list_bufs"; keywordLength = 3; } { name = "path"; keywordLength = 3; } { name = "luasnip"; } ]; mapping = { "" = "cmp.mapping.complete()"; "" = "cmp.mapping.scroll_docs(-4)"; "" = "cmp.mapping.close()"; "" = "cmp.mapping.scroll_docs(4)"; "" = "cmp.mapping.confirm({ select = true })"; "" = "cmp.mapping(cmp.mapping.select_prev_item(), {'i', 's'})"; "" = "cmp.mapping(cmp.mapping.select_next_item(), {'i', 's'})"; }; }; }; cmp-cmdline = mkIf plugin.enable { enable = mkDefault false; }; # autocomplete for cmdline cmp_luasnip = mkIf plugin.enable { enable = mkDefault true; }; luasnip = mkIf plugin.enable { enable = mkDefault true; }; cmp-treesitter = mkIf (plugin.enable && cfg.plugins.treesitter.enable) { enable = mkDefault true; }; }; }; } ================================================ FILE: modules/nixos/nixvim/plugins/default.nix 
================================================ { lib, ... }: { imports = [ ./cmp.nix ./lsp.nix ./lualine.nix ./telescope.nix # ./treesitter.nix # HOTFIX: does not build ./trouble.nix ]; config.programs.nixvim.plugins = { web-devicons.enable = true; }; } ================================================ FILE: modules/nixos/nixvim/plugins/lsp.nix ================================================ { config, lib, pkgs, ... }: let cfg = config.programs.nixvim; plugin = cfg.plugins.lsp; inherit (lib) mkDefault mkIf optional; in { config = { programs.nixvim = { plugins = { lsp-format = mkIf plugin.enable { enable = mkDefault true; }; lsp = { enable = mkDefault true; postConfig = ""; keymaps = { silent = mkDefault true; diagnostic = mkDefault { # Navigate in diagnostics "k" = "goto_prev"; "j" = "goto_next"; }; lspBuf = mkDefault { gd = "definition"; gD = "references"; gt = "type_definition"; gi = "implementation"; K = "hover"; "" = "rename"; }; }; servers = { bashls.enable = mkDefault true; clangd.enable = mkDefault true; cssls.enable = mkDefault true; dockerls.enable = mkDefault true; gopls.enable = mkDefault true; html.enable = mkDefault true; jsonls.enable = mkDefault true; nixd.enable = mkDefault true; pyright.enable = mkDefault true; rust_analyzer = { enable = mkDefault true; installCargo = mkDefault true; installRustc = mkDefault true; # overrideCommand is an argv list: each argument must be its own element settings.rustfmt.overrideCommand = mkDefault [ "${pkgs.rustfmt}/bin/rustfmt" "--edition" "2021" # "--config" "tab_spaces=2" ]; }; texlab.enable = mkDefault true; vhdl_ls.enable = mkDefault true; yamlls.enable = mkDefault true; }; }; }; }; environment.systemPackages = optional (cfg.enable && plugin.servers.nixd.enable) pkgs.nixfmt; }; } ================================================ FILE: modules/nixos/nixvim/plugins/lualine.nix ================================================ { config, lib, ...
}: let cfg = config.programs.nixvim; plugin = cfg.plugins.lualine; inherit (lib) mkDefault; in { config = { programs.nixvim = { plugins.lualine = { enable = mkDefault true; settings.options.icons_enabled = mkDefault false; }; }; }; } ================================================ FILE: modules/nixos/nixvim/plugins/telescope.nix ================================================ { config, lib, pkgs, ... }: let cfg = config.programs.nixvim; plugin = cfg.plugins.telescope; inherit (lib) mkDefault optionals; in { config = { programs.nixvim = { plugins.telescope = { enable = mkDefault true; extensions = { file-browser.enable = mkDefault true; fzf-native.enable = mkDefault true; live-grep-args.enable = mkDefault true; manix.enable = mkDefault true; }; keymaps = mkDefault { "" = "file_browser"; "" = "git_files"; "bl" = "buffers"; "fd" = "diagnostics"; "ff" = "find_files"; "fg" = "live_grep"; "fh" = "help_tags"; "fm" = "man_pages"; "fn" = "manix"; "fo" = "oldfiles"; "fb" = "file_browser"; }; }; keymaps = optionals plugin.enable [ { key = ""; action = ":lua require('telescope').extensions.live_grep_args.live_grep_args()"; mode = "n"; } ]; }; environment.systemPackages = optionals plugin.enable [ pkgs.ripgrep # for "live_grep" ]; }; } ================================================ FILE: modules/nixos/nixvim/plugins/treesitter.nix ================================================ { config, lib, pkgs, ... }: let cfg = config.programs.nixvim; plugin = cfg.plugins.treesitter; cc = "${pkgs.gcc}/bin/gcc"; inherit (lib) mkDefault mkIf; in { config = { programs.nixvim = { plugins.treesitter = { enable = mkDefault true; nixvimInjections = mkDefault true; settings = { folding.enable = mkDefault true; highlight.enable = mkDefault true; indent.enable = mkDefault true; }; }; plugins.treesitter-context = mkIf plugin.enable { enable = mkDefault true; }; plugins.treesitter-textobjects = mkIf plugin.enable { enable = mkDefault true; }; }; # Fix for: ERROR `cc` executable not found. 
environment.sessionVariables = mkIf plugin.enable { CC = mkDefault cc; }; # Fix for: WARNING `tree-sitter` executable not found environment.systemPackages = mkIf plugin.enable [ plugin.package ]; }; } ================================================ FILE: modules/nixos/nixvim/plugins/trouble.nix ================================================ { config, lib, ... }: let cfg = config.programs.nixvim; plugin = cfg.plugins.trouble; inherit (lib) mkDefault mkIf; in { config = { programs.nixvim = { plugins.trouble = { enable = mkDefault true; }; keymaps = mkIf plugin.enable [ { mode = "n"; key = "xq"; action = "Trouble qflist toggle"; options = { desc = "Trouble quickfix toggle"; }; } { mode = "n"; key = "xl"; action = "Trouble loclist toggle"; options = { desc = "Trouble loclist toggle"; }; } { mode = "n"; key = "xx"; action = "Trouble diagnostics toggle"; options = { desc = "Trouble diagnostics toggle"; }; } ]; }; }; } ================================================ FILE: modules/nixos/normalUsers/default.nix ================================================ { config, lib, pkgs, ... }: let cfg = config.normalUsers; inherit (lib) attrNames genAttrs mkOption types ; in { options.normalUsers = mkOption { type = types.attrsOf ( types.submodule { options = { extraGroups = mkOption { type = (types.listOf types.str); default = [ ]; description = "Extra groups for the user"; example = [ "wheel" ]; }; shell = mkOption { type = types.path; default = pkgs.zsh; description = "Shell for the user"; }; initialPassword = mkOption { type = types.str; default = "changeme"; description = "Initial password for the user"; }; sshKeyFiles = mkOption { type = (types.listOf types.path); default = [ ]; description = "SSH key files for the user"; example = [ "/path/to/id_rsa.pub" ]; }; }; } ); default = { }; description = "Users to create.
The usernames are the attribute names."; }; config = { # Create user groups users.groups = genAttrs (attrNames cfg) (userName: { name = userName; }); # Create users users.users = genAttrs (attrNames cfg) (userName: { name = userName; inherit (cfg.${userName}) extraGroups shell initialPassword; isNormalUser = true; group = "${userName}"; home = "/home/${userName}"; openssh.authorizedKeys.keyFiles = cfg.${userName}.sshKeyFiles; }); }; } ================================================ FILE: modules/nixos/openssh/default.nix ================================================ { lib, ... }: let inherit (lib) mkDefault; in { services.openssh = { enable = mkDefault true; ports = mkDefault [ 2299 ]; openFirewall = mkDefault true; settings = { PermitRootLogin = mkDefault "no"; PasswordAuthentication = mkDefault false; }; }; } ================================================ FILE: modules/nixos/sops/default.nix ================================================ { inputs, config, lib, pkgs, ... }: let # Check both locations for secrets.yaml secretsInSubdir = "${toString inputs.self}/hosts/${config.networking.hostName}/secrets/secrets.yaml"; secretsInRoot = "${toString inputs.self}/hosts/${config.networking.hostName}/secrets.yaml"; secrets = if builtins.pathExists secretsInSubdir then secretsInSubdir else if builtins.pathExists secretsInRoot then secretsInRoot else null; in { imports = [ inputs.sops-nix.nixosModules.sops ]; environment.systemPackages = with pkgs; [ age sops ]; sops.defaultSopsFile = lib.mkIf (secrets != null) (lib.mkDefault secrets); } ================================================ FILE: modules/nixos/tailscale/default.nix ================================================ { config, lib, ... 
}: let cfg = config.services.tailscale; inherit (lib) mkIf mkOption optional types ; in { options.services.tailscale = { loginServer = mkOption { type = types.str; description = "The Tailscale login server to use."; }; enableSSH = mkOption { type = types.bool; default = false; description = "Enable Tailscale SSH functionality."; }; acceptDNS = mkOption { type = types.bool; default = true; description = "Enable Tailscale's MagicDNS and custom DNS configuration."; }; }; config = mkIf cfg.enable { services.tailscale = { authKeyFile = config.sops.secrets."tailscale/auth-key".path; extraSetFlags = optional cfg.enableSSH "--ssh" ++ optional cfg.acceptDNS "--accept-dns"; extraUpFlags = [ "--login-server=${cfg.loginServer}" ] ++ optional cfg.enableSSH "--ssh" ++ optional cfg.acceptDNS "--accept-dns"; }; environment.shellAliases = { ts = "${cfg.package}/bin/tailscale"; }; networking.firewall.trustedInterfaces = [ cfg.interfaceName ]; sops.secrets."tailscale/auth-key" = { }; }; } ================================================ FILE: overlays/default.nix ================================================ { inputs, ... }: { # packages in `pkgs/` accessible through 'pkgs.local' local-packages = final: prev: { local = import ../pkgs { pkgs = final; }; }; # https://nixos.wiki/wiki/Overlays modifications = final: prev: let files = [ ]; imports = builtins.map (f: import f final prev) files; in builtins.foldl' (a: b: a // b) { } imports; # old-stable nixpkgs accessible through 'pkgs.old-stable' old-stable-packages = final: prev: { old-stable = import inputs.nixpkgs-old-stable { inherit (final) system; inherit (prev) config; }; }; # unstable nixpkgs accessible through 'pkgs.unstable' unstable-packages = final: prev: { unstable = import inputs.nixpkgs-unstable { inherit (final) system; inherit (prev) config; }; }; } ================================================ FILE: pkgs/default.nix ================================================ { pkgs ? import <nixpkgs> { }, ...
}: { # example = pkgs.callPackage ./example { }; } ================================================ FILE: scripts/install.sh ================================================ #!/usr/bin/env bash # NixOS install script ### VARIABLES ### ASK_VERIFICATION=1 # Default to ask for verification CONFIG_DIR="/tmp/nixos" # Directory to copy flake to / clone flake into GIT_BRANCH="master" # Default Git branch GIT_REPO="" # Git repository URL HOSTNAME="" # Hostname MNT="/mnt" # root mount point SEPARATOR="________________________________________" # line separator ### FUNCTIONS ### # Function to display help information Show_help() { echo "Usage: $0 [-r REPO] [-n HOSTNAME] [-b BRANCH] [-y] [-h]" echo echo "Options:" echo " -r, --repo REPO Your NixOS configuration Git repository URL" echo " -n, --hostname HOSTNAME Specify the hostname for the NixOS configuration" echo " -b, --branch BRANCH Specify the Git branch to use (default: $GIT_BRANCH)" echo " -y, --yes Do not ask for user verification before proceeding" echo " -h, --help Show this help message and exit" } # Function to format, partition, and mount disks for $HOSTNAME using disko Run_disko() { echo "$SEPARATOR" echo "Running disko..." nix --experimental-features "nix-command flakes" run github:nix-community/disko/latest -- --mode disko "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.nix } # Function to format, partition, and mount disks for $HOSTNAME using a partitioning script Run_script() { echo "$SEPARATOR" echo "Running partitioning script..." bash "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.sh } # Function to check mount points and partitioning Check_partitioning() { echo "$SEPARATOR" echo "Printing mount points and partitioning..." mount | grep "$MNT" lsblk -f [[ "$ASK_VERIFICATION" == 1 ]] && read -rp "Verify the mount points and partitioning. Press Ctrl+c to cancel or Enter to continue..." 
} # Function to generate hardware configuration Generate_hardware_config() { [[ "$ASK_VERIFICATION" == 1 ]] && read -rp "No hardware configuration found. Press Ctrl+c to cancel or Enter to generate one..." echo "$SEPARATOR" echo "Generating hardware configuration..." nixos-generate-config --root "$MNT" --show-hardware-config > "$CONFIG_DIR"/hosts/"$HOSTNAME"/hardware.nix # Check if hardware configuration has been generated if [[ ! -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/hardware.nix ]]; then echo "Error: Hardware configuration cannot be generated." exit 1 fi # Add configuration to git # TODO: get rid of cd cd "$CONFIG_DIR"/hosts/"$HOSTNAME" || exit 1 git add "$CONFIG_DIR"/hosts/"$HOSTNAME"/hardware.nix cd || exit 1 echo "Hardware configuration generated successfully." }; # Function to install configuration for $HOSTNAME Install() { # Check if hardware configuration exists [[ ! -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/hardware.nix ]] && Generate_hardware_config echo "$SEPARATOR" echo "Installing NixOS..." nixos-install --root "$MNT" --no-root-password --flake "$CONFIG_DIR"#"$HOSTNAME" && echo "You can reboot the system now." } ### PARSE ARGUMENTS ### while [[ "$#" -gt 0 ]]; do case $1 in -r|--repo) GIT_REPO="$2"; shift ;; -b|--branch) GIT_BRANCH="$2"; shift ;; -y|--yes) ASK_VERIFICATION=0 ;; -h|--help) Show_help; exit 0 ;; -n|--hostname) HOSTNAME="$2"; shift ;; *) echo "Unknown option: $1"; Show_help; exit 1 ;; esac shift done ### PREREQUISITES ### echo "$SEPARATOR" mkdir -p "$CONFIG_DIR" # Clone NixOS configuration from $GIT_REPO if provided if [[ ! -z "$GIT_REPO" ]]; then # Install git if not already installed if ! command -v git &> /dev/null; then echo "Git is not installed. Installing..." nix-env -iA nixos.git fi # Clone Git repo if directory is empty if [[ -z "$(ls -A "$CONFIG_DIR" 2>/dev/null)" ]]; then echo "Cloning NixOS configuration repo..." git clone --depth 1 -b "$GIT_BRANCH" "$GIT_REPO" "$CONFIG_DIR" # Check if git repository has been cloned if [[ ! 
-d "$CONFIG_DIR"/.git ]]; then echo "Error: Git repository could not be cloned." exit 1 fi else echo "$CONFIG_DIR is not empty. Skipping clone of $GIT_REPO." fi fi if [[ ! -f "$CONFIG_DIR"/flake.nix ]]; then echo "Error: $CONFIG_DIR does not contain 'flake.nix'." exit 1 fi ### CHOOSE CONFIG ### # If hostname is not provided via options, prompt the user if [[ -z "$HOSTNAME" ]]; then # Get list of available hostnames HOSTNAMES=$(ls "$CONFIG_DIR"/hosts) echo "$SEPARATOR" echo "Please choose a hostname to install its NixOS configuration." echo "$HOSTNAMES" read -rp "Enter hostname: " HOSTNAME # Check if hostname is empty if [[ -z "$HOSTNAME" ]]; then echo "Error: Hostname cannot be empty." exit 1 fi fi ### INSTALLATION ### # Check if NixOS configuration exists # Use command groups ({ ...; }) instead of subshells so 'exit' aborts the whole script if [[ -d "$CONFIG_DIR"/hosts/"$HOSTNAME" ]]; then # Check for existing disko configuration if [[ -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.nix ]]; then Run_disko || { echo "Error: disko failed."; exit 1; } # Check for partitioning script elif [[ -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.sh ]]; then Run_script || { echo "Error: Partitioning script failed."; exit 1; } else echo "Error: No disko configuration (disks.nix) or partitioning script (disks.sh) found for host '$HOSTNAME'." exit 1 fi Check_partitioning Install || { echo "Error: Installation failed."; exit 1; } else echo "Error: Configuration for host '$HOSTNAME' does not exist." exit 1 fi ================================================ FILE: templates/generic-server/boot.nix ================================================ { boot.loader.systemd-boot = { enable = true; configurationLimit = 10; }; boot.loader.efi.canTouchEfiVariables = true; } ================================================ FILE: templates/generic-server/default.nix ================================================ { inputs, outputs, ...
}: { imports = [ ./boot.nix ./hardware.nix ./networking.nix ./packages.nix ./services ./users.nix outputs.nixosModules.common outputs.nixosModules.nixvim ]; system.stateVersion = "25.11"; } ================================================ FILE: templates/generic-server/disks.sh ================================================ #!/usr/bin/env bash SSD='/dev/disk/by-id/FIXME' MNT='/mnt' SWAP_GB=4 # Helper function to wait for devices wait_for_device() { local device=$1 echo "Waiting for device: $device ..." while [[ ! -e $device ]]; do sleep 1 done echo "Device $device is ready." } # Function to install a package if it's not already installed install_if_missing() { local cmd="$1" local package="$2" if ! command -v "$cmd" &> /dev/null; then echo "$cmd not found, installing $package..." nix-env -iA "nixos.$package" fi } install_if_missing "sgdisk" "gptfdisk" install_if_missing "partprobe" "parted" wait_for_device $SSD echo "Wiping filesystem on $SSD..." wipefs -a $SSD echo "Clearing partition table on $SSD..." sgdisk --zap-all $SSD echo "Partitioning $SSD..." sgdisk -n1:1M:+1G -t1:EF00 -c1:BOOT $SSD sgdisk -n2:0:+"$SWAP_GB"G -t2:8200 -c2:SWAP $SSD sgdisk -n3:0:0 -t3:8304 -c3:ROOT $SSD partprobe -s $SSD udevadm settle wait_for_device ${SSD}-part1 wait_for_device ${SSD}-part2 wait_for_device ${SSD}-part3 echo "Formatting partitions..." mkfs.vfat -F 32 -n BOOT "${SSD}-part1" mkswap -L SWAP "${SSD}-part2" mkfs.ext4 -L ROOT "${SSD}-part3" echo "Mounting partitions..." mount -o X-mount.mkdir "${SSD}-part3" "$MNT" mkdir -p "$MNT/boot" mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1 "${SSD}-part1" "$MNT/boot" echo "Enabling swap..." 
swapon "${SSD}-part2"

echo "Partitioning and setup complete:"
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL

================================================
FILE: templates/generic-server/flake.nix
================================================
{
  description = "A generic x86_64 server client template";
  path = ./.;
}

================================================
FILE: templates/generic-server/hardware.nix
================================================
{
  config,
  lib,
  pkgs,
  modulesPath,
  ...
}:
{
  imports = [ (modulesPath + "/installer/scan/not-detected.nix") ];

  boot.initrd.availableKernelModules = [
    "ahci"
    "nvme"
    "sd_mod"
    "usb_storage"
    "xhci_pci"
  ];
  boot.initrd.kernelModules = [ ];
  boot.kernelModules = [ ];
  boot.extraModulePackages = [ ];

  fileSystems."/" = {
    device = "/dev/disk/by-label/ROOT";
    fsType = "ext4";
  };

  fileSystems."/boot" = {
    device = "/dev/disk/by-label/BOOT";
    fsType = "vfat";
    options = [
      "fmask=0022"
      "dmask=0022"
    ];
  };

  swapDevices = [ { device = "/dev/disk/by-label/SWAP"; } ];

  networking.useDHCP = lib.mkDefault true;
  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}

================================================
FILE: templates/generic-server/networking.nix
================================================
{
  networking.hostName = "HOSTNAME";
  networking.domain = "cryodev.xyz";
}

================================================
FILE: templates/generic-server/packages.nix
================================================
{ pkgs, ... }:
{
  environment.systemPackages = with pkgs; [ ];
}

================================================
FILE: templates/generic-server/users.nix
================================================
{ inputs, outputs, ... }:
{
  imports = [
    outputs.nixosModules.normalUsers
    # Add users here, e.g.:
    # ../../users/
  ];
}

================================================
FILE: templates/generic-server/services/comin.nix
================================================
{
  config,
  pkgs,
  outputs,
  constants,
  ...
}:
{
  imports = [ outputs.nixosModules.comin ];

  services.comin = {
    enable = true;
    remotes = [
      {
        name = "origin";
        url = "https://${constants.services.forgejo.fqdn}/steffen/cryodev-server.git";
        branches.main.name = "main";
      }
    ];
  };
}

================================================
FILE: templates/generic-server/services/default.nix
================================================
{
  imports = [
    ./nginx.nix
    ./openssh.nix
    ./tailscale.nix
    ./netdata.nix
    ./comin.nix
  ];
}

================================================
FILE: templates/generic-server/services/netdata.nix
================================================
{
  config,
  pkgs,
  outputs,
  constants,
  ...
}:
{
  services.netdata = {
    enable = true;
    config = {
      stream = {
        enabled = "yes";
        destination = "${constants.hosts.cryodev-main.ip}:${toString constants.services.netdata.port}";
        "api key" = config.sops.placeholder."netdata/stream/child-uuid";
      };
    };
  };

  # Make sure sops is enabled/imported for this host to handle the secret
  imports = [ outputs.nixosModules.sops ];

  sops = {
    defaultSopsFile = ../secrets.yaml;
    secrets."netdata/stream/child-uuid" = {
      owner = "netdata";
      group = "netdata";
    };
  };
}

================================================
FILE: templates/generic-server/services/nginx.nix
================================================
{ outputs, ... }:
{
  imports = [ outputs.nixosModules.nginx ];

  services.nginx = {
    enable = true;
    forceSSL = true;
    openFirewall = true;
  };
}

================================================
FILE: templates/generic-server/services/openssh.nix
================================================
{ outputs, ... }:
{
  imports = [ outputs.nixosModules.openssh ];

  services.openssh.enable = true;
}

================================================
FILE: templates/generic-server/services/tailscale.nix
================================================
{
  config,
  pkgs,
  outputs,
  constants,
  ...
}:
{
  imports = [ outputs.nixosModules.tailscale ];

  services.tailscale = {
    enable = true;
    # Connect to our own headscale instance
    loginServer = "https://${constants.services.headscale.fqdn}";
    # Allow SSH access over Tailscale
    enableSSH = true;
    # Use MagicDNS names
    acceptDNS = true;
    # Auth key for automated enrollment
    authKeyFile = config.sops.secrets."tailscale/auth-key".path;
  };

  sops.secrets."tailscale/auth-key" = { };
}

================================================
FILE: templates/raspberry-pi/boot.nix
================================================
{
  boot = {
    loader = {
      grub.enable = false;
      generic-extlinux-compatible.enable = true;
    };
  };
}

================================================
FILE: templates/raspberry-pi/default.nix
================================================
{ inputs, outputs, ... }:
{
  imports = [
    ./boot.nix
    ./hardware.nix
    ./networking.nix
    ./packages.nix
    ./services
    ./users.nix
    outputs.nixosModules.common
    outputs.nixosModules.nixvim
  ];
  system.stateVersion = "25.11";
}

================================================
FILE: templates/raspberry-pi/disks.sh
================================================
#!/usr/bin/env bash

SSD='/dev/disk/by-id/FIXME'
MNT='/mnt'
SWAP_GB=4

# Helper function to wait for devices
wait_for_device() {
    local device=$1
    echo "Waiting for device: $device ..."
    while [[ ! -e $device ]]; do
        sleep 1
    done
    echo "Device $device is ready."
}

# Function to install a package if it's not already installed
install_if_missing() {
    local cmd="$1"
    local package="$2"
    if ! command -v "$cmd" &> /dev/null; then
        echo "$cmd not found, installing $package..."
        nix-env -iA "nixos.$package"
    fi
}

install_if_missing "sgdisk" "gptfdisk"
install_if_missing "partprobe" "parted"

wait_for_device "$SSD"

echo "Wiping filesystem on $SSD..."
wipefs -a "$SSD"

echo "Clearing partition table on $SSD..."
sgdisk --zap-all "$SSD"

echo "Partitioning $SSD..."
sgdisk -n1:1M:+1G -t1:EF00 -c1:BOOT "$SSD"
sgdisk -n2:0:+"$SWAP_GB"G -t2:8200 -c2:SWAP "$SSD"
sgdisk -n3:0:0 -t3:8304 -c3:ROOT "$SSD"
partprobe -s "$SSD"
udevadm settle

wait_for_device "${SSD}-part1"
wait_for_device "${SSD}-part2"
wait_for_device "${SSD}-part3"

echo "Formatting partitions..."
mkfs.vfat -F 32 -n BOOT "${SSD}-part1"
mkswap -L SWAP "${SSD}-part2"
mkfs.ext4 -L ROOT "${SSD}-part3"

echo "Mounting partitions..."
mount -o X-mount.mkdir "${SSD}-part3" "$MNT"
mkdir -p "$MNT/boot"
mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1 "${SSD}-part1" "$MNT/boot"

echo "Enabling swap..."
swapon "${SSD}-part2"

echo "Partitioning and setup complete:"
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL

================================================
FILE: templates/raspberry-pi/flake.nix
================================================
{
  description = "A Raspberry Pi 4 client template";
  path = ./.;
}

================================================
FILE: templates/raspberry-pi/hardware.nix
================================================
{ pkgs, lib, ... }:
{
  boot = {
    kernelPackages = pkgs.linuxKernel.packages.linux_rpi4;
    initrd = {
      availableKernelModules = [
        "xhci_pci"
        "usbhid"
        "usb_storage"
      ];
      # Disable default x86 modules that don't exist in the Pi kernel (e.g. dw-hdmi)
      includeDefaultModules = false;
    };
  };

  fileSystems = {
    "/" = {
      device = "/dev/disk/by-label/NIXOS_SD";
      fsType = "ext4";
      options = [ "noatime" ];
    };
  };

  nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
  hardware.enableRedistributableFirmware = true;
}

================================================
FILE: templates/raspberry-pi/networking.nix
================================================
{
  networking.hostName = "HOSTNAME";
  networking.domain = "cryodev.xyz";
}

================================================
FILE: templates/raspberry-pi/packages.nix
================================================
{
  pkgs,
  ...
}:
{
  environment.systemPackages = with pkgs; [ ];
}

================================================
FILE: templates/raspberry-pi/users.nix
================================================
{ inputs, outputs, ... }:
{
  imports = [
    outputs.nixosModules.normalUsers
    # Add users here, e.g.:
    # ../../users/
  ];
}

================================================
FILE: templates/raspberry-pi/services/comin.nix
================================================
{
  config,
  pkgs,
  outputs,
  constants,
  ...
}:
{
  imports = [ outputs.nixosModules.comin ];

  services.comin = {
    enable = true;
    remotes = [
      {
        name = "origin";
        url = "https://${constants.services.forgejo.fqdn}/steffen/cryodev-server.git";
        branches.main.name = "main";
      }
    ];
  };
}

================================================
FILE: templates/raspberry-pi/services/default.nix
================================================
{
  imports = [
    ./nginx.nix
    ./openssh.nix
    ./tailscale.nix
    ./netdata.nix
    ./comin.nix
  ];
}

================================================
FILE: templates/raspberry-pi/services/netdata.nix
================================================
{
  config,
  pkgs,
  outputs,
  constants,
  ...
}:
{
  services.netdata = {
    enable = true;
    config = {
      stream = {
        enabled = "yes";
        destination = "${constants.hosts.cryodev-main.ip}:${toString constants.services.netdata.port}";
        "api key" = config.sops.placeholder."netdata/stream/child-uuid";
      };
    };
  };

  # Make sure sops is enabled/imported for this host to handle the secret
  imports = [ outputs.nixosModules.sops ];

  sops = {
    defaultSopsFile = ../secrets.yaml;
    secrets."netdata/stream/child-uuid" = {
      owner = "netdata";
      group = "netdata";
    };
  };
}

================================================
FILE: templates/raspberry-pi/services/nginx.nix
================================================
{
  outputs,
  ...
}:
{
  imports = [ outputs.nixosModules.nginx ];

  services.nginx = {
    enable = true;
    forceSSL = true;
    openFirewall = true;
  };
}

================================================
FILE: templates/raspberry-pi/services/openssh.nix
================================================
{ outputs, ... }:
{
  imports = [ outputs.nixosModules.openssh ];

  services.openssh.enable = true;
}

================================================
FILE: templates/raspberry-pi/services/tailscale.nix
================================================
{
  config,
  pkgs,
  outputs,
  constants,
  ...
}:
{
  imports = [ outputs.nixosModules.tailscale ];

  services.tailscale = {
    enable = true;
    # Connect to our own headscale instance
    loginServer = "https://${constants.services.headscale.fqdn}";
    # Allow SSH access over Tailscale
    enableSSH = true;
    # Use MagicDNS names
    acceptDNS = true;
    # Auth key for automated enrollment
    authKeyFile = config.sops.secrets."tailscale/auth-key".path;
  };

  sops.secrets."tailscale/auth-key" = { };
}

================================================
FILE: users/benjamin/default.nix
================================================
{
  normalUsers.benjamin = {
    extraGroups = [ "wheel" ];
    sshKeyFiles = [
      # TODO: Add benjamin's public key
      # ./pubkeys/benjamin.pub
    ];
  };
}

================================================
FILE: users/ralph/default.nix
================================================
{
  normalUsers.ralph = {
    extraGroups = [ "wheel" ];
    sshKeyFiles = [
      # TODO: Add ralph's public key
      # ./pubkeys/ralph.pub
    ];
  };
}

================================================
FILE: users/steffen/default.nix
================================================
{
  outputs,
  ...
}:
{
  normalUsers.steffen = {
    extraGroups = [ "wheel" ];
    sshKeyFiles = [ ./pubkeys/X670E.pub ];
  };
}

================================================
FILE: users/steffen/pubkeys/X670E.pub
================================================
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDKNTpsF9Z313gWHiHi4SvjeXI4Mh80mtq0bR0AjsZr/SnPsXEiM8/ODbQNJ806qHLFSA4uA4vaevdZIJkpDqRIQviW7zHGp/weRh2+2ynH8RyFqJvsWIqWn8G5wXPYcRZ6eFjcqKraAQC46ITER4+NPgdC6Cr+dsHWyIroBep4m3EGhSLYNRaMYoKZ5aqD2jJLBolokVfseF06Y7tQ3QSwUioXgiodBdZ9hgXc/5AJdsXSxJMHmRArqbHwbWI0fhwkX+0jiUpOMXMGsJZx5G20X70mQpJu+UnQsGcw+ylQw6ZYtFmzNcYmOS//91DTzraHprnrENyb+pYV2UUZhKxjdkexpSBkkPoVEzMcw9+LCg4e/jsZ+urlRhdTPWW0/AaWJx3UJc1pHHu5UpIvQKfMdt9dZbgG7oYYE1JeCoTvtQKiBcdc54cmJuvwshaAkfN92tYGvj/L1Jeb06M34dycdCXGDGMIofMsZOsnDcHuY1CT82NlRjXmatAUOaO0rCbVNPluNmu4gmWhclQmhoUEmojBGaIXrcRuxrIJYZpWubQdBUCZiJFBJzEb2qnT0nFSe0Gu0tPOYdD/jcUVgYPRWggxQV6hssSlgERTJdzC5PhBnSe8Xi8W/rMgZA8+YBIKBJpJjF5HZTJ67EBZmNS3HWaZNIUmRXcgsONr41RCrw== steffen@X670E

================================================
FILE: .forgejo/workflows/ci.yml
================================================
name: CI

on: [pull_request]

jobs:
  flake-check:
    runs-on: host
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Run flake check
        run: nix flake check --impure

  build-hosts:
    needs: flake-check
    runs-on: host
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Build cryodev-main
        run: nix build .#nixosConfigurations.cryodev-main.config.system.build.toplevel --impure
      - name: Build cryodev-pi
        run: nix build .#nixosConfigurations.cryodev-pi.config.system.build.toplevel --impure --extra-platforms aarch64-linux

================================================
FILE: .forgejo/workflows/deploy.yml
================================================
name: Deploy

on:
  push:
    branches:
      - main

jobs:
  flake-check:
    runs-on: host
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Run flake check
        run: nix flake check --impure

  build-hosts:
    needs: flake-check
    runs-on: host
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Build cryodev-main
        run: nix build .#nixosConfigurations.cryodev-main.config.system.build.toplevel --impure
      - name: Build cryodev-pi
        run: nix build .#nixosConfigurations.cryodev-pi.config.system.build.toplevel --impure --extra-platforms aarch64-linux

  build-pi-images:
    needs: build-hosts
    runs-on: host
    strategy:
      matrix:
        host: [cryodev-pi]
      fail-fast: false
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Build SD image for ${{ matrix.host }}
        run: |
          echo "Building SD image for: ${{ matrix.host }}"
          nix build .#nixosConfigurations.${{ matrix.host }}.config.system.build.sdImage \
            --extra-platforms aarch64-linux \
            --out-link result-${{ matrix.host }}
          IMAGE_PATH=$(find result-${{ matrix.host }} -name "*.img.zst" -type f | head -1)
          if [ -z "$IMAGE_PATH" ]; then
            echo "Error: No image found!"
            exit 1
          fi
          cp "$IMAGE_PATH" ./${{ matrix.host }}-sd-image.img.zst
          sha256sum ${{ matrix.host }}-sd-image.img.zst > ${{ matrix.host }}-sd-image.img.zst.sha256
          echo "Image size:"
          ls -lh ${{ matrix.host }}-sd-image.img.zst
      - name: Upload artifact
        uses: actions/upload-artifact@v3
        with:
          name: ${{ matrix.host }}-sd-image
          path: |
            ${{ matrix.host }}-sd-image.img.zst
            ${{ matrix.host }}-sd-image.img.zst.sha256

  create-release:
    needs: build-pi-images
    runs-on: host
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Download all artifacts
        uses: actions/download-artifact@v3
        with:
          path: artifacts/
      - name: Create Release and Upload
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          VERSION="v$(date +%Y-%m-%d)-$(git rev-parse --short HEAD)"
          curl -s -X POST \
            -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Content-Type: application/json" \
            -d "{\"tag_name\": \"${VERSION}\", \"name\": \"Pi Images ${VERSION}\", \"body\": \"Raspberry Pi SD card images. See docs for usage.\", \"draft\": false, \"prerelease\": false}" \
            "https://git.cryodev.xyz/api/v1/repos/${GITHUB_REPOSITORY}/releases" \
            -o release.json
          RELEASE_ID=$(jq -r '.id' release.json)
          echo "Release ID: $RELEASE_ID"
          find artifacts -type f | while read -r file; do
            echo "Uploading: $(basename "$file")"
            curl -s -X POST \
              -H "Authorization: token ${GITHUB_TOKEN}" \
              -H "Content-Type: application/octet-stream" \
              --data-binary @"$file" \
              "https://git.cryodev.xyz/api/v1/repos/${GITHUB_REPOSITORY}/releases/${RELEASE_ID}/assets?name=$(basename "$file")"
          done
          echo "Done: https://git.cryodev.xyz/${GITHUB_REPOSITORY}/releases/tag/${VERSION}"
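
The deploy workflow above publishes each SD image alongside a matching `.sha256` file. A minimal sketch of consuming those assets on a workstation before flashing (the file names follow the artifact naming in `deploy.yml`; the in-place dummy setup and the `/dev/sdX` device path are illustrative assumptions, not part of the repo):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: verify a downloaded SD image against its published .sha256
# file before flashing. sha256sum -c fails (non-zero exit) on mismatch.
verify_image() {
    local img="$1"
    sha256sum -c "${img}.sha256"
}

# Self-contained setup for demonstration; in practice the image and
# checksum come from the Forgejo release assets.
cd "$(mktemp -d)"
echo "dummy image data" > cryodev-pi-sd-image.img.zst
sha256sum cryodev-pi-sd-image.img.zst > cryodev-pi-sd-image.img.zst.sha256

verify_image cryodev-pi-sd-image.img.zst && echo "checksum OK"

# After verification, decompress and write (device path is an example):
# zstd -d --stdout cryodev-pi-sd-image.img.zst | sudo dd of=/dev/sdX bs=4M conv=fsync
```

Because `set -e` is active, a corrupted download aborts the script at the `sha256sum -c` step instead of being written to the card.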