cryodev/digest.txt
2026-03-06 08:31:13 +01:00

4930 lines
119 KiB
Text

Directory structure:
└── cryodev-server/
    ├── README.md
    ├── AGENTS.md
    ├── constants.nix
    ├── flake.nix
    ├── INSTRUCTIONS.md
    ├── .sops.yaml
    ├── hosts/
    │   ├── cryodev-main/
    │   │   ├── boot.nix
    │   │   ├── default.nix
    │   │   ├── disks.sh
    │   │   ├── hardware.nix
    │   │   ├── networking.nix
    │   │   ├── packages.nix
    │   │   ├── users.nix
    │   │   └── services/
    │   │       ├── default.nix
    │   │       ├── forgejo.nix
    │   │       ├── headplane.nix
    │   │       ├── headscale.nix
    │   │       ├── mailserver.nix
    │   │       ├── netdata.nix
    │   │       ├── nginx.nix
    │   │       ├── openssh.nix
    │   │       ├── sops.nix
    │   │       └── tailscale.nix
    │   └── cryodev-pi/
    │       ├── boot.nix
    │       ├── default.nix
    │       ├── disks.sh
    │       ├── hardware.nix
    │       ├── networking.nix
    │       ├── packages.nix
    │       ├── users.nix
    │       └── services/
    │           ├── comin.nix
    │           ├── default.nix
    │           ├── netdata.nix
    │           ├── nginx.nix
    │           ├── openssh.nix
    │           └── tailscale.nix
    ├── modules/
    │   └── nixos/
    │       ├── default.nix
    │       ├── comin/
    │       │   └── default.nix
    │       ├── common/
    │       │   ├── default.nix
    │       │   ├── environment.nix
    │       │   ├── htop.nix
    │       │   ├── nationalization.nix
    │       │   ├── networking.nix
    │       │   ├── nix.nix
    │       │   ├── overlays.nix
    │       │   ├── sudo.nix
    │       │   ├── well-known.nix
    │       │   ├── zsh.nix
    │       │   └── shared/
    │       │       ├── default.nix
    │       │       └── nix.nix
    │       ├── forgejo/
    │       │   └── default.nix
    │       ├── forgejo-runner/
    │       │   └── default.nix
    │       ├── headplane/
    │       │   └── default.nix
    │       ├── headscale/
    │       │   ├── acl.hujson
    │       │   └── default.nix
    │       ├── mailserver/
    │       │   └── default.nix
    │       ├── nginx/
    │       │   └── default.nix
    │       ├── nixvim/
    │       │   ├── default.nix
    │       │   ├── keymaps.nix
    │       │   ├── spellfiles.nix
    │       │   └── plugins/
    │       │       ├── cmp.nix
    │       │       ├── default.nix
    │       │       ├── lsp.nix
    │       │       ├── lualine.nix
    │       │       ├── telescope.nix
    │       │       ├── treesitter.nix
    │       │       └── trouble.nix
    │       ├── normalUsers/
    │       │   └── default.nix
    │       ├── openssh/
    │       │   └── default.nix
    │       ├── sops/
    │       │   └── default.nix
    │       └── tailscale/
    │           └── default.nix
    ├── overlays/
    │   └── default.nix
    ├── pkgs/
    │   └── default.nix
    ├── scripts/
    │   └── install.sh
    ├── templates/
    │   ├── generic-server/
    │   │   ├── boot.nix
    │   │   ├── default.nix
    │   │   ├── disks.sh
    │   │   ├── flake.nix
    │   │   ├── hardware.nix
    │   │   ├── networking.nix
    │   │   ├── packages.nix
    │   │   ├── users.nix
    │   │   └── services/
    │   │       ├── comin.nix
    │   │       ├── default.nix
    │   │       ├── netdata.nix
    │   │       ├── nginx.nix
    │   │       ├── openssh.nix
    │   │       └── tailscale.nix
    │   └── raspberry-pi/
    │       ├── boot.nix
    │       ├── default.nix
    │       ├── disks.sh
    │       ├── flake.nix
    │       ├── hardware.nix
    │       ├── networking.nix
    │       ├── packages.nix
    │       ├── users.nix
    │       └── services/
    │           ├── comin.nix
    │           ├── default.nix
    │           ├── netdata.nix
    │           ├── nginx.nix
    │           ├── openssh.nix
    │           └── tailscale.nix
    ├── users/
    │   ├── cryotherm/
    │   │   └── default.nix
    │   └── steffen/
    │       ├── default.nix
    │       └── pubkeys/
    │           └── X670E.pub
    └── .forgejo/
        └── workflows/
            ├── build-hosts.yml
            ├── deploy-main.yml
            └── flake-check.yml
================================================
FILE: README.md
================================================
# cryodev-server NixOS Configuration
This repository contains the declarative NixOS configuration for the **cryodev** infrastructure, managed using **Nix Flakes**. It defines a robust, secure, and self-hosted environment spanning a main server and client devices.
---
# 🇬🇧 English Description
## Overview
The infrastructure is designed around a central server (`cryodev-main`) and satellite clients (e.g., `cryodev-pi`). It leverages modern DevOps practices including Infrastructure as Code (IaC), GitOps, and Mesh VPNs.
## Key Features & Architecture
### 🖥️ Hosts
* **`cryodev-main` (x86_64 Server)**: The core infrastructure hub.
* **`cryodev-pi` (Raspberry Pi 4)**: A remote client/worker node.
### 🚀 Continuous Deployment (CD)
We utilize different deployment strategies optimized for each host type:
* **Push-based (Server):** The main server is deployed via **Forgejo Actions** using **[deploy-rs](https://github.com/serokell/deploy-rs)**. This ensures immediate updates and robust rollbacks in case of failure.
* **Pull-based (Client):** The Raspberry Pi uses **[Comin](https://github.com/nlewo/comin)** to periodically poll the Git repository and apply updates automatically. This is ideal for devices behind NAT or with unstable connections.
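The pull-based side needs only a few lines of NixOS configuration; a minimal sketch (option names per comin's NixOS module, repository URL as documented in `INSTRUCTIONS.md`):

```nix
{ inputs, ... }:
{
  imports = [ inputs.comin.nixosModules.comin ];

  services.comin = {
    enable = true;
    remotes = [
      {
        name = "origin";
        url = "https://git.cryodev.xyz/steffen/cryodev-server.git";
        # comin polls this branch and switches to new commits automatically.
        branches.main.name = "main";
      }
    ];
  };
}
```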
### 🌐 Networking (Tailscale & Headscale)
* **Self-hosted VPN:** We run **Headscale**, an open-source implementation of the Tailscale control server.
* **Headplane:** A web frontend for managing Headscale users and routes.
* **Mesh Network:** All hosts are connected via a secure, private WireGuard mesh network.
* **MagicDNS:** Automatic DNS resolution for devices within the tailnet.
### 📊 Monitoring (Netdata)
* **Parent/Child Streaming:** The main server acts as a centralized Netdata parent node.
* **Distributed Monitoring:** `cryodev-pi` (and other clients) stream their metrics securely over the Tailscale VPN to the parent node.
* **Alerting:** Integrated with the mailserver to send health alerts.
### 📧 Mail Services
* **NixOS Mailserver:** A fully functional mail stack (Postfix/Dovecot).
* **Integration:** Used by internal services (Forgejo, Netdata) to send notifications.
* **Security:** SPF, DKIM, and DMARC configured for `cryodev.xyz`.
### 🛠️ Development & Productivity
* **Forgejo:** Self-hosted Git service (fork of Gitea) with built-in CI/CD Actions.
* **Forgejo Runner:** Self-hosted runners executing the CI/CD pipelines.
* **Neovim:** A fully pre-configured Neovim environment (aliased as `v`) available system-wide via a custom NixOS module.
* **Secret Management:** **[sops-nix](https://github.com/Mic92/sops-nix)** encrypts secrets using Age and SSH host keys, ensuring no sensitive data is committed in plain text.
* **Templates:** Ready-to-use NixOS configurations for quickly bootstrapping new clients.
  * `#raspberry-pi`: Template for Raspberry Pi 4 clients.
  * `#generic-server`: Template for generic x86_64 servers.
* **Bootstrap Script:** An `install.sh` script automates disk partitioning (via disko) and system installation for new hosts.
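As a rough illustration of the sops-nix wiring described above (a sketch, not the repo's exact `sops` module; option names from sops-nix):

```nix
{
  sops = {
    defaultSopsFile = ./secrets.yaml;
    # Decrypt with the host's SSH key, converted to an age identity at activation.
    age.sshKeyPaths = [ "/etc/ssh/ssh_host_ed25519_key" ];
    # Each declared secret is decrypted to /run/secrets/<name>, outside the Nix store.
    secrets."tailscale/auth-key" = { };
  };
}
```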
## 🚧 Roadmap & Missing Features
### BioSafe Gateway (Dual Ethernet)
The Raspberry Pi hosts utilize a custom board with two Ethernet ports:
* **WAN:** Standard Internet connection.
* **LAN (`eth1`):** A dedicated local connection managed specifically by the **BioSafe Gateway App**.
* *Status:* The network configuration logic and the integration of the controlling app are currently missing.
### Closed Source Integration
The **BioSafe Gateway App** is closed source.
* It needs to be added as a **private Flake input**.
* An authentication mechanism (e.g., access tokens via secrets) for fetching this private input during the build process is not yet implemented.
### SD Card Image Pipeline
Currently, the Pi requires manual setup.
* *Goal:* A CI/CD pipeline that builds a fully configured, flashable SD card image.
* *Usage:* Download image -> Flash to SD -> Insert in Pi -> Boot & Auto-connect.
## Directory Structure
* `flake.nix`: The entry point defining inputs (dependencies) and outputs (system configs).
* `hosts/`: Configuration specific to each machine.
* `modules/`: Reusable NixOS modules (custom or imported).
* `pkgs/`: Custom packages.
* `constants.nix`: Central source of truth for IPs, ports, and domains.
* `INSTRUCTIONS.md`: Detailed setup guide (DNS, SOPS, Initial Deployment).
* `AGENTS.md`: Guidelines for AI agents working on this repository.
================================================
FILE: AGENTS.md
================================================
# Agent Guidelines for NixOS Configuration
## Project Overview
This repository contains a NixOS configuration managed with Nix Flakes. It defines system configurations for one or more hosts (currently `cryodev-main` and `cryodev-pi`), custom packages, modules, overlays, and templates.
## Environment & Build Commands
### Prerequisites
- **Nix** with Flakes enabled.
- **Git**
### Core Commands
- **Build Host Configuration**:
```bash
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel
# Examples:
nix build .#nixosConfigurations.cryodev-main.config.system.build.toplevel
nix build .#nixosConfigurations.cryodev-pi.config.system.build.toplevel
```
- **Format Code**:
```bash
nix fmt
```
This runs the formatter defined in `flake.nix` (`nixfmt` via the `git-hooks.nix` pre-commit configuration).
- **Run Checks (Lint/Test)**:
```bash
nix flake check
```
This validates the flake outputs and runs configured checks (including formatting checks and deploy-rs checks).
- **Update Flake Inputs**:
```bash
nix flake update
```
### Development Shell
You can enter a development shell with necessary tools (if configured in `devShells`):
```bash
nix develop
```
## Code Style & Conventions
### General Nix Style
- **Formatting**: Strictly adhere to `nixfmt` style. Run `nix fmt` before committing.
- **Indentation**: Use 2 spaces for indentation.
- **Lines**: Limit lines to 80-100 characters where possible, but follow the formatter's lead.
- **Comments**: Use `#` for single-line comments. Avoid block comments `/* ... */` unless necessary for large blocks of text.
### Module Structure
- **Function Header**: Always use the standard module arguments pattern:
```nix
{ config, lib, pkgs, inputs, outputs, constants, ... }:
```
  - Include `inputs`, `outputs`, and `constants` if you need access to flake inputs, the flake's own outputs, or the central constants.
- **Option Definitions**:
  - Define options in `options = { ... };`.
  - Implement configuration in `config = { ... };`.
  - Use `mkEnableOption` for boolean flags.
  - Use `mkOption` with types (e.g., `types.str`, `types.bool`) and descriptions.
- **Imports**:
  - Use relative paths for local modules: `imports = [ ./module.nix ];`.
  - Use `inputs` or `outputs` for external/shared modules: `imports = [ outputs.nixosModules.common ];`.
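Putting these conventions together, a module skeleton might look like this (the `services.myService` name is hypothetical, for illustration only):

```nix
{ config, lib, ... }:
let
  cfg = config.services.myService;
in
{
  options.services.myService = {
    enable = lib.mkEnableOption "my service";
    port = lib.mkOption {
      type = lib.types.port;
      default = 8080;
      description = "Port the service listens on.";
    };
  };

  config = lib.mkIf cfg.enable {
    # Actual service configuration goes here.
  };
}
```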
### Naming Conventions
- **Files**: `kebab-case.nix` (e.g., `hardware-configuration.nix`).
- **Options**: `camelCase` (e.g., `services.myService.enable`).
- **Variables**: `camelCase` in `let ... in` blocks.
### Flake Specifics
- **Inputs**: Defined in `flake.nix`.
  - `nixpkgs`: Main package set (typically following a stable release).
  - `nixpkgs-unstable`: Unstable channel for latest packages.
- **Outputs**:
  - `nixosConfigurations`: Host definitions.
  - `nixosModules`: Reusable modules exported by this flake.
  - `packages`: Custom packages exported by this flake.
  - `overlays`: Overlays to modify `nixpkgs`.
  - `templates`: Project templates for creating new hosts.
### Error Handling
- Use `assert` for critical configuration invariants.
- Use `warnings` or `trace` for deprecation or non-critical issues during evaluation.
- Example:
```nix
config = lib.mkIf cfg.enable {
  assertions = [
    { assertion = cfg.port > 1024; message = "Port must be non-privileged"; }
  ];
};
```
## Directory Structure
- `flake.nix`: Entry point, defines inputs and outputs.
- `hosts/`: Specific host configurations.
  - `<hostname>/default.nix`: Host entry point.
- `modules/`: Reusable NixOS modules.
  - `nixos/`: Modules specific to NixOS (e.g. `nixvim`, `comin`, `forgejo`).
- `pkgs/`: Custom package definitions.
- `overlays/`: Nixpkgs overlays.
- `templates/`: Templates for bootstrapping new hosts (`raspberry-pi`, `generic-server`).
- `scripts/`: Helper scripts (e.g., `install.sh` for bootstrapping).
- `constants.nix`: Central configuration for domains, IPs, and ports.
- `INSTRUCTIONS.md`: Setup and deployment instructions (DNS, SOPS, bootstrap).
- `README.md`: General project documentation and architecture overview.
## Deployment Workflows
- **cryodev-main**: Push-based deployment via Forgejo Actions using `deploy-rs`.
- **cryodev-pi**: Pull-based deployment using `comin` (polling the repository).
## Working with Agents
- **Context**: When asking for changes, specify if it's for a specific host (`hosts/cryodev-main`) or a shared module (`modules/`).
- **Verification**: Always run `nix flake check` after changes to ensure validity.
- **Refactoring**: When moving code, update `imports` carefully.
- **Constants**: Use `constants.nix` for values like domains, IPs, and ports instead of hardcoding them in modules.
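For example, a module can read values through the `constants` special argument instead of hardcoding them (illustrative nginx vhost, not the repo's actual module; the attribute paths match `constants.nix`):

```nix
{ constants, ... }:
{
  services.nginx.virtualHosts.${constants.services.forgejo.fqdn} = {
    forceSSL = true;
    enableACME = true;
    # Port comes from constants.nix, so it stays in sync across modules.
    locations."/".proxyPass =
      "http://127.0.0.1:${toString constants.services.forgejo.port}";
  };
}
```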
================================================
FILE: constants.nix
================================================
{
  # Domain
  domain = "cryodev.xyz";

  # Hosts
  hosts = {
    cryodev-main = {
      ip = "100.64.0.1"; # Tailscale IP example
    };
    cryodev-pi = {
      ip = "100.64.0.2"; # Tailscale IP example
    };
  };

  # Services
  services = {
    forgejo = {
      fqdn = "git.cryodev.xyz";
      port = 3000;
    };
    headscale = {
      fqdn = "headscale.cryodev.xyz";
      port = 8080;
    };
    headplane = {
      fqdn = "headplane.cryodev.xyz";
      port = 3001;
    };
    netdata = {
      fqdn = "netdata.cryodev.xyz";
      port = 19999;
    };
    mail = {
      fqdn = "mail.cryodev.xyz";
      port = 587;
    };
  };
}
================================================
FILE: flake.nix
================================================
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-25.11";
    nixpkgs-unstable.url = "github:nixos/nixpkgs/nixos-unstable";
    nixpkgs-old-stable.url = "github:nixos/nixpkgs/nixos-25.05";

    sops-nix.url = "github:Mic92/sops-nix";
    sops-nix.inputs.nixpkgs.follows = "nixpkgs";

    nixos-mailserver.url = "gitlab:simple-nixos-mailserver/nixos-mailserver";
    nixos-mailserver.inputs.nixpkgs.follows = "nixpkgs";

    headplane.url = "github:yrd/headplane-nix";

    comin.url = "github:nlewo/comin";
    comin.inputs.nixpkgs.follows = "nixpkgs";

    deploy-rs.url = "github:serokell/deploy-rs";
    deploy-rs.inputs.nixpkgs.follows = "nixpkgs";

    nixvim.url = "github:nix-community/nixvim/nixos-25.11";
    nixvim.inputs.nixpkgs.follows = "nixpkgs";

    git-hooks.url = "github:cachix/git-hooks.nix";
    git-hooks.inputs.nixpkgs.follows = "nixpkgs";
  };

  outputs =
    {
      self,
      nixpkgs,
      ...
    }@inputs:
    let
      inherit (self) outputs;
      supportedSystems = [
        "x86_64-linux"
        "aarch64-linux"
      ];
      forAllSystems = nixpkgs.lib.genAttrs supportedSystems;
      lib = nixpkgs.lib;
      constants = import ./constants.nix;

      mkNixosConfiguration =
        system: modules:
        nixpkgs.lib.nixosSystem {
          inherit system modules;
          specialArgs = {
            inherit
              inputs
              outputs
              lib
              constants
              ;
          };
        };
    in
    {
      packages = forAllSystems (system: import ./pkgs nixpkgs.legacyPackages.${system});
      overlays = import ./overlays { inherit inputs; };
      nixosModules = import ./modules/nixos;

      nixosConfigurations = {
        cryodev-main = mkNixosConfiguration "x86_64-linux" [ ./hosts/cryodev-main ];
        cryodev-pi = mkNixosConfiguration "aarch64-linux" [ ./hosts/cryodev-pi ];
      };

      templates = {
        raspberry-pi = {
          path = ./templates/raspberry-pi;
          description = "Raspberry Pi 4 Client";
        };
        generic-server = {
          path = ./templates/generic-server;
          description = "Generic x86_64 Customer Server";
        };
      };

      formatter = forAllSystems (
        system:
        let
          pkgs = nixpkgs.legacyPackages.${system};
          config = self.checks.${system}.pre-commit-check.config;
          inherit (config) package configFile;
          script = ''
            ${pkgs.lib.getExe package} run --all-files --config ${configFile}
          '';
        in
        pkgs.writeShellScriptBin "pre-commit-run" script
      );

      deploy = {
        nodes = {
          cryodev-main = {
            hostname = constants.domain;
            profiles.system = {
              user = "root";
              path = inputs.deploy-rs.lib.x86_64-linux.activate.nixos self.nixosConfigurations.cryodev-main;
            };
          };
        };
      };

      checks = forAllSystems (
        system:
        let
          pkgs = nixpkgs.legacyPackages.${system};
          flakePkgs = self.packages.${system};
          overlaidPkgs = import nixpkgs {
            inherit system;
            overlays = [ self.overlays.modifications ];
          };
          deployChecks = inputs.deploy-rs.lib.${system}.deployChecks self.deploy;
        in
        {
          pre-commit-check = inputs.git-hooks.lib.${system}.run {
            src = ./.;
            hooks = {
              nixfmt.enable = true;
            };
          };
          build-packages = pkgs.linkFarm "flake-packages-${system}" flakePkgs;
          build-overlays = pkgs.linkFarm "flake-overlays-${system}" {
            # package = overlaidPkgs.package;
          };
        }
        // deployChecks
      );
    };
}
================================================
FILE: INSTRUCTIONS.md
================================================
# Server Setup Instructions / Server-Einrichtungsanleitung
---
# 🇬🇧 English Instructions
## 1. Prerequisites
Ensure you have the following tools installed on your local machine:
- `nix` (with flakes enabled)
- `sops`
- `age`
- `ssh`
- `ssh-to-age`
- `uuidgen`
## 2. DNS Configuration
Configure the following DNS records for your domain `cryodev.xyz`:
| Hostname | Type | Value | Purpose |
|----------|------|-------|---------|
| `@` | A | `<SERVER_IP>` | Main entry point |
| `@` | AAAA | `<SERVER_IPV6>` | Main entry point (IPv6) |
| `git` | CNAME | `@` | Forgejo |
| `headscale` | CNAME | `@` | Headscale |
| `headplane` | CNAME | `@` | Headplane |
| `netdata` | CNAME | `@` | Netdata Monitoring |
| `mail` | A | `<SERVER_IP>` | Mailserver |
| `mail` | AAAA | `<SERVER_IPV6>` | Mailserver (IPv6) |
| `@` | MX | `10 mail.cryodev.xyz.` | Mail delivery |
| `@` | TXT | `"v=spf1 mx ~all"` | SPF Record |
| `_dmarc` | TXT | `"v=DMARC1; p=none"` | DMARC Record |
## 3. Secret Management (SOPS)
This repository uses `sops-nix` to manage secrets encrypted with `age`, utilizing the SSH host keys of the servers.
### 3.1 Get Server Public Keys
You need to convert the servers' SSH host public keys to age public keys.
**For `cryodev-main`:**
```bash
nix-shell -p ssh-to-age --run 'ssh-keyscan -t ed25519 <MAIN_IP> | ssh-to-age'
```
**For `cryodev-pi`:**
```bash
nix-shell -p ssh-to-age --run 'ssh-keyscan -t ed25519 <PI_IP> | ssh-to-age'
```
### 3.2 Configure `.sops.yaml`
Edit the `.sops.yaml` file in the root of this repository. Add the age public keys to the `keys` section and ensure creation rules exist for both hosts.
```yaml
keys:
  - &admin_key age1e8p35795htf7twrejyugpzw0qja2v33awcw76y4gp6acnxnkzq0s935t4t # Admin Key (Steffen)
  - &main_key age1... # cryodev-main Key
  - &pi_key age1... # cryodev-pi Key

creation_rules:
  - path_regex: hosts/cryodev-main/secrets.yaml$
    key_groups:
      - age:
          - *admin_key
          - *main_key
  - path_regex: hosts/cryodev-pi/secrets.yaml$
    key_groups:
      - age:
          - *admin_key
          - *pi_key
```
### 3.3 Generating Secret Values
**Mailserver Passwords (for `cryodev-main`):**
```bash
nix-shell -p mkpasswd --run 'mkpasswd -sm bcrypt'
```
**Headplane Secrets (for `cryodev-main`):**
```bash
nix-shell -p openssl --run "openssl rand -hex 16"
# Agent Pre-Authkey requires Headscale running:
sudo headscale users create headplane-agent
sudo headscale preauthkeys create --expiration 99y --reusable --user headplane-agent
```
**Tailscale Auth Keys (for both hosts):**
*Requires Headscale running on `cryodev-main`.*
```bash
# For cryodev-main:
sudo headscale preauthkeys create --expiration 99y --reusable --user default
# For cryodev-pi:
sudo headscale preauthkeys create --expiration 99y --reusable --user default
```
**Netdata Child UUID (for `cryodev-pi`):**
```bash
uuidgen
```
**Forgejo Runner Token:**
Get from Forgejo Admin Panel.
### 3.4 Creating Secrets Files
**`hosts/cryodev-main/secrets.yaml`:**
```bash
sops hosts/cryodev-main/secrets.yaml
```
```yaml
mailserver:
  accounts:
    forgejo: "$2y$05$..."
    admin: "$2y$05$..."
forgejo-runner:
  token: "..."
headplane:
  cookie_secret: "..."
  agent_pre_authkey: "..."
tailscale:
  auth-key: "..."
```
**`hosts/cryodev-pi/secrets.yaml`:**
```bash
sops hosts/cryodev-pi/secrets.yaml
```
```yaml
tailscale:
  auth-key: "..."
netdata:
  stream:
    child-uuid: "..." # Output from uuidgen
```
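Once encrypted, these values are consumed by modules through sops-nix's runtime paths. A sketch of the pattern (`services.tailscale.authKeyFile` is a standard NixOS option; the secret name matches the YAML above):

```nix
{ config, ... }:
{
  sops.secrets."tailscale/auth-key" = { };

  # The decrypted file lands under /run/secrets and never enters the Nix store.
  services.tailscale.authKeyFile = config.sops.secrets."tailscale/auth-key".path;
}
```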
## 4. Initial Deployment (Bootstrap)
Before the continuous deployment can take over, you must perform an initial deployment manually using the provided install script.
### 4.1 Prepare Target Machine
1. Boot into the NixOS Installation ISO.
2. Set a root password (for SSH): `passwd`.
3. Ensure internet connectivity.
### 4.2 Run Install Script
From your local machine (where this repo is), copy the script to the target or run it directly if you can fetch it.
**Method A: Copy Script via SSH**
```bash
scp scripts/install.sh nixos@<TARGET_IP>:install.sh
ssh nixos@<TARGET_IP>
sudo -i
chmod +x /home/nixos/install.sh
/home/nixos/install.sh -r <GIT_REPO_URL> -n <HOSTNAME>
```
**Method B: Run on Target (if repo is public or reachable)**
```bash
# On the target machine (as root)
nix-shell -p git
git clone <GIT_REPO_URL> /tmp/nixos
cd /tmp/nixos
bash scripts/install.sh -n <HOSTNAME>
```
*Note: The script handles disk partitioning (via disko/script), hardware config generation, and installation.*
## 5. Continuous Deployment (CD)
### 5.1 cryodev-pi (Pull-based via Comin)
The `cryodev-pi` host is configured to pull updates automatically via `comin`.
1. **Create Repository:** Create a new repository named `cryodev-server` on your Forgejo instance (`https://git.cryodev.xyz`).
2. **Push Configuration:** Push this entire NixOS configuration to the `main` branch of that repository.
3. **Comin URL:** The configuration expects the repository at: `https://git.cryodev.xyz/steffen/cryodev-server.git`.
### 5.2 cryodev-main (Push-based via Forgejo Actions)
The main server is deployed via a Forgejo Action.
1. **Generate SSH Key:**
```bash
ssh-keygen -t ed25519 -f deploy_key -C "forgejo-actions"
```
2. **Add Public Key:** Add the content of `deploy_key.pub` to `/root/.ssh/authorized_keys` on `cryodev-main`.
3. **Add Secret:** Add the content of `deploy_key` (private key) as a secret named `DEPLOY_SSH_KEY` in your Forgejo repository settings.
## 6. Creating New Hosts (Templates)
To quickly bootstrap a new host configuration, you can use the provided templates.
1. **Copy Template:**
```bash
# For a Raspberry Pi:
cp -r templates/raspberry-pi hosts/new-pi-name
# For a generic x86 server:
cp -r templates/generic-server hosts/new-server-name
```
2. **Adjust Configuration:**
* **Hostname:** Edit `hosts/new-name/networking.nix`.
* **Flake:** Register the new host in `flake.nix` under `nixosConfigurations`.
* **Constants:** Add IP and ports to `constants.nix`.
* **Secrets:** Add keys to `.sops.yaml` and create `hosts/new-name/secrets.yaml`.
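Registering the new host in `flake.nix` reuses the existing `mkNixosConfiguration` helper; e.g. for a hypothetical `new-pi-name`:

```nix
nixosConfigurations = {
  cryodev-main = mkNixosConfiguration "x86_64-linux" [ ./hosts/cryodev-main ];
  cryodev-pi = mkNixosConfiguration "aarch64-linux" [ ./hosts/cryodev-pi ];
  new-pi-name = mkNixosConfiguration "aarch64-linux" [ ./hosts/new-pi-name ];
};
```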
================================================
FILE: .sops.yaml
================================================
keys:
  - &admin_key age1e8p35795htf7twrejyugpzw0qja2v33awcw76y4gp6acnxnkzq0s935t4t # Admin key (Steffen)

creation_rules:
  - path_regex: hosts/cryodev-main/secrets.yaml$
    key_groups:
      - age:
          - *admin_key
          # - *server_key # Add server key here once obtained
  - path_regex: hosts/cryodev-pi/secrets.yaml$
    key_groups:
      - age:
          - *admin_key
          # - *pi_key # Add pi key here once obtained
================================================
FILE: hosts/cryodev-main/boot.nix
================================================
{
  boot.loader.systemd-boot = {
    enable = true;
    configurationLimit = 10;
  };
  boot.loader.efi.canTouchEfiVariables = true;
}
================================================
FILE: hosts/cryodev-main/default.nix
================================================
{
  inputs,
  outputs,
  ...
}:
{
  imports = [
    ./boot.nix
    ./hardware.nix
    ./networking.nix
    ./packages.nix
    ./services
    ./users.nix
    outputs.nixosModules.common
    outputs.nixosModules.nixvim
  ];

  system.stateVersion = "25.11";
}
================================================
FILE: hosts/cryodev-main/disks.sh
================================================
#!/usr/bin/env bash
set -euo pipefail # abort on errors before any destructive step runs
SSD='/dev/disk/by-id/FIXME'
MNT='/mnt'
SWAP_GB=4
# Helper function to wait for devices
wait_for_device() {
local device=$1
echo "Waiting for device: $device ..."
while [[ ! -e $device ]]; do
sleep 1
done
echo "Device $device is ready."
}
# Function to install a package if it's not already installed
install_if_missing() {
local cmd="$1"
local package="$2"
if ! command -v "$cmd" &> /dev/null; then
echo "$cmd not found, installing $package..."
nix-env -iA "nixos.$package"
fi
}
install_if_missing "sgdisk" "gptfdisk"
install_if_missing "partprobe" "parted"
wait_for_device "$SSD"
echo "Wiping filesystem on $SSD..."
wipefs -a "$SSD"
echo "Clearing partition table on $SSD..."
sgdisk --zap-all "$SSD"
echo "Partitioning $SSD..."
sgdisk -n1:1M:+1G -t1:EF00 -c1:BOOT "$SSD"
sgdisk -n2:0:+"$SWAP_GB"G -t2:8200 -c2:SWAP "$SSD"
sgdisk -n3:0:0 -t3:8304 -c3:ROOT "$SSD"
partprobe -s "$SSD"
udevadm settle
wait_for_device "${SSD}-part1"
wait_for_device "${SSD}-part2"
wait_for_device "${SSD}-part3"
echo "Formatting partitions..."
mkfs.vfat -F 32 -n BOOT "${SSD}-part1"
mkswap -L SWAP "${SSD}-part2"
mkfs.ext4 -L ROOT "${SSD}-part3"
echo "Mounting partitions..."
mount -o X-mount.mkdir "${SSD}-part3" "$MNT"
mkdir -p "$MNT/boot"
mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1 "${SSD}-part1" "$MNT/boot"
echo "Enabling swap..."
swapon "${SSD}-part2"
echo "Partitioning and setup complete:"
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
================================================
FILE: hosts/cryodev-main/hardware.nix
================================================
{
config,
lib,
pkgs,
modulesPath,
...
}:
{
imports = [
(modulesPath + "/installer/scan/not-detected.nix")
];
boot.initrd.availableKernelModules = [
"ahci"
"nvme"
"sd_mod"
"sdhci_pci"
"sr_mod"
"usb_storage"
"virtio_pci"
"virtio_scsi"
"xhci_pci"
];
boot.initrd.kernelModules = [ ];
boot.kernelModules = [ ];
boot.extraModulePackages = [ ];
fileSystems."/" = {
device = "/dev/disk/by-label/ROOT";
fsType = "ext4";
};
fileSystems."/boot" = {
device = "/dev/disk/by-label/BOOT";
fsType = "vfat";
options = [
"fmask=0022"
"dmask=0022"
];
};
swapDevices = [ { device = "/dev/disk/by-label/SWAP"; } ];
networking.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}
================================================
FILE: hosts/cryodev-main/networking.nix
================================================
{
networking.hostName = "cryodev-main";
networking.domain = "cryodev.xyz";
}
================================================
FILE: hosts/cryodev-main/packages.nix
================================================
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [ ];
}
================================================
FILE: hosts/cryodev-main/users.nix
================================================
{ inputs, outputs, ... }:
{
imports = [
outputs.nixosModules.normalUsers
../../users/steffen
];
}
================================================
FILE: hosts/cryodev-main/services/default.nix
================================================
{
imports = [
./forgejo.nix
./headplane.nix
./headscale.nix
./mailserver.nix
./netdata.nix
./nginx.nix
./openssh.nix
./sops.nix
./tailscale.nix
];
}
================================================
FILE: hosts/cryodev-main/services/forgejo.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
imports = [
outputs.nixosModules.forgejo
outputs.nixosModules.forgejo-runner
];
services.forgejo = {
enable = true;
settings = {
server = {
DOMAIN = constants.services.forgejo.fqdn;
ROOT_URL = "https://${constants.services.forgejo.fqdn}/";
HTTP_PORT = constants.services.forgejo.port;
};
service = {
DISABLE_REGISTRATION = true;
};
mailer = {
ENABLED = true;
FROM = "forgejo@${constants.domain}";
SMTP_ADDR = constants.services.mail.fqdn;
SMTP_PORT = constants.services.mail.port;
USER = "forgejo@${constants.domain}";
};
};
# The mailer password is wired up via sops by the forgejo module (secret "forgejo/mail-pw")
};
services.forgejo-runner = {
enable = true;
url = "https://${constants.services.forgejo.fqdn}";
# Registration token is provided via sops (see services/sops.nix)
tokenFile = config.sops.secrets."forgejo-runner/token".path;
};
services.nginx.virtualHosts."${constants.services.forgejo.fqdn}" = {
forceSSL = true;
enableACME = true;
locations."/" = {
proxyPass = "http://127.0.0.1:${toString constants.services.forgejo.port}";
};
};
}
================================================
FILE: hosts/cryodev-main/services/headplane.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
imports = [
outputs.nixosModules.headplane
];
services.headplane = {
enable = true;
settings.server.port = constants.services.headplane.port;
# The headscale URLs and the required sops secrets
# (headplane/cookie_secret, headplane/agent_pre_authkey)
# are configured by the headplane module itself.
};
services.nginx.virtualHosts."${constants.services.headplane.fqdn}" = {
forceSSL = true;
enableACME = true;
locations."/" = {
proxyPass = "http://127.0.0.1:${toString constants.services.headplane.port}";
};
};
}
================================================
FILE: hosts/cryodev-main/services/headscale.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
imports = [
outputs.nixosModules.headscale
];
services.headscale = {
enable = true;
address = "127.0.0.1";
port = constants.services.headscale.port;
settings = {
server_url = "https://${constants.services.headscale.fqdn}";
dns.base_domain = constants.domain;
};
};
services.nginx.virtualHosts."${constants.services.headscale.fqdn}" = {
forceSSL = true;
enableACME = true;
locations."/" = {
proxyPass = "http://127.0.0.1:${toString constants.services.headscale.port}";
proxyWebsockets = true;
};
};
}
================================================
FILE: hosts/cryodev-main/services/mailserver.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
imports = [
outputs.nixosModules.mailserver
];
mailserver = {
enable = true;
fqdn = constants.services.mail.fqdn;
domains = [ constants.domain ];
accounts = {
forgejo = { };
admin = {
aliases = [ "postmaster" ];
};
};
certificateScheme = "acme-nginx";
# Account password hashes are wired up via sops by the mailserver module
};
}
================================================
FILE: hosts/cryodev-main/services/netdata.nix
================================================
{
config,
pkgs,
constants,
...
}:
{
services.netdata = {
enable = true;
package = pkgs.netdata.override {
withCloudUi = true;
};
config = {
global = {
"debug log" = "syslog";
"access log" = "syslog";
"error log" = "syslog";
"bind to" = "127.0.0.1";
};
};
};
services.nginx.virtualHosts."${constants.services.netdata.fqdn}" = {
forceSSL = true;
enableACME = true;
locations."/" = {
proxyPass = "http://127.0.0.1:${toString constants.services.netdata.port}";
proxyWebsockets = true;
# Basic Auth can be added here if desired, or restrict by IP
# extraConfig = "allow 100.64.0.0/10; deny all;"; # Example for Tailscale only
};
};
}
================================================
FILE: hosts/cryodev-main/services/nginx.nix
================================================
{
inputs,
outputs,
lib,
config,
pkgs,
...
}:
{
imports = [ outputs.nixosModules.nginx ];
services.nginx = {
enable = true;
forceSSL = true; # enables ACME defaults, dhparams and recommended TLS settings (see modules/nixos/nginx)
openFirewall = true;
recommendedOptimisation = true;
recommendedGzipSettings = true;
recommendedProxySettings = true;
recommendedTlsSettings = true;
};
}
================================================
FILE: hosts/cryodev-main/services/openssh.nix
================================================
{
outputs,
...
}:
{
imports = [
outputs.nixosModules.openssh
];
services.openssh.enable = true;
}
================================================
FILE: hosts/cryodev-main/services/sops.nix
================================================
{
config,
pkgs,
outputs,
...
}:
{
imports = [
outputs.nixosModules.sops
];
sops = {
defaultSopsFile = ../secrets.yaml;
# age.keyFile is not set, sops-nix defaults to using /etc/ssh/ssh_host_ed25519_key
secrets = {
"forgejo-runner/token" = { };
"tailscale/auth-key" = { };
};
};
}
================================================
FILE: hosts/cryodev-main/services/tailscale.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
imports = [
outputs.nixosModules.tailscale
];
services.tailscale = {
enable = true;
# Connect to our own headscale instance
loginServer = "https://${constants.services.headscale.fqdn}";
# Allow SSH access over Tailscale
enableSSH = true;
# Use MagicDNS names
acceptDNS = true;
};
}
================================================
FILE: hosts/cryodev-pi/boot.nix
================================================
{
boot = {
loader = {
grub.enable = false;
generic-extlinux-compatible.enable = true;
};
};
}
================================================
FILE: hosts/cryodev-pi/default.nix
================================================
{
inputs,
outputs,
...
}:
{
imports = [
./boot.nix
./hardware.nix
./networking.nix
./packages.nix
./services
./users.nix
outputs.nixosModules.common
outputs.nixosModules.nixvim
];
system.stateVersion = "25.11";
}
================================================
FILE: hosts/cryodev-pi/disks.sh
================================================
#!/usr/bin/env bash
set -euo pipefail # abort on errors before any destructive step runs
SSD='/dev/disk/by-id/FIXME'
MNT='/mnt'
SWAP_GB=4
# Helper function to wait for devices
wait_for_device() {
local device=$1
echo "Waiting for device: $device ..."
while [[ ! -e $device ]]; do
sleep 1
done
echo "Device $device is ready."
}
# Function to install a package if it's not already installed
install_if_missing() {
local cmd="$1"
local package="$2"
if ! command -v "$cmd" &> /dev/null; then
echo "$cmd not found, installing $package..."
nix-env -iA "nixos.$package"
fi
}
install_if_missing "sgdisk" "gptfdisk"
install_if_missing "partprobe" "parted"
wait_for_device "$SSD"
echo "Wiping filesystem on $SSD..."
wipefs -a "$SSD"
echo "Clearing partition table on $SSD..."
sgdisk --zap-all "$SSD"
echo "Partitioning $SSD..."
sgdisk -n1:1M:+1G -t1:EF00 -c1:BOOT "$SSD"
sgdisk -n2:0:+"$SWAP_GB"G -t2:8200 -c2:SWAP "$SSD"
sgdisk -n3:0:0 -t3:8304 -c3:ROOT "$SSD"
partprobe -s "$SSD"
udevadm settle
wait_for_device "${SSD}-part1"
wait_for_device "${SSD}-part2"
wait_for_device "${SSD}-part3"
echo "Formatting partitions..."
mkfs.vfat -F 32 -n BOOT "${SSD}-part1"
mkswap -L SWAP "${SSD}-part2"
mkfs.ext4 -L ROOT "${SSD}-part3"
echo "Mounting partitions..."
mount -o X-mount.mkdir "${SSD}-part3" "$MNT"
mkdir -p "$MNT/boot"
mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1 "${SSD}-part1" "$MNT/boot"
echo "Enabling swap..."
swapon "${SSD}-part2"
echo "Partitioning and setup complete:"
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
================================================
FILE: hosts/cryodev-pi/hardware.nix
================================================
{ pkgs, lib, ... }:
{
boot = {
kernelPackages = pkgs.linuxKernel.packages.linux_rpi4;
initrd.availableKernelModules = [
"xhci_pci"
"usbhid"
"usb_storage"
];
};
fileSystems = {
"/" = {
device = "/dev/disk/by-label/NIXOS_SD";
fsType = "ext4";
options = [ "noatime" ];
};
};
nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
hardware.enableRedistributableFirmware = true;
}
================================================
FILE: hosts/cryodev-pi/networking.nix
================================================
{
networking.hostName = "cryodev-pi";
networking.domain = "cryodev.xyz";
}
================================================
FILE: hosts/cryodev-pi/packages.nix
================================================
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [ ];
}
================================================
FILE: hosts/cryodev-pi/users.nix
================================================
{ inputs, outputs, ... }:
{
imports = [
outputs.nixosModules.normalUsers
../../users/steffen
../../users/cryotherm
];
}
================================================
FILE: hosts/cryodev-pi/services/comin.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
imports = [
outputs.nixosModules.comin
];
services.comin = {
enable = true;
remotes = [
{
name = "origin";
url = "https://${constants.services.forgejo.fqdn}/steffen/cryodev-server.git";
branches.main.name = "main";
}
];
};
}
================================================
FILE: hosts/cryodev-pi/services/default.nix
================================================
{
imports = [
./nginx.nix
./openssh.nix
./tailscale.nix
./netdata.nix
./comin.nix
];
}
================================================
FILE: hosts/cryodev-pi/services/netdata.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
services.netdata = {
enable = true;
config = {
stream = {
enabled = "yes";
destination = "${constants.hosts.cryodev-main.ip}:${toString constants.services.netdata.port}";
"api key" = config.sops.placeholder."netdata/stream/child-uuid";
};
};
};
# Make sure sops is enabled/imported for this host to handle the secret
imports = [ outputs.nixosModules.sops ];
sops = {
defaultSopsFile = ../secrets.yaml;
secrets."netdata/stream/child-uuid" = {
owner = "netdata";
group = "netdata";
};
};
}
================================================
FILE: hosts/cryodev-pi/services/nginx.nix
================================================
{
outputs,
...
}:
{
imports = [ outputs.nixosModules.nginx ];
services.nginx = {
enable = true;
forceSSL = true;
openFirewall = true;
};
}
================================================
FILE: hosts/cryodev-pi/services/openssh.nix
================================================
{
outputs,
...
}:
{
imports = [
outputs.nixosModules.openssh
];
services.openssh.enable = true;
}
================================================
FILE: hosts/cryodev-pi/services/tailscale.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
imports = [
outputs.nixosModules.tailscale
];
services.tailscale = {
enable = true;
# Connect to our own headscale instance
loginServer = "https://${constants.services.headscale.fqdn}";
# Allow SSH access over Tailscale
enableSSH = true;
# Use MagicDNS names
acceptDNS = true;
# Auth key for automated enrollment
authKeyFile = config.sops.secrets."tailscale/auth-key".path;
};
sops.secrets."tailscale/auth-key" = { };
}
================================================
FILE: modules/nixos/default.nix
================================================
{
common = import ./common;
comin = import ./comin;
forgejo = import ./forgejo;
forgejo-runner = import ./forgejo-runner;
mailserver = import ./mailserver;
nixvim = import ./nixvim;
normalUsers = import ./normalUsers;
nginx = import ./nginx;
openssh = import ./openssh;
sops = import ./sops;
tailscale = import ./tailscale;
}
================================================
FILE: modules/nixos/comin/default.nix
================================================
{
inputs,
...
}:
{
imports = [ inputs.comin.nixosModules.comin ];
}
================================================
FILE: modules/nixos/common/default.nix
================================================
{
imports = [
./environment.nix
./htop.nix
./nationalization.nix
./networking.nix
./nix.nix
./sudo.nix
./well-known.nix
./zsh.nix
./shared
./overlays.nix
];
}
================================================
FILE: modules/nixos/common/environment.nix
================================================
{
config,
lib,
pkgs,
...
}:
let
inherit (lib) mkDefault optionals;
in
{
environment.systemPackages =
with pkgs;
[
cryptsetup
curl
dig
dnsutils
fzf
gptfdisk
iproute2
jq
lm_sensors
lsof
netcat-openbsd
nettools
nixos-container
nmap
nurl
p7zip
pciutils
psmisc
rclone
rsync
tcpdump
tmux
tree
unzip
usbutils
wget
xxd
zip
(callPackage ../../../apps/rebuild { })
]
++ optionals (pkgs.stdenv.hostPlatform == pkgs.stdenv.buildPlatform) [
pkgs.kitty.terminfo
];
environment.shellAliases = {
l = "ls -lh";
ll = "ls -lAh";
ports = "ss -tulpn";
publicip = "curl ifconfig.me/all";
sudo = "sudo "; # make aliases work with `sudo`
};
# saves one instance of nixpkgs.
environment.ldso32 = null;
boot.tmp.cleanOnBoot = mkDefault true;
boot.initrd.systemd.enable = mkDefault (!config.boot.swraid.enable && !config.boot.isContainer);
}
================================================
FILE: modules/nixos/common/htop.nix
================================================
{
programs.htop = {
enable = true;
settings = {
highlight_base_name = 1;
};
};
}
================================================
FILE: modules/nixos/common/nationalization.nix
================================================
{ lib, ... }:
let
de = "de_DE.UTF-8";
en = "en_US.UTF-8";
inherit (lib) mkDefault;
in
{
i18n = {
defaultLocale = mkDefault en;
extraLocaleSettings = {
LC_ADDRESS = mkDefault de;
LC_IDENTIFICATION = mkDefault de;
LC_MEASUREMENT = mkDefault de;
LC_MONETARY = mkDefault de;
LC_NAME = mkDefault de;
LC_NUMERIC = mkDefault de;
LC_PAPER = mkDefault de;
LC_TELEPHONE = mkDefault de;
LC_TIME = mkDefault en;
};
};
console = {
font = mkDefault "Lat2-Terminus16";
keyMap = mkDefault "de";
};
time.timeZone = mkDefault "Europe/Berlin";
}
================================================
FILE: modules/nixos/common/networking.nix
================================================
{
config,
lib,
pkgs,
...
}:
let
inherit (lib) mkDefault;
inherit (lib.utils) isNotEmptyStr;
in
{
config = {
assertions = [
{
assertion = isNotEmptyStr config.networking.domain;
message = "synix/nixos/common: config.networking.domain cannot be empty.";
}
{
assertion = isNotEmptyStr config.networking.hostName;
message = "synix/nixos/common: config.networking.hostName cannot be empty.";
}
];
networking = {
domain = mkDefault "${config.networking.hostName}.local";
hostId = mkDefault "8425e349"; # same as NixOS install ISO and nixos-anywhere
# NetworkManager
useDHCP = false;
networkmanager = {
enable = true;
plugins = with pkgs; [
networkmanager-openconnect
networkmanager-openvpn
];
};
};
};
}
================================================
FILE: modules/nixos/common/nix.nix
================================================
{
config,
lib,
...
}:
let
inherit (lib) mkDefault;
in
{
nix = {
# use flakes
channel.enable = mkDefault false;
# De-duplicate store paths using hardlinks except in containers
# where the store is host-managed.
optimise.automatic = mkDefault (!config.boot.isContainer);
};
}
================================================
FILE: modules/nixos/common/overlays.nix
================================================
{ outputs, ... }:
{
nixpkgs.overlays = [
outputs.overlays.local-packages
outputs.overlays.modifications
outputs.overlays.old-stable-packages
outputs.overlays.unstable-packages
];
}
================================================
FILE: modules/nixos/common/sudo.nix
================================================
{ config, ... }:
{
security.sudo = {
enable = true;
execWheelOnly = true;
extraConfig = ''
Defaults lecture = never
'';
};
assertions =
let
validUsers = users: users == [ ] || users == [ "root" ];
validGroups = groups: groups == [ ] || groups == [ "wheel" ];
validUserGroups = builtins.all (
r: validUsers (r.users or [ ]) && validGroups (r.groups or [ ])
) config.security.sudo.extraRules;
in
[
{
assertion = config.security.sudo.execWheelOnly -> validUserGroups;
message = "Some definitions in `security.sudo.extraRules` refer to users other than 'root' or groups other than 'wheel'. Disable `config.security.sudo.execWheelOnly`, or adjust the rules.";
}
];
}
================================================
FILE: modules/nixos/common/well-known.nix
================================================
{
# avoid TOFU MITM
programs.ssh.knownHosts = {
"github.com".hostNames = [ "github.com" ];
"github.com".publicKey =
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOMqqnkVzrm0SdG6UOoqKLsabgH5C9okWi0dh2l9GKJl";
"gitlab.com".hostNames = [ "gitlab.com" ];
"gitlab.com".publicKey =
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf";
"git.sr.ht".hostNames = [ "git.sr.ht" ];
"git.sr.ht".publicKey =
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMZvRd4EtM7R+IHVMWmDkVU3VLQTSwQDSAvW0t2Tkj60";
};
# TODO: add synix
}
================================================
FILE: modules/nixos/common/zsh.nix
================================================
{
programs.zsh = {
enable = true;
syntaxHighlighting = {
enable = true;
highlighters = [
"main"
"brackets"
"cursor"
"pattern"
];
patterns = {
"rm -rf" = "fg=white,bold,bg=red";
"rm -fr" = "fg=white,bold,bg=red";
};
};
autosuggestions = {
enable = true;
strategy = [
"completion"
"history"
];
};
enableLsColors = true;
};
}
================================================
FILE: modules/nixos/common/shared/default.nix
================================================
{
imports = [
./nix.nix
];
}
================================================
FILE: modules/nixos/common/shared/nix.nix
================================================
{
config,
lib,
pkgs,
...
}:
let
inherit (lib)
mkDefault
optional
versionOlder
versions
;
in
{
nix.package = mkDefault pkgs.nix;
# for `nix run synix#foo`, `nix build synix#bar`, etc
nix.registry = {
synix = {
from = {
id = "synix";
type = "indirect";
};
to = {
owner = "sid";
repo = "synix";
host = "git.sid.ovh";
type = "gitea";
};
};
};
# fallback quickly if substituters are not available.
nix.settings.connect-timeout = mkDefault 5;
nix.settings.fallback = true;
nix.settings.experimental-features = [
"nix-command"
"flakes"
]
++ optional (
config.nix.package != null && versionOlder (versions.majorMinor config.nix.package.version) "2.22"
) "repl-flake";
nix.settings.log-lines = mkDefault 25;
# avoid disk full issues
nix.settings.max-free = mkDefault (3000 * 1024 * 1024);
nix.settings.min-free = mkDefault (512 * 1024 * 1024);
# avoid copying unnecessary stuff over SSH
nix.settings.builders-use-substitutes = true;
# workaround for https://github.com/NixOS/nix/issues/9574
nix.settings.nix-path = config.nix.nixPath;
nix.settings.download-buffer-size = 524288000; # 500 MiB
# add all wheel users to the trusted-users group
nix.settings.trusted-users = [
"@wheel"
];
# binary caches
nix.settings.substituters = [
"https://cache.nixos.org"
"https://nix-community.cachix.org"
"https://cache.garnix.io"
"https://numtide.cachix.org"
];
nix.settings.trusted-public-keys = [
"cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
"nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
"cache.garnix.io:CTFPyKSLcx5RMJKfLo5EEPUObbA78b0YQ2DTCJXqr9g="
"numtide.cachix.org-1:2ps1kLBUWjxIneOy1Ik6cQjb41X0iXVXeHigGmycPPE="
];
nix.gc = {
automatic = true;
dates = "weekly";
options = "--delete-older-than 30d";
};
}
================================================
FILE: modules/nixos/forgejo/default.nix
================================================
{
config,
lib,
...
}:
let
cfg = config.services.forgejo;
inherit (cfg) settings;
inherit (lib)
getExe
head
mkDefault
mkIf
;
in
{
config = mkIf cfg.enable {
services.forgejo = {
database.type = "postgres";
lfs.enable = true;
settings = {
server = {
DOMAIN = "git.${config.networking.domain}";
PROTOCOL = "http";
ROOT_URL = "https://${settings.server.DOMAIN}/";
HTTP_ADDR = "0.0.0.0";
HTTP_PORT = 3456;
SSH_PORT = head config.services.openssh.ports;
};
service = {
DISABLE_REGISTRATION = true;
};
ui = {
DEFAULT_THEME = "forgejo-dark";
};
actions = {
ENABLED = true;
};
mailer = {
ENABLED = mkDefault false;
SMTP_ADDR = "mail.${config.networking.domain}";
FROM = "git@${settings.server.DOMAIN}";
USER = "git@${settings.server.DOMAIN}";
};
};
secrets = {
mailer.PASSWD = mkIf settings.mailer.ENABLED config.sops.secrets."forgejo/mail-pw".path;
};
};
environment.shellAliases = {
forgejo = "sudo -u ${cfg.user} ${getExe cfg.package} --config ${cfg.stateDir}/custom/conf/app.ini";
};
sops.secrets."forgejo/mail-pw" = mkIf settings.mailer.ENABLED {
owner = cfg.user;
group = cfg.group;
mode = "0400";
};
};
}
================================================
FILE: modules/nixos/forgejo-runner/default.nix
================================================
{
config,
lib,
pkgs,
...
}:
let
cfg = config.services.forgejo-runner;
inherit (lib)
mkEnableOption
mkIf
mkOption
types
;
in
{
options.services.forgejo-runner = {
enable = mkEnableOption "Nix-based Forgejo Runner service";
url = mkOption {
type = types.str;
description = "Forgejo instance URL.";
};
tokenFile = mkOption {
type = types.path;
description = "Path to EnvironmentFile containing TOKEN=...";
};
};
config = mkIf cfg.enable {
nix.settings.trusted-users = [ "gitea-runner" ];
services.gitea-actions-runner = {
package = pkgs.forgejo-runner;
instances.default = {
enable = true;
name = "${config.networking.hostName}-nix";
inherit (cfg) url tokenFile;
labels = [ "host:host" ];
hostPackages = with pkgs; [
bash
coreutils
curl
gitMinimal
gnused
nix
nodejs
openssh
deploy-rs
];
settings = {
log.level = "info";
runner = {
capacity = 1;
envs = {
NIX_CONFIG = "extra-experimental-features = nix-command flakes";
NIX_REMOTE = "daemon";
};
};
};
};
};
};
}
================================================
FILE: modules/nixos/headplane/default.nix
================================================
{
inputs,
config,
lib,
...
}:
let
cfg = config.services.headplane;
domain = config.networking.domain;
subdomain = cfg.reverseProxy.subdomain;
fqdn = if (cfg.reverseProxy.enable && subdomain != "") then "${subdomain}.${domain}" else domain;
headscale = config.services.headscale;
inherit (lib)
mkDefault
mkIf
;
inherit (lib.utils)
mkReverseProxyOption
mkVirtualHost
;
in
{
imports = [ inputs.headplane.nixosModules.headplane ];
options.services.headplane = {
reverseProxy = mkReverseProxyOption "Headplane" "hp";
};
config = mkIf cfg.enable {
nixpkgs.overlays = [
inputs.headplane.overlays.default
];
services.headplane = {
settings = {
server = {
host = mkDefault (if cfg.reverseProxy.enable then "127.0.0.1" else "0.0.0.0");
port = mkDefault 3000;
cookie_secret_path = config.sops.secrets."headplane/cookie_secret".path;
};
headscale = {
url = "http://127.0.0.1:${toString headscale.port}";
public_url = headscale.settings.server_url;
config_path = "/etc/headscale/config.yaml";
};
integration.agent = {
enabled = mkDefault true;
pre_authkey_path = config.sops.secrets."headplane/agent_pre_authkey".path;
};
};
};
services.nginx.virtualHosts = mkIf cfg.reverseProxy.enable {
"${fqdn}" = mkVirtualHost {
port = cfg.settings.server.port;
ssl = cfg.reverseProxy.forceSSL;
};
};
sops.secrets =
let
owner = headscale.user;
group = headscale.group;
mode = "0400";
in
{
"headplane/cookie_secret" = {
inherit owner group mode;
};
"headplane/agent_pre_authkey" = {
inherit owner group mode;
};
};
};
}
================================================
FILE: modules/nixos/headscale/acl.hujson
================================================
{
"acls": [
{
"action": "accept",
"src": ["*"],
"dst": ["*:*"]
}
],
"ssh": [
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["autogroup:member"],
"users": ["autogroup:nonroot", "root"]
}
]
}
================================================
FILE: modules/nixos/headscale/default.nix
================================================
{
config,
lib,
...
}:
let
cfg = config.services.headscale;
domain = config.networking.domain;
subdomain = cfg.reverseProxy.subdomain;
fqdn = if (cfg.reverseProxy.enable && subdomain != "") then "${subdomain}.${domain}" else domain;
acl = "headscale/acl.hujson";
inherit (lib)
mkDefault
mkIf
mkOption
optional
optionals
types
;
inherit (lib.utils)
mkReverseProxyOption
mkUrl
mkVirtualHost
;
in
{
options.services.headscale = {
reverseProxy = mkReverseProxyOption "Headscale" "hs";
openFirewall = mkOption {
type = types.bool;
default = false;
description = "Whether to automatically open firewall ports. TCP: 80, 443; UDP: 3478.";
};
};
config = mkIf cfg.enable {
assertions = [
{
assertion = !cfg.settings.derp.server.enable || cfg.reverseProxy.forceSSL;
message = "cryodev/nixos/headscale: DERP requires TLS";
}
{
assertion = fqdn != cfg.settings.dns.base_domain;
message = "cryodev/nixos/headscale: `settings.server_url` must be different from `settings.dns.base_domain`";
}
{
assertion = !cfg.settings.dns.override_local_dns || cfg.settings.dns.nameservers.global != [ ];
message = "cryodev/nixos/headscale: `settings.dns.nameservers.global` must be set when `settings.dns.override_local_dns` is true";
}
];
environment.etc.${acl} = {
inherit (config.services.headscale) user group;
source = ./acl.hujson;
};
environment.shellAliases = {
hs = "${cfg.package}/bin/headscale";
};
services.headscale = {
address = mkDefault (if cfg.reverseProxy.enable then "127.0.0.1" else "0.0.0.0");
port = mkDefault 8077;
settings = {
policy.path = "/etc/${acl}";
database.type = "sqlite"; # postgres is highly discouraged as it is only supported for legacy reasons
server_url = mkUrl {
inherit fqdn;
ssl = with cfg.reverseProxy; enable && forceSSL;
};
derp.server.enable = cfg.reverseProxy.forceSSL;
dns = {
magic_dns = mkDefault true;
base_domain = mkDefault "tail";
search_domains = [ cfg.settings.dns.base_domain ];
override_local_dns = mkDefault true;
nameservers.global = optionals cfg.settings.dns.override_local_dns [
"1.1.1.1"
"1.0.0.1"
"2606:4700:4700::1111"
"2606:4700:4700::1001"
];
};
};
};
services.nginx.virtualHosts = mkIf cfg.reverseProxy.enable {
"${fqdn}" = mkVirtualHost {
inherit (cfg) address port;
ssl = cfg.reverseProxy.forceSSL;
};
};
networking.firewall = mkIf cfg.openFirewall {
allowedTCPPorts = [
80
443
];
allowedUDPPorts = optional cfg.settings.derp.server.enable 3478;
};
};
}
================================================
FILE: modules/nixos/mailserver/default.nix
================================================
{
inputs,
config,
lib,
pkgs,
...
}:
let
cfg = config.mailserver;
domain = config.networking.domain;
fqdn = "${cfg.subdomain}.${domain}";
inherit (lib)
mapAttrs'
mkDefault
mkIf
mkOption
nameValuePair
types
;
in
{
imports = [ inputs.nixos-mailserver.nixosModules.mailserver ];
options.mailserver = {
subdomain = mkOption {
type = types.str;
default = "mail";
description = "Subdomain for rDNS";
};
accounts = mkOption {
type = types.attrsOf (
types.submodule {
options = {
aliases = mkOption {
type = types.listOf types.str;
default = [ ];
description = "A list of aliases of this account. `@domain` will be appended automatically.";
};
sendOnly = mkOption {
type = types.bool;
default = false;
description = "Specifies if the account should be a send-only account.";
};
};
}
);
default = { };
description = ''
This option wraps `loginAccounts`.
`loginAccounts.<attr-name>.name` will be automatically set to `<attr-name>@<domain>`.
'';
};
};
config = mkIf cfg.enable {
assertions = [
{
assertion = cfg.subdomain != "";
message = "cryodev/nixos/mailserver: config.mailserver.subdomain cannot be empty.";
}
];
mailserver = {
fqdn = mkDefault fqdn;
domains = mkDefault [ domain ];
certificateScheme = mkDefault "acme-nginx";
stateVersion = mkDefault 1;
loginAccounts = mapAttrs' (
user: accConf:
nameValuePair "${user}@${domain}" {
name = "${user}@${domain}";
aliases = map (alias: "${alias}@${domain}") (accConf.aliases or [ ]);
sendOnly = accConf.sendOnly;
quota = mkDefault "5G";
hashedPasswordFile = config.sops.secrets."mailserver/accounts/${user}".path;
}
) cfg.accounts;
};
security.acme = {
acceptTerms = true;
defaults.email = mkDefault "postmaster@cryodev.xyz";
defaults.webroot = mkDefault "/var/lib/acme/acme-challenge";
};
environment.systemPackages = [ pkgs.mailutils ];
sops = {
secrets = mapAttrs' (
user: _config:
nameValuePair "mailserver/accounts/${user}" {
restartUnits = [
"postfix.service"
"dovecot.service"
];
}
) cfg.accounts;
};
};
}
================================================
FILE: modules/nixos/nginx/default.nix
================================================
{ config, lib, ... }:
let
cfg = config.services.nginx;
inherit (lib)
mkDefault
mkIf
mkOption
optional
optionals
types
;
in
{
options.services.nginx = {
forceSSL = mkOption {
type = types.bool;
default = false;
description = "Force SSL for Nginx virtual host.";
};
openFirewall = mkOption {
type = types.bool;
default = false;
description = "Whether to open the firewall for HTTP (and HTTPS if forceSSL is enabled).";
};
};
config = mkIf cfg.enable {
networking.firewall.allowedTCPPorts = optionals cfg.openFirewall (
[
80
]
++ optional cfg.forceSSL 443
);
services.nginx = {
recommendedOptimisation = mkDefault true;
recommendedGzipSettings = mkDefault true;
recommendedProxySettings = mkDefault true;
recommendedTlsSettings = cfg.forceSSL;
commonHttpConfig = "access_log syslog:server=unix:/dev/log;";
resolver.addresses =
let
isIPv6 = addr: builtins.match ".*:.*:.*" addr != null;
escapeIPv6 = addr: if isIPv6 addr then "[${addr}]" else addr;
cloudflare = [
"1.1.1.1"
"2606:4700:4700::1111"
];
resolvers =
if config.networking.nameservers == [ ] then cloudflare else config.networking.nameservers;
in
map escapeIPv6 resolvers;
sslDhparam = mkIf cfg.forceSSL config.security.dhparams.params.nginx.path;
};
security.acme = mkIf cfg.forceSSL {
acceptTerms = true;
defaults.email = mkDefault "postmaster@${config.networking.domain}";
defaults.webroot = mkDefault "/var/lib/acme/acme-challenge";
};
security.dhparams = mkIf cfg.forceSSL {
enable = true;
params.nginx = { };
};
};
}
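For reference, a minimal consumer of this wrapper might look like the sketch below (the vhost name and web root are hypothetical; `virtualHosts.*` options come from the upstream NixOS nginx module, not this file):

```nix
{ outputs, ... }:
{
  imports = [ outputs.nixosModules.nginx ];
  services.nginx = {
    enable = true;
    forceSSL = true;     # also provisions ACME defaults and dhparams
    openFirewall = true; # opens port 80, plus 443 because forceSSL is set
    # Hypothetical vhost for illustration:
    virtualHosts."www.example.org" = {
      enableACME = true;
      forceSSL = true;
      root = "/var/www/example";
    };
  };
}
```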
================================================
FILE: modules/nixos/nixvim/default.nix
================================================
{
inputs,
config,
lib,
...
}:
let
cfg = config.programs.nixvim;
inherit (lib) mkDefault mkIf;
in
{
imports = [
inputs.nixvim.nixosModules.nixvim
./plugins
./spellfiles.nix
];
config = {
programs.nixvim = {
enable = true; # Enable globally on NixOS
defaultEditor = mkDefault true;
viAlias = mkDefault true;
vimAlias = mkDefault true;
# Removed home-manager specific options like 'enableMan' which is handled differently or not needed in system module context
# Removed clipboard.providers.wl-copy as it's home-manager specific.
# System-wide clipboard integration for headless servers is less critical but can be added if needed.
# vim.g.*
globals = {
mapleader = mkDefault " ";
};
# vim.opt.*
opts = {
# behavior
cursorline = mkDefault true; # highlights the line under the cursor
mouse = mkDefault "a"; # enable mouse support
nu = mkDefault true; # line numbers
relativenumber = mkDefault true; # relative line numbers
scrolloff = mkDefault 20; # keeps some context above/below cursor
signcolumn = mkDefault "yes"; # reserve space for signs (e.g., GitGutter)
undofile = mkDefault true; # persistent undo
updatetime = mkDefault 500; # ms of inactivity before triggering events like CursorHold (default 4000ms)
wrap = mkDefault true; # wraps text if it exceeds the width of the window
# search
ignorecase = mkDefault true; # ignore case in search patterns
smartcase = mkDefault true; # smart case
incsearch = mkDefault true; # incremental search
hlsearch = mkDefault true; # highlight search
# windows
splitbelow = mkDefault true; # new windows are created below current
splitright = mkDefault true; # new windows are created to the right of current
equalalways = mkDefault true; # window sizes are automatically updated.
# tabs
expandtab = mkDefault true; # convert tabs into spaces
shiftwidth = mkDefault 2; # number of spaces to use for each step of (auto)indent
smartindent = mkDefault true; # smart autoindenting on new lines
softtabstop = mkDefault 2; # number of spaces in tab when editing
tabstop = mkDefault 2; # number of visual spaces per tab
# spell checking
spell = mkDefault true;
spelllang = mkDefault [
"en_us"
"de_20"
];
};
# vim.diagnostic.config.*
diagnostic.settings = {
virtual_text = {
spacing = 4;
prefix = "●";
severity_sort = true;
};
signs = true;
underline = true;
update_in_insert = false;
};
extraConfigLua = ''
vim.cmd "set noshowmode" -- Hides "--INSERT--" mode indicator
'';
keymaps = import ./keymaps.nix;
};
environment = {
variables = {
EDITOR = mkIf cfg.enable "nvim";
VISUAL = mkIf cfg.enable "nvim";
};
shellAliases = {
v = mkIf cfg.enable "nvim";
};
};
};
}
================================================
FILE: modules/nixos/nixvim/keymaps.nix
================================================
[
# cursor navigation
{
# scroll down, recenter
key = "<C-d>";
action = "<C-d>zz";
mode = "n";
}
{
# scroll up, recenter
key = "<C-u>";
action = "<C-u>zz";
mode = "n";
}
# searching
{
# center cursor after search next
key = "n";
action = "nzzzv";
mode = "n";
}
{
# center cursor after search previous
key = "N";
action = "Nzzzv";
mode = "n";
}
{
# ex command
key = "<leader>pv";
action = "<cmd>Ex<CR>";
mode = "n";
}
# search and replace
{
# search and replace word under cursor
key = "<leader>s";
action = ":%s/<C-r><C-w>/<C-r><C-w>/gI<Left><Left><Left>";
mode = "n";
}
# search and replace selected text
{
key = "<leader>s";
action = "y:%s/<C-r>0/<C-r>0/gI<Left><Left><Left>";
mode = "v";
}
# clipboard operations
{
# copy to system clipboard in visual mode
key = "<C-c>";
action = ''"+y '';
mode = "v";
}
{
# paste from system clipboard in visual mode
key = "<C-v>";
action = ''"+p '';
mode = "v";
}
{
# yank to system clipboard
key = "<leader>Y";
action = ''"+Y'';
mode = "n";
}
{
# paste over selection without clobbering the yank register
key = "<leader>p";
action = ''"_dP'';
mode = "x";
}
{
# delete without copying to clipboard
key = "<leader>d";
action = ''"_d'';
mode = [
"n"
"v"
];
}
# line operations
{
# move lines down in visual mode
key = "J";
action = ":m '>+1<CR>gv=gv";
mode = "v";
}
{
# move lines up in visual mode
key = "K";
action = ":m '<-2<CR>gv=gv";
mode = "v";
}
{
# join lines
key = "J";
action = "mzJ`z";
mode = "n";
}
# quickfix
{
# Run make command
key = "<leader>m";
action = "<cmd>:make<CR>";
mode = "n";
}
{
# previous quickfix item
key = "<C-A-J>";
action = "<cmd>cprev<CR>zz";
mode = "n";
}
{
# next quickfix item
key = "<C-A-K>";
action = "<cmd>cnext<CR>zz";
mode = "n";
}
# location list navigation
{
# previous location list item
key = "<leader>j";
action = "<cmd>lprev<CR>zz";
mode = "n";
}
{
# next location list item
key = "<leader>k";
action = "<cmd>lnext<CR>zz";
mode = "n";
}
# disabling keys
{
# disable the 'Q' key
key = "Q";
action = "<nop>";
mode = "n";
}
# text selection
{
# select whole buffer
key = "<C-a>";
action = "ggVG";
mode = "n";
}
# window operations
{
# focus previous window (:wincmd W)
key = "<C-j>";
action = ":wincmd W<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# focus next window (:wincmd w)
key = "<C-k>";
action = ":wincmd w<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
# window size adjustments
{
# increase window width
key = "<C-l>";
action = ":vertical resize +5<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# decrease window width
key = "<C-h>";
action = ":vertical resize -5<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
# window closing and opening
{
# close current window
key = "<leader>c";
action = ":q<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# new vertical split at $HOME
key = "<leader>n";
action = ":vsp $HOME<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
# window split orientation toggling
{
# toggle split orientation
key = "<leader>t";
action = ":wincmd T<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
# spell checking
{
# toggle spell checking
key = "<leader>ss";
action = ":setlocal spell!<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# switch to english spell checking
key = "<leader>se";
action = ":setlocal spelllang=en_us<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# switch to german spell checking
key = "<leader>sg";
action = ":setlocal spelllang=de_20<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# move to next misspelling
key = "]s";
action = "]szz";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# move to previous misspelling
key = "[s";
action = "[szz";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# correction suggestions for a misspelled word
key = "z=";
action = "z=";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# adding words to the dictionary
key = "zg";
action = "zg";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
# buffer navigation
{
# next buffer
key = "<C-S-J>";
action = ":bnext<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# previous buffer
key = "<C-S-K>";
action = ":bprevious<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# close current buffer
key = "<leader>bd";
action = ":bdelete<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
{
# apply code action
key = "<leader>ca";
action = ":lua vim.lsp.buf.code_action()<CR>";
options = {
noremap = true;
silent = true;
};
mode = "n";
}
]
================================================
FILE: modules/nixos/nixvim/spellfiles.nix
================================================
{ config, pkgs, ... }:
let
spellDir = config.xdg.dataHome + "/nvim/site/spell";
baseUrl = "http://ftp.de.vim.org/runtime/spell";
in
{
home.file = {
de-spl = {
enable = true;
source = pkgs.fetchurl {
url = baseUrl + "/de.utf-8.spl";
sha256 = "sha256-c8cQfqM5hWzb6SHeuSpFk5xN5uucByYdobndGfaDo9E=";
};
target = spellDir + "/de.utf-8.spl";
};
de-sug = {
enable = true;
source = pkgs.fetchurl {
url = baseUrl + "/de.utf-8.sug";
sha256 = "sha256-E9Ds+Shj2J72DNSopesqWhOg6Pm6jRxqvkerqFcUqUg=";
};
target = spellDir + "/de.utf-8.sug";
};
};
}
================================================
FILE: modules/nixos/nixvim/plugins/cmp.nix
================================================
{ config, lib, ... }:
let
cfg = config.programs.nixvim;
plugin = cfg.plugins.cmp;
inherit (lib) mkDefault mkIf;
in
{
programs.nixvim = {
plugins = {
cmp = {
enable = mkDefault true;
settings = {
autoEnableSources = mkDefault true;
experimental.ghost_text = mkDefault true;
snippet.expand = mkDefault "luasnip";
formatting.fields = mkDefault [
"kind"
"abbr"
"menu"
];
sources = [
{ name = "git"; }
{ name = "nvim_lsp"; }
{
name = "buffer";
option.get_bufnrs.__raw = "vim.api.nvim_list_bufs";
keywordLength = 3;
}
{
name = "path";
keywordLength = 3;
}
{ name = "luasnip"; }
];
mapping = {
"<C-Space>" = "cmp.mapping.complete()";
"<C-d>" = "cmp.mapping.scroll_docs(-4)";
"<C-e>" = "cmp.mapping.close()";
"<C-f>" = "cmp.mapping.scroll_docs(4)";
"<C-CR>" = "cmp.mapping.confirm({ select = true })";
"<S-Tab>" = "cmp.mapping(cmp.mapping.select_prev_item(), {'i', 's'})";
"<Tab>" = "cmp.mapping(cmp.mapping.select_next_item(), {'i', 's'})";
};
};
};
cmp-cmdline = mkIf plugin.enable { enable = mkDefault false; }; # autocomplete for cmdline
cmp_luasnip = mkIf plugin.enable { enable = mkDefault true; };
luasnip = mkIf plugin.enable { enable = mkDefault true; };
cmp-treesitter = mkIf (plugin.enable && cfg.plugins.treesitter.enable) { enable = mkDefault true; };
};
};
}
================================================
FILE: modules/nixos/nixvim/plugins/default.nix
================================================
{ lib, ... }:
{
imports = [
./cmp.nix
./lsp.nix
./lualine.nix
./telescope.nix
# ./treesitter.nix # HOTFIX: does not build
./trouble.nix
];
config.programs.nixvim.plugins = {
markdown-preview.enable = lib.mkDefault true;
# warning: Nixvim: `plugins.web-devicons` was enabled automatically because the following plugins are enabled. This behaviour is deprecated. Please explicitly define `plugins.web-devicons.enable`
web-devicons.enable = true;
};
}
================================================
FILE: modules/nixos/nixvim/plugins/lsp.nix
================================================
{
config,
lib,
pkgs,
...
}:
let
cfg = config.programs.nixvim;
plugin = cfg.plugins.lsp;
inherit (lib) mkDefault mkIf optional;
in
{
config = {
programs.nixvim = {
plugins = {
lsp-format = mkIf plugin.enable { enable = mkDefault true; };
lsp = {
enable = mkDefault true;
postConfig = "";
keymaps = {
silent = mkDefault true;
diagnostic = mkDefault {
# Navigate in diagnostics
"<leader>k" = "goto_prev";
"<leader>j" = "goto_next";
};
lspBuf = mkDefault {
gd = "definition";
gD = "references";
gt = "type_definition";
gi = "implementation";
K = "hover";
"<F2>" = "rename";
};
};
servers = {
bashls.enable = mkDefault true;
clangd.enable = mkDefault true;
cssls.enable = mkDefault true;
dockerls.enable = mkDefault true;
gopls.enable = mkDefault true;
html.enable = mkDefault true;
jsonls.enable = mkDefault true;
nixd.enable = mkDefault true;
pyright.enable = mkDefault true;
rust_analyzer = {
enable = mkDefault true;
installCargo = mkDefault true;
installRustc = mkDefault true;
settings.rustfmt.overrideCommand = mkDefault [
# Each argv element must be a separate list entry.
"${pkgs.rustfmt}/bin/rustfmt"
"--edition"
"2021"
# e.g. append "--config" "tab_spaces=2" for custom style
];
};
texlab.enable = mkDefault true;
vhdl_ls.enable = mkDefault true;
yamlls.enable = mkDefault true;
};
};
};
};
# NixOS module context: install system-wide (home-manager's `home.packages` is unavailable here)
environment.systemPackages = optional (cfg.enable && plugin.servers.nixd.enable) pkgs.nixfmt;
};
}
================================================
FILE: modules/nixos/nixvim/plugins/lualine.nix
================================================
{ config, lib, ... }:
let
cfg = config.programs.nixvim;
plugin = cfg.plugins.lualine;
inherit (lib) mkDefault;
in
{
config = {
programs.nixvim = {
plugins.lualine = {
enable = mkDefault true;
settings.options.icons_enabled = mkDefault false;
};
};
};
}
================================================
FILE: modules/nixos/nixvim/plugins/telescope.nix
================================================
{
config,
lib,
pkgs,
...
}:
let
cfg = config.programs.nixvim;
plugin = cfg.plugins.telescope;
inherit (lib) mkDefault optionals;
in
{
config = {
programs.nixvim = {
plugins.telescope = {
enable = mkDefault true;
extensions = {
file-browser.enable = mkDefault true;
fzf-native.enable = mkDefault true;
live-grep-args.enable = mkDefault true;
manix.enable = mkDefault true;
};
keymaps = mkDefault {
"<C-e>" = "file_browser";
"<C-p>" = "git_files";
"<leader>bl" = "buffers";
"<leader>fd" = "diagnostics";
"<leader>ff" = "find_files";
"<leader>fg" = "live_grep";
"<leader>fh" = "help_tags";
"<leader>fm" = "man_pages";
"<leader>fn" = "manix";
"<leader>fo" = "oldfiles";
"<space>fb" = "file_browser";
};
};
keymaps = optionals plugin.enable [
{
key = "<C-f>";
action = ":lua require('telescope').extensions.live_grep_args.live_grep_args()<CR>";
mode = "n";
}
];
};
environment.systemPackages = optionals plugin.enable [
pkgs.ripgrep # for "live_grep"
];
};
}
================================================
FILE: modules/nixos/nixvim/plugins/treesitter.nix
================================================
{
config,
lib,
pkgs,
...
}:
let
cfg = config.programs.nixvim;
plugin = cfg.plugins.treesitter;
cc = "${pkgs.gcc}/bin/gcc";
inherit (lib) mkDefault mkIf;
in
{
config = {
programs.nixvim = {
plugins.treesitter = {
enable = mkDefault true;
nixvimInjections = mkDefault true;
settings = {
folding.enable = mkDefault true;
highlight.enable = mkDefault true;
indent.enable = mkDefault true;
};
};
plugins.treesitter-context = mkIf plugin.enable { enable = mkDefault true; };
plugins.treesitter-textobjects = mkIf plugin.enable { enable = mkDefault true; };
};
# Fix for: ERROR `cc` executable not found.
environment.variables = mkIf plugin.enable {
CC = mkDefault cc;
};
# Fix for: WARNING `tree-sitter` executable not found
environment.systemPackages = mkIf plugin.enable [
plugin.package
];
};
}
================================================
FILE: modules/nixos/nixvim/plugins/trouble.nix
================================================
{ config, lib, ... }:
let
cfg = config.programs.nixvim;
plugin = cfg.plugins.trouble;
inherit (lib) mkDefault mkIf;
in
{
config = {
programs.nixvim = {
plugins.trouble = {
enable = mkDefault true;
};
keymaps = mkIf plugin.enable [
{
mode = "n";
key = "<leader>xq";
action = "<CMD>Trouble qflist toggle<CR>";
options = {
desc = "Trouble quickfix toggle";
};
}
{
mode = "n";
key = "<leader>xl";
action = "<CMD>Trouble loclist toggle<CR>";
options = {
desc = "Trouble loclist toggle";
};
}
{
mode = "n";
key = "<leader>xx";
action = "<CMD>Trouble diagnostics toggle<CR>";
options = {
desc = "Trouble diagnostics toggle";
};
}
];
};
};
}
================================================
FILE: modules/nixos/normalUsers/default.nix
================================================
{
config,
lib,
pkgs,
...
}:
let
cfg = config.normalUsers;
inherit (lib)
attrNames
genAttrs
mkOption
types
;
in
{
options.normalUsers = mkOption {
type = types.attrsOf (
types.submodule {
options = {
extraGroups = mkOption {
type = (types.listOf types.str);
default = [ ];
description = "Extra groups for the user";
example = [ "wheel" ];
};
shell = mkOption {
type = types.path;
default = pkgs.zsh;
description = "Shell for the user";
};
initialPassword = mkOption {
type = types.str;
default = "changeme";
description = "Initial password for the user";
};
sshKeyFiles = mkOption {
type = (types.listOf types.path);
default = [ ];
description = "SSH key files for the user";
example = [ "/path/to/id_rsa.pub" ];
};
};
}
);
default = { };
description = "Users to create. The usernames are the attribute names.";
};
config = {
# Create user groups
users.groups = genAttrs (attrNames cfg) (userName: {
name = userName;
});
# Create users
users.users = genAttrs (attrNames cfg) (userName: {
name = userName;
inherit (cfg.${userName}) extraGroups shell initialPassword;
isNormalUser = true;
group = "${userName}";
home = "/home/${userName}";
openssh.authorizedKeys.keyFiles = cfg.${userName}.sshKeyFiles;
});
};
}
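A host could consume this module like the following sketch (the `alice` user and key path are illustrative); the module then creates the user, a matching primary group, and the home directory:

```nix
{ outputs, ... }:
{
  imports = [ outputs.nixosModules.normalUsers ];
  normalUsers.alice = {
    extraGroups = [ "wheel" ];          # grant sudo via the wheel group
    sshKeyFiles = [ ./keys/alice.pub ]; # hypothetical path to a public key
    # shell defaults to pkgs.zsh, initialPassword to "changeme"
  };
}
```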
================================================
FILE: modules/nixos/openssh/default.nix
================================================
{ lib, ... }:
let
inherit (lib) mkDefault;
in
{
services.openssh = {
enable = mkDefault true;
ports = mkDefault [ 2299 ];
openFirewall = mkDefault true;
settings = {
PermitRootLogin = mkDefault "no";
PasswordAuthentication = mkDefault false;
};
};
}
================================================
FILE: modules/nixos/sops/default.nix
================================================
{
inputs,
config,
lib,
pkgs,
...
}:
let
secrets = "${toString inputs.self}/hosts/${config.networking.hostName}/secrets/secrets.yaml";
in
{
imports = [ inputs.sops-nix.nixosModules.sops ];
environment.systemPackages = with pkgs; [
age
sops
];
sops.defaultSopsFile = lib.mkIf (builtins.pathExists secrets) (lib.mkDefault secrets);
}
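A consuming host can then declare secrets relative to the default sops file; a sketch (the secret name, owner, and unit are illustrative):

```nix
{ outputs, config, ... }:
{
  imports = [ outputs.nixosModules.sops ];
  sops.secrets."myservice/token" = {
    owner = "myservice";                    # hypothetical service user
    restartUnits = [ "myservice.service" ]; # restart on secret change
  };
  # The decrypted file path is then available as:
  # config.sops.secrets."myservice/token".path
}
```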
================================================
FILE: modules/nixos/tailscale/default.nix
================================================
{ config, lib, ... }:
let
cfg = config.services.tailscale;
inherit (lib)
mkIf
mkOption
optional
types
;
in
{
options.services.tailscale = {
loginServer = mkOption {
type = types.str;
description = "The Tailscale login server to use.";
};
enableSSH = mkOption {
type = types.bool;
default = false;
description = "Enable Tailscale SSH functionality.";
};
acceptDNS = mkOption {
type = types.bool;
default = true;
description = "Enable Tailscale's MagicDNS and custom DNS configuration.";
};
};
config = mkIf cfg.enable {
services.tailscale = {
authKeyFile = config.sops.secrets."tailscale/auth-key".path;
extraSetFlags = optional cfg.enableSSH "--ssh" ++ optional cfg.acceptDNS "--accept-dns";
extraUpFlags = [
"--login-server=${cfg.loginServer}"
]
++ optional cfg.enableSSH "--ssh"
++ optional cfg.acceptDNS "--accept-dns";
};
environment.shellAliases = {
ts = "${cfg.package}/bin/tailscale";
};
networking.firewall.trustedInterfaces = [ cfg.interfaceName ];
sops.secrets."tailscale/auth-key" = { };
};
}
================================================
FILE: overlays/default.nix
================================================
{ inputs, ... }:
{
# packages in `pkgs/` accessible through 'pkgs.local'
local-packages = final: prev: { local = import ../pkgs { pkgs = final; }; };
# https://nixos.wiki/wiki/Overlays
modifications =
final: prev:
let
files = [
];
imports = builtins.map (f: import f final prev) files;
in
builtins.foldl' (a: b: a // b) { } imports;
# old-stable nixpkgs accessible through 'pkgs.old-stable'
old-stable-packages = final: prev: {
old-stable = import inputs.nixpkgs-old-stable {
inherit (final) system;
inherit (prev) config;
};
};
# unstable nixpkgs accessible through 'pkgs.unstable'
unstable-packages = final: prev: {
unstable = import inputs.nixpkgs-unstable {
inherit (final) system;
inherit (prev) config;
};
};
}
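With these overlays applied, the extra package sets are reachable under the attribute names described in the comments above; for example (package choices are illustrative):

```nix
{ pkgs, ... }:
{
  environment.systemPackages = [
    pkgs.unstable.hello    # from nixpkgs-unstable
    pkgs.old-stable.hello  # from the pinned older nixpkgs
    # pkgs.local.example   # from pkgs/ once packages are defined there
  ];
}
```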
================================================
FILE: pkgs/default.nix
================================================
{
pkgs ? import <nixpkgs> { },
...
}:
{
# example = pkgs.callPackage ./example { };
}
================================================
FILE: scripts/install.sh
================================================
#!/usr/bin/env bash
# NixOS install script
### VARIABLES ###
ASK_VERIFICATION=1 # Default to ask for verification
CONFIG_DIR="/tmp/nixos" # Directory to copy flake to / clone flake into
GIT_BRANCH="master" # Default Git branch
GIT_REPO="" # Git repository URL
HOSTNAME="" # Hostname
MNT="/mnt" # root mount point
SEPARATOR="________________________________________" # line separator
### FUNCTIONS ###
# Function to display help information
Show_help() {
echo "Usage: $0 [-r REPO] [-n HOSTNAME] [-b BRANCH] [-y] [-h]"
echo
echo "Options:"
echo " -r, --repo REPO Your NixOS configuration Git repository URL"
echo " -n, --hostname HOSTNAME Specify the hostname for the NixOS configuration"
echo " -b, --branch BRANCH Specify the Git branch to use (default: $GIT_BRANCH)"
echo " -y, --yes Do not ask for user verification before proceeding"
echo " -h, --help Show this help message and exit"
}
# Function to format, partition, and mount disks for $HOSTNAME using disko
Run_disko() {
echo "$SEPARATOR"
echo "Running disko..."
nix --experimental-features "nix-command flakes" run github:nix-community/disko/latest -- --mode disko "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.nix
}
# Function to format, partition, and mount disks for $HOSTNAME using a partitioning script
Run_script() {
echo "$SEPARATOR"
echo "Running partitioning script..."
bash "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.sh
}
# Function to check mount points and partitioning
Check_partitioning() {
echo "$SEPARATOR"
echo "Printing mount points and partitioning..."
mount | grep "$MNT"
lsblk -f
[[ "$ASK_VERIFICATION" == 1 ]] && read -rp "Verify the mount points and partitioning. Press Ctrl+c to cancel or Enter to continue..."
}
# Function to generate hardware configuration
Generate_hardware_config() {
[[ "$ASK_VERIFICATION" == 1 ]] && read -rp "No hardware configuration found. Press Ctrl+c to cancel or Enter to generate one..."
echo "$SEPARATOR"
echo "Generating hardware configuration..."
nixos-generate-config --root "$MNT" --show-hardware-config > "$CONFIG_DIR"/hosts/"$HOSTNAME"/hardware.nix
# Check if hardware configuration has been generated
if [[ ! -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/hardware.nix ]]; then
echo "Error: Hardware configuration cannot be generated."
exit 1
fi
# Add configuration to git
# TODO: get rid of cd
cd "$CONFIG_DIR"/hosts/"$HOSTNAME" || exit 1
git add "$CONFIG_DIR"/hosts/"$HOSTNAME"/hardware.nix
cd || exit 1
echo "Hardware configuration generated successfully."
}
# Function to install configuration for $HOSTNAME
Install() {
# Check if hardware configuration exists
[[ ! -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/hardware.nix ]] && Generate_hardware_config
echo "$SEPARATOR"
echo "Installing NixOS..."
nixos-install --root "$MNT" --no-root-password --flake "$CONFIG_DIR"#"$HOSTNAME" && echo "You can reboot the system now."
}
### PARSE ARGUMENTS ###
while [[ "$#" -gt 0 ]]; do
case $1 in
-r|--repo) GIT_REPO="$2"; shift ;;
-b|--branch) GIT_BRANCH="$2"; shift ;;
-y|--yes) ASK_VERIFICATION=0 ;;
-h|--help) Show_help; exit 0 ;;
-n|--hostname) HOSTNAME="$2"; shift ;;
*) echo "Unknown option: $1"; Show_help; exit 1 ;;
esac
shift
done
### PREREQUISITES ###
echo "$SEPARATOR"
mkdir -p "$CONFIG_DIR"
# Clone NixOS configuration from $GIT_REPO if provided
if [[ -n "$GIT_REPO" ]]; then
# Install git if not already installed
if ! command -v git &> /dev/null; then
echo "Git is not installed. Installing..."
nix-env -iA nixos.git
fi
# Clone Git repo if directory is empty
if [[ -z "$(ls -A "$CONFIG_DIR" 2>/dev/null)" ]]; then
echo "Cloning NixOS configuration repo..."
git clone --depth 1 -b "$GIT_BRANCH" "$GIT_REPO" "$CONFIG_DIR"
# Check if git repository has been cloned
if [[ ! -d "$CONFIG_DIR"/.git ]]; then
echo "Error: Git repository could not be cloned."
exit 1
fi
else
echo "$CONFIG_DIR is not empty. Skip cloning $GIT_REPO."
fi
fi
if [[ ! -f "$CONFIG_DIR"/flake.nix ]]; then
echo "Error: $CONFIG_DIR does not contain 'flake.nix'."
exit 1
fi
### CHOOSE CONFIG ###
# If hostname is not provided via options, prompt the user
if [[ -z "$HOSTNAME" ]]; then
# Get list of available hostnames
HOSTNAMES=$(ls "$CONFIG_DIR"/hosts)
echo "$SEPARATOR"
echo "Please choose a hostname to install its NixOS configuration."
echo "$HOSTNAMES"
read -rp "Enter hostname: " HOSTNAME
# Check if hostname is empty
if [[ -z "$HOSTNAME" ]]; then
echo "Error: Hostname cannot be empty."
exit 1
fi
fi
### INSTALLATION ###
# Check if NixOS configuration exists
if [[ -d "$CONFIG_DIR"/hosts/"$HOSTNAME" ]]; then
# Check for existing disko configuration
if [[ -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.nix ]]; then
Run_disko || { echo "Error: disko failed."; exit 1; }
# Check for partitioning script
elif [[ -f "$CONFIG_DIR"/hosts/"$HOSTNAME"/disks.sh ]]; then
Run_script || { echo "Error: Partitioning script failed."; exit 1; }
else
echo "Error: No disko configuration (disks.nix) or partitioning script (disks.sh) found for host '$HOSTNAME'."
exit 1
fi
Check_partitioning
Install || { echo "Error: Installation failed."; exit 1; }
else
echo "Error: Configuration for host '$HOSTNAME' does not exist."
exit 1
fi
================================================
FILE: templates/generic-server/boot.nix
================================================
{
boot = {
loader = {
grub.enable = false;
generic-extlinux-compatible.enable = true;
};
};
}
================================================
FILE: templates/generic-server/default.nix
================================================
{
inputs,
outputs,
...
}:
{
imports = [
./boot.nix
./hardware.nix
./networking.nix
./packages.nix
./services
./users.nix
outputs.nixosModules.common
outputs.nixosModules.nixvim
];
system.stateVersion = "25.11";
}
================================================
FILE: templates/generic-server/disks.sh
================================================
#!/usr/bin/env bash
SSD='/dev/disk/by-id/FIXME'
MNT='/mnt'
SWAP_GB=4
# Helper function to wait for devices
wait_for_device() {
local device=$1
echo "Waiting for device: $device ..."
while [[ ! -e $device ]]; do
sleep 1
done
echo "Device $device is ready."
}
# Function to install a package if it's not already installed
install_if_missing() {
local cmd="$1"
local package="$2"
if ! command -v "$cmd" &> /dev/null; then
echo "$cmd not found, installing $package..."
nix-env -iA "nixos.$package"
fi
}
install_if_missing "sgdisk" "gptfdisk"
install_if_missing "partprobe" "parted"
wait_for_device $SSD
echo "Wiping filesystem on $SSD..."
wipefs -a $SSD
echo "Clearing partition table on $SSD..."
sgdisk --zap-all $SSD
echo "Partitioning $SSD..."
sgdisk -n1:1M:+1G -t1:EF00 -c1:BOOT $SSD
sgdisk -n2:0:+"$SWAP_GB"G -t2:8200 -c2:SWAP $SSD
sgdisk -n3:0:0 -t3:8304 -c3:ROOT $SSD
partprobe -s $SSD
udevadm settle
wait_for_device ${SSD}-part1
wait_for_device ${SSD}-part2
wait_for_device ${SSD}-part3
echo "Formatting partitions..."
mkfs.vfat -F 32 -n BOOT "${SSD}-part1"
mkswap -L SWAP "${SSD}-part2"
mkfs.ext4 -L ROOT "${SSD}-part3"
echo "Mounting partitions..."
mount -o X-mount.mkdir "${SSD}-part3" "$MNT"
mkdir -p "$MNT/boot"
mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1 "${SSD}-part1" "$MNT/boot"
echo "Enabling swap..."
swapon "${SSD}-part2"
echo "Partitioning and setup complete:"
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
================================================
FILE: templates/generic-server/flake.nix
================================================
{
description = "A generic x86_64 server template";
path = ./.;
}
================================================
FILE: templates/generic-server/hardware.nix
================================================
{ pkgs, lib, ... }:
{
boot = {
kernelPackages = pkgs.linuxKernel.packages.linux_rpi4;
initrd.availableKernelModules = [
"xhci_pci"
"usbhid"
"usb_storage"
];
};
fileSystems = {
"/" = {
device = "/dev/disk/by-label/NIXOS_SD";
fsType = "ext4";
options = [ "noatime" ];
};
};
nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
hardware.enableRedistributableFirmware = true;
}
================================================
FILE: templates/generic-server/networking.nix
================================================
{
networking.hostName = "cryodev-pi";
networking.domain = "cryodev.xyz";
}
================================================
FILE: templates/generic-server/packages.nix
================================================
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [ ];
}
================================================
FILE: templates/generic-server/users.nix
================================================
{ inputs, outputs, ... }:
{
imports = [
outputs.nixosModules.normalUsers
../../users/steffen
../../users/cryotherm
];
}
================================================
FILE: templates/generic-server/services/comin.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
imports = [
outputs.nixosModules.comin
];
services.comin = {
enable = true;
remotes = [
{
name = "origin";
url = "https://${constants.services.forgejo.fqdn}/steffen/cryodev-server.git";
branches.main.name = "main";
}
];
};
}
================================================
FILE: templates/generic-server/services/default.nix
================================================
{
imports = [
./nginx.nix
./openssh.nix
./tailscale.nix
./netdata.nix
./comin.nix
];
}
================================================
FILE: templates/generic-server/services/netdata.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
services.netdata = {
enable = true;
config = {
stream = {
enabled = "yes";
destination = "${constants.hosts.cryodev-main.ip}:${toString constants.services.netdata.port}";
"api key" = config.sops.placeholder."netdata/stream/child-uuid";
};
};
};
# Make sure sops is enabled/imported for this host to handle the secret
imports = [ outputs.nixosModules.sops ];
sops = {
defaultSopsFile = ../secrets.yaml;
secrets."netdata/stream/child-uuid" = {
owner = "netdata";
group = "netdata";
};
};
}
================================================
FILE: templates/generic-server/services/nginx.nix
================================================
{
outputs,
...
}:
{
imports = [ outputs.nixosModules.nginx ];
services.nginx = {
enable = true;
forceSSL = true;
openFirewall = true;
};
}
================================================
FILE: templates/generic-server/services/openssh.nix
================================================
{
outputs,
...
}:
{
imports = [
outputs.nixosModules.openssh
];
services.openssh.enable = true;
}
================================================
FILE: templates/generic-server/services/tailscale.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
imports = [
outputs.nixosModules.tailscale
];
services.tailscale = {
enable = true;
# Connect to our own headscale instance
loginServer = "https://${constants.services.headscale.fqdn}";
# Allow SSH access over Tailscale
enableSSH = true;
# Use MagicDNS names
acceptDNS = true;
# Auth key for automated enrollment
authKeyFile = config.sops.secrets."tailscale/auth-key".path;
};
sops.secrets."tailscale/auth-key" = { };
}
================================================
FILE: templates/raspberry-pi/boot.nix
================================================
{
boot = {
loader = {
grub.enable = false;
generic-extlinux-compatible.enable = true;
};
};
}
================================================
FILE: templates/raspberry-pi/default.nix
================================================
{
inputs,
outputs,
...
}:
{
imports = [
./boot.nix
./hardware.nix
./networking.nix
./packages.nix
./services
./users.nix
outputs.nixosModules.common
outputs.nixosModules.nixvim
];
system.stateVersion = "25.11";
}
================================================
FILE: templates/raspberry-pi/disks.sh
================================================
#!/usr/bin/env bash
set -euo pipefail
SSD='/dev/disk/by-id/FIXME'
MNT='/mnt'
SWAP_GB=4
# Helper function to wait for devices
wait_for_device() {
local device=$1
echo "Waiting for device: $device ..."
while [[ ! -e $device ]]; do
sleep 1
done
echo "Device $device is ready."
}
# Function to install a package if it's not already installed
install_if_missing() {
local cmd="$1"
local package="$2"
if ! command -v "$cmd" &> /dev/null; then
echo "$cmd not found, installing $package..."
nix-env -iA "nixos.$package"
fi
}
install_if_missing "sgdisk" "gptfdisk"
install_if_missing "partprobe" "parted"
wait_for_device "$SSD"
echo "Wiping filesystem on $SSD..."
wipefs -a "$SSD"
echo "Clearing partition table on $SSD..."
sgdisk --zap-all "$SSD"
echo "Partitioning $SSD..."
sgdisk -n1:1M:+1G -t1:EF00 -c1:BOOT "$SSD"
sgdisk -n2:0:+"$SWAP_GB"G -t2:8200 -c2:SWAP "$SSD"
sgdisk -n3:0:0 -t3:8304 -c3:ROOT "$SSD"
partprobe -s "$SSD"
udevadm settle
wait_for_device "${SSD}-part1"
wait_for_device "${SSD}-part2"
wait_for_device "${SSD}-part3"
echo "Formatting partitions..."
mkfs.vfat -F 32 -n BOOT "${SSD}-part1"
mkswap -L SWAP "${SSD}-part2"
mkfs.ext4 -L ROOT "${SSD}-part3"
echo "Mounting partitions..."
mount -o X-mount.mkdir "${SSD}-part3" "$MNT"
mkdir -p "$MNT/boot"
mount -t vfat -o fmask=0077,dmask=0077,iocharset=iso8859-1 "${SSD}-part1" "$MNT/boot"
echo "Enabling swap..."
swapon "${SSD}-part2"
echo "Partitioning and setup complete:"
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
================================================
FILE: templates/raspberry-pi/flake.nix
================================================
{
description = "A Raspberry Pi 4 client template";
path = ./.;
}
================================================
FILE: templates/raspberry-pi/hardware.nix
================================================
{ pkgs, lib, ... }:
{
boot = {
kernelPackages = pkgs.linuxKernel.packages.linux_rpi4;
initrd.availableKernelModules = [
"xhci_pci"
"usbhid"
"usb_storage"
];
};
  fileSystems = {
    "/" = {
      # NIXOS_SD is the label used by the stock NixOS aarch64 SD image;
      # after migrating to the SSD partitioned by disks.sh, switch to the
      # ROOT/BOOT labels that script creates.
      device = "/dev/disk/by-label/NIXOS_SD";
      fsType = "ext4";
      options = [ "noatime" ];
    };
  };
nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
hardware.enableRedistributableFirmware = true;
}
================================================
FILE: templates/raspberry-pi/networking.nix
================================================
{
networking.hostName = "cryodev-pi";
networking.domain = "cryodev.xyz";
}
================================================
FILE: templates/raspberry-pi/packages.nix
================================================
{ pkgs, ... }:
{
environment.systemPackages = with pkgs; [ ];
}
================================================
FILE: templates/raspberry-pi/users.nix
================================================
{ inputs, outputs, ... }:
{
imports = [
outputs.nixosModules.normalUsers
../../users/steffen
../../users/cryotherm
];
}
================================================
FILE: templates/raspberry-pi/services/comin.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
imports = [
outputs.nixosModules.comin
];
services.comin = {
enable = true;
remotes = [
{
name = "origin";
url = "https://${constants.services.forgejo.fqdn}/steffen/cryodev-server.git";
branches.main.name = "main";
}
];
};
}
================================================
FILE: templates/raspberry-pi/services/default.nix
================================================
{
imports = [
./nginx.nix
./openssh.nix
./tailscale.nix
./netdata.nix
./comin.nix
];
}
================================================
FILE: templates/raspberry-pi/services/netdata.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
  services.netdata = {
    enable = true;
    # Streaming settings belong in stream.conf, not netdata.conf; render it
    # through a sops template so the secret placeholder is actually substituted.
    configDir."stream.conf" = config.sops.templates."netdata-stream.conf".path;
  };
  # Make sure sops is enabled/imported for this host to handle the secret
  imports = [ outputs.nixosModules.sops ];
  sops = {
    defaultSopsFile = ../secrets.yaml;
    secrets."netdata/stream/child-uuid" = { };
    # sops.placeholder values are only substituted inside sops.templates,
    # so the stream config (containing the API key) is rendered here.
    templates."netdata-stream.conf" = {
      owner = "netdata";
      group = "netdata";
      content = ''
        [stream]
          enabled = yes
          destination = ${constants.hosts.cryodev-main.ip}:${toString constants.services.netdata.port}
          api key = ${config.sops.placeholder."netdata/stream/child-uuid"}
      '';
    };
  };
}
================================================
FILE: templates/raspberry-pi/services/nginx.nix
================================================
{
outputs,
...
}:
{
imports = [ outputs.nixosModules.nginx ];
services.nginx = {
enable = true;
forceSSL = true;
openFirewall = true;
};
}
================================================
FILE: templates/raspberry-pi/services/openssh.nix
================================================
{
outputs,
...
}:
{
imports = [
outputs.nixosModules.openssh
];
services.openssh.enable = true;
}
================================================
FILE: templates/raspberry-pi/services/tailscale.nix
================================================
{
config,
pkgs,
outputs,
constants,
...
}:
{
imports = [
outputs.nixosModules.tailscale
];
services.tailscale = {
enable = true;
# Connect to our own headscale instance
loginServer = "https://${constants.services.headscale.fqdn}";
# Allow SSH access over Tailscale
enableSSH = true;
# Use MagicDNS names
acceptDNS = true;
# Auth key for automated enrollment
authKeyFile = config.sops.secrets."tailscale/auth-key".path;
};
sops.secrets."tailscale/auth-key" = { };
}
================================================
FILE: users/cryotherm/default.nix
================================================
{
normalUsers.cryotherm = {
extraGroups = [ ];
# No sshKeyFiles, so password login only (if allowed) or local access
sshKeyFiles = [ ];
};
}
================================================
FILE: users/steffen/default.nix
================================================
{ outputs, ... }:
{
normalUsers.steffen = {
extraGroups = [
"wheel"
];
sshKeyFiles = [ ./pubkeys/X670E.pub ];
};
}
================================================
FILE: users/steffen/pubkeys/X670E.pub
================================================
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDKNTpsF9Z313gWHiHi4SvjeXI4Mh80mtq0bR0AjsZr/SnPsXEiM8/ODbQNJ806qHLFSA4uA4vaevdZIJkpDqRIQviW7zHGp/weRh2+2ynH8RyFqJvsWIqWn8G5wXPYcRZ6eFjcqKraAQC46ITER4+NPgdC6Cr+dsHWyIroBep4m3EGhSLYNRaMYoKZ5aqD2jJLBolokVfseF06Y7tQ3QSwUioXgiodBdZ9hgXc/5AJdsXSxJMHmRArqbHwbWI0fhwkX+0jiUpOMXMGsJZx5G20X70mQpJu+UnQsGcw+ylQw6ZYtFmzNcYmOS//91DTzraHprnrENyb+pYV2UUZhKxjdkexpSBkkPoVEzMcw9+LCg4e/jsZ+urlRhdTPWW0/AaWJx3UJc1pHHu5UpIvQKfMdt9dZbgG7oYYE1JeCoTvtQKiBcdc54cmJuvwshaAkfN92tYGvj/L1Jeb06M34dycdCXGDGMIofMsZOsnDcHuY1CT82NlRjXmatAUOaO0rCbVNPluNmu4gmWhclQmhoUEmojBGaIXrcRuxrIJYZpWubQdBUCZiJFBJzEb2qnT0nFSe0Gu0tPOYdD/jcUVgYPRWggxQV6hssSlgERTJdzC5PhBnSe8Xi8W/rMgZA8+YBIKBJpJjF5HZTJ67EBZmNS3HWaZNIUmRXcgsONr41RCrw== steffen@X670E
================================================
FILE: .forgejo/workflows/build-hosts.yml
================================================
name: Build hosts
on:
pull_request:
branches:
- main
jobs:
build-hosts:
runs-on: docker
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install Nix
uses: cachix/install-nix-action@v27
with:
nix_path: nixpkgs=channel:nixos-unstable
- name: Build cryodev-main
run: nix build .#nixosConfigurations.cryodev-main.config.system.build.toplevel --impure
- name: Build cryodev-pi
run: nix build .#nixosConfigurations.cryodev-pi.config.system.build.toplevel --impure
================================================
FILE: .forgejo/workflows/deploy-main.yml
================================================
name: Deploy cryodev-main
on:
push:
branches:
- main
jobs:
deploy-cryodev-main:
runs-on: docker
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install Nix
uses: cachix/install-nix-action@v27
with:
nix_path: nixpkgs=channel:nixos-unstable
- name: Set up SSH
env:
DEPLOY_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
run: |
mkdir -p ~/.ssh
echo "$DEPLOY_KEY" > ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
          # Pin the server's actual host key here; ssh-keyscan is convenient
          # but trusts whatever the network returns on first connect.
          ssh-keyscan -H cryodev.xyz >> ~/.ssh/known_hosts
- name: Deploy with deploy-rs
        run: |
          # -s (--skip-checks): skip the pre-deploy flake checks
          nix run github:serokell/deploy-rs -- -s .#cryodev-main
================================================
FILE: .forgejo/workflows/flake-check.yml
================================================
name: Flake check
on: [pull_request]
jobs:
flake-check:
runs-on: docker
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install Nix
uses: cachix/install-nix-action@v27
with:
nix_path: nixpkgs=channel:nixos-unstable
- name: Run flake check
run: nix flake check --impure