Introduction
This flake manages all Nix-based configurations — NixOS, nix-darwin, and Home Manager — from a single repository.
What’s inside
| Directory | Purpose |
|---|---|
| systems/ | NixOS and nix-darwin host configurations |
| homes/ | Home Manager user environments |
| modules/ | Reusable NixOS, darwin, and home-manager modules |
| packages/ | Custom packages and overrides |
| shells/ | Dev shells (Go, Python, etc.) |
| overlays/ | Nixpkgs overlays |
| lib/ | Helper functions |
Key technologies
- Stylix — system-wide theming
- Disko — declarative disk partitioning
- Colmena — stateless deployment
- Devenv — reproducible dev environments
- SOPS-nix — secrets management
- Lanzaboote — Secure Boot
- nixos-facter — hardware reports
Getting Started
Clone
git clone https://github.com/arunoruto/flake ~/.config/flake
If you clone elsewhere, set the FLAKE environment variable:
export FLAKE=/path/to/flake
Commands throughout this guide assume FLAKE points to your flake directory.
NixOS
First-time install:
sudo nixos-rebuild switch --flake ~/.config/flake#<device-name> --accept-flake-config
After initial setup, use nh for convenience (see Daily Usage):
nh os switch ~/.config/flake#<device-name>
Darwin (macOS)
sudo nix run nix-darwin/nix-darwin-<version>#darwin-rebuild -- switch
Replace <version> with the nix-darwin release (e.g., 25.11).
Home Manager
On NixOS
home-manager switch --flake ~/.config/flake#<username> --accept-flake-config
Standalone (non-NixOS)
nix --experimental-features 'nix-command flakes' --accept-flake-config run nixpkgs#home-manager -- switch --flake ~/.config/flake#<username>
Shells like zsh treat `#` specially, so quote the flake reference:
--flake './#<username>'
Directory layout
Your host config lives in systems/<arch>/<host>/. Home Manager profiles are in homes/<username>/. Add new ones following the existing patterns.
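For instance, a new NixOS host usually starts from a small default.nix in its own directory. An illustrative sketch, with file names following the existing hosts:
```nix
# systems/x86_64-linux/<host>/default.nix (sketch)
{ ... }:
{
  imports = [
    ./hardware-configuration.nix
    # ./disk.nix            # only if the host is partitioned with disko
  ];

  networking.hostName = "<host>";
}
```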
Building ISOs for NixOS Hosts
This flake supports building bootable ISOs that embed the flake source, so nixos-install --flake works against /etc/nixos/flake without network access to the flake repo.
ISOs are generated from a single generic module (systems/iso/installer.nix) parameterized by hostname.
Quick start
# Build ISO + .sha256 checksum sidecar (Ventoy-ready)
nix build .#iso-<hostname>
# Copy to USB (ISO + sidecar file)
cp -r result/iso/* /mnt/ventoy/
# Or write directly with dd
sudo dd if=result/iso/*.iso of=/dev/sdX bs=4M status=progress conv=fsync
Adding a new ISO target
Append the hostname to the `isoHosts` list in `flake.nix`:
```nix
isoHosts = [ "shinji" "kenpachi" "zangetsu" ];
```
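How those entries become flake outputs is flake-specific; one plausible sketch of the pattern (the helper names here are assumptions, not the flake's actual code):
```nix
# Hypothetical sketch: generating iso-<hostname> configurations from isoHosts
nixosConfigurations = builtins.listToAttrs (
  map (hostname: {
    name = "iso-${hostname}";
    value = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      specialArgs = { inherit hostname self; };
      modules = [ ./systems/iso/installer.nix ];
    };
  }) isoHosts
);
```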
The generic installer.nix module:
- Sets `isoImage.edition = hostname` (distinguishable filenames like `nixos-shinji-25.11-x86_64-linux.iso`)
- Embeds the flake source at `/nixos-flake` → copied to `/etc/nixos/flake` at boot
- Includes `disko` in the live Nix store
- Prints MOTD instructions referencing the hostname
- Provides an `autoinstall` systemd oneshot (triggers when `autoinstall` is in `/proc/cmdline`)
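In NixOS option terms, the first two points boil down to something like this (a simplified sketch of the module, not its full contents):
```nix
# systems/iso/installer.nix (simplified sketch)
{ hostname, self, ... }:
{
  isoImage.edition = hostname;
  isoImage.contents = [
    { source = self; target = "/nixos-flake"; }
  ];
}
```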
Install workflow
Manual (default)
- Boot from USB
- Login (root, no password)
- Follow the on-screen instructions:
sudo disko --mode disko /etc/nixos/flake#<hostname>
sudo nixos-install --flake /etc/nixos/flake#<hostname> --root /mnt
sudo reboot
Autoinstall
Add autoinstall to the kernel command line at boot:
- In the GRUB menu, highlight the entry and press `e`
- Append `autoinstall` to the `linux` line
- Press `Ctrl+x` or `F10` to boot
The system will automatically partition the disk (via disko), install NixOS, and reboot.
Example: shinji ISO
# Build ISO only
nix build .#nixosConfigurations.iso-shinji.config.system.build.isoImage
# Build ISO + checksums (recommended)
nix build .#nixosConfigurations.iso-shinji.config.system.build.isoChecksums
# Write to USB
sudo dd if=result/iso/*.iso of=/dev/sdX bs=4M status=progress conv=fsync
For Ventoy, copy result/iso/* directly — the .sha256 file will be auto-detected.
Prerequisites for a host to be ISO-installable
The target host must:
- Use disko for disk partitioning (`disk.nix` importing `inputs.disko.nixosModules.disko`)
- Have `fileSystems` in `hardware-configuration.nix` commented out — disko generates them declaratively
- Include the block device kernel module in `boot.initrd.availableKernelModules` (e.g. `"nvme"`, `"ahci"`, `"sd_mod"`)
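Put together, the relevant hardware-configuration.nix excerpt for an NVMe host looks roughly like this:
```nix
# hardware-configuration.nix (illustrative excerpt)
{
  boot.initrd.availableKernelModules = [ "nvme" "ahci" "sd_mod" ];

  # fileSystems."/" = { ... };      # left commented out; disko declares these
  # fileSystems."/boot" = { ... };
}
```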
How it works
systems/iso/installer.nix receives hostname via specialArgs and uses it to:
- Set the ISO edition/volume label
- Populate MOTD instructions and autoinstall commands dynamically
- The flake source (`self`) is embedded as a raw copy on the ISO at `/nixos-flake`
- At boot, `postBootCommands` copies it to `/etc/nixos/flake`, where `nixos-install --flake` can find it
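The copy step itself is essentially a one-liner in the module (a sketch; the exact source path on the live medium is an assumption):
```nix
# systems/iso/installer.nix (excerpt, sketch)
boot.postBootCommands = ''
  mkdir -p /etc/nixos
  cp -r /iso/nixos-flake /etc/nixos/flake
'';
```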
Verification
Build the ISO with an accompanying .sha256 checksum file for verification:
nix build .#nixosConfigurations.iso-<hostname>.config.system.build.isoChecksums
Output:
result/
├── iso/
│ ├── nixos-<hostname>-...iso
│ └── nixos-<hostname>-...iso.sha256
└── SHA256SUMS
The .sha256 file uses sha256sum format (hash filename). Copy result/iso/* to your USB stick.
Ventoy
Ventoy auto-detects .sha256 files placed next to the ISO. Select the ISO in the Ventoy boot menu, press m, choose SHA256 — it calculates and compares against the file, confirming the copy was not corrupted.
Manual verification
cd result/
sha256sum -c SHA256SUMS
Size considerations
- The ISO includes the flake source closure, the live Nix store (squashfs), and disko
- Typical size: ~1.4 GB depending on extra `storeContents`
- Use `isoImage.squashfsCompression = "zstd -Xcompression-level 19"` (the default) for best compression
- Add `isoImage.compressImage = true` for an extra `.zst` layer (slower build, smaller file)
Generic minimal ISO
The flake also has a barebones ISO at systems/iso/default.nix that imports the standard installation-cd-minimal with helix added. Not registered as a nixosConfiguration — build it directly if needed.
Storage
Filesystems at a glance
| FS | Use in this flake | Best for |
|---|---|---|
| ext4 | 11 hosts — root + data | Simplicity, zero maintenance |
| btrfs | 3 hosts — root (shinji, yhwach, kyuubi) | Compression, snapshots |
| zfs | 2 hosts — data pools only (sado, kuchiki) | Large storage, checksums |
| vfat | Every host with /boot (EFI) | EFI system partition |
Host → filesystem mapping
| Host | Root | Data | Managed by |
|---|---|---|---|
| shinji | btrfs (@root + @nix) | — | disko |
| yhwach | btrfs (subvol=@) | — | hardware-config |
| kyuubi | btrfs (subvol=@) | — | hardware-config |
| sado | ext4 | zfs (/mnt/flash) | hardware-config |
| kuchiki | ext4 | zfs (/mnt/storage) | hardware-config |
| kenpachi | ext4 (LVM) | — | disko |
| aizen | ext4 (LVM) | — | disko |
| 7 others | ext4 | — | hardware-config |
Choosing a filesystem for a new host
- ext4 — the default. Simple, proven, zero configuration. Used on 11 of 14 hosts.
- btrfs — when you want transparent zstd compression (saves ~25% on `/nix/store`) or snapshot support. Use the `@root` + `@nix` subvolume layout to keep snapshots lightweight. See btrfs.md for details.
- zfs — only for dedicated data pools that benefit from checksums, dedup, or RAID-Z. Do not use it for root: the out-of-tree kernel module frequently breaks on kernel updates.
Disko
Disko provides declarative disk partitioning. Instead of manually running fdisk, mkfs, and recording UUIDs in hardware-configuration.nix, you define everything in a disk.nix:
- Partition layout (BIOS boot, ESP, root)
- Filesystem type (ext4, btrfs, zfs)
- Subvolume layout for btrfs
- LVM volumes if needed
Disko is imported per-host via inputs.disko.nixosModules.disko in the host’s disk.nix. Currently 3 of 14 hosts use it (shinji, kenpachi, aizen). The rest use a manually-generated hardware-configuration.nix with explicit fileSystems entries.
When disko manages the filesystems, the fileSystems entries in hardware-configuration.nix should be commented out — disko generates them declaratively at build time.
Example: shinji
# systems/x86_64-linux/shinji/default.nix
{ inputs, ... }: {
imports = [
./configuration.nix
./disk.nix # disko: btrfs on NVMe with @root + @nix subvolumes
./hardware-configuration.nix # fileSystems commented out, disko handles them
];
}
See btrfs.md for a complete disko + btrfs walkthrough.
Btrfs on NixOS
Why btrfs
| Feature | Benefit on NixOS |
|---|---|
| zstd compression | 20-30% space saved on /nix/store transparently |
| Subvolumes | Separate / and /nix — snapshot root without bloating snapshots with store data |
| Snapshots | Roll back a bad nixos-rebuild in seconds |
| Native kernel | No out-of-tree module — works on every kernel update (unlike ZFS) |
Compared to ext4: same simplicity, more features. Compared to ZFS: fewer features but zero maintenance overhead.
Layout used in this flake
Two subvolumes on a single btrfs partition:
/dev/<disk> (btrfs)
├── @root → / compress=zstd
└── @nix → /nix compress=zstd,noatime
- `@root`: the OS, your home, `/etc` — everything except the Nix store.
- `@nix`: the Nix store only. Mounted with `noatime` to reduce metadata writes during package operations.
- Why separate? Snapshots of `/` don’t capture `/nix/store`, keeping them small, and restoring `/` from a snapshot doesn’t touch the store.
Disko configuration
The flake uses disko for declarative partitioning. Here’s a minimal btrfs layout:
{ inputs, lib, ... }:
{
imports = [ inputs.disko.nixosModules.disko ];
disko.devices = {
disk.main = {
device = lib.mkDefault "/dev/nvme0n1";
type = "disk";
content = {
type = "gpt";
partitions = {
boot = {
size = "1M";
type = "EF02";
};
esp = {
size = "512M";
type = "EF00";
content = {
type = "filesystem";
format = "vfat";
mountpoint = "/boot";
};
};
root = {
size = "100%";
content = {
type = "btrfs";
extraArgs = [ "-f" ];
subvolumes = {
"@root" = {
mountpoint = "/";
mountOptions = [ "compress=zstd" ];
};
"@nix" = {
mountpoint = "/nix";
mountOptions = [
"compress=zstd"
"noatime"
];
};
};
};
};
};
};
};
};
}
See systems/x86_64-linux/shinji/disk.nix for a real example.
Maintenance
Check filesystem usage
btrfs filesystem usage /
btrfs filesystem df /
Scrub (data integrity check)
Run occasionally to detect bit rot or disk errors:
sudo btrfs scrub start /
sudo btrfs scrub status /
Balance
Only needed when adding/removing drives or after heavy usage patterns. Not routine:
sudo btrfs balance start -dusage=50 /
Compression stats
See how much space compression saves:
sudo compsize /nix/store
Snapshots
Manual snapshot before a risky rebuild
sudo btrfs subvolume snapshot -r / /snapshots/root-$(date -I)
Roll back to a snapshot
# Boot from a live USB, mount the btrfs volume
sudo mount -o subvol=/ /dev/<disk> /mnt
sudo mv /mnt/@root /mnt/@root-broken
sudo btrfs subvolume snapshot /mnt/snapshots/root-2026-01-01 /mnt/@root
reboot
Delete old snapshots
sudo btrfs subvolume delete /snapshots/root-2025-12-01
Recovery
Mount subvolumes from a live USB
sudo mount /dev/<device> /mnt
# The default subvolume mounts automatically
# Access subvolumes:
ls /mnt/@root
ls /mnt/@nix
Mount a specific subvolume for chroot repair
sudo mount -o subvol=@root,compress=zstd /dev/<device> /mnt
sudo mount -o subvol=@nix,compress=zstd /dev/<device> /mnt/nix
sudo mount /dev/<esp> /mnt/boot
sudo nixos-enter
Filesystem creation (manual, without disko)
If you need to set up btrfs manually instead of using disko:
mkfs.btrfs -f /dev/<partition>
mount /dev/<partition> /mnt
btrfs subvolume create /mnt/@root
btrfs subvolume create /mnt/@nix
umount /mnt
mount -o subvol=@root,compress=zstd /dev/<partition> /mnt
mkdir -p /mnt/nix
mount -o subvol=@nix,compress=zstd,noatime /dev/<partition> /mnt/nix
ZFS on NixOS
Why ZFS for data pools
| Feature | Benefit |
|---|---|
| Checksums | Detects and repairs bit rot on every read |
| Snapshots | Instant, zero-overhead point-in-time copies |
| Dataset properties | Compression, recordsize, etc. per dataset — no chattr hacks |
| RAID-Z | Redundancy without a hardware RAID controller |
| Send/Receive | Efficient backup and replication |
Why NOT for root
- Out-of-tree kernel module — ZFS frequently breaks when the kernel updates. On NixOS with `boot.kernelPackages = linuxPackages_latest`, this is a recurring headache.
- Boot complexity — requires ZFSBootMenu or a separate `/boot` plus a manual pool import in the initrd.
- This flake’s convention — ext4 or btrfs for root, ZFS exclusively for data pools (`/mnt/flash`, `/mnt/storage`).
Pool structure in this flake
sado — flash pool
flash
├── flash/appdata → /mnt/flash/appdata (immich, paperless, komga configs)
├── flash/photos → /mnt/flash/photos
├── flash/documents → /mnt/flash/documents
└── flash/books → /mnt/flash/books
Source: systems/x86_64-linux/sado/hardware-configuration.nix
kuchiki — storage pool
storage
├── storage/appdata → /mnt/storage/appdata (media service configs)
├── storage/downloads → /mnt/storage/downloads
└── storage/media → /mnt/storage/media
Source: systems/x86_64-linux/kuchiki/hardware-configuration.nix
Both hosts use hosts.zfs.enable = true (shared module at modules/nixos/system/zfs.nix) and systemd.services.zfs-mount.enable = false (mount via NixOS fileSystems declarations instead).
Creating a pool
Single disk — no redundancy
zpool create tank /dev/sda
Mirror — survives 1 disk failure
zpool create tank mirror /dev/sda /dev/sdb
Good for: root filesystem (if you must), appdata where uptime matters.
RAID-Z — 1 disk parity, minimum 3 disks
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc
Good for: media storage where capacity > performance.
RAID-Z2 — 2 disk parity, minimum 4 disks
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
Good for: large arrays where rebuild time is a concern.
Ashift — always set for modern drives
4K sector drives (all SSDs and most HDDs since ~2011) need ashift=12:
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
Without it: write amplification and performance degradation on 4K-native drives.
Adding to NixOS config
After creating pools/datasets imperatively, declare them in hardware-configuration.nix so NixOS mounts them at boot:
fileSystems."/mnt/tank" = {
device = "tank";
fsType = "zfs";
};
fileSystems."/mnt/tank/media" = {
device = "tank/media";
fsType = "zfs";
};
Datasets
Creating
zfs create tank/appdata
zfs create tank/media
# Custom mountpoint
zfs create -o mountpoint=/custom/path tank/custom
Recommended properties
| Property | Value | Why |
|---|---|---|
| compression | lz4 or zstd | Free space savings, negligible CPU cost. lz4 for speed, zstd for higher compression. |
| recordsize | 1M (media) / 128K (general) / 16K (databases) | Match workload I/O patterns. |
| atime | off | Eliminates metadata writes on every read access. |
| xattr | sa | Store extended attributes in dnodes instead of hidden files — big speedup. |
| acltype | posix | POSIX ACLs; use nfsv4 instead if you need NFSv4-style ACLs. |
| aclinherit | passthrough | Inherit ACLs from the parent if using ACLs. |
| dedup | off | Keep it off — RAM consumption is enormous. |
Media dataset — large sequential files
zfs set compression=zstd recordsize=1M atime=off tank/media
Appdata dataset — small random I/O (databases, configs)
zfs set compression=lz4 recordsize=128K atime=off xattr=sa tank/appdata
Downloads dataset — mixed, discard-friendly
zfs set compression=zstd recordsize=1M atime=off tank/downloads
Snapshots
Manual
zfs snapshot tank/media@backup-$(date -I)
zfs list -t snapshot
Automatic (zfs-auto-snapshot)
services.zfs.autoSnapshot = {
enable = true;
frequent = 4; # keep 4 quarter-hourly snapshots
hourly = 24;
daily = 7;
weekly = 4;
monthly = 6;
};
Rollback
zfs rollback tank/media@backup-2026-01-01
# For older snapshots (destroys intermediate snapshots):
zfs rollback -r tank/media@backup-2025-12-01
Access files from a snapshot without rollback
Snapshots are mounted read-only under /.zfs/snapshot/<name>/:
ls /mnt/tank/media/.zfs/snapshot/backup-2026-01-01/
cp /mnt/tank/media/.zfs/snapshot/backup-2026-01-01/deleted-file.txt .
Maintenance
Scrubbing — data integrity check
zpool scrub tank
zpool status tank # watch progress
This flake enables auto-scrub in modules/nixos/system/zfs.nix:
services.zfs.autoScrub = {
enable = true;
interval = "*-*-1,15 02:30"; # 1st and 15th of every month at 2:30 AM
};
Pool and dataset status
zpool status # health, errors, scrub progress
zpool list # space per pool
zfs list # space per dataset
zfs list -t snapshot # list all snapshots
zfs get all tank/appdata # all properties of a dataset
Replacing a failed disk
# After physically swapping the disk:
zpool replace tank /dev/sdb /dev/sdc
zpool status tank # wait for resilver
Destroying a pool
zpool destroy tank
# If pool claims to be busy:
zpool destroy -f tank
Adding ZFS to a new host
- Enable the shared ZFS module:
hosts.zfs.enable = true;
- Set a host ID (required — pick a random 8-character hex string):
networking.hostId = "a1b2c3d4";
- Disable the automatic zfs-mount service (NixOS handles mounting via `fileSystems`):
systemd.services.zfs-mount.enable = false;
- Declare datasets in `hardware-configuration.nix`:
fileSystems."/mnt/tank" = {
device = "tank";
fsType = "zfs";
};
fileSystems."/mnt/tank/appdata" = {
device = "tank/appdata";
fsType = "zfs";
};
See systems/x86_64-linux/sado/ and systems/x86_64-linux/kuchiki/ for working examples.
Daily Usage
Nix Helper (nh)
nh is a convenience wrapper around common Nix operations.
With FLAKE set in your environment:
nh os switch # Update NixOS
nh home switch # Update Home Manager
nh clean all # Garbage collection
Without FLAKE:
nh os switch ~/.config/flake#<device-name>
nh home switch ~/.config/flake#<username>
Set FLAKE via environment.sessionVariables.FLAKE in your system config.
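A minimal example of that, somewhere in the system configuration:
```nix
environment.sessionVariables.FLAKE = "/home/<user>/.config/flake";
```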
Clean-up
nix-collect-garbage --delete-older-than 30d
nh clean all
Git Fetchers
When adding a package from a git source, you need the commit and the Nix hash.
nix run nixpkgs#nix-prefetch-git https://github.com/EliverLara/candy-icons
Alternatively, leave the hash field empty in your derivation, attempt a build, and copy the hash from the error message.
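For example, a sketch of a pinned git source in a package definition (the rev is a placeholder):
```nix
# Sketch inside a package.nix
src = fetchFromGitHub {
  owner = "EliverLara";
  repo = "candy-icons";
  rev = "<commit>";
  hash = lib.fakeHash; # placeholder: the first build fails with a hash mismatch; paste the real hash from the error
};
```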
Dev Shells
Available shells (see shells/):
nix develop .#go
nix develop .#python
nix develop .#nix # includes statix, deadnix, nixfmt
Workflows & Automation
GitHub Workflow Token
If you edit the CI workflows, your token needs the workflow scope:
gh auth status # Check current scopes
gh auth login --scopes workflow
Facter
Generate a hardware report for a new system:
sudo nix run \
--option experimental-features "nix-command flakes" \
--option extra-substituters https://numtide.cachix.org \
--option extra-trusted-public-keys numtide.cachix.org-1:2ps1kLBUWjxIneOy1Ik6cQjb41X0iXVXeHigGmycPPE= \
github:numtide/nixos-facter -- -o facter.json
Place the output in systems/<arch>/<host>/facter.json.
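If the host is wired through nixos-facter-modules, the report is then typically referenced from the host configuration. A sketch, assuming that module is imported:
```nix
# systems/<arch>/<host>/default.nix (sketch)
{ ... }:
{
  facter.reportPath = ./facter.json;
}
```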
Networking
Networking in this flake is split into small, focused modules under modules/nixos/services/network/.
Use this section for host ingress, tunnels, and service exposure patterns.
Cloudflared
This flake uses Cloudflare Tunnel (cloudflared) to expose selected services without opening direct inbound ports on the origin hosts.
Module Location
- Base wrapper module: `modules/nixos/services/network/cloudflared.nix`
- Cloudflared service module: `modules/nixos/services/network/cloudflared/new.nix`
One-Time Tunnel Bootstrap
Run these commands on the host where you manage tunnels:
cloudflared login
cloudflared tunnel create <name>
sudo mkdir -p /etc/cloudflared
sudo cp /home/mirza/.cloudflared/cert.pem /etc/cloudflared/cert.pem
Notes:
- `cloudflared login` creates `~/.cloudflared/cert.pem`.
- `cloudflared tunnel create <name>` creates the tunnel credentials JSON under `~/.cloudflared/`.
- The cert file is needed for declarative tunnel management workflows.
Secrets Wiring
Store tunnel credentials in secrets/secrets.yaml under:
config.cloudflared.<host>
Example existing keys:
- `config.cloudflared.sado`
- `config.cloudflared.kuchiki`
- `config.cloudflared.madara`
For a new ingress host (for example aizen), add:
config.cloudflared.aizen
The module reads from config/cloudflared/${config.networking.hostName}.
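Wiring that up with sops-nix might look roughly like this (a sketch; it assumes `secrets/secrets.yaml` is already the host's default sops file):
```nix
# Sketch: expose the tunnel credentials under the path the cloudflared module reads
sops.secrets."config/cloudflared/${config.networking.hostName}" = { };
```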
Host Configuration Pattern
Enable cloudflared on the host and define ordered ingress rules:
services.cloudflared = {
enable = true;
defaultDomain = "arnaut.me";
tunnels."${config.networking.hostName}".ingress = [
{
hostname = "arr.${config.services.cloudflared.defaultDomain}";
path = "/radarr.*";
service = "http://kuchiki.${config.services.tailscale.tailnet}.ts.net:${builtins.toString config.services.radarr.settings.server.port}";
}
{
hostname = "arr.${config.services.cloudflared.defaultDomain}";
path = "/sonarr.*";
service = "http://kuchiki.${config.services.tailscale.tailnet}.ts.net:${builtins.toString config.services.sonarr.settings.server.port}";
}
{
hostname = "arr.${config.services.cloudflared.defaultDomain}";
path = "/lidarr.*";
service = "http://sado.${config.services.tailscale.tailnet}.ts.net:${builtins.toString config.services.lidarr.settings.server.port}";
}
{
hostname = "arr.${config.services.cloudflared.defaultDomain}";
path = "/prowlarr.*";
service = "http://shinji.${config.services.tailscale.tailnet}.ts.net:${builtins.toString config.services.prowlarr.settings.server.port}";
}
];
};
Important details:
- Rules are matched in order.
- `path` supports regex matching (for example `/radarr.*`).
- A default catch-all service (`http_status:404`) is configured by the module defaults.
Deploy and Verify
After secrets + host config are in place:
sudo nixos-rebuild switch --flake ~/.config/flake#<host>
Check service status:
systemctl status cloudflared-tunnel-<host>
Verify external routes:
- https://arr.arnaut.me/radarr
- https://arr.arnaut.me/sonarr
- https://arr.arnaut.me/lidarr
- https://arr.arnaut.me/prowlarr
Troubleshooting
- Missing or wrong secret path: verify `config.cloudflared.<host>` exists in `secrets/secrets.yaml`.
- Service unreachable: verify Tailnet DNS/host reachability from the ingress host.
- Path not matching: confirm the `hostname` and regex `path` values in the ingress rules.
- Auth behavior unexpected: check the Cloudflare Access app/policy scope for the hostname and path.
Tips & Resources
Tutorials & Guides
- NixOS & Flakes Book — comprehensive intro
- A Gentle Introduction to Nix Flakes — flake anatomy
- Why you don’t need flake-utils — the case for plain Nix
Vimjoyer (YouTube)
- Nix explained from the ground up
- NixOS: Everything Everywhere All At Once
- Ultimate NixOS Guide
- Modularize NixOS and Home Manager
- Nixvim: Neovim Distro Powered By Nix
- Is NixOS The Best Gaming Distro
Other
Nix Language
- explainix — hover over Nix syntax to see what it means
- inherit keyword
High-Level Libraries
- flake-utils
- flake-parts
- snowfall lib — this flake’s directory structure is inspired by snowfall
Updating Custom Packages
Use nix-update:
nix-update legacyPackages.x86_64-linux.<pkg> --flake --override-filename packages/top-level/<pkg>/package.nix
ZFS
Maintainer Notes
TODO
- Integrate disko for each host
- Manage host types via Colmena host tags for finer-grained control
Fixes
Thick black borders in GNOME apps
Set GSK_RENDERER=gl. Tracked at GTK#6890.
Credits
- use-the-fork — help moving from standalone Home Manager to module-based setup
- u/paulgdp — advice on detecting `nixosConfig` in module context
- olmokramer — example using `lib.genAttrs` for host generation