
dunkirk.sh

Kieran's opinionated NixOS infrastructure — declarative server config, self-hosted services, and automated deployments.

Layout

~/dots
├── .github/workflows  # CI/CD (deploy-rs + per-service reusable workflow)
├── dots               # config files symlinked by home-manager
│   └── wallpapers
├── machines
│   ├── atalanta       # macOS M4 (nix-darwin)
│   ├── ember          # dell r210 server (basement)
│   ├── moonlark       # framework 13 (dead)
│   ├── nest           # shared tilde server (home-manager only)
│   ├── prattle        # oracle cloud x86_64
│   ├── tacyon         # rpi 5
│   └── terebithia     # oracle cloud aarch64 (main server)
├── modules
│   ├── lib
│   │   └── mkService.nix  # service factory (see Deployment section)
│   ├── home           # home-manager modules
│   │   ├── aesthetics # theming and wallpapers
│   │   ├── apps       # app configs (ghostty, helix, git, ssh, etc.)
│   │   ├── system     # shell, environment
│   │   └── wm/hyprland
│   └── nixos          # nixos modules
│       ├── apps       # system-level app configs
│       ├── services   # self-hosted services (mkService-based + custom)
│       │   ├── restic # backup system with CLI
│       │   └── bore   # tunnel proxy
│       └── system     # pam, wifi
├── packages           # custom nix packages
└── secrets            # agenix-encrypted secrets

Machines

| Name | Platform | Role |
|---|---|---|
| terebithia | Oracle Cloud aarch64 | Main server — runs all services |
| prattle | Oracle Cloud x86_64 | Secondary server |
| atalanta | macOS M4 | Development laptop (nix-darwin) |
| ember | Dell R210 | Basement server |
| tacyon | Raspberry Pi 5 | Edge device |
| nest | Shared tilde | Home-manager only |

Installation

Warning: This configuration will not work without changing the secrets since they are encrypted with agenix.

macOS with nix-darwin

  1. Install Nix:
curl -fsSL https://install.determinate.systems/nix | sh -s -- install
  2. Clone and apply:
git clone git@github.com:taciturnaxolotl/dots.git
cd dots
darwin-rebuild switch --flake .#atalanta

Home Manager

Install Nix, copy SSH keys, then:

curl -fsSL https://install.determinate.systems/nix | sh -s -- install --determinate
git clone git@github.com:taciturnaxolotl/dots.git
cd dots
nix-shell -p home-manager
home-manager switch --flake .#nest

Set up atuin for shell history sync:

atuin login
atuin import

NixOS

This only works for prattle and terebithia, which have disko configs.

nix run github:nix-community/nixos-anywhere -- \
  --flake .#prattle \
  --generate-hardware-config nixos-facter ./machines/prattle/facter.json \
  --build-on-remote \
  root@<ip-address>

Using the install script

curl -L https://raw.githubusercontent.com/taciturnaxolotl/dots/main/install.sh -o install.sh
chmod +x install.sh
./install.sh

Post-install

After first boot, log in with user kierank and the default password, then:

passwd kierank
sudo mv /etc/nixos ~/dots
sudo ln -s ~/dots /etc/nixos
sudo chown -R $(id -un):users ~/dots
atuin login && atuin sync

Deployment

Two deploy paths: infrastructure (NixOS config changes) and application code (per-service repos).

Infrastructure

Pushing to main triggers .github/workflows/deploy.yaml which runs deploy-rs over Tailscale to rebuild NixOS on the target machine.

# manual deploy
nix run 'github:serokell/deploy-rs' -- --remote-build --ssh-user kierank .

Application Code

Each service repo has a minimal workflow calling the reusable .github/workflows/deploy-service.yml. On push to main:

  1. Connects to Tailscale (tag:deploy)
  2. SSHes as the service user (e.g., cachet@terebithia) via Tailscale SSH
  3. Snapshots the SQLite DB (if db_path is provided)
  4. git pull + bun install --frozen-lockfile + sudo systemctl restart
  5. Health check (HTTP URL or systemd status fallback)
  6. Auto-rollback on failure (restores DB snapshot + reverts to previous commit)
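The snapshot and rollback steps can be sketched with plain file copies; this is a toy walk-through of the flow, not the workflow's actual implementation (which lives in deploy-service.yml):

```shell
# Toy sketch of steps 3-6: snapshot, deploy, failed health check, rollback.
db=$(mktemp)                      # stands in for the service's SQLite DB
echo "pre-deploy data" > "$db"

cp "$db" "$db.snapshot"           # 3. snapshot the DB before deploying
echo "post-deploy data" > "$db"   # 4. the deploy mutates state

health_ok=false                   # 5. pretend the health check failed
if [ "$health_ok" = false ]; then
  cp "$db.snapshot" "$db"         # 6. auto-rollback restores the snapshot
fi

cat "$db"                         # prints: pre-deploy data
```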

Per-app workflow — copy and change the with: values:

name: Deploy
on:
  push:
    branches: [main]
  workflow_dispatch:
jobs:
  deploy:
    uses: taciturnaxolotl/dots/.github/workflows/deploy-service.yml@main
    with:
      service: cachet
      health_url: https://cachet.dunkirk.sh/health
      db_path: /var/lib/cachet/data/cachet.db
    secrets:
      TS_OAUTH_CLIENT_ID: ${{ secrets.TS_OAUTH_CLIENT_ID }}
      TS_OAUTH_SECRET: ${{ secrets.TS_OAUTH_SECRET }}

Omit health_url to fall back to systemctl is-active. Omit db_path for stateless services.

mkService

modules/lib/mkService.nix standardizes service modules. A call to mkService { ... } provides:

  • Systemd service with initial git clone (subsequent deploys via GitHub Actions)
  • Caddy reverse proxy with TLS via Cloudflare DNS and optional rate limiting
  • Data declarations (sqlite, postgres, files) that feed into automatic backups
  • Dedicated system user with sudo for restart/stop/start (enables per-user Tailscale ACLs)
  • Port conflict detection, security hardening, agenix secrets

Adding a new service

  1. Create a module in modules/nixos/services/
  2. Enable it in machines/terebithia/default.nix
  3. Add a deploy workflow to the app repo

See modules/nixos/services/cachet.nix for a minimal example.

Machine health checks

Machines with Tailscale enabled automatically expose their hostname for reachability checks in the services manifest via atelier.machine.tailscaleHost. This defaults to networking.hostName when services.tailscale.enable is true.

Services

Services are grouped by machine in the services manifest.

Machines

| Machine | Platform | Tailscale |
|---|---|---|
| terebithia | Oracle Cloud aarch64 | terebithia |
| moonlark | Framework 13 | |
| prattle | Oracle Cloud x86_64 | |

terebithia

All services run behind Caddy with Cloudflare DNS TLS.

mkService-based

| Service | Domain | Port | Runtime | Description |
|---|---|---|---|---|
| cachet | cachet.dunkirk.sh | 3000 | bun | Slack emoji/profile cache |
| hn-alerts | hn.dunkirk.sh | 3001 | bun | Hacker News monitoring |
| indiko | indiko.dunkirk.sh | 3003 | bun | IndieAuth/OAuth2 server |
| l4 | l4.dunkirk.sh | 3004 | bun | Image CDN — Slack image optimizer |
| canvas-mcp | canvas.dunkirk.sh | 3006 | bun | Canvas MCP server |
| control | control.dunkirk.sh | 3010 | bun | Admin dashboard for Caddy toggles |
| traverse | traverse.dunkirk.sh | 4173 | bun | Code walkthrough diagram server |
| cedarlogic | cedarlogic.dunkirk.sh | 3100 | custom | Circuit simulator |

Multi-instance

| Service | Domain | Port | Description |
|---|---|---|---|
| emojibot-hackclub | hc.emojibot.dunkirk.sh | 3002 | Emojibot for Hack Club |
| emojibot-df1317 | df.emojibot.dunkirk.sh | 3005 | Emojibot for df1317 |

Custom / external

| Service | Domain | Description |
|---|---|---|
| bore (frps) | bore.dunkirk.sh | HTTP/TCP/UDP tunnel proxy |
| herald | herald.dunkirk.sh | Git SSH hosting + email |
| knot | knot.dunkirk.sh | Tangled git hosting |
| spindle | spindle.dunkirk.sh | Tangled CI |
| battleship-arena | battleship.dunkirk.sh | Battleship game server |
| n8n | n8n.dunkirk.sh | Workflow automation |

Services manifest

The manifest is grouped by machine. Evaluate it with:

nix eval --json .#services-manifest

Output shape:

{
  "terebithia": {
    "hostname": "terebithia",
    "tailscale_host": "terebithia",
    "services": [{ "name": "cachet", "health_url": "https://cachet.dunkirk.sh/health", ... }]
  }
}
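As an illustration, the per-machine service names can be pulled out of that JSON with jq (assumed available); the inline manifest here is a trimmed stand-in for the real eval output:

```shell
# Trimmed stand-in for `nix eval --json .#services-manifest > manifest.json`.
cat > manifest.json <<'EOF'
{"terebithia":{"hostname":"terebithia","services":[{"name":"cachet"},{"name":"hn-alerts"}]}}
EOF

# List each machine with its service names.
jq -r 'to_entries[] | "\(.key): \([.value.services[].name] | join(", "))"' manifest.json
# prints: terebithia: cachet, hn-alerts
```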

Architecture

Each mkService module provides:

  • Systemd service — initial git clone for scaffolding, subsequent deploys via GitHub Actions
  • Caddy reverse proxy — TLS via Cloudflare DNS challenge, optional rate limiting
  • Data declarations — sqlite, postgres, files feed into automatic backups
  • Dedicated user — sudo for restart/stop/start, per-user Tailscale SSH ACLs
  • Port conflict detection — assertions prevent two services binding the same port

control

Admin dashboard for Caddy feature toggles. Provides a web UI to enable/disable paths on other services (e.g. blocking player tracking on the map).

Domain: control.dunkirk.sh · Port: 3010 · Runtime: bun

Extra options

flags

Defines per-domain feature flags that control blocks paths and redacts JSON fields.

atelier.services.control.flags."map.dunkirk.sh" = {
  name = "Map";
  flags = {
    "block-tracking" = {
      name = "Block Player Tracking";
      description = "Disable real-time player location updates";
      paths = [
        "/sse"
        "/sse/*"
        "/tiles/*/markers/pl3xmap_players.json"
      ];
      redact."/tiles/settings.json" = [ "players" ];
    };
  };
};

| Option | Type | Default | Description |
|---|---|---|---|
| flags | attrsOf submodule | {} | Services and their feature flags, keyed by domain |
| flags.<domain>.name | string | | Display name for the service |
| flags.<domain>.flags.<id>.name | string | | Display name for the flag |
| flags.<domain>.flags.<id>.description | string | | What the flag does |
| flags.<domain>.flags.<id>.paths | list of strings | [] | URL paths to block when flag is active |
| flags.<domain>.flags.<id>.redact | attrsOf (list of strings) | {} | JSON fields to redact from responses, keyed by path |

The flags config is serialized to flags.json and passed to control via the FLAGS_CONFIG environment variable.
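Assuming the attrset is serialized as-is (e.g. with builtins.toJSON), the example above would produce a flags.json roughly like this; the exact shape is an illustration, not a contract:

```json
{
  "map.dunkirk.sh": {
    "name": "Map",
    "flags": {
      "block-tracking": {
        "name": "Block Player Tracking",
        "description": "Disable real-time player location updates",
        "paths": ["/sse", "/sse/*", "/tiles/*/markers/pl3xmap_players.json"],
        "redact": { "/tiles/settings.json": ["players"] }
      }
    }
  }
}
```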

cedarlogic

Browser-based circuit simulator with real-time collaboration via WebSockets.

Domain: cedarlogic.dunkirk.sh · Port: 3100 · Runtime: custom

Extra options

| Option | Type | Default | Description |
|---|---|---|---|
| wsPort | port | 3101 | Hocuspocus WebSocket server for document collaboration |
| cursorPort | port | 3102 | Cursor relay WebSocket server for live cursors |
| branch | string | "web" | Git branch to clone (uses web branch, not main) |

Caddy routing

Cedarlogic disables the default mkService Caddy config and uses path-based routing to three backends:

| Path | Backend |
|---|---|
| /ws | wsPort (Hocuspocus) |
| /cursor-ws | cursorPort (cursor relay) |
| /api/*, /auth/* | main port |
| Everything else | Static files from dist/ |

Build step

On initial scaffold, cedarlogic installs deps and builds:

bun install → parse-gates → bun run build (Vite)

Subsequent deploys handle their own build via the deploy workflow. The build has a 120s timeout to accommodate Vite compilation.

emojibot

Slack emoji management service. Supports multiple instances for different workspaces.

Runtime: bun · Stateless (no database)

This is a custom module — it does not use mkService. Each instance gets its own systemd service, user, and Caddy virtual host.

Instance options

Instances are defined under atelier.services.emojibot.instances.<name>:

atelier.services.emojibot.instances = {
  hackclub = {
    enable = true;
    domain = "hc.emojibot.dunkirk.sh";
    port = 3002;
    workspace = "hackclub";
    channel = "C02T3CU03T3";
    repository = "https://github.com/taciturnaxolotl/emojibot";
    secretsFile = config.age.secrets."emojibot/hackclub".path;
    healthUrl = "https://hc.emojibot.dunkirk.sh/health";
  };
};

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable this instance |
| domain | string | | Domain for Caddy reverse proxy |
| port | port | | Port to run on |
| secretsFile | path | | Agenix secrets file with Slack credentials |
| repository | string | "https://github.com/taciturnaxolotl/emojibot" | Git repo URL |
| workspace | string or null | null | Slack workspace name (for identification) |
| channel | string or null | null | Slack channel ID |
| healthUrl | string or null | null | Health check URL for monitoring |

Current instances

| Instance | Domain | Port | Workspace |
|---|---|---|---|
| hackclub | hc.emojibot.dunkirk.sh | 3002 | Hack Club |
| df1317 | df.emojibot.dunkirk.sh | 3005 | df1317 |

herald

Git SSH hosting with email notifications. Provides a git push interface over SSH and sends email via SMTP/DKIM.

Domain: herald.dunkirk.sh · SSH Port: 2223 · HTTP Port: 8085

This is a custom module — it does not use mkService.

Options

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable herald |
| domain | string | | Domain for Caddy reverse proxy |
| host | string | "0.0.0.0" | Listen address |
| sshPort | port | 2223 | SSH listen port |
| externalSshPort | port | 2223 | External SSH port (if behind NAT) |
| httpPort | port | 8085 | HTTP API port |
| dataDir | path | "/var/lib/herald" | Data directory |
| allowAllKeys | bool | true | Allow all SSH keys |
| secretsFile | path | | Agenix secrets (must contain SMTP_PASS) |
| package | package | pkgs.herald | Herald package |

SMTP

| Option | Type | Default | Description |
|---|---|---|---|
| smtp.host | string | | SMTP server hostname |
| smtp.port | port | 587 | SMTP server port |
| smtp.user | string | | SMTP username |
| smtp.from | string | | Sender address |

DKIM

| Option | Type | Default | Description |
|---|---|---|---|
| smtp.dkim.selector | string or null | null | DKIM selector |
| smtp.dkim.domain | string or null | null | DKIM signing domain |
| smtp.dkim.privateKeyFile | path or null | null | Path to DKIM private key |

knot-sync

Mirrors Tangled knot repositories to GitHub on a cron schedule.

This is a custom module — it does not use mkService. Runs as a systemd timer, not a long-running service.

Options

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable knot-sync |
| repoDir | string | "/home/git/did:plc:..." | Directory containing knot git repos |
| githubUsername | string | "taciturnaxolotl" | GitHub username to mirror to |
| secretsFile | path | | Agenix secrets (must contain GITHUB_TOKEN) |
| logFile | string | "/home/git/knot-sync.log" | Log file path |
| interval | string | "*/5 * * * *" | Cron schedule for sync |

battleship-arena

Battleship game server with web interface and SSH-based bot submission.

Domain: battleship.dunkirk.sh · Web Port: 8081 · SSH Port: 2222

This is a custom module — it does not use mkService.

Options

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable battleship-arena |
| domain | string | "battleship.dunkirk.sh" | Domain for Caddy reverse proxy |
| sshPort | port | 2222 | SSH port for bot submissions |
| webPort | port | 8081 | Web interface port |
| uploadDir | string | "/var/lib/battleship-arena/submissions" | Bot upload directory |
| resultsDb | string | "/var/lib/battleship-arena/results.db" | SQLite results database path |
| adminPasscode | string | "battleship-admin-override" | Admin passcode |
| secretsFile | path or null | null | Agenix secrets file |
| package | package | | Battleship-arena package (from flake input) |

bore (server)

Lightweight tunneling server built on frp. Supports HTTP (wildcard subdomains), TCP, and UDP tunnels with optional OAuth authentication via Indiko.

Domain: bore.dunkirk.sh · frp port: 7000

This is a custom module — it does not use mkService.

Options

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable bore server |
| domain | string | | Base domain for wildcard subdomains |
| bindAddr | string | "0.0.0.0" | frps bind address |
| bindPort | port | 7000 | frps bind port |
| vhostHTTPPort | port | 7080 | Virtual host HTTP port |
| allowedTCPPorts | list of ports | 20000–20099 | Ports available for TCP tunnels |
| allowedUDPPorts | list of ports | 20000–20099 | Ports available for UDP tunnels |
| authToken | string or null | null | frp auth token (use authTokenFile instead) |
| authTokenFile | path or null | null | Path to file containing frp auth token |
| enableCaddy | bool | true | Auto-configure Caddy wildcard vhost |

Authentication

When enabled, all HTTP tunnels are gated behind Indiko OAuth. Users must sign in before accessing tunneled services.

| Option | Type | Default | Description |
|---|---|---|---|
| auth.enable | bool | false | Enable bore-auth OAuth middleware |
| auth.indikoURL | string | "https://indiko.dunkirk.sh" | Indiko server URL |
| auth.clientID | string | | OAuth client ID from Indiko |
| auth.clientSecretFile | path | | Path to OAuth client secret |
| auth.cookieHashKeyFile | path | | 32-byte cookie signing key |
| auth.cookieBlockKeyFile | path | | 32-byte cookie encryption key |
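The two key files just need 32 random bytes each. One way to generate them (whether bore-auth reads raw bytes or expects an encoded form is an assumption worth checking):

```shell
# Generate 32-byte keys for cookie signing and encryption.
# Raw random bytes are an assumption; confirm the format bore-auth expects.
head -c 32 /dev/urandom > cookie-hash.key
head -c 32 /dev/urandom > cookie-block.key
wc -c < cookie-hash.key    # 32
```

In practice these files would be provisioned as agenix secrets rather than generated in place.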

After authentication, these headers are passed to tunneled services:

  • X-Auth-User — user's profile URL
  • X-Auth-Name — display name
  • X-Auth-Email — email address

See bore (client) for the home-manager client module.

Backups

Services are automatically backed up nightly using restic to Backblaze B2. Backup targets are auto-discovered from data.sqlite/data.postgres/data.files declarations in mkService modules.

Schedule

  • Time: 02:00 AM daily
  • Random delay: 0–2 hours (spreads load across services)
  • Retention: 3 snapshots, 7 daily, 5 weekly, 12 monthly

CLI

The atelier-backup command provides an interactive TUI:

sudo atelier-backup              # Interactive menu
sudo atelier-backup status       # Show backup status for all services
sudo atelier-backup list         # Browse snapshots
sudo atelier-backup backup       # Trigger manual backup
sudo atelier-backup restore      # Interactive restore wizard
sudo atelier-backup dr           # Disaster recovery mode

Service integration

Automatic (mkService)

Services using mkService with data.* declarations get automatic backup:

mkService {
  name = "myapp";
  extraConfig = cfg: {
    atelier.services.myapp.data = {
      sqlite = "${cfg.dataDir}/data/app.db";  # Auto WAL checkpoint + stop/start
      files = [ "${cfg.dataDir}/uploads" ];    # Just backed up, no hooks
    };
  };
}

The backup system automatically checkpoints SQLite WAL, stops the service during backup, and restarts after completion.

Manual registration

For services not using mkService:

atelier.backup.services.myservice = {
  paths = [ "/var/lib/myservice" ];
  exclude = [ "*.log" "cache/*" ];
  preBackup = "systemctl stop myservice";
  postBackup = "systemctl start myservice";
};

Disaster recovery

On a fresh NixOS install:

  1. Rebuild from flake: nixos-rebuild switch --flake .#hostname
  2. Run: sudo atelier-backup dr
  3. All services restored from latest snapshots

Setup

  1. Create a B2 bucket and application key
  2. Create agenix secrets for restic/password, restic/env, restic/repo
  3. Enable: atelier.backup.enable = true;

See modules/nixos/services/restic/README.md for full setup details.

Secrets

Secrets are managed using agenix — encrypted at rest in the repo and decrypted at activation time to /run/agenix/.

Usage

Create or edit a secret:

cd secrets && agenix -e myapp.age

The secret file contains environment variables, one per line:

DATABASE_URL=postgres://...
API_KEY=xxxxx
SECRET_TOKEN=yyyyy
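At runtime a service sees these as ordinary environment variables. The exact wiring depends on the service module (systemd's EnvironmentFile is one common mechanism; treating it that way here is an assumption), but the effect is equivalent to sourcing the decrypted file:

```shell
# Simulate a decrypted secrets file and export its variables the way a
# service would observe them (demo.env is a stand-in for /run/agenix/<name>).
cat > demo.env <<'EOF'
API_KEY=xxxxx
EOF

set -a          # export everything defined while sourcing
. ./demo.env
set +a

echo "$API_KEY"   # prints: xxxxx
```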

Adding a new secret

  1. Add the public key entry to secrets/secrets.nix:
"service-name.age".publicKeys = [ kierank ];
  2. Create and encrypt the secret:
agenix -e secrets/service-name.age
  3. Declare in machine config:
age.secrets.service-name = {
  file = ../../secrets/service-name.age;
  owner = "service-name";
};
  4. Reference as config.age.secrets.service-name.path in the service module.

Identity paths

The decryption keys are SSH keys configured per machine:

age.identityPaths = [
  "/home/kierank/.ssh/id_rsa"
  "/etc/ssh/id_rsa"
];

Modules

Custom NixOS and home-manager modules under the atelier.* namespace. These wrap and extend upstream packages with opinionated defaults and structured configuration.

NixOS modules

| Module | Namespace | Description |
|---|---|---|
| tuigreet | atelier.apps.tuigreet | Login greeter with 30+ typed options |
| wifi | atelier.network.wifi | Declarative Wi-Fi profiles with eduroam support |
| authentication | atelier.authentication | Fingerprint + PAM stack (fprintd, polkit, gnome-keyring) |

Home-manager modules

| Module | Namespace | Description |
|---|---|---|
| shell | atelier.shell | Zsh + oh-my-posh + Tangled workflow tooling |
| ssh | atelier.ssh | SSH config with zmx persistent sessions |
| helix | atelier.apps.helix | Evil-helix with 15+ LSPs, wakatime, harper |
| bore (client) | atelier.bore | Tunnel client CLI for the bore server |
| pbnj | atelier.pbnj | Pastebin CLI with language detection |
| wut | atelier.shell.wut | Git worktree manager |

tuigreet

Configures greetd with tuigreet as the login greeter. Exposes nearly every tuigreet CLI flag as a typed Nix option.

Options

All options under atelier.apps.tuigreet:

Core

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable tuigreet |
| command | string | "Hyprland" | Session command to run after login |
| greeting | string | (unauthorized access warning) | Greeting message |

Display

| Option | Type | Default | Description |
|---|---|---|---|
| time | bool | false | Show clock |
| timeFormat | string | "%H:%M" | Clock format |
| issue | bool | false | Show /etc/issue |
| width | int | 80 | UI width |
| theme | string | "" | Theme string |
| asterisks | bool | false | Show asterisks for password |
| asterisksChar | string | "*" | Character for password masking |

Layout

| Option | Type | Default | Description |
|---|---|---|---|
| windowPadding | int | 0 | Window padding |
| containerPadding | int | 1 | Container padding |
| promptPadding | int | 1 | Prompt padding |
| greetAlign | enum | "center" | Greeting alignment: left, center, right |

Session management

| Option | Type | Default | Description |
|---|---|---|---|
| remember | bool | false | Remember last username |
| rememberSession | bool | false | Remember last session |
| rememberUserSession | bool | false | Per-user session memory |
| sessions | string | "" | Wayland session search path |
| xsessions | string | "" | X11 session search path |
| sessionWrapper | string | "" | Session wrapper command |

User menu

| Option | Type | Default | Description |
|---|---|---|---|
| userMenu | bool | false | Show user selection menu |
| userMenuMinUid | int | 1000 | Minimum UID in user menu |
| userMenuMaxUid | int | 65534 | Maximum UID in user menu |

Power commands

| Option | Type | Default | Description |
|---|---|---|---|
| powerShutdown | string | "" | Shutdown command |
| powerReboot | string | "" | Reboot command |

Keybindings

| Option | Type | Default | Description |
|---|---|---|---|
| kbCommand | enum | "F2" | Key to switch command |
| kbSessions | enum | "F3" | Key to switch session |
| kbPower | enum | "F12" | Key for power menu |

wifi

Declarative Wi-Fi profile manager using NetworkManager. Supports three ways to supply passwords and has built-in eduroam (WPA-EAP) support.

Options

All options under atelier.network.wifi:

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable Wi-Fi management |
| hostName | string | | Sets networking.hostName |
| nameservers | list of strings | [] | Custom DNS servers |
| envFile | path | | Environment file providing PSK variables for all profiles |

Profiles

Defined under atelier.network.wifi.profiles.<ssid>:

| Option | Type | Default | Description |
|---|---|---|---|
| psk | string or null | null | Literal WPA-PSK passphrase |
| pskVar | string or null | null | Environment variable name containing the PSK (from envFile) |
| pskFile | path or null | null | Path to file containing the PSK |
| eduroam | bool | false | Use WPA-EAP with MSCHAPV2 (for eduroam networks) |
| identity | string or null | null | EAP identity (required when eduroam = true) |

Only one of psk, pskVar, or pskFile should be set per profile.

Example

atelier.network.wifi = {
  enable = true;
  hostName = "moonlark";
  nameservers = [ "1.1.1.1" "8.8.8.8" ];
  envFile = config.age.secrets.wifi.path;

  profiles = {
    "Home Network" = {
      pskVar = "HOME_PSK";  # read from envFile
    };
    "eduroam" = {
      eduroam = true;
      identity = "user@university.edu";
      pskVar = "EDUROAM_PSK";
    };
    "Phone Hotspot" = {
      pskFile = config.age.secrets.hotspot.path;
    };
  };
};

shell

Zsh configuration with oh-my-posh prompt, syntax highlighting, fzf-tab, zoxide, and Tangled git workflow tooling.

Options

All options under atelier.shell:

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable shell configuration |

Tangled

Options for the tangled-setup and mkdev scripts that manage dual-remote git workflows (Tangled knot + GitHub).

| Option | Type | Default | Description |
|---|---|---|---|
| tangled.plcId | string | | ATProto DID for Tangled identity |
| tangled.githubUser | string | | GitHub username |
| tangled.knotHost | string | | Knot git host (e.g. knot.dunkirk.sh) |
| tangled.domain | string | | Tangled domain for repo URLs |
| tangled.defaultBranch | string | "main" | Default branch name |

Included tools

  • tangled-setup — configures a repo with origin pointing to knot and github pointing to GitHub
  • mkdev — creates a new repo on both Tangled and GitHub simultaneously
  • oh-my-posh — custom prompt showing path, git status (ahead/behind), exec time, nix-shell indicator, ZMX session, SSH hostname
  • Aliases — cat=bat, ls=eza, cd=z (zoxide), and more

ssh

Declarative SSH config with per-host options and zmx (persistent tmux-like sessions over SSH) integration.

Options

All options under atelier.ssh:

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable SSH config management |
| extraConfig | string | "" | Raw SSH config appended to the end |

zmx

| Option | Type | Default | Description |
|---|---|---|---|
| zmx.enable | bool | false | Install zmx and autossh |
| zmx.hosts | list of strings | [] | Host patterns to auto-attach via zmx |

When zmx is enabled for a host, the SSH config injects RemoteCommand, RequestTTY force, and ControlMaster/ControlPersist settings. Shell aliases are also added: zmls, zmk, zma, ash.

Hosts

Per-host config under atelier.ssh.hosts.<name>:

| Option | Type | Default | Description |
|---|---|---|---|
| hostname | string | | SSH hostname or IP |
| port | int or null | null | SSH port |
| user | string or null | null | SSH user |
| identityFile | string or null | null | Path to SSH key |
| forwardAgent | bool | false | Forward SSH agent |
| zmx | bool | false | Enable zmx for this host |
| extraOptions | attrsOf string | {} | Arbitrary SSH options |

Example

atelier.ssh = {
  enable = true;
  zmx.enable = true;
  zmx.hosts = [ "terebithia" "ember" ];

  hosts = {
    terebithia = {
      hostname = "terebithia";
      user = "kierank";
      forwardAgent = true;
      zmx = true;
    };
    "github.com" = {
      identityFile = "~/.ssh/id_rsa";
    };
  };
};

helix

Evil-helix (vim-mode fork) with comprehensive LSP setup, wakatime tracking on every language, and harper grammar checking.

Options

All options under atelier.apps.helix:

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable helix configuration |
| swift | bool | false | Add sourcekit-lsp for Swift (platform-conditional) |

Language servers

The module configures 15+ language servers out of the box:

| Language | Server |
|---|---|
| Nix | nixd + nil |
| TypeScript/JavaScript | typescript-language-server + biome |
| Go | gopls |
| Python | pylsp |
| Rust | rust-analyzer |
| HTML/CSS | vscode-html-language-server, vscode-css-language-server |
| JSON | vscode-json-language-server + biome |
| TOML | taplo |
| Markdown | marksman |
| YAML | yaml-language-server |
| Swift | sourcekit-lsp (when swift = true) |

All languages also get:

  • wakatime-ls — coding time tracking
  • harper-ls — grammar and spell checking

Note: After install, run hx -g fetch && hx -g build to compile tree-sitter grammars.

bore (client)

Interactive CLI for creating tunnels to the bore server. Built with gum, supports HTTP, TCP, and UDP tunnels.

Options

All options under atelier.bore:

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Install the bore CLI |
| serverAddr | string | "bore.dunkirk.sh" | frps server address |
| serverPort | port | 7000 | frps server port |
| domain | string | "bore.dunkirk.sh" | Base domain for constructing public URLs |
| authTokenFile | path | | Path to frp auth token file |

Usage

bore                  # Interactive menu
bore myapp 3000       # Quick HTTP tunnel: myapp.bore.dunkirk.sh → localhost:3000
bore myapp 3000 --auth  # With OAuth authentication
bore myapp 3000 --save  # Save to bore.toml for reuse

Tunnels can also be defined in a bore.toml:

[myapp]
port = 3000
auth = true
labels = ["dev"]

pbnj

Pastebin CLI with automatic language detection, clipboard integration, and agenix auth.

Options

All options under atelier.pbnj:

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Install the pbnj CLI |
| host | string | | Pastebin instance URL |
| authKeyFile | path | | Path to auth key file (e.g. agenix secret) |

Usage

pbnj                          # Interactive menu
pbnj upload myfile.py         # Upload file (auto-detects Python)
cat output.log | pbnj upload  # Upload from stdin
pbnj list                     # List pastes
pbnj delete <id>              # Delete a paste

Supports 25+ languages via file extension detection. Automatically copies the URL to clipboard (wl-copy/xclip/pbcopy depending on platform).

wut

Worktrees Unexpectedly Tolerable — a git worktree manager that keeps worktrees organized under .worktrees/.

Options

| Option | Type | Default | Description |
|---|---|---|---|
| atelier.shell.wut.enable | bool | false | Install wut and the zsh shell wrapper |

Usage

wut new feat/my-feature   # Create worktree + branch under .worktrees/
wut list                   # Show all worktrees
wut go feat/my-feature     # cd into worktree (via shell wrapper)
wut go                     # Interactive picker
wut path feat/my-feature   # Print worktree path
wut rm feat/my-feature     # Remove worktree + delete branch

Shell integration

Wut needs to cd the calling shell, which a subprocess can't do directly. It works by printing a __WUT_CD__=/path marker that a zsh wrapper function intercepts:

wut() {
  output=$(/path/to/wut "$@")
  if [[ "$output" == *"__WUT_CD__="* ]]; then
    cd "${output##*__WUT_CD__=}"
  else
    echo "$output"
  fi
}

This wrapper is automatically injected into initContent when the module is enabled.
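The marker protocol can be exercised without wut itself by substituting a stub that prints the marker (the stub below is made up; it mirrors the zsh wrapper in POSIX form):

```shell
# Stub standing in for `/path/to/wut go feat/my-feature`.
wut_stub() { echo "__WUT_CD__=/tmp"; }

output=$(wut_stub)
case "$output" in
  *"__WUT_CD__="*) cd "${output##*__WUT_CD__=}" ;;  # marker: cd the shell
  *) echo "$output" ;;                               # no marker: pass through
esac

pwd   # now inside /tmp
```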

Safety

  • wut rm refuses to delete worktrees with uncommitted changes (use --force to override)
  • wut rm warns before deleting unmerged branches
  • The main/master branch worktree cannot be removed

mkService

modules/lib/mkService.nix is the service factory used by most atelier services. It takes a set of parameters and returns a NixOS module with standardized options, systemd service, Caddy reverse proxy, and backup integration.

Factory parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| name | string | required | Service identity — used for user, group, systemd unit, and option namespace |
| description | string | "<name> service" | Human-readable description |
| defaultPort | int | 3000 | Default port if not overridden in config |
| runtime | string | "bun" | "bun", "node", or "custom" |
| entryPoint | string | "src/index.ts" | Script to run (ignored if startCommand is set) |
| startCommand | string | null | Override the full start command |
| extraOptions | attrset | {} | Additional NixOS options for this service |
| extraConfig | function | cfg: {} | Additional NixOS config when enabled (receives the service config) |

Options

Every mkService module creates options under atelier.services.<name>:

Core

| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable the service |
| domain | string | required | Domain for Caddy reverse proxy |
| port | port | defaultPort | Port the service listens on |
| dataDir | path | "/var/lib/<name>" | Data storage directory |
| secretsFile | path or null | null | Agenix secrets environment file |
| repository | string or null | null | Git repo URL — cloned once on first start |
| healthUrl | string or null | null | Health check URL for monitoring |
| environment | attrset | {} | Additional environment variables |

Data declarations

Used by the backup system to automatically discover what to back up.

| Option | Type | Default | Description |
|---|---|---|---|
| data.sqlite | string or null | null | SQLite database path (WAL checkpoint + stop/start during backup) |
| data.postgres | string or null | null | PostgreSQL database name (pg_dump during backup) |
| data.files | list of strings | [] | Additional file paths to back up |
| data.exclude | list of strings | ["*.log", "node_modules", ...] | Glob patterns to exclude |

Caddy

| Option | Type | Default | Description |
|---|---|---|---|
| caddy.enable | bool | true | Enable Caddy reverse proxy |
| caddy.extraConfig | string | "" | Additional Caddy directives |
| caddy.rateLimit.enable | bool | false | Enable rate limiting |
| caddy.rateLimit.events | int | 60 | Requests per window |
| caddy.rateLimit.window | string | "1m" | Rate limit time window |

What it sets up

  • System user and group — dedicated user in the services group with sudo for systemctl restart/stop/start/status
  • Systemd service — ExecStartPre creates dirs as root, preStart clones repo and installs deps, ExecStart runs the application
  • Caddy virtual host — TLS via Cloudflare DNS challenge, reverse proxy to localhost port
  • Port conflict detection — assertions prevent two services from binding the same port
  • Security hardening — NoNewPrivileges, ProtectSystem=strict, ProtectHome, PrivateTmp

Example

Minimal service module:

let
  mkService = import ../../lib/mkService.nix;
in
mkService {
  name = "myapp";
  description = "My application";
  defaultPort = 3000;
  runtime = "bun";
  entryPoint = "src/index.ts";

  extraConfig = cfg: {
    systemd.services.myapp.serviceConfig.Environment = [
      "DATABASE_PATH=${cfg.dataDir}/data/app.db"
    ];

    atelier.services.myapp.data = {
      sqlite = "${cfg.dataDir}/data/app.db";
    };
  };
}

Then enable in the machine config:

atelier.services.myapp = {
  enable = true;
  domain = "myapp.dunkirk.sh";
  repository = "https://github.com/taciturnaxolotl/myapp";
  secretsFile = config.age.secrets.myapp.path;
  healthUrl = "https://myapp.dunkirk.sh/health";
};

Service utility functions

Service utility functions for the atelier infrastructure.

These functions operate on NixOS configurations to extract service metadata for dashboards, monitoring, and documentation.

services.isMkService

Check whether an atelier service config value has the standard mkService shape (has enable, domain, port, _description).

Arguments

  • cfg — an attribute set from config.atelier.services.<name>

Type

AttrSet -> Bool

Example

isMkService config.atelier.services.cachet
=> true

services.mkServiceEntry

Convert a single mkService config into a manifest entry.

Arguments

  • name — the service name (attribute key)
  • cfg — the service config attrset

Type

String -> AttrSet -> AttrSet

Example

mkServiceEntry "cachet" config.atelier.services.cachet
=> { name = "cachet"; domain = "cachet.dunkirk.sh"; ... }

services.mkManifest

Build a services manifest from an evaluated NixOS config.

Discovers all enabled mkService-based services plus emojibot instances. Returns a sorted list of service entries suitable for JSON serialisation.

Arguments

  • config — the fully evaluated NixOS configuration

Type

AttrSet -> [ AttrSet ]

Example

mkManifest config
=> [ { name = "cachet"; domain = "cachet.dunkirk.sh"; ... } ... ]

services.mkMachinesManifest

Build a manifest of all machines and their services.

Takes one or more attrsets of system configurations (NixOS, Darwin, or home-manager) and returns an attrset keyed by machine name. Only machines with atelier.machine.enable = true are included.

Arguments

  • configSets — list of attrsets of system configurations

Type

[ AttrSet ] -> AttrSet

Example

mkMachinesManifest [ self.nixosConfigurations self.darwinConfigurations ]
=> { terebithia = { hostname = "terebithia"; services = [ ... ]; }; }