Legacy Deployment with Nix Flake Apps and systemd User Services


You have a nice NixOS module for production. It builds the service, wires up secrets, configures systemd, maybe nginx too. Then reality intrudes: one box is still Debian, another is some inherited Ubuntu VM, and you still need to deploy the same service there without inventing a second operational universe.

The usual answer is “use Ansible,” or Docker, or a pile of shell scripts in ~/bin that quietly turns into a homegrown deployment system. But if the service already lives in a flake, you can keep the operational logic there too.

The pattern is simple: use the flake's apps output as your legacy deployment interface. nix run .#setup, nix run .#install, nix run .#build-legacy, nix run .#dump-db and so on. Same repo, same package graph, same binary. NixOS hosts use the module. Non-NixOS hosts use the apps.

This gives you one source of truth for build inputs and a second, lighter orchestration path for machines that are not ready for full NixOS.

The goal

You want the same flake to support two deployment modes:

  • NixOS: declarative module, system service, hardened options, proper secret management
  • Legacy Linux: unprivileged user account, systemd --user, environment files, wrapper scripts, and one-command deploys

Not identical orchestration. Identical artifact.

That distinction matters. You do not want two build systems. You want one build and two ways to run it.

First: make the deploy user a trusted Nix user

Before anything else, the user running nix build on the deploy target needs permission to pass Nix settings to the daemon. This is especially important if your flake has private GitHub inputs.

On the target host:

# /etc/nix/nix.conf
trusted-users = root deploy-user

Then restart the daemon:

sudo systemctl restart nix-daemon

Without this, the deploy user can invoke Nix, but the daemon may ignore user-supplied settings such as access tokens or custom substituters. That turns into mysterious failures when a private flake input suddenly looks “missing.”

If your flake pulls from private GitHub repos, add an access token in the deploy user’s config:

# ~/.config/nix/nix.conf
access-tokens = github.com=ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Now nix build on that host can fetch private github: inputs directly. On NixOS you’d normally do this declaratively and inject the token from sops-nix or agenix. On a legacy host, this is usually a one-time machine bootstrap step.
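Because the bootstrap is done by hand, it is easy to end up with duplicate or stale entries after a few revisits. A minimal idempotent sketch of the shape (hypothetical helper name and placeholder token; the demo deliberately writes into a temp dir instead of your real config):

```shell
#!/usr/bin/env bash
set -euo pipefail

# CONF_DIR points at a throwaway dir so this demo never touches a real
# ~/.config/nix; swap it for "$HOME/.config/nix" on an actual host.
CONF_DIR=$(mktemp -d)

add_token() {
  # Only append the setting if no access-tokens line exists yet.
  if ! grep -qs '^access-tokens' "$CONF_DIR/nix.conf"; then
    echo 'access-tokens = github.com=ghp_REPLACE_ME' >> "$CONF_DIR/nix.conf"
  fi
}

add_token
add_token   # running the bootstrap twice must not duplicate the line
grep -c '^access-tokens' "$CONF_DIR/nix.conf"
```

Running it repeatedly leaves exactly one access-tokens line, which is what you want from a step that people will re-run without checking first.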

Shared paths in the flake

The apps all share the same path conventions. Define them once near the top of your perSystem:

let
  projectDir = "$HOME/git/my-service";
  binDir = "$HOME/bin/my-service";
  aliasPrefix = "my_service";

  loadEnv = ''
    cd "${projectDir}"
    if [ -f .env.production ]; then
      set -a; source .env.production; set +a
    elif [ -f .env.local ]; then
      set -a; source .env.local; set +a
    fi
  '';
in

This is the first useful trick: treat environment loading as a reusable shell fragment, not something each script reimplements badly.

.env.production wins over .env.local. That gives you one convention for deploy targets and another for local testing. set -a means everything sourced gets exported automatically, so your wrapper scripts and systemd unit see the same variables.
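You can watch the precedence rule in isolation. A small sketch with hypothetical values, run in a throwaway directory:

```shell
#!/usr/bin/env bash
set -euo pipefail

# When both env files exist, .env.production must win.
demo_dir=$(mktemp -d)
cd "$demo_dir"
echo 'SERVICE_PORT=9090' > .env.production
echo 'SERVICE_PORT=3000' > .env.local

# The same fragment the flake apps share:
if [ -f .env.production ]; then
  set -a; source .env.production; set +a
elif [ -f .env.local ]; then
  set -a; source .env.local; set +a
fi

echo "SERVICE_PORT=$SERVICE_PORT"   # → SERVICE_PORT=9090
```

Delete .env.production from the demo directory and re-run it, and the .env.local value takes over, which is exactly the local-testing fallback described above.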

App 1: setup for first-time validation

Your first deploy should not start with “run a giant script and hope.” Give yourself a small setup app that creates directories and validates the build output:

apps.setup = flake-utils.lib.mkApp {
  drv = pkgs.writeShellApplication {
    name = "setup";
    text = ''
      ${loadEnv}

      echo "Creating data directory..."
      mkdir -p "${projectDir}/.data"

      if [ ! -L "${projectDir}/result" ]; then
        echo "Run 'nix build' first, then re-run setup."
        exit 1
      fi

      echo "Validating configuration..."
      "${projectDir}/result/bin/my-service" --check-config

      echo "Setup complete."
    '';
  };
};

That --check-config flag is optional. The point is to let the service validate itself before you wire it into systemd. If your app doesn’t have a config check command, replace it with whatever proves the environment is sane: a database ping, a migration status command, a dry-run boot.
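If the service offers nothing of the sort, a generic pre-flight check can still be as blunt as “the paths I depend on exist and are usable.” A hedged sketch (hypothetical layout; the demo uses a temp dir as a stand-in project):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Generic pre-flight check: no --check-config needed, just prove the
# environment is sane before wiring anything into systemd.
sanity_check() {
  local project_dir="$1"
  [ -d "$project_dir/.data" ] || { echo "missing data dir"; return 1; }
  [ -w "$project_dir/.data" ] || { echo "data dir not writable"; return 1; }
  [ -e "$project_dir/result" ] || { echo "no build result; run nix build first"; return 1; }
  echo "sanity check passed"
}

# Demo against a throwaway directory standing in for the project checkout.
proj=$(mktemp -d)
mkdir -p "$proj/.data"
touch "$proj/result"
sanity_check "$proj"   # → sanity check passed
```

The value is not the checks themselves but that failure happens here, with a readable message, rather than three steps later inside a systemd unit.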

App 2: build-legacy as the deploy button

This is the core of the pattern. You want a single command that stops the old service, builds the current flake, writes the systemd user unit, reloads it, and starts the service again.

apps.build-legacy = flake-utils.lib.mkApp {
  drv = pkgs.writeShellApplication {
    name = "build-legacy";
    runtimeInputs = with pkgs; [ systemd ];
    text = ''
      set -euo pipefail
      ${loadEnv}

      echo "Stopping service..."
      systemctl --user stop my-service || true

      echo "Building..."
      nix build

      echo "Writing systemd unit..."
      mkdir -p "$HOME/.config/systemd/user"
      cat > "$HOME/.config/systemd/user/my-service.service" <<UNIT
      [Unit]
      Description=My Service Daemon
      After=network.target

      [Service]
      Type=exec
      ExecStart=${projectDir}/result/bin/my-service
      Environment=SERVICE_PORT=''${SERVICE_PORT:-8080}
      Environment=SERVICE_BIND_ADDRESS=''${SERVICE_BIND_ADDRESS:-127.0.0.1}
      Environment=SERVICE_DB_PATH=''${SERVICE_DB_PATH:-${projectDir}/.data/my-service.db}
      EnvironmentFile=-${projectDir}/.env.production
      Restart=on-failure
      RestartSec=5

      [Install]
      WantedBy=default.target
      UNIT

      systemctl --user daemon-reload
      systemctl --user enable --now my-service
      echo "Service started."
    '';
  };
};

A few design choices here are doing real work:

  • systemctl --user keeps the whole thing unprivileged. No root-owned unit, no sudo systemctl restart, no system-level service management for a simple single-user deploy.
  • systemctl --user stop ... || true makes first deploy idempotent. There might not be anything running yet.
  • ExecStart=${projectDir}/result/bin/my-service points at the current nix build symlink, so each deploy naturally flips the unit to the newest build result.
  • Environment= lines define safe defaults, while EnvironmentFile=-... lets .env.production override them at runtime.
  • The - on EnvironmentFile is important. It tells systemd “missing file is fine.”

That last point keeps first boot and bootstrap paths simple. Your unit should not explode just because the env file is absent. One related caveat: user services normally run only while the user has an active session, so if the service must start at boot and survive logout, enable lingering once during machine bootstrap with sudo loginctl enable-linger deploy-user.

Why generate the unit from Nix instead of committing it

Because the unit is part of the deployment interface, and the deployment interface belongs next to the binary definition.

If you commit deploy/my-service.service separately, it will drift. Someone tweaks the binary path, or adds a new env var, or changes restart behavior, and now the Nix package and the checked-in unit file disagree. Generating the unit inside writeShellApplication keeps the operational wrapper versioned with the build logic.

It also makes the unit templated by construction. Paths, ports, binary names, env conventions: they all come from the same Nix values used elsewhere in the flake.

App 3: install for management scripts and aliases

Once the service exists, you want a decent operator experience. Not “remember six long commands.” Real commands in ~/bin/my-service, plus shell aliases for fish, zsh, and bash.

The basic shape looks like this:

apps.install = flake-utils.lib.mkApp {
  drv = pkgs.writeShellApplication {
    name = "install";
    text = ''
      set -euo pipefail

      INSTALL_DIR="${binDir}"
      mkdir -p "$INSTALL_DIR"

      cat > "$INSTALL_DIR/build" <<'EOF'
      #!/usr/bin/env bash
      set -euo pipefail
      cd "${projectDir}"
      git pull --ff-only
      nix run .#build-legacy
      EOF
      chmod +x "$INSTALL_DIR/build"

      cat > "$INSTALL_DIR/tail-log" <<'EOF'
      #!/usr/bin/env bash
      exec journalctl --user -u my-service -f
      EOF
      chmod +x "$INSTALL_DIR/tail-log"

cat > "$INSTALL_DIR/bash_setup" <<EOF
#!/usr/bin/env bash
echo "alias ${aliasPrefix}_build='${binDir}/build'"
echo "alias ${aliasPrefix}_tail_log='${binDir}/tail-log'"
EOF
chmod +x "$INSTALL_DIR/bash_setup"

cat > "$INSTALL_DIR/fish_setup" <<EOF
#!/usr/bin/env bash
echo "alias ${aliasPrefix}_build '${binDir}/build'"
echo "alias ${aliasPrefix}_tail_log '${binDir}/tail-log'"
EOF
chmod +x "$INSTALL_DIR/fish_setup"
    '';
  };
};

In a real flake you’d factor that more cleanly, probably with a small helper that writes wrapped scripts and another that emits shell-specific alias syntax. The important part is the architecture:

  • install creates a stable operator-facing command set in ~/bin/my-service/
  • each command is a tiny wrapper around one task
  • shell integration is generated, not handwritten in three places

That generated alias layer is worth more than it looks. It means the deploy user logs in and has my_service_build, my_service_tail_log, my_service_dump_db, my_service_shell, whatever else you provide, with no shell-specific maintenance burden.
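One shape that factoring can take, sketched at the shell level (the write_wrapper name is hypothetical, and the demo writes into a temp dir standing in for ~/bin/my-service):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper: every operator command becomes one call that stamps out
# an executable wrapper with a consistent shebang and error handling.
write_wrapper() {
  local name="$1" body="$2"
  printf '#!/usr/bin/env bash\nset -euo pipefail\n%s\n' "$body" > "$INSTALL_DIR/$name"
  chmod +x "$INSTALL_DIR/$name"
}

INSTALL_DIR=$(mktemp -d)   # stands in for ~/bin/my-service
write_wrapper build    'cd "$HOME/git/my-service" && git pull --ff-only && nix run .#build-legacy'
write_wrapper tail-log 'exec journalctl --user -u my-service -f'
ls "$INSTALL_DIR"
```

Adding a new operator command is then one line, and every wrapper gets the same shebang, the same set -euo pipefail, and the same chmod for free.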

App 4: database dumps as part of the flake

If your “legacy deployment workflow” does not include a backup command, it is not a deployment workflow. It is an optimism framework.

Make a dump-db app and pin the toolchain with Nix just like everything else.

SQLite

apps.dump-db = flake-utils.lib.mkApp {
  drv = pkgs.writeShellApplication {
    name = "dump-db";
    runtimeInputs = with pkgs; [ sqlite bzip2 ];
    text = ''
      set -euo pipefail
      ${loadEnv}

      DB_PATH="''${SERVICE_DB_PATH:-${projectDir}/.data/my-service.db}"
      OUTPUT="db-export-$(date +%Y%m%d_%H%M%S).sql.bz2"

      sqlite3 "$DB_PATH" .dump | bzip2 > "$OUTPUT"
      echo "Exported to $OUTPUT"
    '';
  };
};

PostgreSQL

runtimeInputs = with pkgs; [ postgresql bzip2 ];
text = ''
  set -euo pipefail
  ${loadEnv}

  OUTPUT="db-export-$(date +%Y%m%d_%H%M%S).sql.bz2"
  pg_dump \
    --host="''${DB_HOST:-localhost}" \
    --port="''${DB_PORT:-5432}" \
    --username="''${DB_USER}" \
    --dbname="''${DB_NAME}" \
    --no-owner --no-acl \
    | bzip2 > "$OUTPUT"
'';

MySQL or MariaDB

runtimeInputs = with pkgs; [ mariadb bzip2 ];
text = ''
  set -euo pipefail
  ${loadEnv}

  OUTPUT="db-export-$(date +%Y%m%d_%H%M%S).sql.bz2"
  mysqldump \
    --host="''${DB_HOST:-localhost}" \
    --port="''${DB_PORT:-3306}" \
    --user="''${DB_USER}" \
    --single-transaction \
    --routines \
    "''${DB_NAME}" \
    | bzip2 > "$OUTPUT"
'';

This is exactly the kind of thing people leave to tribal knowledge: “oh, just remember the right pg_dump flags.” Put it in the flake instead. Then your deploy workflow, your operator docs, and your actual tooling all agree.
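The same goes for retention. Dumps that accumulate forever are their own failure mode, so pair the dump app with a pruning step you can hang off a cron entry or systemd user timer. A sketch with a hypothetical 14-day window (the demo uses a temp dir and fakes an old dump's timestamp):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Prune dumps older than 14 days. BACKUP_DIR is a temp dir for the demo;
# point it at wherever dump-db writes on a real host.
BACKUP_DIR=$(mktemp -d)
touch -d '30 days ago' "$BACKUP_DIR/db-export-20240101_000000.sql.bz2"
touch "$BACKUP_DIR/db-export-recent.sql.bz2"

find "$BACKUP_DIR" -name 'db-export-*.sql.bz2' -mtime +14 -delete
ls "$BACKUP_DIR"   # only the recent dump survives
```

Like the dump flags themselves, the retention window belongs in the flake, not in someone's memory of what the cron line used to say.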

App 5: REPL access for the running service

The same pattern works for interactive operational tools. In the snippets below, mkScript stands in for whatever small wrapped-script helper you factored out of install; the shape is what matters, not the helper's name.

For an Elixir release:

replScript = mkScript "iex" ''
  ${loadEnv}
  exec "${projectDir}/_build/prod/rel/my_app/bin/my_app" remote
'';

That connects an IEx shell to the running BEAM node. Same idea for Rails:

replScript = mkScript "console" ''
  ${loadEnv}
  cd "${projectDir}"
  exec bundle exec rails console -e production
'';

Or Django:

replScript = mkScript "shell" ''
  ${loadEnv}
  cd "${projectDir}"
  exec python manage.py shell
'';

The trick is not language-specific. Load the environment, move into the right directory, run the service-specific console command. Then wire that script into install and export an alias for it like any other operational command.
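The db-shell command mentioned alongside the other aliases fits the same mold. A SQLite-flavored sketch of install generating it (hypothetical paths; the demo writes into a temp dir instead of ~/bin/my-service):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Generate a db-shell wrapper the same way install writes the other commands.
# The database path default matches the conventions used by dump-db.
INSTALL_DIR=$(mktemp -d)

cat > "$INSTALL_DIR/db-shell" <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd "$HOME/git/my-service"
if [ -f .env.production ]; then set -a; source .env.production; set +a; fi
exec sqlite3 "${SERVICE_DB_PATH:-$HOME/git/my-service/.data/my-service.db}"
EOF
chmod +x "$INSTALL_DIR/db-shell"
head -n 1 "$INSTALL_DIR/db-shell"
```

Note the quoted heredoc delimiter: the wrapper's variables must expand when an operator runs db-shell, not when install writes it.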

The MOTD banner is worth it

If the deploy target has a dedicated service user, make login useful. A small generated MOTD script gives you a dashboard instead of a blank prompt:

motdScript = mkScript "motd" ''
  printf '\n'
  printf '  \033[1;36m%s\033[0m\n' "My Service"
  printf '\n'
  printf '  Location:          %s\n' "${projectDir}"
  printf '  Environment file:  %s/.env.production\n' "${projectDir}"
  printf '\n'
  printf '  \033[1mAliases:\033[0m\n'
  printf '    %-28s %s\n' "${aliasPrefix}_build"    "stop, pull, build, restart"
  printf '    %-28s %s\n' "${aliasPrefix}_tail_log" "stream service logs"
  printf '    %-28s %s\n' "${aliasPrefix}_dump_db"  "export database backup"
  printf '    %-28s %s\n' "${aliasPrefix}_db_shell" "open database shell"
  printf '    %-28s %s\n' "${aliasPrefix}_iex"      "attach remote console"
  printf '\n'
'';

Then call it from the service user’s shell init after loading the alias file:

eval "$(~/bin/my-service/bash_setup)"
~/bin/my-service/motd

Or in fish:

~/bin/my-service/fish_setup | source
~/bin/my-service/motd

This is not just cosmetic. The banner becomes a tiny operational contract: what service this account owns, where it lives, and which commands matter.

The environment loading pattern

The env-loading fragment from earlier is doing two jobs:

if [ -f .env.production ]; then
  set -a; source .env.production; set +a
elif [ -f .env.local ]; then
  set -a; source .env.local; set +a
fi

First, it gives production and local environments a deterministic priority order.

Second, it keeps every app consistent. setup, build-legacy, dump-db, db-shell, iex, any future admin script: they all see the same environment variables from the same place.

That consistency is the real benefit. Once you have six or eight wrapper commands, the fastest way to create weird behavior is to let each one load config differently.

The contrast with NixOS

At this point the pattern should be clear: the legacy path is not a compromise build. It is a compromise orchestration model.

Here is the practical comparison:

| Aspect | Legacy host via flake apps | NixOS host via module |
| --- | --- | --- |
| Service manager | systemd --user | system-level systemd |
| Privileges | unprivileged service user | hardened service user / DynamicUser |
| Secrets | .env.production | sops-nix / agenix / declarative injection |
| Updates | nix run .#build-legacy | nixos-rebuild switch |
| Reverse proxy | manual nginx or caddy config | declarative module options |
| Backups | nix run .#dump-db | timer or service declared in NixOS |
| Monitoring | manual integration | declarative Prometheus/Grafana wiring |

That is exactly the right relationship. NixOS should be the better deployment target. It gives you stronger isolation, stronger secret handling, stronger service definitions. But the legacy path still uses the same flake, same package graph, same env surface, and same operational commands.

That makes migration gradual instead of traumatic.

Daily workflow after install

Once you have installed the wrappers and sourced the aliases, operating the service should look boring:

my_service_build
my_service_tail_log
my_service_dump_db
my_service_db_shell
my_service_iex
my_service_shell

That is the whole point. If the workflow on a non-NixOS box still feels bespoke, you have not pushed enough of it into the flake yet.

Why this pattern holds up

The nice part of this approach is not that it is “fully declarative.” It isn’t. A legacy host is still a legacy host. You are still relying on a user account, a checked-out repo, a shell profile, and some one-time machine bootstrap.

The nice part is narrower and more useful:

  • your deploy commands are versioned with the code
  • your build inputs stay pinned
  • your operational wrappers stay reproducible
  • your non-NixOS path does not fork into another toolchain

You stop treating “legacy deployment” as a totally separate system. It becomes another interface on the same flake.

If you eventually migrate the box to NixOS, great. The service definition gets better, but the build and runtime assumptions stay familiar. If you do not migrate it yet, you still have something coherent: one repo, one flake, one set of commands, two orchestration targets.

That is usually enough to keep the infrastructure sane.