PostgreSQL 14→18 on NixOS — A Flake-Native Migration
You check your NixOS server and PostgreSQL is on 14.22. The server’s been running system.stateVersion = "22.05" since it was provisioned, and NixOS dutifully kept the default PostgreSQL version for that era. NixOS 25.11 ships PostgreSQL 17 as the default for new installs, and PG 18 is in nixpkgs. PG 14 reaches end-of-life in November 2026. Thirteen databases across multiple application instances need to move.
PostgreSQL does not support in-place major version upgrades. The on-disk format changes between major versions, so you either use pg_upgrade — which rewrites catalog metadata while preserving data files — or do a full pg_dumpall/restore cycle. pg_upgrade is dramatically faster because it links or copies existing data files rather than re-inserting every row through SQL. That’s what we’ll use.
Why flake scripts instead of ad-hoc shell commands
The conventional approach involves SSHing into the server and running a series of commands copied from a wiki page. This has problems:
- Reproducibility — ad-hoc commands are easy to mistype, especially the `pg_upgrade` invocation, which requires exact paths to both old and new PostgreSQL binaries.
- Binary path management — on a flake-based NixOS system, there’s no `<nixpkgs>` channel. You can’t run `nix-build '<nixpkgs>' -A postgresql_14` to get a package path. The packages live in the Nix store, and their paths are determined at evaluation time by the flake lock file.
- Atomicity — by encoding the procedure in the flake, the exact PG 14 and PG 18 store paths are baked into the scripts at build time. The scripts and the NixOS configuration reference the same nixpkgs revision, eliminating version skew.
Flake structure
Pinning the nixpkgs revision
The flake uses a nixpkgs-stable input pinned to nixos-25.11:
inputs = {
nixpkgs-stable = {
url = "github:NixOS/nixpkgs/nixos-25.11";
};
};
In the per-system block:
let
pkgs = nixpkgs.legacyPackages.${system};
pkgs-stable = nixpkgs-stable.legacyPackages.${system};
in
This matters: the server uses nixpkgs-stable (NixOS 25.11), not nixpkgs (unstable). The PostgreSQL packages baked into the scripts must come from the same nixpkgs revision that the server’s NixOS configuration uses. The PG 18 binary in the upgrade script will be byte-for-byte identical to the one NixOS runs as a service after deployment.
Script definitions
The scripts are defined inside the flake’s eachDefaultSystem block, guarded by optionalAttrs so they only exist on x86_64-linux:
pgUpgradeScripts = pkgs.lib.optionalAttrs (system == "x86_64-linux") (
let
pg14 = pkgs-stable.postgresql_14;
pg18 = pkgs-stable.postgresql_18;
oldDir = "/var/lib/postgresql/14";
newDir = "/var/lib/postgresql/18";
in
{
# step1, step2, step3 defined here
}
);
When Nix evaluates ${pg14}, it becomes something like /nix/store/abc123-postgresql-14.22. When it evaluates ${pg18}, it becomes /nix/store/xyz789-postgresql-18.3. The resulting shell scripts contain hardcoded, fully-qualified store paths. No runtime resolution, no PATH dependency, no risk of picking up the wrong binary. Both PostgreSQL versions are pulled into each script’s closure, meaning nix run .#postgresql-upgrade-14-18-step1 on the server automatically fetches both PG 14 and PG 18 binaries from the cache before executing.
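A minimal sketch of what the elided step definitions might look like, assuming `pkgs.writeShellScriptBin` (the real step bodies appear in the sections below):

```nix
{
  # Nix interpolates ${pg14} and ${oldDir} at build time, so the generated
  # shell script contains literal /nix/store/... paths — no runtime lookup
  postgresql-upgrade-14-18-step1 = pkgs.writeShellScriptBin "postgresql-upgrade-14-18-step1" ''
    set -euo pipefail
    sudo -u postgres ${pg14}/bin/pg_controldata ${oldDir}
    # ... rest of step 1 ...
  '';
}
```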
Exposing as packages and apps
The scripts are merged into the flake’s packages output:
packages = {
default = myPackage;
} // pgUpgradeScripts;
And automatically mapped to apps:
apps = builtins.mapAttrs (name: pkg: {
type = "app";
program = "${pkg}/bin/${name}";
}) pgUpgradeScripts;
This enables nix run .#postgresql-upgrade-14-18-step1 syntax without repeating the app definition for each script.
Why three steps with human checkpoints
The migration is deliberately split into three scripts:
- Step 1 runs while PG 14 is live. If anything fails, abort with zero consequences.
- Step 2 performs the irreversible `pg_upgrade`. It is separated so the operator can review step 1’s output before committing.
- Step 3 runs after deploying the new NixOS configuration with PG 18. A `nixos-rebuild switch` must happen between step 2 and step 3.
You could automate all three into one script. You probably shouldn’t. The gap between “preflight passed” and “actually rewrite my production database catalogs” is exactly where you want a human reading terminal output and deciding whether to proceed.
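Assuming the app names defined earlier, the end-to-end operator flow — with its two human checkpoints — looks like this (the host name `myhost` is illustrative):

```shell
nix run .#postgresql-upgrade-14-18-step1   # backup + preflight; PG 14 left stopped
# checkpoint: read the output before committing
nix run .#postgresql-upgrade-14-18-step2   # irreversible pg_upgrade
nixos-rebuild switch --flake .#myhost      # activate the PG 18 configuration
nix run .#postgresql-upgrade-14-18-step3   # verify, reindex, analyze
```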
Step 1: backup and preflight
set -euo pipefail
echo "=== Step 1: Backup and preflight check ==="
[[ $EUID -eq 0 ]] || { echo "Run as root"; exit 1; }
echo "Checking current checksum status..."
sudo -u postgres ${pg14}/bin/pg_controldata ${oldDir} | grep -i checksum
echo ""
echo "Checking for MD5 passwords..."
sudo -u postgres ${pg14}/bin/psql -c \
"SELECT rolname,
CASE WHEN rolpassword LIKE 'md5%'
THEN 'MD5 (migrate to SCRAM!)'
ELSE 'OK'
END AS auth
FROM pg_authid WHERE rolpassword IS NOT NULL;"
echo ""
echo "Checking for expression indexes..."
sudo -u postgres ${pg14}/bin/psql -At -c \
"SELECT schemaname || '.' || indexname || ': ' || indexdef
FROM pg_indexes
WHERE indexdef ~ '\\(.*\\('" 2>/dev/null || true
echo ""
echo "Checking for FTS indexes..."
sudo -u postgres ${pg14}/bin/psql -At -c \
"SELECT schemaname || '.' || indexname
FROM pg_indexes
WHERE indexdef LIKE '%tsvector%'
OR indexdef LIKE '%gin%'
OR indexdef LIKE '%gist%'" 2>/dev/null || true
echo ""
echo "Taking pg_dumpall backup..."
mkdir -p /var/backup
sudo -u postgres ${pg14}/bin/pg_dumpall \
> "/var/backup/postgresql-14-pre-upgrade-$(date +%Y%m%d).sql"
echo "Backup saved to /var/backup/"
echo ""
echo "Listing databases for reference..."
sudo -u postgres ${pg14}/bin/psql -l
echo ""
echo "Stopping PostgreSQL..."
systemctl stop postgresql.service
echo "Creating socket directory..."
mkdir -p /var/run/postgresql
chown postgres:postgres /var/run/postgresql
checksum_status=$(sudo -u postgres ${pg14}/bin/pg_controldata ${oldDir} \
| grep "Data page checksum" | awk '{print $NF}')
initdb_flags=""
if [ "$checksum_status" = "0" ] \
|| echo "$checksum_status" | grep -qi "off\|disabled"; then
echo "Old cluster has checksums DISABLED — passing --no-data-checksums to initdb"
initdb_flags="--no-data-checksums"
fi
echo "Initializing new data directory..."
sudo -u postgres ${pg18}/bin/initdb $initdb_flags -D ${newDir}
echo "Running pg_upgrade --check (dry run)..."
cd /var/lib/postgresql
sudo -u postgres ${pg18}/bin/pg_upgrade \
--socketdir=/var/run/postgresql \
--old-bindir=${pg14}/bin \
--new-bindir=${pg18}/bin \
--old-datadir=${oldDir} \
--new-datadir=${newDir} \
--check
echo ""
echo "=== Preflight passed. Proceed to step 2. ==="
echo "NOTE: PostgreSQL is stopped. If aborting, run:"
echo " rm -rf ${newDir} && systemctl start postgresql.service"
What each check does
Checksum status — this is the single most critical check for a 14→18 upgrade. PG 18 changed the initdb default to enable data page checksums. If the old cluster has checksums disabled (Data page checksum version: 0), the new cluster must also be initialized without them. Miss this and pg_upgrade refuses to proceed because the checksum settings don’t match.
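The checksum detection in step 1 is plain text parsing of `pg_controldata` output, so it can be sanity-checked in isolation against a sample line (the sample text mimics the PG 14 output format):

```shell
# A pg_controldata line from a cluster WITHOUT checksums
sample="Data page checksum version:           0"

# Same extraction as in step 1: take the last whitespace-separated field
checksum_status=$(echo "$sample" | awk '{print $NF}')

initdb_flags=""
if [ "$checksum_status" = "0" ]; then
  initdb_flags="--no-data-checksums"
fi

echo "$initdb_flags"   # prints: --no-data-checksums
```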
MD5 password audit — PG 18 deprecates MD5 password hashing in favor of SCRAM-SHA-256. The query against pg_authid identifies roles still on MD5 so you can plan migration before or after the upgrade, rather than discovering deprecation warnings flooding your logs at 2 AM.
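PostgreSQL can’t convert an MD5 hash to SCRAM in place — the plaintext must be supplied again. A sketch of the migration for one role (the role name `app_user` and the password are illustrative):

```sql
-- Make sure new hashes are generated with SCRAM
SET password_encryption = 'scram-sha-256';

-- Re-setting the password stores a SCRAM-SHA-256 verifier
ALTER ROLE app_user WITH PASSWORD 'replace-with-real-password';
```

In practice psql’s `\password` meta-command is preferable: it hashes the password client-side, so the plaintext never appears in server logs.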
Expression indexes — PG 17 tightened security around search_path in functions used by expression indexes and materialized views. The check identifies expression indexes so you can verify they’ll survive the upgrade.
FTS/GIN/GiST indexes — PG 18 changed the full-text search collation provider. After pg_upgrade, FTS and trigram indexes need reindexing because the collation metadata embedded in the index may no longer match. Identifying them upfront tells you what step 3’s reindex will cover.
pg_dumpall — a full logical backup as a safety net. If something goes catastrophically wrong, this SQL dump restores the entire cluster to a fresh PG 14 installation. The old data directory is also preserved until explicitly deleted in cleanup, providing a second recovery path.
Conditional initdb — creates the PG 18 data directory at /var/lib/postgresql/18, passing --no-data-checksums only if the old cluster has them disabled. NixOS convention uses /var/lib/postgresql/<major-version> as the data directory.
pg_upgrade --check — the --check flag performs a complete dry run. It verifies binary compatibility, checks for data types whose on-disk representation changed, validates that required shared libraries exist, and confirms the clusters are upgrade-compatible — all without modifying any data. If this passes, the real upgrade will almost certainly pass.
Step 2: the actual migration
set -euo pipefail
echo "=== Step 2: Run pg_upgrade ==="
[[ $EUID -eq 0 ]] || { echo "Run as root"; exit 1; }
echo "Ensuring PostgreSQL is stopped..."
systemctl stop postgresql.service 2>/dev/null || true
mkdir -p /var/run/postgresql
chown postgres:postgres /var/run/postgresql
echo "Running pg_upgrade..."
cd /var/lib/postgresql
sudo -u postgres ${pg18}/bin/pg_upgrade \
--socketdir=/var/run/postgresql \
--old-bindir=${pg14}/bin \
--new-bindir=${pg18}/bin \
--old-datadir=${oldDir} \
--new-datadir=${newDir}
echo ""
echo "=== pg_upgrade completed. ==="
echo ""
echo "Next steps:"
echo " 1. Update postgresql config to set package = pkgs.postgresql_18"
echo " 2. Commit, push, deploy"
echo " 3. Run step 3 for post-upgrade verification"
This is the short one. It does exactly one thing: run pg_upgrade for real. The operation proceeds in phases internally:
- Global objects — roles, tablespaces, and other cluster-wide objects are dumped from the old cluster and restored into the new one.
- Schema dump/restore — each database’s schema is dumped as SQL from PG 14 and replayed into PG 18. This is where catalog format differences are resolved.
- Data file copy — the actual table and index data files are copied from old to new. Since we’re on the same filesystem, this is a straightforward copy. The data files are forward-compatible at the page level.
- Transaction metadata — XID counters, multixact state, and WAL position are carried forward for transactional continuity.
- Extension update script — `pg_upgrade` generates `update_extensions.sql` for extensions that need `ALTER EXTENSION ... UPDATE`. Step 3 applies this.
The entire operation completes quickly — seconds to minutes for moderately-sized databases — because pg_upgrade doesn’t re-process row data. It rewrites the catalogs and copies the data files as-is (or hard-links them when invoked with --link).
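The speed of `--link` mode comes from POSIX hard links: creating one just adds a directory entry pointing at the existing inode, so it costs the same regardless of file size. A standalone illustration:

```shell
tmp=$(mktemp -d)

# A stand-in for a 1 MiB table data file
dd if=/dev/zero of="$tmp/base" bs=1024 count=1024 status=none

# Hard link it — no data is copied
ln "$tmp/base" "$tmp/linked"

# Both names resolve to the same inode, so the "copy" was O(1)
stat -c '%i' "$tmp/base"
stat -c '%i' "$tmp/linked"

rm -r "$tmp"
```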
The deploy
Between step 2 and step 3, deploy the new NixOS configuration. The PostgreSQL module change is minimal:
# services/postgresql.nix
{ pkgs, ... }:
{
services.postgresql = {
enable = true;
package = pkgs.postgresql_18;
};
}
nixos-rebuild switch activates this. NixOS builds a new system closure referencing postgresql-18.3 instead of postgresql-14.22, generates new systemd unit files pointing to the PG 18 binary, starts postgresql.service against the /var/lib/postgresql/18 data directory that pg_upgrade prepared, and starts all dependent services via systemd dependency ordering. One atomic transition.
Step 3: post-upgrade verification
set -euo pipefail
echo "=== Step 3: Post-upgrade verification ==="
[[ $EUID -eq 0 ]] || { echo "Run as root"; exit 1; }
echo "Checking PostgreSQL service status..."
systemctl status postgresql.service --no-pager
echo ""
echo "Checking PostgreSQL version..."
sudo -u postgres ${pg18}/bin/psql -c "SELECT version();"
echo ""
echo "Listing databases..."
sudo -u postgres ${pg18}/bin/psql -l
if [ -f /var/lib/postgresql/update_extensions.sql ]; then
echo ""
echo "Applying extension updates..."
sudo -u postgres ${pg18}/bin/psql \
-f /var/lib/postgresql/update_extensions.sql
fi
echo ""
echo "Reindexing all databases (required for FTS collation changes)..."
for db in $(sudo -u postgres ${pg18}/bin/psql -At -c \
"SELECT datname FROM pg_database WHERE datistemplate = false;"); do
echo " Reindexing $db..."
sudo -u postgres ${pg18}/bin/reindexdb "$db" \
|| echo " WARNING: reindex of $db failed"
done
echo ""
echo "Checking for MD5 passwords (deprecated in PG18)..."
sudo -u postgres ${pg18}/bin/psql -c \
"SELECT rolname,
CASE WHEN rolpassword LIKE 'md5%'
THEN 'MD5 — MIGRATE TO SCRAM!'
ELSE 'SCRAM (ok)'
END AS auth
FROM pg_authid WHERE rolpassword IS NOT NULL;"
echo ""
echo "Running ANALYZE on all databases..."
sudo -u postgres ${pg18}/bin/vacuumdb --all --analyze-only
echo ""
echo "Checking dependent services..."
for svc in service-a service-b service-c analytics webmail; do
status=$(systemctl is-active "$svc" 2>/dev/null || echo "not found")
printf " %-25s %s\n" "$svc" "$status"
done
echo ""
echo "=== Verification complete ==="
echo ""
echo "If everything looks good:"
echo " 1. Remove old data directory: rm -rf ${oldDir}"
echo " 2. Delete pg_upgrade logs/scripts in /var/lib/postgresql/"
echo " 3. If MD5 passwords were found, migrate them to SCRAM-SHA-256"
Version confirmation — SELECT version() should return “PostgreSQL 18.x”, confirming the new binary is serving the migrated data.
Extension updates — applies update_extensions.sql generated by pg_upgrade. This runs ALTER EXTENSION ... UPDATE for extensions whose catalog entries need updating (e.g., citext).
Full reindex — every database is reindexed with reindexdb. Essential after a major upgrade to PG 18 because of the FTS collation provider change — text search indexes built under PG 14’s collation assumptions may produce incorrect results under PG 18. Rebuilds all indexes from source data, guaranteeing correctness.
ANALYZE — vacuumdb --all --analyze-only refreshes the query planner’s statistics for every table in every database. pg_upgrade preserves data files but not planner statistics, so without this the planner defaults to sequential scans until autovacuum eventually collects fresh stats. You don’t want to discover this via a production latency spike.
Service health check — confirms all dependent services reconnected and are running after the PG version change.
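On a large cluster, the single-pass ANALYZE in step 3 could be swapped for vacuumdb’s staged mode, which produces minimal statistics almost immediately and refines them over two further passes:

```shell
sudo -u postgres vacuumdb --all --analyze-in-stages
```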
What makes this NixOS-specific
No package manager state — on Debian you’d apt install postgresql-18, manage pg_lsclusters, deal with the postgresql-common wrapper, and worry about APT restarting services. On NixOS the package is a store path. Installing PG 18 doesn’t affect the running PG 14 until you activate a new system configuration.
stateVersion controls defaults — the server was still on PG 14 despite running NixOS 25.11 which ships PG 17 for new installs. This is by design — NixOS won’t silently change your database version. The explicit package = pkgs.postgresql_18; pin overrides the stateVersion default and documents the intentional upgrade.
Atomic system activation — nixos-rebuild switch doesn’t upgrade PostgreSQL in isolation. It atomically transitions the entire system — PostgreSQL, dependent services, systemd units, environment — in a single operation. If the activation fails, NixOS rolls back to the previous generation with PG 14 still configured.
Flake-pinned binaries — the upgrade scripts and the NixOS configuration derive from the same flake.lock. The PG 18 binary used for pg_upgrade is byte-for-byte identical to the one NixOS runs as a service. On a traditional system, there’s always a risk that the PG 18 you installed for the upgrade is a slightly different build than what the package manager configures as the service.
Garbage collection awareness — after the upgrade, nix-collect-garbage will eventually remove the PG 14 store path since nothing references it. The upgrade scripts still reference it (PG 14 is in their closure), so as long as the scripts exist in the flake, the PG 14 binary remains available for rollback. Remove the scripts from the flake and PG 14 becomes eligible for garbage collection. This is a nice property — your safety net exists exactly as long as you keep the scripts around, and it disappears automatically when you clean up.
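If you want the rollback safety net decoupled from the scripts’ presence in the flake, you can pin the upgrade closure with an explicit GC root — `nix build`’s `--out-link` symlink is registered as one automatically (the link path here is arbitrary):

```shell
# The out-link keeps the script's closure — including PG 14 — alive across GC
nix build .#postgresql-upgrade-14-18-step1 --out-link /root/pg14-rollback

# Later, delete the link and the next nix-collect-garbage can reclaim PG 14
rm /root/pg14-rollback
```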
Cleanup
After verification confirms everything is healthy:
- Remove `/var/lib/postgresql/14` — the old data directory
- Delete `delete_old_cluster.sh` — `pg_upgrade`’s generated cleanup script
- Delete `update_extensions.sql` — already applied
The final state of /var/lib/postgresql/ should contain only the 18 directory. Remove the upgrade scripts from your flake when you’re confident you won’t need to roll back. Until then, they serve as both documentation and a safety net — keeping PG 14 in the Nix store costs a few hundred megabytes and buys you the option to investigate if something subtle surfaces later.