Self-Hosting a Private Nix Binary Cache with Attic and Garage
You want a private Nix binary cache. Not “throw artifacts into S3 and hope for the best” private. A real cache server, scoped credentials, multiple teams, and CI that pushes build outputs automatically. Basically the Cachix model, but self-hosted and under your control.
This stack does that cleanly:
- Garage provides S3-compatible object storage for NAR files
- Attic provides the cache API, signing keys, cache metadata, and auth tokens
- PostgreSQL stores Attic metadata
- sops-nix wires in secrets without hardcoding credentials in your Nix config
- GitHub Actions pushes build outputs into the cache on trusted builds
The useful twist is Attic’s token model. You can hand a team one token scoped to teamname-*, and they can create and manage teamname-dev, teamname-prod, teamname-whatever without touching anyone else’s caches. That’s the self-hosted private Cachix pitch.
This post walks through the whole setup on NixOS. It does not cover per-cache vanity hostnames through nginx. Attic still exposes one global substituter endpoint, so path-based cache URLs are the model today.
Why split Attic and Garage
Attic wants durable blob storage plus a relational database. Garage gives you the blob layer without needing MinIO, Ceph, or a cloud bucket. PostgreSQL handles Attic’s metadata. That separation matters:
- NAR files live in S3-compatible object storage
- Cache definitions, ACLs, and token state live in PostgreSQL
- The cache server stays stateless apart from its DB and signing key
For a single-node deployment, Garage with SQLite as its metadata engine is enough. If you later need replication or more capacity, you can grow the Garage layer without replacing the cache server.
Garage: S3 backend for NAR storage
Garage is the object store Attic writes to. The NixOS side is straightforward: run Garage locally, expose the S3 API through nginx, and bootstrap the initial layout, S3 key, and bucket with an idempotent oneshot service.
{ config, pkgs, ... }:
let
garageS3Host = "s3.example.com";
garageZone = "est1";
garageCapacity = "1T";
garageBootstrap = pkgs.writeShellScript "garage-bootstrap" ''
set -euo pipefail
for _ in $(seq 1 30); do
if node_id="$(${config.services.garage.package}/bin/garage node id 2>/dev/null | cut -d@ -f1)" && [ -n "$node_id" ]; then
break
fi
sleep 1
done
if [ -z "''${node_id:-}" ]; then
echo "garage node id did not become available in time" >&2
exit 1
fi
if ${config.services.garage.package}/bin/garage status | grep -q 'NO ROLE ASSIGNED'; then
${config.services.garage.package}/bin/garage layout assign -z ${garageZone} -c ${garageCapacity} "$node_id"
current_version="$(${config.services.garage.package}/bin/garage layout show | sed -n 's/^Current cluster layout version: \([0-9][0-9]*\)$/\1/p')"
if [ -n "$current_version" ]; then
next_version=$((current_version + 1))
else
next_version=1
fi
${config.services.garage.package}/bin/garage layout apply --version "$next_version"
fi
if ! ${config.services.garage.package}/bin/garage key info "$(cat ${config.sops.secrets.garage_attic_key_id.path})" >/dev/null 2>&1; then
${config.services.garage.package}/bin/garage key import \
"$(cat ${config.sops.secrets.garage_attic_key_id.path})" \
"$(cat ${config.sops.secrets.garage_attic_secret_key.path})" \
-n attic \
--yes
fi
if ! ${config.services.garage.package}/bin/garage bucket info attic >/dev/null 2>&1; then
${config.services.garage.package}/bin/garage bucket create attic
fi
${config.services.garage.package}/bin/garage bucket allow \
--read \
--write \
--owner \
attic \
--key "$(cat ${config.sops.secrets.garage_attic_key_id.path})"
'';
in
{
users.groups.garage = { };
users.users.garage = {
isSystemUser = true;
group = "garage";
description = "Garage object storage service";
};
sops.secrets.garage_rpc_secret = {
mode = "0400";
owner = "garage";
group = "garage";
};
sops.secrets.garage_admin_token = {
mode = "0400";
owner = "garage";
group = "garage";
};
sops.secrets.garage_metrics_token = {
mode = "0400";
owner = "garage";
group = "garage";
};
sops.secrets.garage_attic_key_id = { };
sops.secrets.garage_attic_secret_key = { };
services.garage = {
enable = true;
package = pkgs.garage;
logLevel = "info";
settings = {
replication_factor = 1;
db_engine = "sqlite";
rpc_bind_addr = "127.0.0.1:3901";
rpc_public_addr = "127.0.0.1:3901";
rpc_secret_file = config.sops.secrets.garage_rpc_secret.path;
allow_world_readable_secrets = false;
s3_api = {
api_bind_addr = "127.0.0.1:3900";
s3_region = "garage";
};
admin = {
api_bind_addr = "127.0.0.1:3903";
admin_token_file = config.sops.secrets.garage_admin_token.path;
metrics_token_file = config.sops.secrets.garage_metrics_token.path;
};
};
};
systemd.services.garage.serviceConfig = {
DynamicUser = false;
User = "garage";
Group = "garage";
};
services.nginx.virtualHosts.${garageS3Host} = {
forceSSL = true;
enableACME = true;
acmeRoot = null;
locations."/" = {
proxyPass = "http://127.0.0.1:3900";
extraConfig = ''
client_max_body_size 0;
proxy_request_buffering off;
proxy_buffering off;
'';
};
};
systemd.services.garage-bootstrap = {
description = "Bootstrap Garage layout and Attic bucket";
wantedBy = [ "multi-user.target" ];
after = [ "garage.service" ];
requires = [ "garage.service" ];
path = with pkgs; [
coreutils
gnugrep
gnused
];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = garageBootstrap;
};
};
}
The important part is the bootstrap script. It performs the single-node layout assignment, imports the access key Attic will use, creates the attic bucket, and grants that key ownership of the bucket. Because every step is idempotent, you can leave it in the boot path without fear.
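Once the unit has run, a few spot checks confirm the state it converged to. These use the same garage CLI commands as the script, and assume the sops-nix default secret path under /run/secrets:

```shell
# Node should show an assigned role rather than "NO ROLE ASSIGNED"
garage status

# Bucket and imported key should both exist
garage bucket info attic
garage key info "$(cat /run/secrets/garage_attic_key_id)"
```

If any of these fail, re-running garage-bootstrap.service is safe; every step checks before it acts.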
Attic: cache server, metadata, and tokens
Attic sits in front of Garage and PostgreSQL. It serves the binary cache API, signs cache metadata, and issues scoped JWTs for pushing and administration.
{ config, pkgs, ... }:
let
atticApiHost = "nix-cache.example.com";
atticBootstrapCache = "myuser";
atticClient = "${pkgs.attic-client}/bin/attic";
atticEnv = config.sops.templates."atticd-env";
atticBootstrap = pkgs.writeShellScript "attic-bootstrap" ''
set -euo pipefail
state_root=/var/lib/attic-bootstrap
export HOME="$state_root"
export XDG_CONFIG_HOME="$state_root/xdg"
rm -rf "$XDG_CONFIG_HOME"
mkdir -p "$XDG_CONFIG_HOME"
token=$(/run/current-system/sw/bin/atticd-atticadm make-token \
--sub bootstrap \
--validity '10 years' \
--pull '*' \
--push '*' \
--delete '*' \
--create-cache '*' \
--configure-cache '*' \
--configure-cache-retention '*' \
--destroy-cache '*')
${atticClient} login bootstrap http://127.0.0.1:8080/ "$token"
if ! ${atticClient} cache info bootstrap:${atticBootstrapCache} >/dev/null 2>&1; then
${atticClient} cache create bootstrap:${atticBootstrapCache} --public
fi
${atticClient} cache configure bootstrap:${atticBootstrapCache} --public
${atticClient} cache info bootstrap:${atticBootstrapCache} > "$state_root/${atticBootstrapCache}.info"
'';
in
{
sops.secrets.attic_token_rs256_secret_base64 = {
mode = "0400";
owner = "root";
group = "root";
};
sops.templates."atticd-env" = {
content = ''
ATTIC_SERVER_TOKEN_RS256_SECRET_BASE64=${config.sops.placeholder.attic_token_rs256_secret_base64}
AWS_ACCESS_KEY_ID=${config.sops.placeholder.garage_attic_key_id}
AWS_SECRET_ACCESS_KEY=${config.sops.placeholder.garage_attic_secret_key}
'';
mode = "0400";
owner = "root";
group = "root";
};
services.postgresql.ensureDatabases = [ "atticd" ];
services.postgresql.ensureUsers = [
{
name = "atticd";
ensureDBOwnership = true;
}
];
services.atticd = {
enable = true;
package = pkgs.attic-server;
environmentFile = atticEnv.path;
settings = {
listen = "127.0.0.1:8080";
allowed-hosts = [
atticApiHost
"${atticBootstrapCache}.${atticApiHost}"
"127.0.0.1:8080"
"localhost:8080"
];
api-endpoint = "https://${atticApiHost}/";
substituter-endpoint = "https://${atticApiHost}/";
database.url = "postgres:///atticd?host=/run/postgresql&user=atticd";
storage = {
type = "s3";
region = "garage";
bucket = "attic";
endpoint = "https://s3.example.com";
};
};
};
systemd.services.atticd = {
after = [ "garage-bootstrap.service" ];
requires = [ "garage-bootstrap.service" ];
};
systemd.services.attic-bootstrap = {
description = "Bootstrap the initial public Attic cache";
wantedBy = [ "multi-user.target" ];
after = [ "atticd.service" ];
requires = [ "atticd.service" ];
path = with pkgs; [
coreutils
findutils
];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = atticBootstrap;
StateDirectory = "attic-bootstrap";
};
};
}
Three details matter here:
- services.atticd.settings.storage points at Garage’s S3 endpoint, not local disk.
- The Attic signing key and S3 credentials come from a sops.templates environment file, so the unit gets exactly the secrets it needs.
- attic-bootstrap logs into the local API with an all-powerful bootstrap token and creates the first cache declaratively.
That last point is the difference between “deployed a service” and “deployed a usable cache.”
The boot dependency chain
The full boot ordering looks like this:
garage.service
└─ garage-bootstrap.service
└─ atticd.service
└─ attic-bootstrap.service
Garage has to be up before the S3 bucket can be created. The bucket and access key have to exist before Attic can start cleanly. Attic has to be live before you can create the initial cache through its API. All of those bootstrap steps are oneshot units with RemainAfterExit = true, so they run once, stay “active”, and don’t flap on every check.
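You can check the ordering systemd actually resolved on the host. These are standard systemctl invocations against the unit names defined above:

```shell
# Everything attic-bootstrap is ordered after
systemctl list-dependencies --after attic-bootstrap.service

# Both oneshots should report "active (exited)" once the chain has run
systemctl status garage-bootstrap.service attic-bootstrap.service
```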
Secrets with sops-nix
This stack has six secrets:
| Secret | Purpose |
|---|---|
| garage_rpc_secret | Garage RPC authentication |
| garage_admin_token | Garage admin API authentication |
| garage_metrics_token | Garage metrics API authentication |
| garage_attic_key_id | S3 access key for Attic |
| garage_attic_secret_key | S3 secret key for Attic |
| attic_token_rs256_secret_base64 | Attic JWT signing key |
The nice part is the direction of flow:
- Garage consumes the RPC and admin secrets directly
- Garage bootstrap consumes the S3 key pair to import it and grant bucket access
- Attic consumes the JWT secret and the same S3 key pair through one rendered environment file
No plaintext credentials in git. No hand-written Environment= lines in systemd units.
Multi-tenant token scoping is the whole point
If all you wanted was a single shared cache, you could stop at “CI can push and clients can pull.” The more interesting setup is when multiple teams or projects share one Attic server without sharing one namespace.
Attic’s token generator gives you that directly.
For one personal cache:
atticd-atticadm make-token \
--sub myuser-push \
--validity '1 year' \
--pull myuser \
--push myuser
That token can read and push exactly one cache: myuser.
For a team namespace:
atticd-atticadm make-token \
--sub teamname \
--validity '1 year' \
--pull 'teamname-*' \
--push 'teamname-*' \
--create-cache 'teamname-*' \
--configure-cache 'teamname-*' \
--configure-cache-retention 'teamname-*'
This is the private Cachix pattern. One token, one namespace prefix. The team can create teamname-dev, teamname-staging, and teamname-prod on demand. They can push to them, configure them, and rotate retention settings. They cannot touch otherteam-*.
That gives you one shared Attic deployment with isolation by scoped capability rather than by running one cache server per team.
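The scoping is a simple prefix wildcard. As a toy illustration of the model, with shell globbing standing in for Attic’s actual matcher:

```shell
# Sketch: does a cache name fall inside a token's scope pattern?
matches_scope() {
  # $1 = cache name, $2 = scope pattern (glob)
  case "$1" in
    $2) echo allowed ;;
    *)  echo denied ;;
  esac
}

matches_scope teamname-dev  'teamname-*'   # allowed
matches_scope teamname-prod 'teamname-*'   # allowed
matches_scope otherteam-dev 'teamname-*'   # denied
```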
Client-side Nix configuration
Clients only need the global Attic endpoint and the public key for the cache they consume:
{ ... }:
{
nix.settings = {
substituters = [
"https://nix-cache.example.com/myuser"
];
trusted-public-keys = [
"myuser:55EJTBFbq5pCYx2tf+aR8pmVPvCmP7QlafHH90/kikw="
];
};
}
That’s valid for NixOS and nix-darwin alike. If you have multiple team caches, add multiple path-based substituters and their public keys.
The thing to remember is that the cache name lives in the path. Attic does not currently give you a first-class “one hostname per cache” model through attic use, so don’t design around vanity subdomains here.
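For an ad-hoc machine, the attic client can write this configuration for you instead of editing nix.settings by hand. A sketch, reusing this post’s endpoint and cache names (the server nickname is arbitrary):

```shell
# Log in once, then let the client append the path-based substituter
# and public key to the user's nix.conf
attic login central https://nix-cache.example.com/ "$PULL_TOKEN"
attic use central:myuser
```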
CI: push trusted build outputs, never PR outputs
The cache gets interesting when CI writes to it automatically. The safe pattern is:
- pull from the cache on every build
- push to the cache only on trusted events
- keep pull requests read-only to avoid cache poisoning
Here’s a GitHub Actions workflow that does exactly that:
name: "Build and populate cache"
on:
pull_request:
push:
workflow_dispatch:
schedule:
- cron: '42 5 * * *'
jobs:
tests:
strategy:
matrix:
nurRepo:
- myuser
nixPath:
- nixpkgs=channel:nixos-unstable
- nixpkgs=channel:nixpkgs-unstable
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install nix
uses: cachix/install-nix-action@v31
with:
nix_path: "${{ matrix.nixPath }}"
extra_nix_config: |
experimental-features = nix-command flakes
access-tokens = github.com=${{ secrets.GITHUB_TOKEN }}
extra-substituters = https://nix-cache.example.com/myuser
extra-trusted-public-keys = myuser:55EJTBFbq5pCYx2tf+aR8pmVPvCmP7QlafHH90/kikw=
- name: Show nixpkgs version
run: nix-instantiate --eval -E '(import <nixpkgs> {}).lib.version'
- name: Login to Attic
if: github.event_name != 'pull_request'
run: nix shell nixpkgs#attic-client -c attic login ci https://nix-cache.example.com/ "${{ secrets.ATTIC_TOKEN }}"
- name: Build nix packages
run: nix shell nixpkgs#nix-build-uncached -c nix-build-uncached ci.nix -A cacheOutputs
- name: Push build outputs to Attic
if: github.event_name != 'pull_request'
run: nix shell nixpkgs#attic-client -c sh -lc 'attic push ci:myuser result*'
- name: Trigger NUR update
if: ${{ matrix.nurRepo != '<YOUR_REPO_NAME>' }}
run: curl -XPOST "https://nur-update.nix-community.org/update?repo=${{ matrix.nurRepo }}"
The key behavior:
- PRs can use the cache as a substituter, but they do not get an Attic login token
- pushes, scheduled builds, and manual runs can log in and push results
- ATTIC_TOKEN should be a scoped token, not a bootstrap token
- nix-build-uncached avoids pointlessly rebuilding outputs that are already available from substituters
If you’re handing out CI tokens per repository or per team, use the same wildcard scoping model as the human tokens. repo-a-* and repo-b-* stay isolated even though they hit the same server.
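Minting such a repo-scoped CI token uses the same admin command as the human tokens, run on the cache host. The subject name and validity here are illustrative:

```shell
atticd-atticadm make-token \
  --sub repo-a-ci \
  --validity '90 days' \
  --pull 'repo-a-*' \
  --push 'repo-a-*'
```

The output goes into the repository’s ATTIC_TOKEN secret; note it deliberately omits the create-cache and configure-cache permissions CI doesn’t need.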
Operating model
Once this is deployed, the workflow is simple:
- Garage stores the blobs.
- Attic serves the cache API and tracks metadata in PostgreSQL.
- Teams get scoped tokens limited to their namespace.
- Developers add the cache URL and public key to nix.settings.
- CI logs in on trusted runs and pushes build outputs.
That gives you most of what people actually want from Cachix:
- faster CI
- faster local builds
- a shared cache for private code
- scoped write access
- one central service instead of ad-hoc per-project buckets
But you keep control of the storage, keys, and tenancy model.
One limitation worth planning around
Attic still exposes one global substituter-endpoint. attic use <cache> advertises a path-based URL like https://nix-cache.example.com/myuser, not a dedicated hostname per cache. So if you’re thinking “I’ll front each cache with its own vanity nginx vhost,” stop there. That’s not the product surface Attic exposes today.
Path-based cache names are the stable interface. Design your client config, CI config, and team docs around that and the setup stays boring in the good way.
Final thought
This stack hits a nice balance. Garage is lightweight enough for single-node self-hosting. Attic gives you a real multi-user cache server instead of raw object storage. PostgreSQL handles the metadata cleanly. And scoped tokens let you share one service across teams without turning it into a free-for-all.
If you already run NixOS and sops-nix, the whole thing fits naturally into the rest of your infrastructure. Which is the main reason to do it this way in the first place.