Running Hickory-DNS as a Full Authoritative + Recursive DNS Server on NixOS
BIND has been around since the 1980s. Unbound does recursive resolution well but doesn’t serve authoritative zones. CoreDNS is fine until you want DDNS with TSIG authentication and it’s suddenly less fine. Then there’s hickory-dns — a Rust DNS server that handles authoritative zones, recursive resolution, sqlite-backed DDNS zones, TSIG key verification, and Prometheus metrics. All in one binary.
The catch is it’s not in nixpkgs with the features you need. So you build it from source, wire it into a NixOS module, and generate your zone files declaratively from a host registry. This post walks through the entire working setup — not a toy example, but a production config with multiple VLANs, IPv4 and IPv6 reverse DNS, Kea DHCP-DDNS integration, systemd hardening, and a custom Grafana dashboard built from scratch because nobody’s made one yet.
Why hickory-dns
The pitch is short:
- Rust — memory-safe, single static-ish binary, no garbage collector pauses
- Built-in recursor — one process handles both authoritative and recursive queries
- Prometheus metrics — native /metrics endpoint, no sidecar exporter
- SQLite-backed DDNS zones — RFC 2136 dynamic updates with TSIG authentication, journal files for durability
- DNSSEC support — ring-based crypto for TSIG key verification
One binary replaces what would otherwise be BIND (or Unbound + a separate authoritative server) plus a Prometheus exporter. The config is TOML, which is a nice change from BIND’s syntax.
Building from source in Nix
Hickory-dns isn’t packaged in nixpkgs with the feature combination you need — sqlite, recursor, prometheus-metrics, and dnssec-ring. Build it inline:
hickory-dns = pkgs.rustPlatform.buildRustPackage rec {
pname = "hickory-dns";
version = "0.26.0-beta.2";
src = pkgs.fetchFromGitHub {
owner = "hickory-dns";
repo = "hickory-dns";
hash = "sha256-7kra6MbLcv0P6iiUJ+hQ0ezqgXh/1KskCrZvFYDqiXQ=";
rev = "v${version}";
};
cargoHash = "sha256-FfckN+qhSqbc8jnL0xThdAMQEgluocSY1ksEyT8rFFY=";
buildAndTestSubdir = "bin";
buildFeatures = [
"sqlite" "resolver" "recursor"
"prometheus-metrics" "dnssec-ring"
];
nativeBuildInputs = [ pkgs.pkg-config ];
buildInputs = [ pkgs.openssl pkgs.sqlite ];
doCheck = false;
meta.mainProgram = "hickory-dns";
};
A few things to note. buildAndTestSubdir = "bin" is critical — the workspace has many crates and you only want the server binary. The buildFeatures list is the whole reason you’re building from source: sqlite for DDNS journal zones, recursor for upstream resolution, prometheus-metrics for monitoring, and dnssec-ring for TSIG key verification on dynamic updates. Tests are disabled because they require network access — doCheck = false and move on.
Generating zone files from a host registry
Hard-coding zone files is fine for five hosts. At twenty, with multiple VLANs, reverse zones, and IPv6, it’s a maintenance disaster. The better approach is generating everything from a single Nix attrset — a host registry:
hosts = {
myhost = { ip = "10.0.1.10"; mac = "aa:bb:cc:dd:ee:ff"; };
another = { ip = "10.0.2.20"; ip6 = "fd00:255:101::20";
dns = [ "another" "alias" ]; };
};
Every host has an IP and optional fields — MAC for DHCP reservations, ip6 for IPv6, dns for additional names. From this, you generate forward zones, reverse zones, and DHCP configs. One source of truth, zero drift.
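The zone generators below also reference a hostDnsNames helper that isn't shown. A minimal sketch of what it might look like, hypothetical but consistent with how it's used later:

```nix
# Hypothetical helper, inferred from usage: a host's DNS names are its
# explicit dns list when present, otherwise just its registry attribute name.
hostDnsNames = name: host: host.dns or [ name ];
```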
Forward zone (A and AAAA records)
The SOA boilerplate goes into a helper:
mkSoa = zone: ''
$ORIGIN ${zone}.
$TTL 3600
@ IN SOA ns1.${domain}. admin.${domain}. (
1 ; serial
3600 ; refresh
900 ; retry
604800 ; expire
300 ; minimum
)
@ IN NS ns1.${domain}.
'';
Then you filter and map:
isInZone = name: !(lib.hasInfix "." name);
hostARecords = lib.flatten (lib.mapAttrsToList (name: host:
let names = builtins.filter isInZone (hostDnsNames name host);
in map (n: "${n} IN A ${host.ip}") names
) network.hosts);
hostAAAARecords = lib.optionals network.enableIPv6ULA (lib.flatten (
lib.mapAttrsToList (name: host:
let names = builtins.filter isInZone (hostDnsNames name host);
in map (n: "${n} IN AAAA ${host.ip6}") names
) hostsWithIp6));
forwardZoneContent =
mkSoa domain
+ "ns1 IN A ${gateway.ip}\n"
+ lib.concatStringsSep "\n" (hostARecords ++ extraARecords ++ hostAAAARecords)
+ "\n";
forwardZoneFile = pkgs.writeText "${domain}.zone" forwardZoneContent;
The isInZone filter catches an easy mistake — names with dots in them (like k8s-master.local) belong in other domains and can’t be served from this authoritative zone. Filter them out at the Nix level instead of debugging broken DNS resolution later.
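Rendered, the forward zone for the two example hosts would look roughly like this (illustrative values: domain example.net, gateway at 10.0.0.1; the alias name comes from another's dns list):

```text
$ORIGIN example.net.
$TTL 3600
@ IN SOA ns1.example.net. admin.example.net. (
    1 ; serial
    3600 ; refresh
    900 ; retry
    604800 ; expire
    300 ; minimum
)
@ IN NS ns1.example.net.
ns1 IN A 10.0.0.1
myhost IN A 10.0.1.10
another IN A 10.0.2.20
alias IN A 10.0.2.20
another IN AAAA fd00:255:101::20
alias IN AAAA fd00:255:101::20
```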
IPv4 reverse zones (PTR records per /24 subnet)
Reverse zones are per-subnet, so you group hosts by their first three octets:
hostsBySubnet = lib.groupBy (h:
let parts = lib.splitString "." h.ip;
in "${builtins.elemAt parts 0}.${builtins.elemAt parts 1}.${builtins.elemAt parts 2}"
) allHostEntries;
mkReverseZone = subnet: entries:
let
parts = lib.splitString "." subnet;
revSubnet = "${builtins.elemAt parts 2}.${builtins.elemAt parts 1}.${builtins.elemAt parts 0}";
zoneName = "${revSubnet}.in-addr.arpa";
records = map (h:
let lastOctet = lib.last (lib.splitString "." h.ip);
in "${lastOctet} IN PTR ${h.dnsName}.${domain}."
) entries;
in {
name = zoneName;
content = mkSoa zoneName
+ "ns1.${domain}. IN A ${gateway.ip}\n"
+ lib.concatStringsSep "\n" records + "\n";
};
reverseZones = lib.mapAttrsToList mkReverseZone hostsBySubnet;
Each /24 subnet gets its own .in-addr.arpa zone with PTR records pointing back to the canonical hostname. Add a host to the registry, rebuild, and forward and reverse DNS update together.
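Concretely, for myhost at 10.0.1.10 the generated 1.0.10.in-addr.arpa zone would contain something like this (illustrative values: domain example.net, gateway at 10.0.0.1):

```text
$ORIGIN 1.0.10.in-addr.arpa.
$TTL 3600
@ IN SOA ns1.example.net. admin.example.net. (
    1 ; serial
    3600 ; refresh
    900 ; retry
    604800 ; expire
    300 ; minimum
)
@ IN NS ns1.example.net.
ns1.example.net. IN A 10.0.0.1
10 IN PTR myhost.example.net.
```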
IPv6 reverse zones (nibble-based PTR records)
IPv6 reverse DNS is where things get tedious. Each address expands to individual hex nibbles, reversed and dot-separated under .ip6.arpa. A helper does the expansion:
ip6Nibbles = addr:
let expanded = network.expandIp6 addr;
in lib.stringToCharacters (lib.replaceStrings [":"] [""] expanded);
ip6ZoneName = addr:
let
nibbles = ip6Nibbles addr;
rev12 = lib.concatStringsSep "." (lib.reverseList (lib.take 12 nibbles));
in "${rev12}.ip6.arpa";
The first 12 nibbles (48 bits) form the zone name — this covers a /48 prefix. The remaining 20 nibbles become the host part of each PTR record:
mkIp6ReverseZone = zoneName: entries:
let
records = map (h:
let
nibbles = ip6Nibbles h.ip6;
allRev = lib.reverseList nibbles;
relName = lib.concatStringsSep "." (lib.take 20 allRev);
in "${relName} IN PTR ${h.dnsName}.${domain}."
) entries;
in {
name = zoneName;
content = mkSoa zoneName
+ "ns1.${domain}. IN A ${gateway.ip}\n"
+ lib.concatStringsSep "\n" records + "\n";
};
ip6ReverseZones = lib.optionals network.enableIPv6ULA
(lib.mapAttrsToList mkIp6ReverseZone hostsByIp6Zone);
Manually maintaining nibble-format PTR records for even a dozen IPv6 hosts is a recipe for typos. Generating them from the registry makes it mechanical.
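To make the nibble mechanics concrete, here is how another's fd00:255:101::20 works out, assuming network.expandIp6 produces the full uncompressed form:

```text
fd00:255:101::20
expanded:  fd00:0255:0101:0000:0000:0000:0000:0020
zone name (first 12 nibbles, reversed):
  1.0.1.0.5.5.2.0.0.0.d.f.ip6.arpa
PTR record (remaining 20 nibbles, reversed):
  0.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR another.example.net.
```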
Splitting zones: static vs DDNS-updatable
Here’s the key architectural decision. Reverse zones for subnets with DHCP clients need to accept dynamic updates — Kea D2 will send RFC 2136 updates to create PTR records when leases are handed out. But reverse zones for static-only subnets should be plain zone files, read-only. You split them at the Nix level:
ddnsSubnets = [ "10.0.1" "10.0.2" "10.0.3" ];
zoneToSubnet = zoneName:
let
stripped = lib.removeSuffix ".in-addr.arpa" zoneName;
parts = lib.splitString "." stripped;
in "${builtins.elemAt parts 2}.${builtins.elemAt parts 1}.${builtins.elemAt parts 0}";
isDdnsReverseZone = z: builtins.elem (zoneToSubnet z.name) ddnsSubnets;
ddnsReverseZones = builtins.filter isDdnsReverseZone reverseZones;
staticReverseZones = builtins.filter (z: !(isDdnsReverseZone z)) reverseZones;
Same split for IPv6 — DHCP-served subnets get sqlite-backed DDNS zones, management subnets get static files. This matters because DDNS zones require sqlite storage, TSIG key configuration, and journal files. You don’t want that overhead on zones that never change.
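A sketch of what the IPv6 side of that split could look like; the ddnsIp6Zones list and the ULA prefix names are hypothetical stand-ins:

```nix
# Hypothetical: the ip6.arpa zone names that belong to DHCPv6-served prefixes
ddnsIp6Zones = [ (ip6ZoneName wiredUlaPrefix) (ip6ZoneName wifiUlaPrefix) ];
isDdnsIp6Zone = z: builtins.elem z.name ddnsIp6Zones;
ddnsIp6ReverseZones = builtins.filter isDdnsIp6Zone ip6ReverseZones;
staticIp6ReverseZones = builtins.filter (z: !(isDdnsIp6Zone z)) ip6ReverseZones;
```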
The DDNS forward zones start empty — they’re populated at runtime by Kea:
ddnsZoneContent =
mkSoa "dhcp.${domain}"
+ "ns1.${domain}. IN A ${gateway.ip}\n";
guestZoneContent =
mkSoa "guest.${domain}"
+ "ns1.${domain}. IN A ${gateway.ip}\n";
TOML configuration
With all the zone files generated, the TOML config ties everything together. Four categories of zones, plus a recursor for upstream resolution:
listen_addrs_ipv4 = ["10.0.0.1", "10.0.1.1", "10.0.2.1", "127.0.0.1"]
listen_addrs_ipv6 = ["fd00:255:100::1", "::1"]
listen_port = 53
directory = "/var/lib/hickory-dns"
tcp_request_timeout = 5
allow_networks = ["10.0.0.0/8", "127.0.0.0/8", "fd00::/8", "::1/128"]
prometheus_listen_addr = "127.0.0.1:9153"
# Static authoritative forward zone
[[zones]]
zone = "example.net."
zone_type = "Primary"
file = "/nix/store/...-example.net.zone"
# DDNS forward zone (sqlite + TSIG)
[[zones]]
zone = "dhcp.example.net."
zone_type = "Primary"
[zones.stores]
type = "sqlite"
zone_path = "/nix/store/...-dhcp.example.net.zone"
journal_path = "/var/lib/hickory-dns/dhcp.example.net.jrnl"
allow_update = true
[[zones.stores.tsig_keys]]
name = "kea-ddns-key."
algorithm = "hmac-sha256"
key_file = "/var/lib/hickory-dns/tsig-key.bin"
# Root hints recursor (upstream resolution)
[[zones]]
zone = "."
zone_type = "External"
[zones.stores]
type = "recursor"
roots = "/nix/store/...-root.hints"
ns_cache_size = 1024
record_cache_size = 1048576
A few important details. You must list every interface IP explicitly in listen_addrs_ipv4 — hickory-dns doesn’t support 0.0.0.0 binding reliably. The allow_networks list restricts which clients can query. DDNS zones use type = "sqlite" with allow_update = true and a TSIG key for authentication. The root hints file comes from pkgs.dns-root-data. And record_cache_size = 1048576 gives the recursor a generous cache — one million entries.
The Nix side generates this TOML with string interpolation, rendering static reverse zones as plain file zones and DDNS reverse zones with the sqlite/TSIG configuration:
# Static reverse zones — plain file, read-only
staticReverseZoneToml = lib.concatMapStringsSep "\n" (z: ''
[[zones]]
zone = "${z.name}."
zone_type = "Primary"
file = "${reverseZoneFilesByName.${z.name}}"
'') staticReverseZones;
# DDNS reverse zones — sqlite + journal + TSIG key
ddnsReverseZoneToml = lib.concatMapStringsSep "\n" (z: ''
[[zones]]
zone = "${z.name}."
zone_type = "Primary"
[zones.stores]
type = "sqlite"
zone_path = "${reverseZoneFilesByName.${z.name}}"
journal_path = "${dataDir}/${z.name}.jrnl"
allow_update = true
${tsigKeyToml}
'') ddnsReverseZones;
Same pattern for IPv6 — static and DDNS variants, generated from the zone split you did earlier.
systemd service with hardening
The service definition is straightforward but the hardening is thorough:
users.users.hickory-dns = {
isSystemUser = true;
group = "hickory-dns";
};
users.groups.hickory-dns = {};
systemd.services.hickory-dns = {
description = "hickory-dns DNS server";
after = [ "network-online.target" ];
wants = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStartPre = "${pkgs.bash}/bin/bash -c '${pkgs.coreutils}/bin/base64 -d \
< ${config.sops.secrets.hickory_dns_private_key.path} \
> ${tsigKeyRawPath}'";
ExecStart = "${hickory-dns}/bin/hickory-dns -c ${configFile}";
User = "hickory-dns";
Group = "hickory-dns";
StateDirectory = "hickory-dns";
AmbientCapabilities = [ "CAP_NET_BIND_SERVICE" ];
CapabilityBoundingSet = [ "CAP_NET_BIND_SERVICE" ];
LockPersonality = true;
MemoryDenyWriteExecute = true;
NoNewPrivileges = true;
PrivateDevices = true;
PrivateTmp = true;
ProtectClock = true;
ProtectControlGroups = true;
ProtectHome = true;
ProtectHostname = true;
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
ProtectSystem = "strict";
ReadWritePaths = [ dataDir ];
RestrictAddressFamilies = [ "AF_INET" "AF_INET6" "AF_UNIX" ];
RestrictNamespaces = true;
RestrictRealtime = true;
SystemCallArchitectures = "native";
};
};
The ExecStartPre step decodes the TSIG key from a sops-nix secret before the server starts — the raw key file lives in the state directory and is readable only by the service user. CAP_NET_BIND_SERVICE is the only capability needed (port 53). StateDirectory = "hickory-dns" tells systemd to create /var/lib/hickory-dns owned by the service user. ProtectSystem = "strict" plus ReadWritePaths means only the state dir is writable — for sqlite journals and the decoded TSIG key. Everything else is locked down.
Kea DHCP-DDNS integration
This is where most of the debugging happens. Three pieces need to align: Kea’s DHCP servers, the D2 daemon, and hickory-dns. Get one wrong and you’ll have hostnames that resolve for some devices but not others, or PTR records that point to the wrong zone.
Skipping static reservations
Hosts with static DNS entries in the forward zone must not get DDNS records. Otherwise Kea creates myhost.dhcp.example.net entries that shadow the authoritative myhost.example.net records — or worse, you get duplicate names in different zones with different TTLs and spend an evening figuring out why dig returns different answers depending on which resolver cache you hit.
The fix: tag all static reservations with a SKIP_DDNS client class:
skipDdnsReservations = rs: map (r: r // { client-classes = [ "SKIP_DDNS" ]; }) rs;
# Applied to both DHCPv4 and DHCPv6 reservations
reservations = skipDdnsReservations network.dhcpReservations;
This works because Kea’s ddns_tuning hooks library respects the class and suppresses DDNS updates for matching clients:
hooks-libraries = [{
library = "${pkgs.kea}/lib/kea/hooks/libdhcp_ddns_tuning.so";
parameters = {};
}];
Load this hook in both dhcp4 and dhcp6 settings.
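With the NixOS kea module, loading it in both servers might look like this (a sketch; the option paths assume the stock services.kea module):

```nix
services.kea = {
  dhcp4.settings.hooks-libraries = [{
    library = "${pkgs.kea}/lib/kea/hooks/libdhcp_ddns_tuning.so";
    parameters = { };
  }];
  dhcp6.settings.hooks-libraries = [{
    library = "${pkgs.kea}/lib/kea/hooks/libdhcp_ddns_tuning.so";
    parameters = { };
  }];
};
```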
D2 config with TSIG and reverse DNS
The Kea D2 daemon handles the actual RFC 2136 updates. Its config contains the TSIG key secret, so it’s rendered via a sops template:
systemd.services.kea-dhcp-ddns-server = {
after = [ "hickory-dns.service" ];
wants = [ "hickory-dns.service" ];
serviceConfig.ExecStart = lib.mkForce
"${pkgs.kea}/bin/kea-dhcp-ddns -c ${config.sops.templates."kea-dhcp-ddns.conf".path}";
};
The template itself defines forward and reverse DDNS domains:
sops.templates."kea-dhcp-ddns.conf" = {
mode = "0444";
restartUnits = [ "kea-dhcp-ddns-server.service" ];
content = builtins.toJSON {
DhcpDdns = {
ip-address = "127.0.0.1";
port = 53001;
dns-server-timeout = 3000;
tsig-keys = [{
name = "kea-ddns-key.";
algorithm = "HMAC-SHA256";
secret = config.sops.placeholder.hickory_dns_private_key;
}];
forward-ddns = {
ddns-domains = [
{
name = "dhcp.example.net.";
key-name = "kea-ddns-key.";
dns-servers = [{ ip-address = "127.0.0.1"; port = 53; }];
}
{
name = "guest.example.net.";
key-name = "kea-ddns-key.";
dns-servers = [{ ip-address = "127.0.0.1"; port = 53; }];
}
];
};
reverse-ddns = {
ddns-domains = [
# IPv4 reverse zones for DHCP subnets
{
name = "1.0.10.in-addr.arpa.";
key-name = "kea-ddns-key.";
dns-servers = [{ ip-address = "127.0.0.1"; port = 53; }];
}
{
name = "2.0.10.in-addr.arpa.";
key-name = "kea-ddns-key.";
dns-servers = [{ ip-address = "127.0.0.1"; port = 53; }];
}
# IPv6 reverse zones (generated from ULA prefix)
{
name = wiredIp6RevZone;
key-name = "kea-ddns-key.";
dns-servers = [{ ip-address = "127.0.0.1"; port = 53; }];
}
{
name = wifiIp6RevZone;
key-name = "kea-ddns-key.";
dns-servers = [{ ip-address = "127.0.0.1"; port = 53; }];
}
];
};
};
};
};
Gotcha worth highlighting: because ExecStart is overridden with lib.mkForce to point at the sops template, the NixOS kea module’s own restart triggers no longer cover the actual config. Without restartUnits on the sops template, changing the D2 config deploys a new template file but the running D2 process keeps the old one. The restartUnits = [ "kea-dhcp-ddns-server.service" ] line tells sops-nix to restart D2 whenever the rendered template content changes. This pattern is needed any time you mkForce a service’s ExecStart to use a sops template.
Per-subnet DDNS settings
Each VLAN gets its own qualifying suffix — or no DDNS at all:
# WiFi and wired subnets — register in dhcp.example.net
ddns-send-updates = true;
ddns-qualifying-suffix = "dhcp.example.net.";
# Guest subnet — register in guest.example.net
ddns-send-updates = true;
ddns-qualifying-suffix = "guest.example.net.";
# Camera and management subnets — no DDNS
ddns-send-updates = false;
Global DDNS settings that apply to both DHCPv4 and DHCPv6:
ddns-override-client-update = true;
ddns-replace-client-name = "never";
ddns-update-on-renew = true;
hostname-char-set = "[^A-Za-z0-9.-]";
hostname-char-replacement = "-";
The hostname-char-set and replacement ensure that devices sending garbage hostnames — and they will — get sanitized into valid DNS labels.
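For example, a phone announcing a hostname full of spaces and punctuation would, with each invalid character replaced, come out roughly like this (illustrative, not a captured lease):

```text
Tim's iPad (2)  ->  Tim-s-iPad--2-
registered as:      Tim-s-iPad--2-.dhcp.example.net.
```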
Search domains per VLAN
Each network gets appropriate search domains so short names resolve without qualification:
# WiFi and wired: dhcp.example.net, example.net, parent.net
searchDomainWifiWired = {
code = 119;
data = "dhcp.example.net, example.net, parent.net";
name = "domain-search";
space = "dhcp4";
};
# Management: example.net, parent.net (no dhcp subdomain)
searchDomainMgnt = {
code = 119;
data = "example.net, parent.net";
name = "domain-search";
space = "dhcp4";
};
# Guest: only guest.example.net
searchDomainGuest = {
code = 119;
data = "guest.example.net";
name = "domain-search";
space = "dhcp4";
};
Trusted VLANs search both the DDNS zone and the authoritative zone — so ssh myhost resolves whether myhost got its name from a static zone entry or a DHCP lease. Guest gets only its own zone. Management doesn’t need the DDNS subdomain because everything on that VLAN has a static reservation.
The complete data flow
When it all comes together:
- Client gets a DHCP lease from Kea (v4 or v6)
- If ddns-send-updates = true for that subnet and the client isn’t tagged SKIP_DDNS, Kea sends a name change request (NCR) to the D2 daemon (port 53001)
- D2 sends a TSIG-authenticated RFC 2136 forward update to hickory-dns (A/AAAA in the dhcp. or guest. zone)
- D2 sends a TSIG-authenticated reverse update to hickory-dns (PTR in the .in-addr.arpa or .ip6.arpa zone)
- Hickory-dns validates the TSIG signature and writes to the sqlite journal
- Static reservations — servers with known IPs — only appear in the authoritative forward zone, never duplicated in the DDNS zone
Prometheus monitoring
The prometheus-metrics build feature exposes metrics on the address you configured. Scrape config is minimal:
services.prometheus.scrapeConfigs = [{
job_name = "hickory-dns";
honor_labels = true;
static_configs = [{
targets = [ "127.0.0.1:9153" ];
}];
}];
The metrics you get are genuinely useful:
- hickory_request_record_types_total — query types (A, AAAA, PTR, HTTPS, MX, SRV, TXT, DS, DNSKEY, SOA, NS)
- hickory_response_codes_total — response codes (NOERROR, NXDOMAIN, SERVFAIL, etc.)
- hickory_request_protocols_total — TCP vs UDP split
- hickory_recursor_cache_hit_total / hickory_recursor_cache_miss_total — cache effectiveness
- hickory_recursor_cache_hit_duration_seconds_bucket / hickory_recursor_cache_miss_duration_seconds_bucket — latency histograms
- hickory_recursor_response_cache_size / hickory_recursor_name_server_cache_size — cache fill levels
- hickory_recursor_in_flight_queries — concurrent query count
- hickory_zone_lookups_total — per-zone lookups by handler and success
- hickory_zone_records_total — record count per zone
- Standard process metrics — RSS, virtual memory, CPU seconds, threads, open FDs
Grafana dashboard
There is no pre-made Grafana dashboard for hickory-dns, so I built one from scratch by reading the metrics endpoint. It has four sections and fourteen panels.
Overview
- Query Rate (by type) — stacked area chart: sum by (type) (rate(hickory_request_record_types_total{job="hickory-dns", type=~"a|aaaa|ptr|https|mx|srv|txt|ds|dnskey|soa|ns"}[$__rate_interval]))
- Response Rate (by rcode) — sum by (code) (rate(hickory_response_codes_total{job="hickory-dns"}[$__rate_interval])) > 0
- Recursor Latency — p50/p95 from histogram_quantile on both hickory_recursor_cache_miss_duration_seconds_bucket and hickory_recursor_cache_hit_duration_seconds_bucket
- Requests by Protocol — sum by (protocol) (rate(hickory_request_protocols_total{job="hickory-dns"}[$__rate_interval]))
Cache and recursor
- Cache Hit/Miss Rate — rate(hickory_recursor_cache_hit_total{...}[$__rate_interval]) and the miss equivalent
- Cache Hit Ratio — gauge panel: hit_rate / (hit_rate + miss_rate) with thresholds: red below 50%, yellow 50–80%, green above 80%
- Cache Sizes & In-Flight — hickory_recursor_response_cache_size, hickory_recursor_name_server_cache_size, hickory_recursor_in_flight_queries
Zones
- Zone Lookups (by handler) — sum by (zone_handler, success) (rate(hickory_zone_lookups_total{...}[$__rate_interval]))
- Zone Records Count — hickory_zone_records_total with legend format {{zone_handler}} {{type}} {{role}}
Process
- Memory Usage — RSS vs virtual
- CPU / Threads / FDs — rate(process_cpu_seconds_total{...}[$__rate_interval]), process_threads, process_open_fds
Design decisions: a ${datasource} template variable for Prometheus datasource selection, $__rate_interval everywhere for proper rate calculations, stacked area charts for rates, line charts for latencies, table legends with mean/max calcs. Default time range is six hours.
Firewall rules
DNS needs to be reachable from every VLAN that should resolve. For nftables:
# Trusted networks (full access)
ip saddr 10.0.0.0/8 tcp dport 53 counter accept comment "lan dns tcp"
ip saddr 10.0.0.0/8 udp dport 53 counter accept comment "lan dns udp"
# Isolated networks (DNS + DHCP only, everything else dropped)
iifname "guest" udp dport { 53, 67, 68 } counter accept comment "guest dns+dhcp"
iifname "guest" tcp dport 53 counter accept comment "guest dns tcp"
iifname "camera" udp dport { 53, 67, 68 } counter accept comment "camera dns+dhcp"
iifname "camera" tcp dport 53 counter accept comment "camera dns tcp"
Guest and camera networks are isolated — they can reach DNS and DHCP, nothing else. Trusted interfaces get full access. The DNS server is the gateway, so it’s reachable on every VLAN interface without extra routing.
Wrapping up
The whole setup is one Nix module that generates zone files from a host registry, builds hickory-dns with the right features, renders a TOML config, runs a hardened systemd service, and wires up Kea D2 for dynamic updates. Add a host to the registry, rebuild, and forward DNS, reverse DNS, and DHCP reservations all update together. The Prometheus metrics are there from day one, and the Grafana dashboard — since nobody else has built one yet — gives you query rates, cache hit ratios, recursor latency, and per-zone lookup breakdowns. It’s a lot of moving parts, but once it’s declarative, the moving parts stop being your problem.