Syncing qBittorrent Ports with ProtonVPN NAT-PMP on NixOS
Everything works until the port changes. You’re running qBittorrent behind ProtonVPN, port forwarding is enabled, peers are connecting — and then the VPN gateway silently reassigns your external port. qBittorrent doesn’t know. Peers try the old port, get nothing, and your swarm participation drops to zero. You might not notice for hours.
ProtonVPN handles port forwarding through NAT-PMP (RFC 6886). The gateway assigns an external port dynamically — it changes on reconnect and can change mid-session. There’s no static port option. If you want incoming connections, something needs to continuously track the assigned port and tell qBittorrent about it.
Manual configuration doesn’t survive a single VPN reconnect. You need a daemon.
I wrote one — proton-port-sync. If you just want to use it, the README has everything you need. This post is about how it works and why it’s built the way it is.
Why not the natpmp crate
The Rust natpmp crate exists and implements the protocol. It has one critical problem with policy-based routing.
When you run WireGuard with policy routing — say, routing table 51820 matching traffic from 10.2.0.2 — the natpmp crate binds its UDP socket to 0.0.0.0:0. The NAT-PMP request goes out over the VPN tunnel correctly, but the response from the gateway doesn’t route back. The kernel sees a packet for an unbound socket with no interface affinity and has no reason to route it through the VPN’s routing table. The request times out and you’re debugging network issues that don’t exist.
The fix: bind the UDP socket directly to the WireGuard interface IP. That’s the whole trick — the socket is now associated with the VPN interface, responses traverse the correct routing table, and everything works. It’s obvious in hindsight and confusing for about fifteen minutes.
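The fix itself fits in a few lines. This is an illustrative sketch rather than the daemon's exact API, but the essential call is the same:

```rust
use std::net::{IpAddr, SocketAddr, UdpSocket};

// Binding to the WireGuard interface IP (e.g. 10.2.0.2) instead of 0.0.0.0
// gives the socket interface affinity, so the gateway's reply is routed
// back through the tunnel's policy routing table.
fn vpn_bound_socket(wg_local_ip: IpAddr) -> std::io::Result<UdpSocket> {
    // Port 0: let the kernel pick an ephemeral source port
    UdpSocket::bind(SocketAddr::new(wg_local_ip, 0))
}
```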
There’s a second issue. RFC 6886 says internal port 0 means “delete all mappings.” ProtonVPN expects internal port 1 — the actual value is irrelevant since ProtonVPN assigns the port server-side, but it must be non-zero. The natpmp crate sends 0 by default.
Two bugs, both trivial individually, both invisible until you’ve wasted an evening on packet captures.
The protocol
NAT-PMP is refreshingly simple. Twelve bytes out, sixteen bytes back, over UDP to port 5351:
fn request_protocol_mapping(&self, opcode: u8, lifetime_secs: u32) -> Result<u16> {
    let socket = UdpSocket::bind(SocketAddr::new(self.bind_address, 0))?;
    socket.connect(SocketAddr::new(self.gateway, NATPMP_PORT))?;

    let mut request = [0u8; 12];
    request[1] = opcode;
    request[4..6].copy_from_slice(&1u16.to_be_bytes()); // internal port = 1
    request[8..12].copy_from_slice(&lifetime_secs.to_be_bytes());

    // Exponential backoff per RFC 6886: 250ms initial, doubling, up to 9 attempts
    let mut timeout = Duration::from_millis(250);
    for attempt in 0..9 {
        socket.set_read_timeout(Some(timeout))?;
        socket.send(&request)?;

        let mut buf = [0u8; 16];
        match socket.recv(&mut buf) {
            Ok(16) => {
                let result_code = u16::from_be_bytes([buf[2], buf[3]]);
                if result_code != 0 {
                    anyhow::bail!("NAT-PMP error: result code {result_code}");
                }
                let external_port = u16::from_be_bytes([buf[10], buf[11]]);
                return Ok(external_port);
            }
            Ok(n) => anyhow::bail!("Unexpected response size: {n}"),
            Err(_) if attempt < 8 => {
                timeout *= 2;
                continue;
            }
            Err(e) => return Err(e.into()),
        }
    }
    anyhow::bail!("NAT-PMP request timed out after 9 attempts")
}
The request format is version (1 byte), opcode (1 byte), reserved (2 bytes), internal port (2 bytes), suggested external port (2 bytes), and lifetime (4 bytes) — twelve bytes total. The 16-byte response carries a result code, the seconds since the gateway’s mapping state was last reset, and the actual assigned external port and lifetime. The RFC specifies exponential backoff starting at 250ms, doubling each attempt — nine attempts covers about two minutes of retries before giving up.
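The "about two minutes" figure checks out: the worst-case total wait across nine doubling timeouts is the geometric sum 250 · (2⁹ − 1) ms. A one-liner to verify it:

```rust
// Worst-case cumulative wait for RFC 6886-style exponential backoff:
// timeouts of initial, 2*initial, 4*initial, ... across `attempts` tries.
fn total_backoff_ms(initial_ms: u64, attempts: u32) -> u64 {
    (0..attempts).map(|i| initial_ms << i).sum()
}
// total_backoff_ms(250, 9) = 127_750 ms, just over two minutes
```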
The daemon requests both UDP and TCP mappings. If they differ — which shouldn’t happen with ProtonVPN but can with other NAT-PMP gateways — it logs the discrepancy and uses the TCP port, since that’s what qBittorrent primarily uses for peer connections.
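The wrapper over both opcodes can be sketched like this. The closure parameter stands in for the protocol call so the shape is testable in isolation; the real daemon calls its own request method, and the helper name here is illustrative:

```rust
// RFC 6886 opcodes: 1 requests a UDP mapping, 2 requests a TCP mapping.
const OP_MAP_UDP: u8 = 1;
const OP_MAP_TCP: u8 = 2;

// Request both mappings, log any discrepancy, and prefer the TCP port,
// since that's what qBittorrent primarily uses for peer connections.
fn request_both_mappings<F>(mut map: F, lifetime_secs: u32) -> Result<u16, String>
where
    F: FnMut(u8, u32) -> Result<u16, String>,
{
    let udp = map(OP_MAP_UDP, lifetime_secs)?;
    let tcp = map(OP_MAP_TCP, lifetime_secs)?;
    if udp != tcp {
        eprintln!("warning: gateway assigned udp={udp}, tcp={tcp}; using tcp");
    }
    Ok(tcp)
}
```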
The main loop
The core logic is a loop that renews the NAT-PMP mapping, detects port changes, and pushes updates to qBittorrent:
loop {
    match natpmp_client.request_mapping(60) {
        Ok(port) => {
            fail_count = 0;
            if current_port != Some(port) {
                info!(%port, "Port changed, updating qBittorrent");
                qbt.set_listen_port(port).await?;
                current_port = Some(port);
                // Update Prometheus metrics
            }
        }
        Err(e) => {
            warn!(?e, "NAT-PMP renewal failed");
            fail_count += 1;
            if fail_count >= max_failures {
                warn!("Too many failures, restarting WireGuard");
                Command::new("systemctl")
                    .args(["restart", &wg_unit])
                    .status()?;
                fail_count = 0;
                current_port = None;
                sleep(Duration::from_secs(10)).await;
                continue;
            }
            sleep(Duration::from_secs(15)).await;
            continue;
        }
    }
    sleep(Duration::from_secs(renew_interval)).await;
}
Three design choices worth noting:
45-second renewal interval. NAT-PMP mappings are requested with a 60-second lifetime and renewed at 45 seconds. That’s a 15-second buffer — enough to absorb a slow response without the mapping expiring.
3-failure threshold. After three consecutive NAT-PMP failures, the daemon restarts the WireGuard unit. This sounds aggressive, but in practice NAT-PMP failures almost always mean the tunnel is in a bad state. A stale gateway, a half-torn-down connection, a routing table that’s out of sync — restarting WireGuard clears all of it. Three failures at 15 seconds each means you wait 45 seconds before pulling the trigger.
10-second post-restart cooldown. After restarting WireGuard, the daemon sleeps for 10 seconds before retrying. The tunnel needs time to re-establish — handshake, key exchange, routing table update. Hammering NAT-PMP requests during that window just adds noise to the logs.
Talking to qBittorrent
qBittorrent exposes a WebUI API. The daemon uses two endpoints — login and set preferences:
pub struct QbtClient {
    client: reqwest::Client, // with cookie store
    base_url: String,
    username: String,
    password: String,
}

impl QbtClient {
    pub async fn set_listen_port(&self, port: u16) -> Result<()> {
        self.login().await?;
        self.client
            .post(format!("{}/api/v2/app/setPreferences", self.base_url))
            .form(&[("json", format!(r#"{{"listen_port":{port}}}"#))])
            .send().await?
            .error_for_status()?;
        Ok(())
    }

    async fn login(&self) -> Result<()> {
        self.client
            .post(format!("{}/api/v2/auth/login", self.base_url))
            .form(&[("username", &self.username), ("password", &self.password)])
            .send().await?
            .error_for_status()?;
        Ok(())
    }
}
The reqwest client is configured with a cookie store, so the session cookie from login() persists across requests. The daemon re-authenticates on every port change — qBittorrent sessions can expire, and re-logging in is cheaper than tracking session state.
The password comes from a file (--qbt-password-file), loaded once at startup. No passwords in CLI args, no passwords in environment variables. The file path is the only thing that appears in process listings or systemd unit files.
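The startup load is plausibly something like the following sketch (the function name is illustrative). Trimming trailing whitespace matters in practice, since editors and secret managers tend to leave a trailing newline that would break authentication:

```rust
use std::fs;
use std::io;

// Read the qBittorrent WebUI password once at startup.
// trim_end() strips a trailing newline without touching the password itself.
fn load_password(path: &str) -> io::Result<String> {
    Ok(fs::read_to_string(path)?.trim_end().to_string())
}
```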
Prometheus metrics
Six metrics on an optional /metrics endpoint:
proton_port_sync_current_port — currently mapped port (gauge)
proton_port_sync_port_changes_total — total port changes (counter)
proton_port_sync_last_change_timestamp — unix timestamp of last change (gauge)
proton_port_sync_renewals_total — successful NAT-PMP renewals (counter)
proton_port_sync_failures_total — NAT-PMP request failures (counter)
proton_port_sync_wg_restarts_total — WireGuard restarts triggered (counter)
The useful alerts: port_changes_total increasing rapidly means the gateway is reassigning ports faster than expected — possibly a VPN issue. failures_total spiking means the tunnel is unstable. wg_restarts_total going up means the daemon is repeatedly cycling WireGuard, which warrants investigation.
The metrics endpoint is served via axum on a configurable address and port. It’s optional — skip --metrics-addr and the daemon runs without it. But if you’re already running Prometheus, the five minutes to wire it up will save you the next time something goes silently wrong at 3 AM.
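If Prometheus is also managed through NixOS, the alerts described above might look roughly like this. The thresholds are illustrative assumptions, not recommendations; tune them to your gateway's normal churn rate:

```nix
services.prometheus.rules = [ ''
  groups:
    - name: proton-port-sync
      rules:
        - alert: PortChurn
          expr: increase(proton_port_sync_port_changes_total[1h]) > 5
          annotations:
            summary: "VPN gateway reassigning ports unusually often"
        - alert: WireGuardRestartLoop
          expr: increase(proton_port_sync_wg_restarts_total[1h]) > 2
          annotations:
            summary: "Daemon repeatedly cycling WireGuard"
'' ];
```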
The NixOS module
The module wraps all of this in a declarative systemd service:
{ config, lib, pkgs, ... }:
let
  cfg = config.services.proton-port-sync;
in {
  options.services.proton-port-sync = {
    enable = lib.mkEnableOption "proton-port-sync";
    gateway = lib.mkOption { type = lib.types.str; default = "10.2.0.1"; };
    bindAddress = lib.mkOption { type = lib.types.str; default = "10.2.0.2"; };
    qbtUrl = lib.mkOption { type = lib.types.str; default = "http://127.0.0.1:8080"; };
    qbtUser = lib.mkOption { type = lib.types.str; default = "admin"; };
    qbtPasswordFile = lib.mkOption { type = lib.types.path; };
    renewInterval = lib.mkOption { type = lib.types.int; default = 45; };
    maxFailures = lib.mkOption { type = lib.types.int; default = 3; };
    wgUnit = lib.mkOption { type = lib.types.str; default = "wireguard-wg0.service"; };
    metrics = {
      enable = lib.mkEnableOption "Prometheus metrics";
      address = lib.mkOption { type = lib.types.str; default = "127.0.0.1"; };
      port = lib.mkOption { type = lib.types.port; default = 9834; };
    };
  };

  config = lib.mkIf cfg.enable {
    systemd.services.proton-port-sync = {
      description = "Proton VPN NAT-PMP port sync for qBittorrent";
      after = [ "network-online.target" cfg.wgUnit ];
      bindsTo = [ cfg.wgUnit ];
      wants = [ "qbittorrent.service" ];
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
        ExecStart = "${pkgs.proton-port-sync}/bin/proton-port-sync"
          + " --gateway ${cfg.gateway}"
          + " --bind-address ${cfg.bindAddress}"
          + " --qbt-url ${cfg.qbtUrl}"
          + " --qbt-user ${cfg.qbtUser}"
          + " --qbt-password-file \${CREDENTIALS_DIRECTORY}/qbt-password"
          + lib.optionalString cfg.metrics.enable
            " --metrics-addr ${cfg.metrics.address}:${toString cfg.metrics.port}";
        LoadCredential = "qbt-password:${cfg.qbtPasswordFile}";
        Restart = "on-failure";
        RestartSec = "5s";
        # Security hardening
        ProtectSystem = "strict";
        ProtectHome = true;
        NoNewPrivileges = true;
        PrivateTmp = true;
      };
    };
  };
}
The systemd wiring matters more than it looks:
bindsTo ties the service lifecycle to WireGuard. If WireGuard stops — whether from a manual stop, a crash, or a restart — this service stops too. No orphaned daemon sending NAT-PMP requests into the void.
wants qBittorrent as a soft dependency. The daemon starts regardless of whether qBittorrent is running — it’ll just fail to update the port until qBittorrent comes up. A hard dependency (requires) would be wrong here because the daemon should survive a qBittorrent restart without being killed.
LoadCredential handles the password file. systemd copies the file into a private credentials directory, accessible only to the service. The original file path never appears in the process’s environment or arguments — only the path under $CREDENTIALS_DIRECTORY. This is systemd’s credential passing mechanism, and it’s strictly better than EnvironmentFile or passing secrets via CLI args.
The hardening options — ProtectSystem=strict, ProtectHome=true, NoNewPrivileges=true — are standard for a daemon that only needs network access and one credential file. The service can’t write to the filesystem, can’t read home directories, and can’t escalate privileges. If the binary is compromised, the blast radius is minimal.
Wiring it up
Flake input
Add the flake to your inputs with nixpkgs.follows to avoid pulling a second copy of nixpkgs:
inputs = {
  proton-port-sync.url = "github:ijohanne/proton-port-sync";
  proton-port-sync.inputs.nixpkgs.follows = "nixpkgs";
};
Host configuration
Import the module and configure it:
imports = [ proton-port-sync.nixosModules.default ];

services.proton-port-sync = {
  enable = true;
  gateway = "10.2.0.1";
  qbtUser = "admin";
  qbtPasswordFile = config.sops.secrets."qbittorrent/webui_password".path;
  metrics = {
    enable = true;
    address = "10.100.0.10"; # private backhaul IP
    port = 9834;
  };
};
The qbtPasswordFile points to a sops-nix secret. On activation, sops-nix decrypts it to /run/secrets/qbittorrent/webui_password, and systemd’s LoadCredential picks it up from there. The password never appears in your Nix configuration, never lands in the Nix store, and only exists decrypted in a tmpfs mount.
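For completeness, the matching sops-nix declaration might look like this. The sopsFile path is an assumption about your repository layout:

```nix
sops.secrets."qbittorrent/webui_password" = {
  # Assumption: secrets live in ./secrets.yaml, encrypted with your age/PGP keys
  sopsFile = ./secrets.yaml;
  # No owner/group override needed: systemd's LoadCredential reads the
  # decrypted file as root and hands the service a private copy.
};
```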
Prometheus scraping
On your monitoring host, add a scrape target:
services.prometheus.scrapeConfigs = [
  {
    job_name = "proton-port-sync";
    honor_labels = true;
    static_configs = [
      {
        targets = [ "10.100.0.10:9834" ];
        labels = { instance = "myhost"; };
      }
    ];
  }
];
That’s the full stack — NAT-PMP renewals, automatic qBittorrent updates, WireGuard failure recovery, Prometheus metrics, and sops-nix secrets management, all declared in a few dozen lines of Nix. The daemon itself is about 300 lines of Rust. The interesting part wasn’t the protocol or the API integration — it was the one-line socket bind fix that makes it actually work behind policy routing.