Mantrinol.

Centrally managed Restic backups, built in plain Go.

Restic does the work — encrypted, deduplicated, snapshot-based. Mantrinol does the orchestration: a small agent on every machine, a single control plane to see them all, and a NATS spine that holds the loop together. Cross-platform from day one — no Windows-first then ports later. The agent has no excuse to drift.

01  Schematic

How it fits together.

Three parts. One loop.
NODE A
Agent

One Go binary per machine. Runs as a service. Wraps the Restic CLI, schedules jobs, persists everything to a local SQLite. Serves a small portal on 127.0.0.1:8080.

Win · Mac · Linux · Pure Go
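
For flavour, a minimal sketch of the agent loop: a loopback-only portal plus a naive schedule that shells out to restic. The /healthz route, paths and interval are illustrative, not Mantrinol's actual code.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// Local portal: bound to 127.0.0.1 only, never exposed on the LAN.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	go func() {
		log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
	}()

	// Naive ticker standing in for the real job scheduler.
	ticker := time.NewTicker(6 * time.Hour)
	defer ticker.Stop()
	for range ticker.C {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Hour)
		// Repository location and passwords reach restic via its own
		// environment variables, omitted here.
		cmd := exec.CommandContext(ctx, "restic", "backup", "--json", "/home")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Printf("backup failed: %v\n%s", err, out)
		}
		cancel()
	}
}
```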
NODE B · CENTRAL
Control plane

Tenant registry, agent inventory, job history. Issues commands over a NATS spine, watches the lifecycle traffic come back, and surfaces the lot in a single portal.

Postgres · NATS · Single binary
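
A rough sketch of the command path, assuming a per-agent subject shaped like mantrinol.agent.&lt;id&gt;.cmd and an illustrative command envelope; the real wire format and subject layout are Mantrinol's own.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

// BackupCommand is an illustrative envelope, not the production schema.
type BackupCommand struct {
	JobID    string    `json:"job_id"`
	Paths    []string  `json:"paths"`
	IssuedAt time.Time `json:"issued_at"`
}

func main() {
	nc, err := nats.Connect("nats://central:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	cmd := BackupCommand{JobID: "job-42", Paths: []string{"/home"}, IssuedAt: time.Now().UTC()}
	payload, _ := json.Marshal(cmd)

	// One subject per agent; JetStream holds the message if the agent
	// is offline, so the command is delivered on reconnect.
	agentID := "node-a"
	subject := fmt.Sprintf("mantrinol.agent.%s.cmd", agentID)
	if _, err := js.Publish(subject, payload); err != nil {
		log.Fatal(err)
	}
}
```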
NODE C
Repository

Where the bytes actually live. Local disk, NAS over UNC, S3, B2, anything Restic supports. Two distinct passwords kept apart — encryption is not network credentials.

Restic-native · Bring your own
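
A sketch of how the two secrets might stay apart when the agent invokes restic against an S3 backend. The environment variables are restic's and AWS's; the RepoConfig struct and its field names are made up for illustration.

```go
package main

import (
	"context"
	"os"
	"os/exec"
)

// RepoConfig keeps the two secrets in distinct fields so they can be
// stored, rotated and audited separately. Illustrative only.
type RepoConfig struct {
	URL           string // e.g. "s3:s3.amazonaws.com/my-bucket/backups"
	EncryptionPwd string // restic repository password: encrypts the data
	AccessKeyID   string // network credential: only lets restic reach the bucket
	SecretKey     string
}

func resticCmd(ctx context.Context, cfg RepoConfig, args ...string) *exec.Cmd {
	cmd := exec.CommandContext(ctx, "restic", args...)
	cmd.Env = append(os.Environ(),
		"RESTIC_REPOSITORY="+cfg.URL,
		"RESTIC_PASSWORD="+cfg.EncryptionPwd, // decrypts snapshots
		"AWS_ACCESS_KEY_ID="+cfg.AccessKeyID, // authenticates to S3, nothing more
		"AWS_SECRET_ACCESS_KEY="+cfg.SecretKey,
	)
	return cmd
}

func main() {
	cfg := RepoConfig{
		URL:           "s3:s3.amazonaws.com/my-bucket/backups",
		EncryptionPwd: os.Getenv("REPO_ENCRYPTION_PASSWORD"),
		AccessKeyID:   os.Getenv("AWS_ACCESS_KEY_ID"),
		SecretKey:     os.Getenv("AWS_SECRET_ACCESS_KEY"),
	}
	_ = resticCmd(context.Background(), cfg, "snapshots").Run()
}
```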
02  Specifications

What it actually does.

Things that ship today.
i

One binary, three platforms

A single Go source tree cross-compiles to Windows, macOS and Linux with CGO_ENABLED=0. Pure-Go dependencies throughout — no platform forks, no second-class citizens.

ii

Restic under the hood

The backup engine is Restic — proven, audited, encrypted by default. Mantrinol orchestrates it, never replaces it. You can read the snapshots with stock restic if Mantrinol ever falls over.

iii

Two portals, one truth

A local portal on every agent for hands-on configuration, and a central portal for the fleet view. Same letterpress instrument-panel design — both look like they came out of the same workshop because they did.

iv

NATS spine, JetStream-durable

Commands and status flow over per-agent NATS subjects. JetStream holds messages while agents are offline. Reconnects use exponential backoff with jitter; the UI shows “offline since 4m” until the link comes back.
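
Roughly what the agent side of that looks like with the nats.go client: reconnect with capped exponential backoff plus jitter, and a durable JetStream subscription so queued commands replay when the link comes back. Subject, consumer name and backoff numbers are illustrative, not Mantrinol's actual configuration.

```go
package main

import (
	"log"
	"math/rand"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	agentID := "node-a" // illustrative

	nc, err := nats.Connect("nats://central:4222",
		nats.RetryOnFailedConnect(true),
		nats.MaxReconnects(-1), // keep trying forever
		// Exponential backoff with jitter, capped at roughly two minutes.
		nats.CustomReconnectDelay(func(attempts int) time.Duration {
			if attempts > 7 {
				attempts = 7
			}
			d := time.Duration(1<<uint(attempts)) * time.Second
			return d + time.Duration(rand.Int63n(int64(time.Second)))
		}),
		nats.DisconnectErrHandler(func(_ *nats.Conn, err error) {
			log.Printf("link down: %v", err) // UI flips to "offline since …"
		}),
		nats.ReconnectHandler(func(_ *nats.Conn) {
			log.Print("link restored")
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Durable consumer: JetStream replays anything published while the
	// agent was offline, so commands are not lost to a reboot.
	_, err = js.Subscribe("mantrinol.agent."+agentID+".cmd", func(m *nats.Msg) {
		log.Printf("command: %s", m.Data)
		m.Ack()
	}, nats.Durable("agent-"+agentID), nats.ManualAck())
	if err != nil {
		log.Fatal(err)
	}
	select {} // run until the service manager stops us
}
```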

v

Airgap-tolerant by design

Every repo, job and override is persisted on the agent. When the control plane is unreachable the schedule keeps running; when it comes back, inventory reconciles. No server, no panic.
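
A sketch of the local persistence that makes this work, assuming the pure-Go modernc.org/sqlite driver to stay consistent with CGO_ENABLED=0; the job_runs table, its columns and the synced flag are illustrative.

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "modernc.org/sqlite" // pure-Go driver, keeps CGO_ENABLED=0
)

func main() {
	db, err := sql.Open("sqlite", "/var/lib/mantrinol/agent.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	_, err = db.Exec(`CREATE TABLE IF NOT EXISTS job_runs (
		id         INTEGER PRIMARY KEY AUTOINCREMENT,
		job_id     TEXT NOT NULL,
		started_at TEXT NOT NULL,
		status     TEXT NOT NULL,
		synced     INTEGER NOT NULL DEFAULT 0 -- 0 until the control plane has seen it
	)`)
	if err != nil {
		log.Fatal(err)
	}

	// Record the run immediately; reconciliation later flips synced to 1
	// for everything the control plane acknowledges after an outage.
	_, err = db.Exec(
		`INSERT INTO job_runs (job_id, started_at, status) VALUES (?, ?, ?)`,
		"job-42", time.Now().UTC().Format(time.RFC3339), "running",
	)
	if err != nil {
		log.Fatal(err)
	}
}
```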

vi

Auth that actually exists

Bearer tokens on the admin API, HMAC-SHA256 over agent-to-server NATS payloads, secrets pulled from Infisical at startup. NATS JWT per-agent isolation is the next layer up.
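
The HMAC part in miniature: sign on the agent, verify on the server with a constant-time compare. Key sourcing and message framing here are illustrative, not the production layout.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// sign computes an HMAC-SHA256 over a payload with the agent's shared secret.
func sign(secret, payload []byte) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write(payload)
	return hex.EncodeToString(mac.Sum(nil))
}

// verify recomputes the MAC and compares in constant time, so timing
// does not leak anything about the expected value.
func verify(secret, payload []byte, signature string) bool {
	expected, err := hex.DecodeString(signature)
	if err != nil {
		return false
	}
	mac := hmac.New(sha256.New, secret)
	mac.Write(payload)
	return hmac.Equal(mac.Sum(nil), expected)
}

func main() {
	secret := []byte("per-agent-shared-secret") // pulled from Infisical in practice
	payload := []byte(`{"job_id":"job-42","status":"done"}`)

	sig := sign(secret, payload)
	fmt.Println("signature:", sig)
	fmt.Println("verified:", verify(secret, payload, sig))
}
```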

03  State of play

Where we are.

Read this before you stake anything on it.
2026 · Q2

Done The end-to-end spine is wired: enrollment, ad-hoc backup, status, persisted job runs. Bearer + HMAC auth shipped. Self-serve install scripts and the central portal are in too.

Now

In flight Phase B — owner model for repos and jobs (local vs server), local-vault encryption of sensitive columns, inventory sync, override governance.

Next

Phase C NATS JWT per-agent credentials so multi-tenant isolation stops being an honour system. Resume in-flight jobs after agent restart.

Later

Phase E Restore workflows in the portal, Keycloak SSO for admins, installer generation, agent auto-update with signed manifests.

Production today: single-org, lean rebuild. Open the control plane ↗