FAQ
Does it work with Consul Enterprise?
Yes. Consul Guardian uses the standard Consul HTTP API, which is the same across Community Edition and Enterprise, so it works with both.
Enterprise features like namespaces and admin partitions are not yet supported, but the core KV watching, snapshots, drift detection, and restore work with any Consul deployment.
Do I need to change my application code?
No. Guardian runs alongside your existing Consul setup as a separate process. Your applications continue reading from Consul as usual. Guardian only reads KV data (for watching and drift detection) and writes KV data (only during restore operations you explicitly trigger).
How much disk space does the Git repo use?
Very little for typical workloads. Configuration data is usually small -- a KV store with 1,000 keys averaging 500 bytes each takes about 500KB per snapshot in Git. With daily changes to 50 keys, you'll use roughly 1GB of Git storage over a year before garbage collection.
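The per-snapshot figure above is simple arithmetic; a quick sketch (using the illustrative numbers from this answer, not measured data):

```python
# Back-of-the-envelope snapshot size: 1,000 keys averaging 500 bytes each.
keys = 1_000
avg_value_bytes = 500

snapshot_bytes = keys * avg_value_bytes   # 500,000 bytes
print(snapshot_bytes // 1024, "KiB")      # 488 KiB, i.e. roughly 500 KB
```

Note that Git stores unchanged values only once, so day-to-day growth comes from changed keys plus per-commit object overhead, which is why periodic garbage collection (below) matters more than raw value size.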
Run git gc periodically (or let Git do it automatically) to keep the repo compact. For CI environments, use shallow clones (git clone --depth 1) to avoid downloading full history.
What happens if Guardian goes down?
Nothing bad happens to Consul. Guardian is a passive observer -- it only reads from Consul (except during explicit restore operations). If Guardian stops:
- KV changes continue happening in Consul as normal.
- Changes made while Guardian is down are not individually tracked in Git.
- When Guardian restarts, it does a full sync of the current state against its last known state. Any keys that changed while it was down appear as a single "catch-up" commit.
No data is lost in Consul. You lose per-change granularity in Git only for the period Guardian was offline.
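The catch-up sync can be pictured as a diff between the last state Guardian committed and the state currently in Consul; everything that changed during downtime lands in one change set. This is an illustrative helper, not Guardian's actual implementation:

```python
def catch_up_diff(last_known: dict, current: dict) -> dict:
    """Compute a single change set between two KV snapshots.

    Keys map to value strings; None in the result marks a deletion.
    (Illustrative sketch -- not Guardian's actual code.)
    """
    changes = {}
    for key, value in current.items():
        if last_known.get(key) != value:
            changes[key] = value      # added or modified while offline
    for key in last_known:
        if key not in current:
            changes[key] = None       # deleted while offline
    return changes

# All downtime changes collapse into one "catch-up" commit.
before = {"app/db_host": "db1", "app/timeout": "30", "app/old": "x"}
after  = {"app/db_host": "db2", "app/timeout": "30", "app/new": "y"}
print(catch_up_diff(before, after))
# {'app/db_host': 'db2', 'app/new': 'y', 'app/old': None}
```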
Can I use it with multiple datacenters?
Yes, but each datacenter needs its own Guardian instance. Consul KV is per-datacenter, so you'd run:
# DC1
consul-guardian watch \
--consul-addr http://consul-dc1:8500 \
--consul-datacenter dc1 \
--git-repo /data/backup-dc1
# DC2
consul-guardian watch \
--consul-addr http://consul-dc2:8500 \
--consul-datacenter dc2 \
--git-repo /data/backup-dc2
You can push both repos to the same Git remote using different branches or separate repos.
Is it safe for production?
Yes, with these considerations:
- Read-only by default. The watch and drift commands only read from Consul. They never modify your data.
- Restore is explicit. The restore command only runs when you tell it to. It always shows a plan first, and supports --dry-run.
- CAS prevents overwrites. Restore operations use Check-And-Set. If another process modified a key since Guardian last read it, the restore fails safely instead of overwriting.
- Single replica. Run exactly one instance per Consul datacenter. Multiple instances would create duplicate commits and race on the Git repo.
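The CAS behavior can be sketched against the raw Consul KV API. The ?cas=<ModifyIndex> parameter is standard Consul: the write succeeds only if the key's ModifyIndex still matches, and Consul returns "true" or "false" in the body. The address, key, and helper names here are placeholders, not Guardian internals:

```python
import urllib.request

def cas_put_url(consul_addr: str, key: str, modify_index: int) -> str:
    # A PUT with ?cas=<ModifyIndex> succeeds only if the key has not
    # changed since that index was observed.
    return f"{consul_addr}/v1/kv/{key}?cas={modify_index}"

def restore_key(consul_addr: str, key: str, value: bytes, modify_index: int) -> bool:
    """Attempt a CAS restore; False means the key changed and the write was refused."""
    req = urllib.request.Request(
        cas_put_url(consul_addr, key, modify_index), data=value, method="PUT"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().strip() == b"true"

print(cas_put_url("http://localhost:8500", "app/config/db_host", 42))
# http://localhost:8500/v1/kv/app/config/db_host?cas=42
```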
Guardian has integration tests that run against a real Consul instance via testcontainers. The core paths (watch, drift, restore) are tested with actual Consul API calls.
Can I watch all keys (empty prefix)?
Yes. Use --prefix "" to watch every key in the KV store. Be aware this generates more Git commits if you have high-churn keys. Use --exclude-prefix to filter out noisy keys.
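The include/exclude filtering amounts to simple prefix matching. A sketch of the assumed semantics (the function and flag behavior here are illustrative, not Guardian's actual code):

```python
def should_track(key: str, prefix: str = "", exclude_prefixes: tuple = ()) -> bool:
    """Mimics --prefix / --exclude-prefix filtering (illustrative only)."""
    if not key.startswith(prefix):    # an empty prefix matches every key
        return False
    return not any(key.startswith(p) for p in exclude_prefixes)

# Watch everything, but filter out high-churn session and lock keys.
keys = ["app/config", "sessions/abc123", "locks/leader"]
tracked = [k for k in keys if should_track(k, "", ("sessions/", "locks/"))]
print(tracked)  # ['app/config']
```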
How fast does it detect changes?
Typically under 5 seconds. Guardian uses Consul blocking queries, which return immediately when the watched index changes. The actual latency depends on your Consul cluster's response time and network conditions.
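The blocking-query mechanism is part of the standard Consul HTTP API: a GET with ?index=<N> hangs until the data's X-Consul-Index exceeds N (or the wait time elapses). A minimal sketch of one watch iteration, with a placeholder address and prefix:

```python
import urllib.request

def blocking_query_url(consul_addr: str, prefix: str, index: int, wait: str = "5m") -> str:
    # ?index= turns the GET into a long poll that returns as soon as the
    # watched data changes; ?wait= caps how long Consul holds the request.
    return f"{consul_addr}/v1/kv/{prefix}?recurse&index={index}&wait={wait}"

def watch_once(consul_addr: str, prefix: str, last_index: int) -> int:
    """One watch-loop iteration: block until a change, return the new index."""
    with urllib.request.urlopen(blocking_query_url(consul_addr, prefix, last_index)) as resp:
        return int(resp.headers["X-Consul-Index"])

print(blocking_query_url("http://localhost:8500", "app/", 100))
# http://localhost:8500/v1/kv/app/?recurse&index=100&wait=5m
```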
Can I use my own Git remote?
Yes. Initialize the Git repo and add a remote:
git init /data/consul-backup
cd /data/consul-backup
git remote add origin git@github.com:yourorg/consul-backup.git
Then run with --auto-push:
consul-guardian watch --git-repo /data/consul-backup --auto-push
If the push fails (network issue, auth problem), the local commit is preserved and Guardian retries on the next change.
Does it support Consul Connect or service mesh?
Guardian backs up KV data, not service mesh configuration. Consul Connect intentions, service defaults, and other config entries are not stored in the KV store -- they use a separate API. Cluster snapshots (via snapshot save) do capture all cluster state, including Connect configuration.