# Kubernetes Deployment
This guide covers deploying Consul Guardian on Kubernetes with a PersistentVolumeClaim for the Git repository, a Secret for the Consul ACL token, and a Service for the dashboard.
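Every manifest below targets the `consul` namespace. If it doesn't already exist (for example, if Consul itself runs elsewhere), create it first:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: consul
```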
## PersistentVolumeClaim
The Git repository needs persistent storage across pod restarts:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: consul-guardian-pvc
  namespace: consul
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: gp3
```
5Gi is generous for most setups: a KV store with 1,000 keys and daily changes will use less than 500Mi over a year. Note that `gp3` is an AWS EBS storage class; substitute your cluster's default StorageClass on other providers.
## Secret
Store the Consul ACL token as a Kubernetes Secret:
```shell
kubectl create secret generic consul-guardian-token \
  --namespace consul \
  --from-literal=token=your-consul-acl-token
```
Or as YAML:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: consul-guardian-token
  namespace: consul
type: Opaque
stringData:
  token: "your-consul-acl-token"
```
## Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul-guardian
  namespace: consul
  labels:
    app: consul-guardian
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul-guardian
  template:
    metadata:
      labels:
        app: consul-guardian
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: guardian
          image: ghcr.io/consul-guardian/consul-guardian:latest
          args:
            - dashboard
            - --prefix
            - "config/,env/,feature-flags/"
            - --git-repo
            - /data/backup
            - --listen
            - ":9090"
          ports:
            - name: http
              containerPort: 9090
              protocol: TCP
          env:
            - name: CONSUL_GUARDIAN_CONSUL_ADDRESS
              value: "http://consul-server.consul.svc.cluster.local:8500"
            - name: CONSUL_GUARDIAN_CONSUL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: consul-guardian-token
                  key: token
            - name: CONSUL_GUARDIAN_LOGGING_FORMAT
              value: "json"
          volumeMounts:
            - name: backup
              mountPath: /data/backup
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 256Mi
          livenessProbe:
            httpGet:
              path: /api/status
              port: http
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /api/status
              port: http
            initialDelaySeconds: 5
            periodSeconds: 10
      volumes:
        - name: backup
          persistentVolumeClaim:
            claimName: consul-guardian-pvc
```
Guardian is lightweight: 50m CPU and 64Mi of memory handle most workloads. Increase the limits if you're watching thousands of keys with frequent changes.
## Service
Expose the dashboard within the cluster:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: consul-guardian
  namespace: consul
  labels:
    app: consul-guardian
spec:
  type: ClusterIP
  ports:
    - port: 9090
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: consul-guardian
```
To expose externally, use an Ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: consul-guardian
  namespace: consul
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: guardian-basic-auth
spec:
  rules:
    - host: guardian.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: consul-guardian
                port:
                  number: 9090
```
Always add authentication when exposing the dashboard externally: it has write access to Consul KV.
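The Ingress above references a `guardian-basic-auth` Secret, which you need to create separately. A sketch of what ingress-nginx expects: a Secret with an `auth` key containing htpasswd-format entries. The username and password hash below are placeholders; generate a real entry with `htpasswd -nb <user> <password>`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: guardian-basic-auth
  namespace: consul
type: Opaque
stringData:
  # One htpasswd-format line per user, e.g. the output of:
  #   htpasswd -nb admin 'your-password'
  auth: "admin:$apr1$replace-with-real-hash"
```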
## Health checks
The `/api/status` endpoint returns the current Consul leader, the watched prefixes, and a recent change count. Both the liveness and readiness probes hit this endpoint.
A healthy response:
```json
{
  "consul_leader": "10.0.1.5:8300",
  "consul_address": "http://consul-server:8500",
  "prefixes": ["config/", "env/"],
  "recent_changes": 12,
  "timestamp": "2026-04-04T14:30:00Z"
}
```
If Consul is unreachable, the endpoint returns HTTP 503 and both probes fail.
## Single replica
Run exactly one replica. Multiple replicas would write duplicate Git commits and race on the shared repository. If you need high availability, run a standby Deployment with leader election (not yet built-in).
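For the same reason, consider setting the Deployment's rollout strategy to `Recreate` so the old pod is stopped before its replacement starts. This also avoids the new pod getting stuck waiting for the ReadWriteOnce PVC during a rolling update. A suggested addition to the Deployment spec above:

```yaml
spec:
  replicas: 1
  strategy:
    # Stop the old pod before starting the new one, so two pods never
    # share the Git repository or contend for the ReadWriteOnce volume.
    type: Recreate
```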