Restore PostgreSQL (CNPG)

Last updated: 2026-04-06

Prerequisites

  • kubectl access to the cluster
  • CNPG operator running in the cluster
  • Backup dump available on NFS at /mnt/main/postgresql-backup/
  • PostgreSQL superuser password (from pg-cluster-superuser secret in dbaas namespace)
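
The prerequisites above can be verified with a quick pre-flight check (a sketch; the CRD, cluster and secret names are the ones used in this runbook):

# Confirm cluster access, the CNPG CRD, the existing cluster and the superuser secret
kubectl get nodes
kubectl get crd clusters.postgresql.cnpg.io
kubectl get cluster pg-cluster -n dbaas
kubectl get secret pg-cluster-superuser -n dbaas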

Backup Location

  • NFS: /mnt/main/postgresql-backup/dump_YYYY_MM_DD_HH_MM.sql.gz
  • Mirrored to sda: /mnt/backup/nfs-mirror/postgresql-backup/ (PVE host 192.168.1.127)
  • Replicated to Synology NAS: Synology/Backup/Viki/pve-backup/nfs-mirror/postgresql-backup/
  • Retention: 14 days (on NFS), latest only (on sda), unlimited (on Synology)
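
Before relying on a dump, check that the newest one is recent (a sketch, run from any host with the NFS mount; the 24-hour window is an assumption about the dump schedule):

# Newest dump on NFS, and whether anything was written in the last 24 hours
ls -lt /mnt/main/postgresql-backup/dump_*.sql.gz | head -1
find /mnt/main/postgresql-backup -name 'dump_*.sql.gz' -mtime -1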

Restore from pg_dumpall

1. Identify the backup to restore

# List available backups (from any node with NFS access)
ls -lt /mnt/main/postgresql-backup/dump_*.sql.gz | head -20

# Or via a pod:
kubectl run pg-restore --rm -it --restart=Never --image=postgres:16.4-bullseye \
  --overrides='{"spec":{"volumes":[{"name":"backup","persistentVolumeClaim":{"claimName":"dbaas-postgresql-backup"}}],"containers":[{"name":"pg-restore","image":"postgres:16.4-bullseye","volumeMounts":[{"name":"backup","mountPath":"/backup"}],"command":["ls","-lt","/backup/"]}]}}' \
  -n dbaas

2. Get the superuser password

kubectl get secret pg-cluster-superuser -n dbaas -o jsonpath='{.data.password}' | base64 -d

3. Option A: Restore into existing CNPG cluster

# Port-forward to the CNPG primary
kubectl port-forward svc/pg-cluster-rw -n dbaas 5433:5432 &

# Restore (decompress and pipe to psql — this will overwrite existing data)
# Export the password first (an inline VAR=... prefix on the pipeline would only apply to zcat, not psql)
export PGPASSWORD=$(kubectl get secret pg-cluster-superuser -n dbaas -o jsonpath='{.data.password}' | base64 -d)
zcat /path/to/dump_YYYY_MM_DD_HH_MM.sql.gz | psql -h 127.0.0.1 -p 5433 -U postgres
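
By default psql keeps going after individual statement errors. If the restore should abort on the first error instead, a variant of the same pipe (sketch):

# Same restore, but stop at the first SQL error instead of continuing
zcat /path/to/dump_YYYY_MM_DD_HH_MM.sql.gz | psql -v ON_ERROR_STOP=1 -h 127.0.0.1 -p 5433 -U postgres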

3. Option B: Rebuild CNPG cluster from scratch

# 1. Delete the existing cluster
kubectl delete cluster pg-cluster -n dbaas

# 2. Wait for PVCs to be cleaned up
kubectl get pvc -n dbaas -l cnpg.io/cluster=pg-cluster

# 3. Re-apply the cluster manifest (via terragrunt)
cd infra && scripts/tg apply -target=null_resource.pg_cluster stacks/dbaas

# 4. Wait for cluster to be ready
kubectl wait --for=condition=Ready cluster/pg-cluster -n dbaas --timeout=300s

# 5. Restore the dump
# Set the password in the current shell first so it expands inside --overrides below
PGPASSWORD=$(kubectl get secret pg-cluster-superuser -n dbaas -o jsonpath='{.data.password}' | base64 -d)

kubectl run pg-restore --rm -it --restart=Never --image=postgres:16.4-bullseye \
  --overrides='{"spec":{"volumes":[{"name":"backup","persistentVolumeClaim":{"claimName":"dbaas-postgresql-backup"}}],"containers":[{"name":"pg-restore","image":"postgres:16.4-bullseye","env":[{"name":"PGPASSWORD","value":"'$PGPASSWORD'"}],"volumeMounts":[{"name":"backup","mountPath":"/backup"}],"command":["/bin/sh","-c","zcat /backup/dump_YYYY_MM_DD_HH_MM.sql.gz | psql -h pg-cluster-rw.dbaas -U postgres"]}]}}' \
  -n dbaas
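
Before moving on to verification, the cluster's health can be inspected with the CNPG kubectl plugin, if it is installed (optional; not required for the restore itself):

# Shows instance roles, replication state and continuous archiving status
kubectl cnpg status pg-cluster -n dbaas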

4. Verify restoration

# Check databases exist
PGPASSWORD=$PGPASSWORD psql -h 127.0.0.1 -p 5433 -U postgres -c "\l"

# Check table counts for critical databases
for db in health linkwarden affine woodpecker claude_memory; do
  echo "=== $db ==="
  PGPASSWORD=$PGPASSWORD psql -h 127.0.0.1 -p 5433 -U postgres -d $db -c \
    "SELECT schemaname, tablename, n_live_tup FROM pg_stat_user_tables ORDER BY n_live_tup DESC LIMIT 5;"
done
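
Because pg_dumpall also dumps global objects (roles and role memberships), confirm those came back as well (assumes the port-forward from Option A is still running):

# List roles; application users should be present again
PGPASSWORD=$PGPASSWORD psql -h 127.0.0.1 -p 5433 -U postgres -c "\du"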

5. Restart dependent services

After restore, restart services that connect to PostgreSQL to pick up fresh connections:

kubectl rollout restart deployment -n health
kubectl rollout restart deployment -n linkwarden
# ... repeat for all PG-dependent services (excluding trading — disabled)
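
If many namespaces host PG-backed services, a loop keeps this short (the namespace list below is illustrative; adjust it to the services actually deployed):

for ns in health linkwarden affine woodpecker; do
  kubectl rollout restart deployment -n "$ns"
done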

Alternative: Restore from sda Backup

If TrueNAS NFS is unavailable but the PVE host is accessible:

# 1. SSH to PVE host
ssh root@192.168.1.127

# 2. Find the latest backup
ls -lt /mnt/backup/nfs-mirror/postgresql-backup/

# 3. Run a restore pod with the sda backup mounted via hostPath
PGPASSWORD=$(kubectl get secret pg-cluster-superuser -n dbaas -o jsonpath='{.data.password}' | base64 -d)

kubectl run pg-restore --rm -it --restart=Never --image=postgres:16.4-bullseye \
  --overrides='{"spec":{"volumes":[{"name":"backup","hostPath":{"path":"/mnt/backup/nfs-mirror/postgresql-backup"}}],"containers":[{"name":"pg-restore","image":"postgres:16.4-bullseye","env":[{"name":"PGPASSWORD","value":"'$PGPASSWORD'"}],"volumeMounts":[{"name":"backup","mountPath":"/backup"}],"command":["/bin/sh","-c","zcat /backup/dump_YYYY_MM_DD_HH_MM.sql.gz | psql -h pg-cluster-rw.dbaas -U postgres"]}],"nodeName":"k8s-master"}}' \
  -n dbaas
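
To confirm the hostPath actually resolves on that node before piping anything into psql, the same pod spec can be run with a plain listing first (a sketch; an empty or missing directory means the backup disk is not visible from k8s-master):

kubectl run pg-restore-ls --rm -it --restart=Never --image=postgres:16.4-bullseye \
  --overrides='{"spec":{"nodeName":"k8s-master","volumes":[{"name":"backup","hostPath":{"path":"/mnt/backup/nfs-mirror/postgresql-backup"}}],"containers":[{"name":"pg-restore-ls","image":"postgres:16.4-bullseye","volumeMounts":[{"name":"backup","mountPath":"/backup"}],"command":["ls","-lt","/backup/"]}]}}' \
  -n dbaas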

Alternative: Restore from Synology (if PVE host is down)

If both TrueNAS and PVE host are unavailable:

# 1. SSH to Synology NAS
ssh Administrator@192.168.1.13

# 2. Navigate to backup directory
cd /volume1/Backup/Viki/pve-backup/nfs-mirror/postgresql-backup/

# 3. Copy dump to a temporary location accessible from cluster
# (e.g., via rsync to a surviving node, or restore TrueNAS first)
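
One possible path, assuming rsync (or scp) works against the NAS and the workstation also has kubectl access (paths and the dump name are placeholders):

# Pull the dump locally, then restore over a port-forward as in Option A
rsync -av Administrator@192.168.1.13:/volume1/Backup/Viki/pve-backup/nfs-mirror/postgresql-backup/dump_YYYY_MM_DD_HH_MM.sql.gz /tmp/
kubectl port-forward svc/pg-cluster-rw -n dbaas 5433:5432 &
export PGPASSWORD=$(kubectl get secret pg-cluster-superuser -n dbaas -o jsonpath='{.data.password}' | base64 -d)
zcat /tmp/dump_YYYY_MM_DD_HH_MM.sql.gz | psql -h 127.0.0.1 -p 5433 -U postgres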

Estimated Time

  • Restore into existing cluster: ~10 minutes (depends on dump size)
  • Full rebuild: ~20-30 minutes