backup & DR: add alerting, fix rotation, secure MySQL password, add runbooks
Phase 1: Add 12 PrometheusRules for backup health alerting
- PostgreSQL, MySQL, Vault, Vaultwarden, Redis staleness + never-succeeded alerts
- CSIDriverCrashLoop alert for nfs-csi/iscsi-csi namespaces
- Generic BackupCronJobFailed alert

Phase 2: Fix backup rotation
- etcd: timestamped snapshots instead of overwriting a single file
- Redis: timestamped RDB files with 7-day retention purge
- PostgreSQL: retention increased from 7 to 14 days

Phase 3: Fix MySQL password exposure
- Move root password from command-line arg to MYSQL_PWD env var via secretKeyRef

Phase 5: Add restore runbooks
- PostgreSQL, MySQL, Vault, etcd, Vaultwarden, full cluster rebuild
Parent: 62d42657e6
Commit: af2222fce8
9 changed files with 657 additions and 4 deletions
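The Phase 1 staleness alerts share a common shape; below is a minimal sketch of one such rule, assuming backup Jobs are observed via kube-state-metrics — the actual rules in this commit may use different metric names, namespaces, and thresholds.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: backup-health
  namespace: monitoring   # assumed namespace
spec:
  groups:
    - name: backup.rules
      rules:
        - alert: PostgreSQLBackupStale
          # Fires when the last successful backup Job completed more than
          # 2 days ago (kube-state-metrics metric; threshold assumed).
          expr: time() - max(kube_job_status_completion_time{job_name=~"postgresql-backup.*"}) > 2 * 86400
          for: 1h
          labels:
            severity: warning
          annotations:
            summary: PostgreSQL backup has not succeeded in over 2 days
```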
docs/runbooks/restore-etcd.md (new file, +96)
# Restore etcd

## Prerequisites

- SSH access to `k8s-master` node
- etcd snapshot available on NFS at `/mnt/main/etcd-backup/`
- etcd PKI certs at `/etc/kubernetes/pki/etcd/` on master node

## Backup Location

- NFS: `/mnt/main/etcd-backup/etcd-snapshot-YYYYMMDD-HHMMSS.db`
- Replicated to Synology NAS (192.168.1.13) via TrueNAS ZFS replication
- Retention: 30 days
- Schedule: Daily at 00:00
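The timestamped-snapshot scheme from the commit's Phase 2 can be sketched as below; the paths match this runbook, but the actual CronJob script may differ, and the `etcdctl` call is shown commented out.

```shell
# Sketch of the backup side: timestamped snapshot name plus a retention
# purge matching the 30-day window above.
BACKUP_DIR="${BACKUP_DIR:-/mnt/main/etcd-backup}"
SNAP="$BACKUP_DIR/etcd-snapshot-$(date +%Y%m%d-%H%M%S).db"
# sudo ETCDCTL_API=3 etcdctl snapshot save "$SNAP"
# Delete snapshots older than the 30-day retention window
[ -d "$BACKUP_DIR" ] && find "$BACKUP_DIR" -name 'etcd-snapshot-*.db' -mtime +30 -delete
echo "would write: $SNAP"
```

Because each run writes a new file, a failed backup never destroys the previous good snapshot, which was the risk with the old single-file scheme.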

## CRITICAL: etcd is the foundation of the cluster

Restoring etcd will reset the entire Kubernetes state to the snapshot time. All objects created after the snapshot will be lost. This is a last-resort operation.

**Only restore etcd if the control plane is completely broken.**

## Restore Procedure

### 1. SSH to the master node
```bash
ssh k8s-master
```

### 2. Identify the snapshot to restore
```bash
ls -lt /mnt/main/etcd-backup/etcd-snapshot-*.db | head -10
```

### 3. Stop the API server and etcd
```bash
# Move static pod manifests to stop them
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/
sudo mv /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/

# Wait for pods to stop (re-run until neither container is listed)
sudo crictl ps | grep -E "etcd|apiserver"
```

### 4. Back up current etcd data
```bash
sudo mv /var/lib/etcd /var/lib/etcd.bak.$(date +%Y%m%d-%H%M%S)
```

### 5. Restore the snapshot
```bash
sudo ETCDCTL_API=3 etcdctl snapshot restore /mnt/main/etcd-backup/etcd-snapshot-YYYYMMDD-HHMMSS.db \
  --data-dir=/var/lib/etcd \
  --name=k8s-master \
  --initial-cluster=k8s-master=https://127.0.0.1:2380 \
  --initial-advertise-peer-urls=https://127.0.0.1:2380
```

### 6. Fix permissions
```bash
sudo chown -R root:root /var/lib/etcd
```

### 7. Restart etcd and API server
```bash
sudo mv /etc/kubernetes/etcd.yaml /etc/kubernetes/manifests/
# Wait for etcd to be ready
sleep 30
sudo mv /etc/kubernetes/kube-apiserver.yaml /etc/kubernetes/manifests/
```

### 8. Verify restoration
```bash
# Check etcd health
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  endpoint health

# Check cluster status
kubectl get nodes
kubectl get pods -A | head -20
```

### 9. Reconcile state
After etcd restore, some objects may be stale:
```bash
# Re-apply critical infrastructure
cd /path/to/infra
scripts/tg apply stacks/platform

# Check for orphaned resources
kubectl get pods -A | grep -E "Terminating|Error|Unknown"
```

## Estimated Time
- Snapshot restore: ~10-15 minutes
- Full reconciliation: ~30-60 minutes (depends on drift)
docs/runbooks/restore-full-cluster.md (new file, +128)
# Full Cluster Rebuild

## When to Use

- Complete cluster failure (all VMs lost)
- etcd corruption requiring full rebuild
- Proxmox host failure requiring fresh VM provisioning

## Prerequisites

- Proxmox host (192.168.1.127) accessible
- TrueNAS NFS server (192.168.1.2) accessible, or Synology NAS (192.168.1.13) for backups
- Git repo with infra code
- SOPS age keys for state decryption (`~/.config/sops/age/keys.txt`)
- Vault unseal keys (emergency kit)

## Rebuild Order

The rebuild must follow dependency order. Each layer depends on the one before it.
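The stack-apply portion of that order can be expressed as a single loop — a sketch only, since the real phases below interleave manual steps such as `kubeadm init` and Vault unseal. `TG` defaults to the repo's terragrunt wrapper and can be overridden (e.g. `TG=echo`) for a dry run.

```shell
# Hedged sketch: apply stacks in dependency order, stopping on failure.
# Stack names are taken from the phases below.
TG="${TG:-scripts/tg apply}"
for stack in infra nfs-csi iscsi-csi vault external-secrets platform dbaas; do
  $TG "stacks/$stack" || { echo "failed at $stack" >&2; break; }
done
```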

### Phase 1: Infrastructure (Proxmox VMs)
```bash
# 1. Provision VMs via Terraform
cd infra
scripts/tg apply stacks/infra

# 2. Wait for VMs to boot and be reachable
# k8s-master, k8s-node3, k8s-node4, k8s-node5 (node1/2 excluded)
```

### Phase 2: Kubernetes Control Plane
```bash
# 3. Initialize kubeadm on master (if starting fresh)
sudo kubeadm init --config /etc/kubernetes/kubeadm-config.yaml

# 4. Join worker nodes
# Get join command from master, run on each node

# 5. OR restore etcd from snapshot (see restore-etcd.md)
# This restores all K8s objects from the snapshot time
```

### Phase 3: Storage Layer
```bash
# 6. Deploy CSI drivers (NFS + iSCSI)
scripts/tg apply stacks/nfs-csi
scripts/tg apply stacks/iscsi-csi

# 7. Verify PVs are accessible
kubectl get pv
kubectl get pvc -A | grep -v Bound
```

### Phase 4: Vault (secrets foundation)
```bash
# 8. Deploy Vault (see restore-vault.md for full procedure)
scripts/tg apply stacks/vault

# 9. Initialize/unseal/restore raft snapshot
# 10. Verify ESO can connect
scripts/tg apply stacks/external-secrets
kubectl get externalsecrets -A
```

### Phase 5: Platform Services
```bash
# 11. Deploy platform stack (Traefik, monitoring, Kyverno, etc.)
scripts/tg apply stacks/platform

# 12. Verify ingress is working
curl -s -o /dev/null -w "%{http_code}" https://viktorbarzin.me/
```

### Phase 6: Databases
```bash
# 13. Deploy database stack
scripts/tg apply stacks/dbaas

# 14. Wait for CNPG and InnoDB clusters to initialize
kubectl wait --for=condition=Ready cluster/pg-cluster -n dbaas --timeout=600s

# 15. Restore PostgreSQL from dump (see restore-postgresql.md)
# 16. Restore MySQL from dump (see restore-mysql.md)
```

### Phase 7: Application Services
```bash
# 17. Deploy remaining stacks in any order
for stack in vaultwarden immich nextcloud linkwarden trading health; do
  scripts/tg apply stacks/$stack
done

# 18. Restore Vaultwarden (see restore-vaultwarden.md)
```

### Phase 8: Verification
```bash
# 19. Check all pods are running
kubectl get pods -A | grep -v Running | grep -v Completed

# 20. Check all ingresses respond
kubectl get ingress -A -o jsonpath='{range .items[*]}{.spec.rules[0].host}{"\n"}{end}' | while read host; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://$host/" 2>/dev/null)
  echo "$host: $code"
done

# 21. Check monitoring
# Verify Prometheus targets: https://prometheus.viktorbarzin.me/targets
# Verify Alertmanager: https://alertmanager.viktorbarzin.me/

# 22. Run backup CronJobs manually to establish baseline
kubectl create job --from=cronjob/backup-etcd manual-etcd-backup -n default
kubectl create job --from=cronjob/postgresql-backup manual-pg-backup -n dbaas
kubectl create job --from=cronjob/mysql-backup manual-mysql-backup -n dbaas
kubectl create job --from=cronjob/vault-raft-backup manual-vault-backup -n vault
kubectl create job --from=cronjob/vaultwarden-backup manual-vw-backup -n vaultwarden
```

## Dependency Graph
```
etcd → K8s API → CSI Drivers → Vault → ESO → Platform → Databases → Apps
                                                            ↓
                                              Restore data from
                                              NFS/Synology backups
```

## Estimated Time
- Full cluster rebuild from scratch: ~2-4 hours
- With etcd restore (objects preserved): ~1-2 hours
- Individual service restore: ~10-30 minutes each
docs/runbooks/restore-mysql.md (new file, +77)
# Restore MySQL (InnoDB Cluster)

## Prerequisites

- `kubectl` access to the cluster
- MySQL root password (from `cluster-secret` in `dbaas` namespace, key `ROOT_PASSWORD`)
- Backup dump available on NFS at `/mnt/main/mysql-backup/`

## Backup Location

- NFS: `/mnt/main/mysql-backup/dump_YYYY_MM_DD_HH_MM.sql`
- Replicated to Synology NAS (192.168.1.13) via TrueNAS ZFS replication
- Retention: 14 days
- Size: ~11MB per dump
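The commit's Phase 3 fix moves the root password off the backup CronJob's command line; a sketch of the container-spec fragment, assuming field placement within the actual CronJob manifest may differ:

```yaml
# env entry on the backup container: mysql/mysqldump read MYSQL_PWD from
# the environment, so the password no longer appears in `ps` output or
# pod specs.
env:
  - name: MYSQL_PWD
    valueFrom:
      secretKeyRef:
        name: cluster-secret    # same secret as in Prerequisites
        key: ROOT_PASSWORD
```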

## Restore Procedure

### 1. Identify the backup to restore
```bash
# List available backups
kubectl run mysql-ls --rm -it --image=mysql \
  --overrides='{"spec":{"volumes":[{"name":"backup","persistentVolumeClaim":{"claimName":"dbaas-mysql-backup"}}],"containers":[{"name":"mysql-ls","image":"mysql","volumeMounts":[{"name":"backup","mountPath":"/backup"}],"command":["ls","-lt","/backup/"]}]}}' \
  -n dbaas
```

### 2. Get the root password
```bash
kubectl get secret cluster-secret -n dbaas -o jsonpath='{.data.ROOT_PASSWORD}' | base64 -d
```

### 3. Option A: Restore via port-forward (from outside the cluster)
```bash
# Port-forward to MySQL primary
kubectl port-forward svc/mysql -n dbaas 3307:3306 &

# Get root password
ROOT_PWD=$(kubectl get secret cluster-secret -n dbaas -o jsonpath='{.data.ROOT_PASSWORD}' | base64 -d)

# Restore (use --host to avoid the unix socket, and the non-default port)
mysql -u root -p"$ROOT_PWD" --host 127.0.0.1 --port 3307 < /path/to/dump_YYYY_MM_DD_HH_MM.sql
```

### 3. Option B: Restore via in-cluster pod
```bash
ROOT_PWD=$(kubectl get secret cluster-secret -n dbaas -o jsonpath='{.data.ROOT_PASSWORD}' | base64 -d)

# Note: shell redirection ("<") does not work inside an exec-style command
# array, so the mysql invocation is wrapped in "sh -c".
kubectl run mysql-restore --rm -it --image=mysql \
  --overrides='{"spec":{"volumes":[{"name":"backup","persistentVolumeClaim":{"claimName":"dbaas-mysql-backup"}}],"containers":[{"name":"mysql-restore","image":"mysql","env":[{"name":"MYSQL_PWD","value":"'$ROOT_PWD'"}],"volumeMounts":[{"name":"backup","mountPath":"/backup"}],"command":["/bin/sh","-c","mysql -u root --host mysql.dbaas.svc.cluster.local < /backup/dump_YYYY_MM_DD_HH_MM.sql"]}]}}' \
  -n dbaas
```

### 4. Verify restoration
```bash
# Check databases exist
mysql -u root -p"$ROOT_PWD" --host 127.0.0.1 --port 3307 -e "SHOW DATABASES;"

# Check InnoDB Cluster status
mysql -u root -p"$ROOT_PWD" --host 127.0.0.1 --port 3307 -e "SELECT * FROM performance_schema.replication_group_members;"

# Check table counts for key databases
for db in speedtest wrongmove codimd nextcloud shlink grafana; do
  echo "=== $db ==="
  mysql -u root -p"$ROOT_PWD" --host 127.0.0.1 --port 3307 -e "SELECT TABLE_NAME, TABLE_ROWS FROM information_schema.TABLES WHERE TABLE_SCHEMA='$db' ORDER BY TABLE_ROWS DESC LIMIT 5;"
done
```

### 5. InnoDB Cluster Recovery
If the InnoDB Cluster itself is broken (not just data loss):
```bash
# Check cluster status via MySQL Shell
kubectl exec -it mysql-cluster-0 -n dbaas -c mysql -- mysqlsh root@localhost --password="$ROOT_PWD" -- cluster status

# Force rejoin a member
kubectl exec -it mysql-cluster-0 -n dbaas -c mysql -- mysqlsh root@localhost --password="$ROOT_PWD" -- cluster rejoinInstance root@mysql-cluster-1:3306
```

## Estimated Time
- Data restore: ~5 minutes (11MB dump)
- InnoDB Cluster recovery: ~15-20 minutes (init containers are slow)
docs/runbooks/restore-postgresql.md (new file, +93)
# Restore PostgreSQL (CNPG)

## Prerequisites

- `kubectl` access to the cluster
- CNPG operator running in the cluster
- Backup dump available on NFS at `/mnt/main/postgresql-backup/`
- PostgreSQL superuser password (from `pg-cluster-superuser` secret in `dbaas` namespace)

## Backup Location

- NFS: `/mnt/main/postgresql-backup/dump_YYYY_MM_DD_HH_MM.sql`
- Replicated to Synology NAS (192.168.1.13) via TrueNAS ZFS replication
- Retention: 14 days

## Restore from pg_dumpall

### 1. Identify the backup to restore
```bash
# List available backups (from any node with NFS access)
ls -lt /mnt/main/postgresql-backup/dump_*.sql | head -20

# Or via a pod:
kubectl run pg-restore --rm -it --image=postgres:16.4-bullseye \
  --overrides='{"spec":{"volumes":[{"name":"backup","persistentVolumeClaim":{"claimName":"dbaas-postgresql-backup"}}],"containers":[{"name":"pg-restore","image":"postgres:16.4-bullseye","volumeMounts":[{"name":"backup","mountPath":"/backup"}],"command":["ls","-lt","/backup/"]}]}}' \
  -n dbaas
```

### 2. Get the superuser password
```bash
kubectl get secret pg-cluster-superuser -n dbaas -o jsonpath='{.data.password}' | base64 -d
```

### 3. Option A: Restore into existing CNPG cluster
```bash
# Port-forward to the CNPG primary
kubectl port-forward svc/pg-cluster-rw -n dbaas 5433:5432 &

# Restore (this will overwrite existing data)
PGPASSWORD=$(kubectl get secret pg-cluster-superuser -n dbaas -o jsonpath='{.data.password}' | base64 -d) \
  psql -h 127.0.0.1 -p 5433 -U postgres -f /path/to/dump_YYYY_MM_DD_HH_MM.sql
```

### 3. Option B: Rebuild CNPG cluster from scratch
```bash
# 1. Delete the existing cluster
kubectl delete cluster pg-cluster -n dbaas

# 2. Wait for PVCs to be cleaned up
kubectl get pvc -n dbaas -l cnpg.io/cluster=pg-cluster

# 3. Re-apply the cluster manifest (via terragrunt)
cd infra && scripts/tg apply -target=null_resource.pg_cluster stacks/dbaas

# 4. Wait for cluster to be ready
kubectl wait --for=condition=Ready cluster/pg-cluster -n dbaas --timeout=300s

# 5. Restore the dump. Assign PGPASSWORD as a shell variable (not a
#    command prefix) so it expands inside the --overrides JSON below.
PGPASSWORD=$(kubectl get secret pg-cluster-superuser -n dbaas -o jsonpath='{.data.password}' | base64 -d)
kubectl run pg-restore --rm -it --image=postgres:16.4-bullseye \
  --overrides='{"spec":{"volumes":[{"name":"backup","persistentVolumeClaim":{"claimName":"dbaas-postgresql-backup"}}],"containers":[{"name":"pg-restore","image":"postgres:16.4-bullseye","env":[{"name":"PGPASSWORD","value":"'$PGPASSWORD'"}],"volumeMounts":[{"name":"backup","mountPath":"/backup"}],"command":["psql","-h","pg-cluster-rw.dbaas","-U","postgres","-f","/backup/dump_YYYY_MM_DD_HH_MM.sql"]}]}}' \
  -n dbaas
```

### 4. Verify restoration
```bash
# Check databases exist
PGPASSWORD=$PGPASSWORD psql -h 127.0.0.1 -p 5433 -U postgres -c "\l"

# Check table counts for critical databases
for db in trading health linkwarden affine woodpecker claude_memory; do
  echo "=== $db ==="
  PGPASSWORD=$PGPASSWORD psql -h 127.0.0.1 -p 5433 -U postgres -d $db -c \
    "SELECT schemaname, tablename, n_live_tup FROM pg_stat_user_tables ORDER BY n_live_tup DESC LIMIT 5;"
done
```

### 5. Restart dependent services
After restore, restart services that connect to PostgreSQL to pick up fresh connections:
```bash
kubectl rollout restart deployment -n trading
kubectl rollout restart deployment -n health
kubectl rollout restart deployment -n linkwarden
# ... repeat for all 12 PG-dependent services
```

## Restore from Synology (if TrueNAS is down)
1. SSH to Synology NAS (192.168.1.13)
2. Find the replicated dataset: `zfs list | grep postgresql-backup`
3. Mount or copy the backup file to a location accessible from the cluster
4. Follow the restore procedure above

## Estimated Time
- Restore into existing cluster: ~10 minutes (depends on dump size)
- Full rebuild: ~20-30 minutes
docs/runbooks/restore-vault.md (new file, +99)
# Restore Vault (Raft)

## Prerequisites

- `kubectl` access to the cluster
- Vault root token (from `vault-root-token` secret in `vault` namespace, manually created and independent of automation)
- Raft snapshot available on NFS at `/mnt/main/vault-backup/`
- Unseal keys (stored securely; check `secret/viktor` in Vault or the emergency kit)

## Backup Location

- NFS: `/mnt/main/vault-backup/vault-raft-YYYYMMDD-HHMMSS.db`
- Replicated to Synology NAS (192.168.1.13) via TrueNAS ZFS replication
- Retention: 30 days
- Schedule: Daily at 02:00

## CRITICAL: Vault is a dependency for many services

Vault provides secrets to the entire cluster via ESO (External Secrets Operator). A Vault outage affects:
- All ExternalSecrets (43 secrets + 9 DB-creds secrets)
- Vault DB engine password rotation
- K8s credentials engine
- CI/CD secret sync

**Priority: Restore Vault before any other service (except etcd).**

## Restore Procedure

### 1. Identify the snapshot to restore
```bash
# List available snapshots
ls -lt /mnt/main/vault-backup/vault-raft-*.db | head -10
```

### 2. Restore Raft snapshot
```bash
# Get root token
VAULT_TOKEN=$(kubectl get secret vault-root-token -n vault -o jsonpath='{.data.vault-root-token}' | base64 -d)

# Port-forward to Vault
kubectl port-forward svc/vault-active -n vault 8200:8200 &

# Restore the snapshot (this will overwrite current state)
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN
vault operator raft snapshot restore -force /path/to/vault-raft-YYYYMMDD-HHMMSS.db
```

### 3. Unseal Vault (if sealed after restore)
```bash
# Check seal status
vault status

# If sealed, unseal with keys (a threshold number of keys is required)
vault operator unseal <key1>
vault operator unseal <key2>
vault operator unseal <key3>
```

### 4. Verify restoration
```bash
# Check Vault health
vault status

# Check raft peers
vault operator raft list-peers

# Verify key secrets exist
vault kv get secret/viktor
vault kv list secret/

# Check DB engine
vault list database/roles

# Check K8s engine
vault list kubernetes/roles
```

### 5. Trigger ESO refresh
After Vault restore, ExternalSecrets may need a refresh:
```bash
# Restart ESO to force re-sync
kubectl rollout restart deployment -n external-secrets

# Check ExternalSecret status
kubectl get externalsecrets -A | grep -v "SecretSynced"
```

## Full Vault Rebuild (from zero)
If Vault needs to be rebuilt from scratch:
1. Comment out data sources + OIDC config in `stacks/vault/main.tf`
2. Apply Helm release: `scripts/tg apply -target=helm_release.vault stacks/vault`
3. Initialize: `vault operator init`
4. Unseal with generated keys
5. Restore raft snapshot (step 2 above)
6. Populate `secret/vault` with OIDC credentials
7. Uncomment data sources + OIDC
8. Re-apply: `scripts/tg apply stacks/vault`

## Estimated Time
- Snapshot restore + unseal: ~10 minutes
- Full rebuild: ~30-45 minutes
docs/runbooks/restore-vaultwarden.md (new file, +73)
# Restore Vaultwarden

## Prerequisites

- `kubectl` access to the cluster
- Backup available on NFS at `/mnt/main/vaultwarden-backup/`

## Backup Location

- NFS: `/mnt/main/vaultwarden-backup/YYYY_MM_DD_HH_MM/` (directory per backup)
- Each backup contains: `db.sqlite3`, `rsa_key.pem`, `rsa_key.pub.pem`, `attachments/`, `sends/`, `config.json`
- Replicated to Synology NAS (192.168.1.13) via TrueNAS ZFS replication
- Retention: 30 days
- Schedule: Daily at 00:00

## Backup Contents

| File | Purpose | Critical? |
|------|---------|-----------|
| `db.sqlite3` | All passwords, TOTP seeds, org data | Yes |
| `rsa_key.pem` / `rsa_key.pub.pem` | JWT signing keys | Yes (without these, all sessions invalidate) |
| `attachments/` | File attachments on vault items | Yes |
| `sends/` | Bitwarden Send files | No |
| `config.json` | Server configuration | No (can be recreated) |

## Restore Procedure

### 1. Identify the backup to restore
```bash
# List available backups (directories sorted by date)
kubectl run vw-ls --rm -it --image=alpine \
  --overrides='{"spec":{"volumes":[{"name":"backup","persistentVolumeClaim":{"claimName":"vaultwarden-backup"}}],"containers":[{"name":"vw-ls","image":"alpine","volumeMounts":[{"name":"backup","mountPath":"/backup"}],"command":["ls","-lt","/backup/"]}]}}' \
  -n vaultwarden
```

### 2. Scale down Vaultwarden
```bash
kubectl scale deployment vaultwarden -n vaultwarden --replicas=0
```

### 3. Restore the backup
```bash
BACKUP_DIR="YYYY_MM_DD_HH_MM"  # Set to desired backup

kubectl run vw-restore --rm -it --image=alpine \
  --overrides='{"spec":{"volumes":[{"name":"backup","persistentVolumeClaim":{"claimName":"vaultwarden-backup"}},{"name":"data","persistentVolumeClaim":{"claimName":"vaultwarden-data"}}],"containers":[{"name":"vw-restore","image":"alpine","volumeMounts":[{"name":"backup","mountPath":"/backup"},{"name":"data","mountPath":"/data"}],"command":["/bin/sh","-c","cp /backup/'$BACKUP_DIR'/db.sqlite3 /data/db.sqlite3 && cp /backup/'$BACKUP_DIR'/rsa_key.pem /data/ && cp /backup/'$BACKUP_DIR'/rsa_key.pub.pem /data/ && cp -a /backup/'$BACKUP_DIR'/attachments /data/ 2>/dev/null; echo Restore complete"]}]}}' \
  -n vaultwarden
```
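Before scaling back up, it can be worth confirming that the critical files from the table above actually landed on the data volume. A small sketch, meant to run wherever `/data` is mounted (e.g. inside a debug pod); `DATA` can be overridden for testing:

```shell
# Check that each critical restored file exists and is non-empty.
DATA="${DATA:-/data}"
for f in db.sqlite3 rsa_key.pem rsa_key.pub.pem; do
  [ -s "$DATA/$f" ] || echo "MISSING or empty: $f"
done
```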

### 4. Scale up Vaultwarden
```bash
kubectl scale deployment vaultwarden -n vaultwarden --replicas=1

# Wait for pod to be ready
kubectl wait --for=condition=Ready pod -l app=vaultwarden -n vaultwarden --timeout=120s
```

### 5. Verify restoration
```bash
# Check pod logs for startup errors
kubectl logs -n vaultwarden -l app=vaultwarden --tail=20

# Test web UI access
curl -s -o /dev/null -w "%{http_code}" https://vaultwarden.viktorbarzin.me/
```

### 6. Test login
Log in to the Vaultwarden web UI and verify:
- [ ] Can log in with your account
- [ ] Vault items are present and readable
- [ ] Attachments are accessible
- [ ] TOTP codes are generating correctly

## Estimated Time
- Restore: ~5 minutes
- Verification: ~5 minutes