- HA Sofia token: auto-bootstrap from Vault secret/viktor/haos_api_token when
HOME_ASSISTANT_SOFIA_{URL,TOKEN} env vars are unset. Default URL =
https://ha-sofia.viktorbarzin.me.
- cert-manager: add cert_manager_installed() probe (kubectl get crd
certificates.cert-manager.io). When cert-manager is not installed (our
current state), report PASS "N/A" instead of a noisy WARN "CRDs
unavailable"; see the sketch after this list.
- LVM snapshot freshness: grep pattern was `-- -snap` but actual LV names use
underscore (`foo_snap_YYYY...`), so the grep matched nothing and the check
always WARN'd. Fixed to `grep _snap`.
After fix: PASS 36→40, WARN 9→6, FAIL 1→1 (new ha_entities FAIL is a real
HA issue, not a script bug — 400/1401 sensors stale on ha-sofia).
- scripts/cluster_healthcheck.sh: add 12 new checks (cert-manager
readiness/expiry/requests, backup freshness per-DB/offsite/LVM,
monitoring prom+AM/vault-sealed/CSS, external reachability cloudflared
+authentik/ExternalAccessDivergence/traefik-5xx). Bump TOTAL_CHECKS
to 42, add --no-fix flag.
- Remove the duplicate pod-version .claude/cluster-health.sh (1728
lines) and the openclaw cluster_healthcheck CronJob (local CLI is
now the single authoritative runner). Keep the healthcheck SA +
Role + RoleBinding — still reused by task_processor CronJob.
- Remove SLACK_WEBHOOK_URL env from openclaw deployment and delete
the unused setup-monitoring.sh.
- Rewrite .claude/skills/cluster-health/SKILL.md: it now mandates running
the script first, refreshes the 42-check table, drops the stale
CronJob/Slack/post-mortem sections, and documents the monorepo-canonical
+ hardlink layout. The file is hardlinked to
/home/wizard/code/.claude/skills/cluster-health/SKILL.md for
dual discovery.
- AGENTS.md + k8s-portal agent page: 25-check → 42-check.
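For the cert-manager item above, the probe boils down to something like the
following; the pass/warn reporter names are placeholders, not necessarily the
script's real helpers:
```
# Sketch of the cert-manager gating in cluster_healthcheck.sh
# (reporter helpers "pass"/"warn" are assumed names).
cert_manager_installed() {
  kubectl get crd certificates.cert-manager.io >/dev/null 2>&1
}

check_cert_manager() {
  if ! cert_manager_installed; then
    pass "cert-manager" "N/A (not installed)"   # was a noisy WARN before
    return 0
  fi
  # readiness / cert expiry / pending CertificateRequest checks run here
}
```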
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Context (bd code-yiu)
Phase 2 (HAProxy on pfSense) and Phase 3 (persist config in pfSense XML so
it lives in the nightly backup) of the PROXY-v2 migration. Test path only —
listens on pfSense 10.0.20.1:2525 → k8s node NodePort :30125 → pod :2525
postscreen. Real client IP verified in maillog
(`postfix/smtpd-proxy/postscreen: CONNECT from [10.0.10.10]:...`); Phase 1a
container plumbing is already live (commit ef75c02f).
pfSense HAProxy config lives in `/cf/conf/config.xml` under
`<installedpackages><haproxy>`. That file is captured daily by
`scripts/daily-backup.sh` (scp → `/mnt/backup/pfsense/config-YYYYMMDD.xml`)
and synced offsite to Synology. No new backup wiring needed — this commit
documents the fact + adds the reproducer script.
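As a quick sanity check (not part of the commit), the persisted HAProxy
section can be confirmed in the most recent captured config:
```
ls -lt /mnt/backup/pfsense/config-*.xml | head -3
grep -c '<haproxy>' "$(ls -t /mnt/backup/pfsense/config-*.xml | head -1)"
```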
## This change
Two files, both additive:
1. `scripts/pfsense-haproxy-bootstrap.php` — idempotent PHP script that
edits pfSense config.xml to add:
- Backend pool `mailserver_nodes` with 4 k8s workers on NodePort 30125,
`send-proxy-v2`, TCP health-check every 120000 ms (2 min).
- Frontend `mailserver_proxy_test` listening on pfSense 10.0.20.1:2525
in TCP mode, forwarding to the pool.
Uses `haproxy_check_and_run()` to regenerate `/var/etc/haproxy/haproxy.cfg`
and reload HAProxy. Removes existing items with the same name before
adding, so repeat runs converge on declared state.
2. `docs/runbooks/mailserver-pfsense-haproxy.md` — ops runbook covering
current state, validation, bootstrap/restore, health checks, phase
roadmap, and known warts (health-check noise + bind-address templating).
## What is NOT in this change
- Phase 4 (NAT rdr flip for :25 from `<mailserver>` → HAProxy) — deferred.
- Phase 5 (extend to 465/587/993 with alt listeners + Dovecot dual-
inet_listener) — deferred.
- Terraform for pfSense HAProxy pkg install — not possible (no Terraform
provider for pfSense pkg management). Runbook documents the manual
`pkg install` command.
## Test Plan
### Automated
```
$ ssh admin@10.0.20.1 'pgrep -lf haproxy; sockstat -l | grep :2525'
64009 /usr/local/sbin/haproxy -f /var/etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D
www haproxy 64009 5 tcp4 *:2525 *:*
$ ssh admin@10.0.20.1 "echo 'show servers state' | socat /tmp/haproxy.socket stdio" \
| awk 'NR>1 {print $4, $6}'
node1 2
node2 2
node3 2
node4 2 # all UP
$ python3 -c "
import socket; s=socket.socket(); s.settimeout(10)
s.connect(('10.0.20.1', 2525))
print(s.recv(200).decode())
s.send(b'EHLO persist-test.example.com\r\n')
print(s.recv(500).decode())
s.send(b'QUIT\r\n'); s.close()"
220-mail.viktorbarzin.me ESMTP
...
250-mail.viktorbarzin.me
250-SIZE 209715200
...
221 2.0.0 Bye
$ kubectl logs -c docker-mailserver deployment/mailserver -n mailserver --tail=50 \
| grep smtpd-proxy.*CONNECT
postfix/smtpd-proxy/postscreen: CONNECT from [10.0.10.10]:33010 to [10.0.20.1]:2525
```
Real client IP `[10.0.10.10]` visible (not the k8s-node IP after kube-proxy
SNAT) → PROXY-v2 roundtrip confirmed.
### Manual Verification
Trigger a pfSense reboot; after boot, HAProxy should auto-restart from the
now-persisted config (`<enable>yes</enable>` in XML). Connection test above
should still work.
## Reproduce locally
1. `scp infra/scripts/pfsense-haproxy-bootstrap.php admin@10.0.20.1:/tmp/`
2. `ssh admin@10.0.20.1 'php /tmp/pfsense-haproxy-bootstrap.php'` → rc=OK
3. Run the SMTP roundtrip test above (`python3 -c '...'`).
## Context
The claude-agent-service K8s pod (deployed 2026-04-15) provides an HTTP API
for running Claude headless agents. Three workflows still SSH'd to the DevVM
(10.0.10.10) to invoke `claude -p`. This eliminates that dependency.
## This change
Pipeline migrations (SSH → HTTP POST to claude-agent-service):
- `.woodpecker/issue-automation.yml` — Vault auth fetches API token instead
of SSH key; curl POST /execute + poll /jobs/{id} replaces the SSH invocation
(see the sketch after this list)
- `scripts/postmortem-pipeline.sh` — same pattern; uses jq for safe JSON
construction of TODO payloads
- `.woodpecker/postmortem-todos.yml` — drop openssh-client from apk install
- `stacks/n8n/workflows/diun-upgrade.json` — SSH node replaced with HTTP
Request node; API token via $env.CLAUDE_AGENT_API_TOKEN (added to Vault
secret/n8n)
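The shared call pattern is roughly the following; only the /execute and
/jobs/{id} endpoints come from this change, while the service URL, auth
header, and response field names are assumptions:
```
# Hypothetical sketch of the SSH-replacing call pattern.
API="http://claude-agent-service.example.svc"    # placeholder in-cluster URL
JOB_ID=$(curl -fsS -X POST "$API/execute" \
  -H "Authorization: Bearer $CLAUDE_AGENT_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg p "$PROMPT" '{prompt: $p}')" | jq -r '.id')
until curl -fsS -H "Authorization: Bearer $CLAUDE_AGENT_API_TOKEN" \
    "$API/jobs/$JOB_ID" | jq -e '.status != "running"' >/dev/null; do
  sleep 10
done
```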
Documentation updates:
- `docs/architecture/incident-response.md` — Mermaid diagram: DevVM → K8s
- `docs/architecture/automated-upgrades.md` — pipeline diagram + n8n action
- `AGENTS.md` — pipeline description updated
## What is NOT in this change
- DevVM decommissioning (still hosts terminal/foolery services)
- Removal of SSH key secrets from Vault (kept for rollback)
- n8n workflow import (must be done manually in n8n UI)
[ci skip]
Co-Authored-By: Claude Opus 4 (1M context) <noreply@anthropic.com>
Two-tier state architecture:
- Tier 0 (infra, platform, cnpg, vault, dbaas, external-secrets): local
state with SOPS encryption in git — unchanged, required for bootstrap.
- Tier 1 (105 app stacks): PostgreSQL backend on CNPG cluster at
10.0.20.200:5432/terraform_state with native pg_advisory_lock.
Motivation: multi-operator friction (every workstation needed SOPS + age +
git-crypt), bootstrap complexity for new operators, and headless agents/CI
needing the full encryption toolchain just to read state.
Changes:
- terragrunt.hcl: conditional backend (local vs pg) based on tier0 list
- scripts/tg: tier detection, auto-fetch of PG creds from Vault for Tier 1,
skip SOPS and Vault KV locking for Tier 1 stacks (sketch after this list)
- scripts/state-sync: tier-aware encrypt/decrypt (skips Tier 1)
- scripts/migrate-state-to-pg: one-shot migration script (idempotent)
- stacks/vault/main.tf: pg-terraform-state static role + K8s auth role
for claude-agent namespace
- stacks/dbaas: terraform_state DB creation + MetalLB LoadBalancer
service on shared IP 10.0.20.200
- Deleted 107 .tfstate.enc files for migrated Tier 1 stacks
- Cleaned up per-stack tiers.tf (now generated by root terragrunt.hcl)
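The tier split in scripts/tg amounts to roughly this; the Tier 0 list and the
terraform_state DSN are from this commit, while the Vault path, DB user, the
SOPS helper name, and the env-var handoff to the pg backend are assumptions:
```
# Sketch of tier detection in scripts/tg (Vault path and user are placeholders).
TIER0="infra platform cnpg vault dbaas external-secrets"
stack="$1"
if echo "$TIER0" | grep -qw "$stack"; then
  decrypt_sops_state "$stack"   # placeholder for the existing SOPS flow (Tier 0 unchanged)
else
  # Tier 1: PostgreSQL backend on CNPG; no SOPS, no Vault KV lock.
  PGPASS=$(vault kv get -field=password secret/terraform-state)   # assumed path
  export PG_CONN_STR="postgres://terraform:${PGPASS}@10.0.20.200:5432/terraform_state"
fi
```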
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
DevVM may have unstaged changes from active sessions. Use git stash
before pull to avoid 'cannot pull with rebase: unstaged changes' errors.
Stash pop after to restore working state.
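A minimal sketch of that sequence (the emptiness guard is an addition for
safety, not necessarily how the pipeline step is written):
```
if ! git diff --quiet || ! git diff --cached --quiet; then
  STASHED=1
  git stash push --include-untracked -m "pre-pull autostash"
fi
git pull --rebase
[ "${STASHED:-0}" = 1 ] && git stash pop
```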
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- daily-backup: handle rsync exit 23 (partial transfer) as OK for LUKS
noload mounts — in-flight writes have corrupt metadata from skipped
journal replay, but core data is intact
- daily-backup: clean up stale LUKS dm mappings from previous crashed
runs before attempting to open
- daily-backup: capture rsync exit code safely under set -e (|| pattern);
  sketch below
- kyverno: bump tier-4-aux requests.memory 2Gi→3Gi (servarr was at 83%)
- actualbudget: patched custom quota 5Gi→6Gi (was at 82%)
Verified: backup now completes status=0 (96 PVCs OK, 0 failed)
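The rsync handling from the first three items reduces to roughly this (paths
are placeholders):
```
# Capture rsync's exit code without tripping set -e, then accept 23
# (partial transfer) as OK for LUKS noload mounts.
set -euo pipefail
rc=0
rsync -a "$SRC/" "$DST/" || rc=$?
if [ "$rc" -ne 0 ] && [ "$rc" -ne 23 ]; then
  echo "rsync failed with exit $rc" >&2
  exit "$rc"
fi
```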
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Both issue-automation and postmortem pipelines were cd'ing into
~/code/infra before running Claude, missing the root CLAUDE.md
with beads config and project-wide instructions. Now cd to ~/code
and use relative agent paths from there.
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- technitium-password-sync: remove RWO encrypted PVC mount that caused
pods to stick in ContainerCreating on wrong nodes. Plugin install now
warns instead of failing when zip unavailable.
- daily-backup: add LUKS decryption support for encrypted PVC snapshots
  using /root/.luks-backup-key. Uses the noload mount option to skip ext4
  journal replay (sketch below). Also installed cryptsetup-bin on the PVE host.
- speedtest: disable prometheus.io/scrape annotation (no /prometheus
endpoint exists, causing ScrapeTargetDown alert).
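The LUKS path in daily-backup looks roughly like this; the key file and the
noload option are from this change, while device names and mount points are
placeholders:
```
# Open the encrypted snapshot with the backup key, then mount read-only with
# noload so ext4 skips journal replay (the snapshot may carry a dirty journal).
cryptsetup open --key-file /root/.luks-backup-key "$SNAP_DEV" "backup-$PVC"
mount -o ro,noload "/dev/mapper/backup-$PVC" "/mnt/backup-src/$PVC"
# ... rsync to the backup target ...
umount "/mnt/backup-src/$PVC"
cryptsetup close "backup-$PVC"
```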
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- New custom CI Docker image (ci/Dockerfile) with TF 1.5.7, TG 0.99.4,
git-crypt, sops, kubectl pre-installed. Pushed to private registry.
Eliminates 17 apk add calls + binary downloads per pipeline run.
- Unified CI pipeline: merge default.yml + app-stacks.yml into one.
  Changed-stacks-only detection (git diff, with global-file fallback; sketch
  below). Concurrency limit (xargs -P 4). Step consolidation (2 steps vs 4).
  Shallow clone (depth=2). Provider cache (TF_PLUGIN_CACHE_DIR).
- Per-stack Vault advisory locks in scripts/tg. 30min TTL with stale
lock detection. Blocks concurrent applies to same stack.
- TF_PLUGIN_CACHE_DIR enabled by default in scripts/tg for local dev.
- Daily drift detection pipeline (.woodpecker/drift-detection.yml).
Runs terraform plan on all stacks, Slack alert on drift.
- CI image build pipeline (.woodpecker/build-ci-image.yml).
Expected speedup: ~5-10 min per pipeline run → ~2-4 min.
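The changed-stacks-only step reduces to roughly this; the stacks/<name>/
layout is from this repo, while the global-file list and the tg invocation
are assumptions:
```
# Plan only the stacks touched by this push, 4 at a time.
changed=$(git diff --name-only HEAD~1..HEAD | awk -F/ '/^stacks\//{print $2}' | sort -u)
# Global-file fallback: shared changes mean "plan everything".
if git diff --name-only HEAD~1..HEAD | grep -qE '^(terragrunt\.hcl|modules/)'; then
  changed=$(ls stacks/)
fi
echo "$changed" | xargs -P 4 -I{} ./scripts/tg {} plan
```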
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Increase Uptime Kuma API timeout to 120s with wait_events=0.2
- Remove hardcoded password, use Vault or UPTIME_KUMA_PASSWORD env var
- Report internal and external monitor status separately
- Install uptime-kuma-api in local venv
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Synology Administrator user can't recreate root-owned directories and
permissions from PVC snapshots. Switch from -az to -rltz --chmod to set writable
permissions on destination. Also updated Cloud Sync Task 1 excludes
to prevent duplication of backup dirs on Synology.
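The flag change amounts to something like the following; the exact --chmod
spec and paths here are illustrative, not the committed values:
```
# Keep recursion/links/times/compression, drop owner/group/perms preservation,
# and force destination permissions the Administrator user can write.
rsync -rltz --chmod=Du=rwx,Fu=rw "$SNAPSHOT_DIR/" "Administrator@synology:/volume1/backup/$PVC/"
```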
Verifies snmp-exporter, idrac-redfish-exporter, proxmox-exporter, and
tuya-bridge pods are running, plus checks Prometheus scrape targets
(snmp-idrac, snmp-ups, redfish-idrac, proxmox-host) are UP.
- Healthcheck: add entity availability, integration health, automation
status, and system resources checks for Home Assistant Sofia
- Docs: add backup-dr architecture documentation
When the pull-through proxy (10.0.20.10) is down, containerd now falls
back to the official upstream registries (registry-1.docker.io, ghcr.io)
instead of failing. Also cleans up stale disabled registry mirror dirs
and removes unnecessary containerd restart from the rollout script.
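With containerd's certs.d layout, the fallback comes from the `server` field
in each registry's hosts.toml; a sketch for docker.io (the proxy port and
capabilities are assumptions):
```
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml <<'EOF'
server = "https://registry-1.docker.io"   # upstream fallback when the mirror is down

[host."http://10.0.20.10:5000"]           # pull-through proxy (port assumed)
  capabilities = ["pull", "resolve"]
EOF
# certs.d changes are picked up per pull, hence no containerd restart needed.
```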
Both services migrated to unified ebooks namespace. Remove:
- Old stack directories and Terraform state
- calibre references from monitoring namespace lists
- calibre/audiobookshelf from operational scripts
- Each stack gets its own Vault Transit key (transit/keys/sops-state-<stack>)
- state-sync passes per-stack Transit URI + age keys on encrypt
- Vault policies scope namespace-owners to their stacks only:
- sops-admin: wildcard access to all transit keys
- sops-user-<name>: access only to owned stack keys
- Anca (plotting-book) can only decrypt plotting-book state
- Admin can decrypt everything (via admin Transit policy or age fallback)
- External group sops-plotting-book maps Authentik group to Vault policy
- Updated CLAUDE.md with state sync documentation
- .sops.yaml: add hc_vault_transit_uri for transit/keys/sops-state
- state-sync: try Vault Transit first, fall back to age key on disk
- Re-encrypted all 101 state files with both Vault Transit + age
- Normal workflow: vault login → decrypt via Transit (no key files)
- Bootstrap/DR: age key at ~/.config/sops/age/keys.txt
Kyverno ClusterPolicy reads dependency.kyverno.io/wait-for annotation
and injects busybox init containers that block until each dependency
is reachable (nc -z). Annotations added to 18 stacks (24 deployments).
Includes graceful-db-maintenance.sh script for planned DB maintenance
(scales dependents to 0, saves replica counts, restores on startup).
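Each injected busybox init container effectively runs a loop like this for
one dependency from the annotation (the example host:port is illustrative):
```
# Block until the dependency accepts TCP connections.
dep_host=postgres.dbaas.svc   # e.g. parsed from dependency.kyverno.io/wait-for
dep_port=5432
until nc -z "$dep_host" "$dep_port"; do
  echo "waiting for $dep_host:$dep_port"
  sleep 2
done
```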
Vault is now the sole source of truth for secrets. SOPS pipeline
removed entirely — auth via `vault login -method=oidc`.
Part A: SOPS removal
- vault/main.tf: delete 990 lines (93 vars + 43 KV write resources),
add self-read data source for OIDC creds from secret/vault
- terragrunt.hcl: remove SOPS var loading, vault_root_token, check_secrets hook
- scripts/tg: remove SOPS decryption, keep -auto-approve logic
- .woodpecker/default.yml: replace SOPS with Vault K8s auth via curl
- Delete secrets.sops.json, .sops.yaml
Part B: External Secrets Operator
- New stack stacks/external-secrets/ with Helm chart + 2 ClusterSecretStores
(vault-kv for KV v2, vault-database for DB engine)
Part C: Database secrets engine (in vault/main.tf)
- MySQL + PostgreSQL connections with static role rotation (24h)
- 6 MySQL roles (speedtest, wrongmove, codimd, nextcloud, shlink, grafana)
- 6 PostgreSQL roles (trading, health, linkwarden, affine, woodpecker, claude_memory)
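Rotated credentials for these roles are read from the database engine's
static-creds path; for example, for one of the roles above (the "database"
mount name is an assumption):
```
# Returns username, password, last_vault_rotation, ttl for the 24h static role.
vault read database/static-creds/grafana
```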
Part D: Kubernetes secrets engine (in vault/main.tf)
- RBAC for Vault SA to manage K8s tokens
- Roles: dashboard-admin, ci-deployer, openclaw, local-admin
- New scripts/vault-kubeconfig helper for dynamic kubeconfig
K8s auth method with scoped policies for CI, ESO, OpenClaw, Woodpecker sync.
- Replace subPath ConfigMap mount with init container that copies openclaw.json
to writable NFS home (OpenClaw writes back to the file at runtime)
- Remove invalid memory-api plugin references causing "Config invalid"
- Increase memory to 2Gi (req+limit) with NODE_OPTIONS=--max-old-space-size=1536
- Fix tg wrapper to inject -auto-approve when apply --non-interactive is used
Part of SOPS multi-user secrets migration.
- .sops.yaml: defines age recipients (Viktor + CI)
- scripts/tg: wrapper that decrypts secrets before running terragrunt
- .gitignore: excludes decrypted secrets.auto.tfvars.json
No functional change — terraform.tfvars still works as before.
Check real CPU/memory usage via kubectl top nodes instead of
limits-vs-allocatable ratios. Thresholds: >80% WARN, >90% FAIL.
Limits overcommit is expected with 70+ services on 3 worker nodes.
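A minimal sketch of that check, assuming the default `kubectl top nodes`
column order (NAME, CPU, CPU%, MEMORY, MEMORY%):
```
# WARN above 80%, FAIL above 90%, using live usage from metrics-server.
kubectl top nodes --no-headers | while read -r node _ cpu _ mem; do
  cpu=${cpu%\%}; mem=${mem%\%}
  if   [ "$cpu" -gt 90 ] || [ "$mem" -gt 90 ]; then echo "FAIL $node cpu=${cpu}% mem=${mem}%"
  elif [ "$cpu" -gt 80 ] || [ "$mem" -gt 80 ]; then echo "WARN $node cpu=${cpu}% mem=${mem}%"
  fi
done
```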
- Reduce Kyverno LimitRange default limits ~4x across all tiers to fix
800-900% memory overcommitment on worker nodes
- Add cluster health check #25: per-node resource overcommitment
showing requests and limits vs allocatable capacity
- Add Kyverno policy for Goldilocks VPA mode by tier: tier-0 namespaces
get VPA Off mode (recommend only, no evictions) to prevent downtime
on critical infra (traefik, cloudflared, authentik, technitium, etc.)
- Non-tier-0 namespaces get VPA Auto mode for active right-sizing
Drone CI has been fully replaced by Woodpecker CI at ci.viktorbarzin.me.
Destroys K8s resources (12), removes DNS records, NFS exports, Uptime Kuma
monitor, dashboard entry, and all code/doc references across 18 files.
Generated individual stack directories for all 66 services under stacks/.
Each stack has terragrunt.hcl (depends on platform) and main.tf (thin
wrapper calling existing module). Migrated all 64 active service states
from root terraform.tfstate to individual state files. Root state is now
empty. Verified with terragrunt plan on multiple stacks (no changes).
- Add explicit resource limits to dashy (2Gi memory) to prevent OOMKilled
during webpack build on startup
- Rewrite DNS healthcheck to test from inside the Technitium pod via
  kubectl exec, since MetalLB virtual IPs aren't reachable from outside
  the L2 network (sketch below)
- Deleted orphaned kured/tls-secret (expired Oct 2025, module disabled,
not mounted by kured DaemonSet)
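The rewritten check runs the query from inside the pod, along these lines
(namespace, deployment name, the test record, and the availability of
nslookup in the image are assumptions):
```
# Resolve a known record against the local Technitium instance from inside its pod.
kubectl exec -n technitium deploy/technitium -- \
  nslookup viktorbarzin.me 127.0.0.1
```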