## Context
Wave 3A (commit c9d221d5) added the `# KYVERNO_LIFECYCLE_V1` marker to the
27 pre-existing `ignore_changes = [...dns_config]` sites so they could be
grepped and audited. It did NOT address pod-owning resources that were
simply missing the suppression entirely. Post-Wave-3A sampling (2026-04-18)
found that navidrome, f1-stream, frigate, servarr, monitoring, crowdsec,
and many other stacks showed perpetual `dns_config` drift on every plan
because their `kubernetes_deployment` / `kubernetes_stateful_set` /
`kubernetes_cron_job_v1` resources had no `lifecycle {}` block at all.
Root cause (same as Wave 3A): Kyverno's admission webhook stamps
`dns_config { option { name = "ndots"; value = "2" } }` on every pod's
`spec.template.spec.dns_config` to prevent NxDomain search-domain flooding
(see `k8s-ndots-search-domain-nxdomain-flood` skill). Without `ignore_changes`
on every Terraform-managed pod-owner, Terraform repeatedly tries to strip
the injected field.
## This change
Extends the Wave 3A convention by sweeping EVERY `kubernetes_deployment`,
`kubernetes_stateful_set`, `kubernetes_daemon_set`, `kubernetes_cron_job_v1`,
and `kubernetes_job_v1` resource in the repo (unversioned and `_v1` variants
alike) and ensuring each carries the right `ignore_changes` path:
- **kubernetes_deployment / stateful_set / daemon_set / job_v1**:
`spec[0].template[0].spec[0].dns_config`
- **kubernetes_cron_job_v1**:
`spec[0].job_template[0].spec[0].template[0].spec[0].dns_config`
(extra `job_template[0]` nesting — the CronJob's PodTemplateSpec is
one level deeper)
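Concretely, the two injected forms look like this (resource names are illustrative; the paths are the ones listed above):

```hcl
resource "kubernetes_deployment" "app" {
  # ... metadata / spec elided ...
  lifecycle {
    # KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
    ignore_changes = [spec[0].template[0].spec[0].dns_config]
  }
}

resource "kubernetes_cron_job_v1" "job" {
  # ... metadata / spec elided ...
  lifecycle {
    # KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
    # Note the extra job_template[0] hop: the CronJob's PodTemplateSpec is one level deeper.
    ignore_changes = [spec[0].job_template[0].spec[0].template[0].spec[0].dns_config]
  }
}
```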
Each injection / extension is tagged `# KYVERNO_LIFECYCLE_V1: Kyverno
admission webhook mutates dns_config with ndots=2` inline so the
suppression is discoverable via `rg 'KYVERNO_LIFECYCLE_V1' stacks/`.
Two insertion paths are handled by a Python pass (`/tmp/add_dns_config_ignore.py`):
1. **No existing `lifecycle {}`**: inject a brand-new block just before the
resource's closing `}`. 108 new blocks on 93 files.
2. **Existing `lifecycle {}` (usually for `DRIFT_WORKAROUND: CI owns image tag`
from Wave 4, commit a62b43d1)**: extend its `ignore_changes` list with the
dns_config path. Handles both inline (`= [x]`) and multiline
(`= [\n x,\n]`) forms; ensures the last pre-existing list item carries
a trailing comma so the extended list is valid HCL. 34 extensions.
The script skips anything already mentioning `dns_config` inside an
`ignore_changes`, so re-running is a no-op.
## Scale
- 142 total lifecycle injections/extensions
- 93 `.tf` files touched
- 108 brand-new `lifecycle {}` blocks + 34 extensions of existing ones
- Every Tier 0 and Tier 1 stack with a pod-owning resource is covered
- Together with Wave 3A's 27 pre-existing markers → **169 greppable
`KYVERNO_LIFECYCLE_V1` dns_config sites across the repo**
## What is NOT in this change
- `stacks/trading-bot/main.tf` — the file is one entirely commented-out block
  (`/* … */`). The Python script touched it; that change was reverted manually.
- `_template/main.tf.example` skeleton — kept minimal on purpose; any
future stack created from it should either inherit the Wave 3A one-line
form or add its own on first `kubernetes_deployment`.
- `terraform fmt` fixes to pre-existing alignment issues in meshcentral,
nvidia/modules/nvidia, vault — unrelated to this commit. Left for a
separate fmt-only pass.
- Non-pod resources (`kubernetes_service`, `kubernetes_secret`,
`kubernetes_manifest`, etc.) — they don't own pods so they don't get
Kyverno dns_config mutation.
## Verification
Random sample post-commit:
```
$ cd stacks/navidrome && ../../scripts/tg plan → No changes.
$ cd stacks/f1-stream && ../../scripts/tg plan → No changes.
$ cd stacks/frigate && ../../scripts/tg plan → No changes.
$ rg -c 'KYVERNO_LIFECYCLE_V1' stacks/ -g '*.tf' -g '*.tf.example' \
  | awk -F: '{s+=$2} END {print s}'
169
```
## Reproduce locally
1. `git pull`
2. `rg 'KYVERNO_LIFECYCLE_V1' stacks/ | wc -l` → 169+
3. `cd stacks/navidrome && ../../scripts/tg plan` → expect 0 drift on
the deployment's dns_config field.
Refs: code-seq (Wave 3B dns_config class closed; kubernetes_manifest
annotation class handled separately in 8d94688d for tls_secret)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Context
Wave 3B-continued: the Goldilocks VPA dashboard (stacks/vpa) runs a Kyverno
ClusterPolicy `goldilocks-vpa-auto-mode` that mutates every namespace with
`metadata.labels["goldilocks.fairwinds.com/vpa-update-mode"] = "off"`. This
is intentional — Terraform owns container resource limits, and Goldilocks
should only provide recommendations, never auto-update. The label is how
Goldilocks decides per-namespace whether to run its VPA in `off` mode.
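A minimal sketch of what such a mutate policy looks like, assuming Kyverno's standard `patchStrategicMerge` form (the actual `goldilocks-vpa-auto-mode` policy in stacks/vpa may be structured differently):

```hcl
resource "kubernetes_manifest" "goldilocks_vpa_auto_mode" {
  manifest = {
    apiVersion = "kyverno.io/v1"
    kind       = "ClusterPolicy"
    metadata   = { name = "goldilocks-vpa-auto-mode" }
    spec = {
      rules = [{
        name  = "set-vpa-update-mode-off"
        match = { any = [{ resources = { kinds = ["Namespace"] } }] }
        mutate = {
          # Stamp every namespace so Goldilocks runs its VPA in "off"
          # (recommendation-only) mode; Terraform owns resource limits.
          patchStrategicMerge = {
            metadata = {
              labels = { "goldilocks.fairwinds.com/vpa-update-mode" = "off" }
            }
          }
        }
      }]
    }
  }
}
```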
Effect on Terraform: every `kubernetes_namespace` resource shows the label
as pending-removal (`-> null`) on every `scripts/tg plan`. Dawarich survey
2026-04-18 confirmed the drift. Cluster-side count: 88 namespaces carry the
label (`kubectl get ns -o json | jq ... | wc -l`). Every TF-managed namespace
is affected.
This commit brings the intentional admission drift under the same
`# KYVERNO_LIFECYCLE_V1` discoverability marker introduced in c9d221d5 for
the ndots dns_config pattern. The marker now stands generically for any
Kyverno admission-webhook drift suppression; the inline comment records
which specific policy stamps which specific field so future grep audits
show why each suppression exists.
## This change
107 `.tf` files touched — every stack's `resource "kubernetes_namespace"`
block gets:
```hcl
lifecycle {
  # KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
  ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
}
```
Injection was done with a brace-depth-tracking Python pass (`/tmp/add_goldilocks_ignore.py`):
match `^resource "kubernetes_namespace" ` → track `{` / `}` until the
outermost closing brace → insert the lifecycle block before the closing
brace. The script is idempotent (skips any file that already mentions
`goldilocks.fairwinds.com/vpa-update-mode`) so re-running is safe.
The vault stack picked up 2 namespaces in the same file (k8s-users produces
one, plus a second explicit namespace) — confirmed via the file diff (+8 lines).
## What is NOT in this change
- `stacks/trading-bot/main.tf` — entire file is `/* … */` commented out
(paused 2026-04-06 per user decision). Reverted after the script ran.
- `stacks/_template/main.tf.example` — per-stack skeleton, intentionally
minimal. User keeps it that way. Not touched by the script (file
has no real `resource "kubernetes_namespace"` — only a placeholder
comment).
- `.terraform/` copies (e.g. `stacks/metallb/.terraform/modules/...`) —
  gitignored and never committed; the live path was edited.
- `terraform fmt` cleanup of adjacent pre-existing alignment issues in
authentik, freedify, hermes-agent, nvidia, vault, meshcentral. Reverted
to keep the commit scoped to the Goldilocks sweep. Those files will
need a separate fmt-only commit or will be cleaned up on next real
apply to that stack.
## Verification
Dawarich (one of the hundred-plus touched stacks) showed the pattern
before and after:
```
$ cd stacks/dawarich && ../../scripts/tg plan
Before:
Plan: 0 to add, 2 to change, 0 to destroy.
# kubernetes_namespace.dawarich will be updated in-place
(goldilocks.fairwinds.com/vpa-update-mode -> null)
# module.tls_secret.kubernetes_secret.tls_secret will be updated in-place
(Kyverno generate.* labels — fixed in 8d94688d)
After:
No changes. Your infrastructure matches the configuration.
```
Injection count check:
```
$ rg -c 'KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode' stacks/ | awk -F: '{s+=$2} END {print s}'
108
```
## Reproduce locally
1. `git pull`
2. Pick any stack: `cd stacks/<name> && ../../scripts/tg plan`
3. Expect: no drift on the namespace's goldilocks.fairwinds.com/vpa-update-mode label.
Closes: code-dwx
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Context
Grampsweb stack had an empty Terraform state — 7 K8s resources (namespace,
PVC, service, deployment, ingress, ExternalSecret manifest, TLS secret)
existed in the cluster but weren't tracked. This blocked commit 7b248897
(ollama LLM env-var removal) from being applied because any apply would
attempt to re-create existing resources.
Additionally, the TF source declared a **grampsweb-data-proxmox** PVC on
**storage_class=proxmox-lvm**, while the cluster had **grampsweb-data-encrypted**
on **proxmox-lvm-encrypted** (1 Gi, bound). The deployment was referencing
the encrypted PVC. This divergence predated this change — the source was
simply out of date vs cluster reality.
## This change
Two things:
1. **Source alignment** (the only file diff):
- Renames `kubernetes_persistent_volume_claim.data_proxmox` →
`data_encrypted`, metadata.name to match cluster, storage class to
`proxmox-lvm-encrypted`.
- Updates the deployment volume `claim_name` reference accordingly.
- Aligns with the newer project convention documented in
`.claude/CLAUDE.md`: "Default for sensitive data is
proxmox-lvm-encrypted" and "Convention: PVC names end in `-encrypted`".
- No destroy/recreate: the PVC and deployment already use the encrypted
PVC in the cluster; TF source now just describes reality.
2. **State imports** (out-of-band, via `scripts/tg import`, not in diff):
- `kubernetes_namespace.grampsweb` <- `grampsweb`
- `kubernetes_persistent_volume_claim.data_encrypted` <- `grampsweb/grampsweb-data-encrypted`
- `kubernetes_service.grampsweb` <- `grampsweb/grampsweb`
- `kubernetes_deployment.grampsweb` <- `grampsweb/grampsweb`
- `module.ingress.kubernetes_ingress_v1.proxied-ingress` <- `grampsweb/family`
- `module.tls_secret.kubernetes_secret.tls_secret` <- `grampsweb/tls-secret`
- `kubernetes_manifest.external_secret` <- `apiVersion=external-secrets.io/v1beta1,kind=ExternalSecret,namespace=grampsweb,name=grampsweb-secrets`
## Apply result
`Apply complete! Resources: 0 added, 7 changed, 0 destroyed.`
In-place updates applied:
- Deployment: dropped `GRAMPSWEB_LLM_BASE_URL` + `GRAMPSWEB_LLM_MODEL` env
vars (both containers) — realising the intent of commit 7b248897.
- Ingress: realigned Traefik middleware annotation + cleaned stale
`uptime.viktorbarzin.me/external-monitor=false` annotation.
- TLS secret: removed Kyverno-generated labels (Kyverno's
`sync-tls-secret` ClusterPolicy re-applies them on next reconcile —
no functional impact; same pattern in 29 other stacks using
`setup_tls_secret` module).
- Namespace, PVC, service: trivial metadata alignments (label /
`wait_until_bound` / `wait_for_load_balancer`).
- `kubernetes_manifest.external_secret`: populated the `manifest`
attribute after import (expected).
## What is NOT in this change
- No replica bump: deployment stays at `replicas=0` (stack is intentionally
inactive per 2026-03-14 OOM incident note).
- No destroy/recreate of any resource.
- The broader code-w97 (11 stacks with empty state) is NOT closed — only
grampsweb is imported. 10 stacks remain: beads-server, insta2spotify,
isponsorblocktv, kyverno, meshcentral, pvc-autoresizer, shadowsocks,
tor-proxy, travel_blog, + meshcentral PVC.
## Reproduce locally
```
KUBECONFIG=/home/wizard/code/config kubectl get all,ingress,pvc,externalsecret,secret -n grampsweb
# Deployment still replicas=0; PVC grampsweb-data-encrypted Bound; ingress 'family'
# on family.viktorbarzin.me; ExternalSecret SecretSynced True.
cd /home/wizard/code/infra/stacks/grampsweb
/home/wizard/code/infra/scripts/tg plan
# Expected: 'No changes.' (clean state after apply).
```
## Test Plan
### Automated
```
$ cd /home/wizard/code/infra/stacks/grampsweb && /home/wizard/code/infra/scripts/tg plan
Plan: 0 to add, 7 to change, 0 to destroy. [pre-apply]
$ /home/wizard/code/infra/scripts/tg apply --non-interactive
Plan: 0 to add, 7 to change, 0 to destroy.
kubernetes_namespace.grampsweb: Modifications complete after 0s [id=grampsweb]
kubernetes_persistent_volume_claim.data_encrypted: Modifications complete after 0s [id=grampsweb/grampsweb-data-encrypted]
kubernetes_service.grampsweb: Modifications complete after 0s [id=grampsweb/grampsweb]
module.ingress.kubernetes_ingress_v1.proxied-ingress: Modifications complete after 0s [id=grampsweb/family]
module.tls_secret.kubernetes_secret.tls_secret: Modifications complete after 0s [id=grampsweb/tls-secret]
kubernetes_manifest.external_secret: Modifications complete after 0s
kubernetes_deployment.grampsweb: Modifications complete after 1s [id=grampsweb/grampsweb]
Apply complete! Resources: 0 added, 7 changed, 0 destroyed.
$ terraform fmt -check -recursive stacks/grampsweb
(no output - formatted clean)
```
### Manual Verification
```
$ KUBECONFIG=/home/wizard/code/config kubectl get all,ingress,pvc,externalsecret,secret -n grampsweb
# - deployment.apps/grampsweb 0/0 0 0 47d (replicas=0 preserved)
# - service/grampsweb ClusterIP 10.106.232.205:80/TCP
# - persistentvolumeclaim/grampsweb-data-encrypted Bound pvc-c9a5dcf4... 1Gi RWO proxmox-lvm-encrypted
# - ingress/family traefik family.viktorbarzin.me -> 10.0.20.200:80,443
# - externalsecret/grampsweb-secrets vault-kv 15m SecretSynced True
# - secret/tls-secret kubernetes.io/tls
# No pod crashes (no pods — replicas=0).
```
Closes: code-8m6
## Context
Stage 4 of ollama decommission. `grampsweb` referenced `var.ollama_host` for
its `GRAMPSWEB_LLM_BASE_URL` + `GRAMPSWEB_LLM_MODEL` env vars. This stack is
currently missing from Terraform state (blocked by bd-w97, which handles
state imports for 11 stacks including grampsweb) — so an apply would fail on
"resource already exists" errors.
## This change
- Deletes `variable "ollama_host"` declaration (stacks/grampsweb/main.tf).
- Deletes the two env entries `GRAMPSWEB_LLM_BASE_URL` and
`GRAMPSWEB_LLM_MODEL` from the `common_env` locals block.
- **Source-only** — NO apply performed, because the stack cannot apply
cleanly until bd-w97 resolves state imports. When that unblocks, the next
apply will pick up the already-clean source.
## Why not apply now
- Running `scripts/tg apply` would try to create ~7 resources that already
exist in K8s (namespace, PVCs, deployments, ingress, etc.), producing
"already exists" errors for each.
- Once bd-w97 imports those into state, the next apply will be a no-op for
them and will rollout the LLM env-var removal without issue.
## Test plan
### Automated
- No apply performed — stack blocked on bd-w97.
- `terraform fmt` on main.tf: no issues.
### Manual Verification
After bd-w97 resolves:
1. `scripts/tg plan` should show only the env-var removal on `grampsweb`
deployments (no resource creates).
2. `scripts/tg apply` → deployments rollout with `GRAMPSWEB_LLM_*` vars gone.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two-tier state architecture:
- Tier 0 (infra, platform, cnpg, vault, dbaas, external-secrets): local
state with SOPS encryption in git — unchanged, required for bootstrap.
- Tier 1 (105 app stacks): PostgreSQL backend on CNPG cluster at
10.0.20.200:5432/terraform_state with native pg_advisory_lock.
Motivation: multi-operator friction (every workstation needed SOPS + age +
git-crypt), bootstrap complexity for new operators, and headless agents/CI
needing the full encryption toolchain just to read state.
Changes:
- terragrunt.hcl: conditional backend (local vs pg) based on tier0 list
- scripts/tg: tier detection, auto-fetch PG creds from Vault for Tier 1,
skip SOPS and Vault KV locking for Tier 1 stacks
- scripts/state-sync: tier-aware encrypt/decrypt (skips Tier 1)
- scripts/migrate-state-to-pg: one-shot migration script (idempotent)
- stacks/vault/main.tf: pg-terraform-state static role + K8s auth role
for claude-agent namespace
- stacks/dbaas: terraform_state DB creation + MetalLB LoadBalancer
service on shared IP 10.0.20.200
- Deleted 107 .tfstate.enc files for migrated Tier 1 stacks
- Cleaned up per-stack tiers.tf (now generated by root terragrunt.hcl)
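One way the conditional backend selection could be expressed in the root terragrunt.hcl — a sketch, assuming the tier-0 list and connection details above; the real file likely differs in structure and credential handling:

```hcl
locals {
  stack = basename(get_terragrunt_dir())
  tier0 = ["infra", "platform", "cnpg", "vault", "dbaas", "external-secrets"]
}

remote_state {
  # Tier 0: local state (SOPS-encrypted in git, required for bootstrap).
  # Tier 1: Terraform's pg backend with native pg_advisory_lock.
  backend = contains(local.tier0, local.stack) ? "local" : "pg"
  config = contains(local.tier0, local.stack) ? {
    path = "${get_terragrunt_dir()}/terraform.tfstate"
  } : {
    conn_str    = "postgres://10.0.20.200:5432/terraform_state" # creds fetched from Vault by scripts/tg
    schema_name = local.stack
  }
}
```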
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replaced data "vault_kv_secret_v2" with:
1. ExternalSecret (ESO syncs Vault KV → K8s Secret)
2. data "kubernetes_secret" (reads ESO-created secret at plan time)
This removes the Vault provider dependency at plan time for these
stacks — they now only need K8s API access, not a Vault token.
Stacks: actualbudget, affine, audiobookshelf, calibre, changedetection,
coturn, freedify, freshrss, grampsweb, navidrome, novelapp, ollama,
owntracks, real-estate-crawler, servarr, ytdlp
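The replacement pattern, sketched for a hypothetical stack (resource and secret names illustrative; the `vault-kv` store name and `v1beta1` apiVersion match what appears elsewhere in these commits):

```hcl
# 1. ESO syncs Vault KV into a K8s Secret.
resource "kubernetes_manifest" "external_secret" {
  manifest = {
    apiVersion = "external-secrets.io/v1beta1"
    kind       = "ExternalSecret"
    metadata   = { name = "app-secrets", namespace = "app" }
    spec = {
      secretStoreRef = { name = "vault-kv", kind = "ClusterSecretStore" }
      target         = { name = "app-secrets" }
      dataFrom       = [{ extract = { key = "app" } }]
    }
  }
}

# 2. Read the ESO-created secret at plan time — K8s API access only,
#    no Vault token needed.
data "kubernetes_secret" "app" {
  metadata {
    name      = "app-secrets"
    namespace = "app"
  }
  depends_on = [kubernetes_manifest.external_secret]
}
```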
A Kyverno ClusterPolicy reads the dependency.kyverno.io/wait-for annotation
and injects busybox init containers that block until each dependency
is reachable (`nc -z`). Annotations added to 18 stacks (24 deployments).
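A sketch of declaring the annotation from Terraform. The names are hypothetical, and the annotation value format (comma-separated host:port pairs for `nc -z`) and its placement on the pod template are assumptions, not confirmed by this commit:

```hcl
resource "kubernetes_annotations" "wait_for_deps" {
  api_version = "apps/v1"
  kind        = "Deployment"
  metadata {
    name      = "app" # hypothetical deployment
    namespace = "app"
  }
  # Kyverno reads this and injects one busybox init container per
  # dependency, each blocking until `nc -z <host> <port>` succeeds.
  template_annotations = {
    "dependency.kyverno.io/wait-for" = "postgres.dbaas.svc:5432,redis.cache.svc:6379"
  }
}
```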
Includes graceful-db-maintenance.sh script for planned DB maintenance
(scales dependents to 0, saves replica counts, restores on startup).
SQLite backup via Online Backup API + copy of RSA keys,
attachments, sends, and config. 30-day retention with rotation.
Pod affinity ensures co-scheduling with vaultwarden for RWO PVC access.
- Set memory requests = limits across 56 stacks to prevent overcommit
- Right-sized limits based on actual pod usage (2x actual, rounded up)
- Scaled down trading-bot (replicas=0) to free memory
- Fixed OOMKilled services: forgejo, dawarich, health, meshcentral,
paperless-ngx, vault auto-unseal, rybbit, whisper, openclaw, clickhouse
- Added startup+liveness probes to calibre-web
- Bumped inotify limits on nodes 2,3 (max_user_instances 128->8192)
Post node2 OOM incident (2026-03-14). Previous kubelet config had no
kubeReserved/systemReserved set, allowing pods to starve the kernel.
- Add vault provider to root terragrunt.hcl (generated providers.tf)
- Delete stacks/vault/vault_provider.tf (now in generated providers.tf)
- Add 124 variable declarations + 43 vault_kv_secret_v2 resources to
vault/main.tf to populate Vault KV at secret/<stack-name>
- Migrate 43 consuming stacks to read secrets from Vault KV via
data "vault_kv_secret_v2" instead of SOPS var-file
- Add dependency "vault" to all migrated stacks' terragrunt.hcl
- Complex types (maps/lists) stored as JSON strings, decoded with
jsondecode() in locals blocks
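A sketch of the consuming-stack pattern (stack and key names hypothetical):

```hcl
data "vault_kv_secret_v2" "this" {
  mount = "secret"
  name  = "my-stack" # secret/<stack-name>
}

locals {
  # Scalars come straight off the KV data map.
  db_password = data.vault_kv_secret_v2.this.data["db_password"]
  # Maps/lists are stored as JSON strings and decoded here.
  extra_env = jsondecode(data.vault_kv_secret_v2.this.data["extra_env"])
}
```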
Bootstrap secrets (vault_root_token, vault_authentik_client_id,
vault_authentik_client_secret) remain in SOPS permanently.
Apply order: vault stack first (populates KV), then all others.
CPU limits cause CFS throttling even when nodes have idle capacity.
Move to a request-only CPU model: keep CPU requests for scheduling
fairness but remove all CPU limits. Memory limits stay (incompressible).
Changes across 108 files:
- Kyverno LimitRange policy: remove cpu from default/max in all 6 tiers
- Kyverno ResourceQuota policy: remove limits.cpu from all 5 tiers
- Custom ResourceQuotas: remove limits.cpu from 8 namespace quotas
- Custom LimitRanges: remove cpu from default/max (nextcloud, onlyoffice)
- RBAC module: remove cpu_limits variable and quota reference
- Freedify factory: remove cpu_limit variable and limits reference
- 86 deployment files: remove cpu from all limits blocks
- 6 Helm values files: remove cpu under limits sections
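For a typical container, the resulting request-only CPU model looks like this fragment (values illustrative):

```hcl
resources {
  requests = {
    cpu    = "100m"  # kept: scheduling fairness
    memory = "256Mi"
  }
  limits = {
    # no cpu entry — removed to avoid CFS throttling on idle nodes
    memory = "256Mi" # memory stays limited (incompressible)
  }
}
```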
- Pin actualbudget/actual-server from edge to 26.3.0 (all 3 instances) to
prevent recurring migration breakage from rolling nightly builds
- Add podAntiAffinity to MySQL InnoDB Cluster to spread replicas across nodes,
relieving memory pressure on k8s-node4
- Scale grampsweb to 0 replicas (unused, consuming 1.7Gi memory)
- Add GPU toleration Kyverno policy to Terraform using patchesJson6902 instead
  of patchStrategicMerge, fixing the toleration array being overwritten (which
  left the caretta DaemonSet pod unable to schedule on k8s-master)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use correct dashboard-icons names where available (changedetection,
gramps-web), Material Design Icons for custom apps (city-guesser,
plotting-book, resume, tuya-bridge, trading-bot, poison-fountain),
and Simple Icons for F1 Stream.
Add Kubernetes ingress annotations for Homepage auto-discovery across
~88 services organized into 11 groups. Enable serviceAccount for RBAC,
configure group layouts, and add Grafana/Frigate/Speedtest widgets.
Phase 5 — CI pipelines:
- default.yml: add SOPS decrypt in prepare step, change git add . to
specific paths (stacks/ state/ .woodpecker/), cleanup on success+failure
- renew-tls.yml: change git add . to git add secrets/ state/
Phase 6 — sensitive=true:
- Add sensitive = true to 256 variable declarations across 149 stack files
- Prevents secret values from appearing in terraform plan output
- Does NOT modify shared modules (ingress_factory, nfs_volume) to avoid
breaking module interface contracts
Note: CI pipeline SOPS decryption requires sops_age_key Woodpecker secret
to be created before the pipeline will work with SOPS. Until then, the old
terraform.tfvars path continues to function.
Remove the module "xxx" { source = "./module" } indirection layer
from all 66 service stacks. Resources are now defined directly in
each stack's main.tf instead of through a wrapper module.
- Merge module/main.tf contents into stack main.tf
- Apply variable replacements (var.tier -> local.tiers.X, renamed vars)
- Fix shared module paths (one fewer ../ at each level)
- Move extra files/dirs (factory/, chart_values, subdirs) to stack root
- Update state files to strip module.<name>. prefix
- Update CLAUDE.md to reflect flat structure
Verified: terragrunt plan shows 0 add, 0 destroy across all stacks.
Move all 88 service modules (66 individual + 22 platform) from
modules/kubernetes/<service>/ into their corresponding stack directories:
- Service stacks: stacks/<service>/module/
- Platform stack: stacks/platform/modules/<service>/
This collocates module source code with its Terragrunt definition.
Only shared utility modules remain in modules/kubernetes/:
ingress_factory, setup_tls_secret, dockerhub_secret, oauth-proxy.
All cross-references to shared modules updated to use correct
relative paths. Verified with terragrunt run --all -- plan:
0 adds, 0 destroys across all 68 stacks.
Generated individual stack directories for all 66 services under stacks/.
Each stack has terragrunt.hcl (depends on platform) and main.tf (thin
wrapper calling existing module). Migrated all 64 active service states
from root terraform.tfstate to individual state files. Root state is now
empty. Verified with terragrunt plan on multiple stacks (no changes).