infra/stacks/homepage/main.tf
Viktor Barzin 327ce215b9 [infra] Sweep dns_config ignore_changes across all pod-owning resources [ci skip]
## Context

Wave 3A (commit c9d221d5) added the `# KYVERNO_LIFECYCLE_V1` marker to the
27 pre-existing `ignore_changes = [...dns_config]` sites so they could be
grepped and audited. It did NOT address pod-owning resources that were
simply missing the suppression entirely. Post-Wave-3A sampling (2026-04-18)
found that navidrome, f1-stream, frigate, servarr, monitoring, crowdsec,
and many other stacks showed perpetual `dns_config` drift on every plan
because their `kubernetes_deployment` / `kubernetes_stateful_set` /
`kubernetes_cron_job_v1` resources had no `lifecycle {}` block at all.

Root cause (same as Wave 3A): Kyverno's admission webhook stamps
`dns_config { option { name = "ndots"; value = "2" } }` on every pod's
`spec.template.spec.dns_config` to prevent NxDomain search-domain flooding
(see `k8s-ndots-search-domain-nxdomain-flood` skill). Without `ignore_changes`
on every Terraform-managed pod-owner, Terraform repeatedly tries to strip
the injected field.
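
For orientation, this is what the injected field looks like when read back
through the kubernetes provider's schema (a sketch of the mutated pod
template, not a block taken from any stack):

```
# Kyverno-injected field as Terraform sees it on refresh; without
# ignore_changes, every plan proposes deleting this block:
dns_config {
  option {
    name  = "ndots"
    value = "2"
  }
}
```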

## This change

Extends the Wave 3A convention by sweeping EVERY `kubernetes_deployment`,
`kubernetes_stateful_set`, `kubernetes_daemon_set`, `kubernetes_cron_job_v1`,
`kubernetes_job_v1` (+ their `_v1` variants) in the repo and ensuring each
carries the right `ignore_changes` path (both forms are sketched after the list):

- **kubernetes_deployment / stateful_set / daemon_set / job_v1**:
  `spec[0].template[0].spec[0].dns_config`
- **kubernetes_cron_job_v1**:
  `spec[0].job_template[0].spec[0].template[0].spec[0].dns_config`
  (extra `job_template[0]` nesting — the CronJob's PodTemplateSpec is
  one level deeper)
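
Concretely, the two resulting block shapes look like this (illustrative
snippets following the convention above, not copied from any single stack):

```
# Deployment / StatefulSet / DaemonSet / Job form:
lifecycle {
  # KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
  ignore_changes = [spec[0].template[0].spec[0].dns_config]
}

# CronJob form (extra job_template[0] hop):
lifecycle {
  # KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
  ignore_changes = [spec[0].job_template[0].spec[0].template[0].spec[0].dns_config]
}
```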

Each injection / extension is tagged `# KYVERNO_LIFECYCLE_V1: Kyverno
admission webhook mutates dns_config with ndots=2` inline so the
suppression is discoverable via `rg 'KYVERNO_LIFECYCLE_V1' stacks/`.

Two insertion paths are handled by a Python pass (`/tmp/add_dns_config_ignore.py`):

1. **No existing `lifecycle {}`**: inject a brand-new block just before the
   resource's closing `}`. 108 new blocks across 93 files.
2. **Existing `lifecycle {}` (usually for `DRIFT_WORKAROUND: CI owns image tag`
   from Wave 4, commit a62b43d1)**: extend its `ignore_changes` list with the
   dns_config path. Handles both inline (`= [x]`) and multiline
   (`= [\n  x,\n]`) forms; ensures the last pre-existing list item carries
   a trailing comma so the extended list is valid HCL. 34 extensions
   (sketched below).

The script skips anything already mentioning `dns_config` inside an
`ignore_changes`, so re-running is a no-op.
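
To make case 2 concrete, here is a hypothetical before/after of an
extension; the image-tag path is illustrative, since actual Wave 4 sites
vary:

```
# Before (Wave 4 image-tag suppression only):
lifecycle {
  ignore_changes = [
    # DRIFT_WORKAROUND: CI owns image tag
    spec[0].template[0].spec[0].container[0].image,
  ]
}

# After the sweep: trailing comma ensured, dns_config path appended:
lifecycle {
  ignore_changes = [
    # DRIFT_WORKAROUND: CI owns image tag
    spec[0].template[0].spec[0].container[0].image,
    # KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
    spec[0].template[0].spec[0].dns_config,
  ]
}
```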

## Scale

- 142 total lifecycle injections/extensions
- 93 `.tf` files touched
- 108 brand-new `lifecycle {}` blocks + 34 extensions of existing ones
- Every Tier 0 and Tier 1 stack with a pod-owning resource is covered
- Together with Wave 3A's 27 pre-existing markers → **169 greppable
  `KYVERNO_LIFECYCLE_V1` dns_config sites across the repo**

## What is NOT in this change

- `stacks/trading-bot/main.tf` — the whole file is one commented-out block
  (`/* … */`). The Python script edited it anyway; that change was reverted
  manually.
- `_template/main.tf.example` skeleton — kept minimal on purpose; any
  future stack created from it should either inherit the Wave 3A one-line
  form or add its own suppression on its first `kubernetes_deployment`.
- `terraform fmt` fixes to pre-existing alignment issues in meshcentral,
  nvidia/modules/nvidia, vault — unrelated to this commit. Left for a
  separate fmt-only pass.
- Non-pod resources (`kubernetes_service`, `kubernetes_secret`,
  `kubernetes_manifest`, etc.) — they don't own pods so they don't get
  Kyverno dns_config mutation.

## Verification

Random sample post-commit:
```
$ cd stacks/navidrome && ../../scripts/tg plan  → No changes.
$ cd stacks/f1-stream && ../../scripts/tg plan  → No changes.
$ cd stacks/frigate && ../../scripts/tg plan    → No changes.

$ rg -c 'KYVERNO_LIFECYCLE_V1' stacks/ -g '*.tf' -g '*.tf.example' \
    | awk -F: '{s+=$2} END {print s}'
169
```

## Reproduce locally
1. `git pull`
2. `rg 'KYVERNO_LIFECYCLE_V1' stacks/ | wc -l` → 169+
3. `cd stacks/navidrome && ../../scripts/tg plan` → expect 0 drift on
   the deployment's dns_config field.

Refs: code-seq (Wave 3B dns_config class closed; kubernetes_manifest
annotation class handled separately in 8d94688d for tls_secret)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:19:48 +00:00

variable "tls_secret_name" {
type = string
sensitive = true
}
module "tls_secret" {
source = "../../modules/kubernetes/setup_tls_secret"
namespace = kubernetes_namespace.homepage.metadata[0].name
tls_secret_name = var.tls_secret_name
}
resource "kubernetes_namespace" "homepage" {
metadata {
name = "homepage"
labels = {
"istio-injection" : "disabled"
tier = local.tiers.aux
}
}
lifecycle {
# KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
}
}
resource "helm_release" "homepage" {
namespace = kubernetes_namespace.homepage.metadata[0].name
create_namespace = false
name = "homepage"
atomic = true
repository = "http://jameswynn.github.io/helm-charts"
chart = "homepage"
values = [file("${path.module}/values.yaml")]
}
# --- Caching proxy: nginx in front of Homepage for stale-while-revalidate on /api/ ---
resource "kubernetes_config_map" "cache_proxy" {
metadata {
name = "homepage-cache-config"
namespace = kubernetes_namespace.homepage.metadata[0].name
}
data = {
"default.conf" = <<-EOT
proxy_cache_path /tmp/cache levels=1:2 keys_zone=hp:10m max_size=500m inactive=24h;
server {
listen 80;
resolver kube-dns.kube-system.svc.cluster.local valid=5s;
set $upstream http://homepage.homepage.svc.cluster.local:3000;
location /api/ {
proxy_pass $upstream;
proxy_cache hp;
proxy_cache_valid 200 24h;
proxy_cache_use_stale updating error timeout;
proxy_cache_background_update on;
proxy_cache_lock on;
proxy_cache_key "$request_uri";
proxy_set_header Host $host;
proxy_next_upstream error timeout http_500 http_502 http_503;
proxy_next_upstream_tries 3;
add_header X-Cache-Status $upstream_cache_status;
}
location / {
proxy_pass $upstream;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
}
}
EOT
}
}
resource "kubernetes_deployment" "cache_proxy" {
metadata {
name = "homepage-cache"
namespace = kubernetes_namespace.homepage.metadata[0].name
}
spec {
replicas = 1
selector {
match_labels = { app = "homepage-cache" }
}
template {
metadata {
labels = { app = "homepage-cache" }
}
spec {
container {
name = "nginx"
image = "nginx:alpine"
port {
container_port = 80
}
resources {
requests = { cpu = "10m", memory = "64Mi" }
limits = { memory = "64Mi" }
}
volume_mount {
name = "config"
mount_path = "/etc/nginx/conf.d"
}
}
volume {
name = "config"
config_map {
name = kubernetes_config_map.cache_proxy.metadata[0].name
}
}
}
}
}
lifecycle {
# KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
ignore_changes = [spec[0].template[0].spec[0].dns_config]
}
}
resource "kubernetes_service" "cache_proxy" {
metadata {
name = "homepage-cache"
namespace = kubernetes_namespace.homepage.metadata[0].name
}
spec {
selector = { app = "homepage-cache" }
port {
port = 80
target_port = 80
}
}
}
module "ingress" {
source = "../../modules/kubernetes/ingress_factory"
namespace = kubernetes_namespace.homepage.metadata[0].name
name = "homepage"
host = "home"
dns_type = "proxied"
service_name = kubernetes_service.cache_proxy.metadata[0].name
tls_secret_name = var.tls_secret_name
extra_annotations = {
"gethomepage.dev/enabled" = "true"
"gethomepage.dev/name" = "Homepage"
"gethomepage.dev/description" = "Service dashboard"
"gethomepage.dev/group" = "Core Platform"
"gethomepage.dev/icon" = "homepage.png"
}
}