infra/stacks/homepage/main.tf
Viktor Barzin 8b43692af0 [infra] Suppress Goldilocks vpa-update-mode label drift on all namespaces [ci skip]
## Context

Wave 3B-continued: the Goldilocks VPA dashboard (stacks/vpa) runs a Kyverno
ClusterPolicy `goldilocks-vpa-auto-mode` that mutates every namespace with
`metadata.labels["goldilocks.fairwinds.com/vpa-update-mode"] = "off"`. This
is intentional: Terraform owns container resource limits, and Goldilocks
should only provide recommendations, never auto-update. The label is how
Goldilocks decides, per namespace, whether to run its VPAs with updateMode
`off`.
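
For context, a minimal sketch of what such a mutate policy looks like when
managed from Terraform via `kubernetes_manifest`. The real resource lives in
stacks/vpa and may differ; the rule name and match clause here are
illustrative:

```hcl
resource "kubernetes_manifest" "goldilocks_vpa_auto_mode" {
  manifest = {
    apiVersion = "kyverno.io/v1"
    kind       = "ClusterPolicy"
    metadata   = { name = "goldilocks-vpa-auto-mode" }
    spec = {
      rules = [{
        name  = "force-vpa-update-mode-off" # illustrative rule name
        match = { any = [{ resources = { kinds = ["Namespace"] } }] }
        mutate = {
          # Admission-time strategic-merge patch: every namespace passing
          # through the webhook gets the label stamped on.
          patchStrategicMerge = {
            metadata = {
              labels = {
                "goldilocks.fairwinds.com/vpa-update-mode" = "off"
              }
            }
          }
        }
      }]
    }
  }
}
```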

Effect on Terraform: every `kubernetes_namespace` resource shows the label
as pending-removal (`-> null`) on every `scripts/tg plan`. A survey of the
dawarich stack on 2026-04-18 confirmed the drift. Cluster-side, 88
namespaces carry the label (`kubectl get ns -o json | jq ... | wc -l`), and
every TF-managed namespace is affected.
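
The exact `jq` filter from the survey was not preserved; one way to reproduce
the count:

```
$ kubectl get ns -o json \
    | jq -r '.items[]
             | select(.metadata.labels["goldilocks.fairwinds.com/vpa-update-mode"] != null)
             | .metadata.name' \
    | wc -l
88
```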

This commit brings the intentional admission drift under the same
`# KYVERNO_LIFECYCLE_V1` discoverability marker introduced in c9d221d5 for
the ndots dns_config pattern. The marker now stands generically for any
Kyverno admission-webhook drift suppression; the inline comment records
which specific policy stamps which specific field so future grep audits
show why each suppression exists.
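
For example, grouping identical inline comments gives a one-screen audit of
which policy stamps what:

```
$ rg --no-filename 'KYVERNO_LIFECYCLE_V1' stacks/ | sort | uniq -c
```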

## This change

107 `.tf` files touched; every `resource "kubernetes_namespace"` block in
every stack gets:

```hcl
lifecycle {
  # KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
  ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
}
```

Injection was done with a brace-depth-tracking Python pass (`/tmp/add_goldilocks_ignore.py`):
match `^resource "kubernetes_namespace" ` → track `{` / `}` until the
outermost closing brace → insert the lifecycle block before the closing
brace. The script is idempotent (skips any file that already mentions
`goldilocks.fairwinds.com/vpa-update-mode`) so re-running is safe.
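
The script itself lives in `/tmp` and was not committed; a minimal
reconstruction of the described pass (assuming canonical fmt style, i.e. the
opening `{` sits on the `resource` line):

```python
#!/usr/bin/env python3
# Reconstruction of /tmp/add_goldilocks_ignore.py from the description above;
# the original was not committed, so details may differ.
import re
import sys

BLOCK = '''  lifecycle {
    # KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
    ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
  }
'''

def inject(path):
    src = open(path).read()
    # Idempotency: skip files that already carry the suppression.
    if "goldilocks.fairwinds.com/vpa-update-mode" in src:
        return False
    out, depth, in_ns = [], 0, False
    for line in src.splitlines(keepends=True):
        if not in_ns and re.match(r'^resource "kubernetes_namespace" ', line):
            in_ns = True
        if in_ns:
            # Naive brace counting; assumes no stray braces inside strings.
            depth += line.count("{") - line.count("}")
            if depth == 0:
                out.append(BLOCK)  # insert before the outermost closing brace
                in_ns = False
        out.append(line)
    with open(path, "w") as f:
        f.write("".join(out))
    return True

if __name__ == "__main__":
    patched = [p for p in sys.argv[1:] if inject(p)]
    print(f"patched {len(patched)} file(s)")
```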

The vault stack picked up two blocks in the same file (k8s-users produces
one namespace, plus a second explicit one); confirmed via the file diff (+8 lines).

## What is NOT in this change

- `stacks/trading-bot/main.tf` — entire file is `/* … */` commented out
  (paused 2026-04-06 per user decision); the script's edit there was
  reverted after the run.
- `stacks/_template/main.tf.example` — per-stack skeleton, intentionally
  minimal. User keeps it that way. Not touched by the script (file
  has no real `resource "kubernetes_namespace"` — only a placeholder
  comment).
- `.terraform/` copies (e.g. `stacks/metallb/.terraform/modules/...`) —
  gitignored, won't commit; the live path was edited.
- `terraform fmt` cleanup of adjacent pre-existing alignment issues in
  authentik, freedify, hermes-agent, nvidia, vault, meshcentral. Reverted
  to keep the commit scoped to the Goldilocks sweep. Those files will
  need a separate fmt-only commit (a one-liner, sketched below) or will be
  cleaned up on the next real apply to that stack.
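
The deferred fmt cleanup is mechanical when it happens, e.g.:

```
$ for s in authentik freedify hermes-agent nvidia vault meshcentral; do
    terraform fmt "stacks/$s"
  done
```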

## Verification

Dawarich (one of the hundred-plus touched stacks) showed the pattern
before and after:

```
$ cd stacks/dawarich && ../../scripts/tg plan

Before:
  Plan: 0 to add, 2 to change, 0 to destroy.
   # kubernetes_namespace.dawarich will be updated in-place
     (goldilocks.fairwinds.com/vpa-update-mode -> null)
   # module.tls_secret.kubernetes_secret.tls_secret will be updated in-place
     (Kyverno generate.* labels — fixed in 8d94688d)

After:
  No changes. Your infrastructure matches the configuration.
```

Injection count check:
```
$ rg -c 'KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode' stacks/ | awk -F: '{s+=$2} END {print s}'
108
```
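
The per-file spread cross-checks too (107 files, vault contributing two
blocks in one file):

```
$ rg -l 'KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode' stacks/ | wc -l
107
```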

## Reproduce locally
1. `git pull`
2. Pick any stack: `cd stacks/<name> && ../../scripts/tg plan`
3. Expect: no drift on the namespace's `goldilocks.fairwinds.com/vpa-update-mode`
   label. (To sweep all stacks, see the loop below.)
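
Or sweep every stack in one go (slow, one plan each; note that stacks with
unrelated pending changes will also flag):

```
$ for d in stacks/*/; do
    (cd "$d" && ../../scripts/tg plan | grep -q 'No changes') || echo "DRIFT: $d"
  done
```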

Closes: code-dwx

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:15:27 +00:00


variable "tls_secret_name" {
type = string
sensitive = true
}
module "tls_secret" {
source = "../../modules/kubernetes/setup_tls_secret"
namespace = kubernetes_namespace.homepage.metadata[0].name
tls_secret_name = var.tls_secret_name
}
resource "kubernetes_namespace" "homepage" {
metadata {
name = "homepage"
labels = {
"istio-injection" : "disabled"
tier = local.tiers.aux
}
}
lifecycle {
# KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
}
}
resource "helm_release" "homepage" {
namespace = kubernetes_namespace.homepage.metadata[0].name
create_namespace = false
name = "homepage"
atomic = true
repository = "http://jameswynn.github.io/helm-charts"
chart = "homepage"
values = [file("${path.module}/values.yaml")]
}
# --- Caching proxy: nginx in front of Homepage for stale-while-revalidate on /api/ ---
resource "kubernetes_config_map" "cache_proxy" {
  metadata {
    name      = "homepage-cache-config"
    namespace = kubernetes_namespace.homepage.metadata[0].name
  }

  data = {
    "default.conf" = <<-EOT
      proxy_cache_path /tmp/cache levels=1:2 keys_zone=hp:10m max_size=500m inactive=24h;
      server {
        listen 80;
        # Resolve the upstream Service via kube-dns on every TTL expiry; using a
        # variable in proxy_pass forces nginx to re-resolve instead of pinning
        # the IP at startup.
        resolver kube-dns.kube-system.svc.cluster.local valid=5s;
        set $upstream http://homepage.homepage.svc.cluster.local:3000;

        location /api/ {
          proxy_pass $upstream;
          proxy_cache hp;
          proxy_cache_valid 200 24h;
          # Stale-while-revalidate: keep serving the cached response while a
          # single background request refreshes the entry.
          proxy_cache_use_stale updating error timeout;
          proxy_cache_background_update on;
          proxy_cache_lock on;
          proxy_cache_key "$request_uri";
          proxy_set_header Host $host;
          proxy_next_upstream error timeout http_500 http_502 http_503;
          proxy_next_upstream_tries 3;
          add_header X-Cache-Status $upstream_cache_status;
        }

        location / {
          proxy_pass $upstream;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_buffering off;
        }
      }
    EOT
  }
}

resource "kubernetes_deployment" "cache_proxy" {
metadata {
name = "homepage-cache"
namespace = kubernetes_namespace.homepage.metadata[0].name
}
spec {
replicas = 1
selector {
match_labels = { app = "homepage-cache" }
}
template {
metadata {
labels = { app = "homepage-cache" }
}
spec {
container {
name = "nginx"
image = "nginx:alpine"
port {
container_port = 80
}
resources {
requests = { cpu = "10m", memory = "64Mi" }
limits = { memory = "64Mi" }
}
volume_mount {
name = "config"
mount_path = "/etc/nginx/conf.d"
}
}
volume {
name = "config"
config_map {
name = kubernetes_config_map.cache_proxy.metadata[0].name
}
}
}
}
}
}
resource "kubernetes_service" "cache_proxy" {
metadata {
name = "homepage-cache"
namespace = kubernetes_namespace.homepage.metadata[0].name
}
spec {
selector = { app = "homepage-cache" }
port {
port = 80
target_port = 80
}
}
}
module "ingress" {
source = "../../modules/kubernetes/ingress_factory"
namespace = kubernetes_namespace.homepage.metadata[0].name
name = "homepage"
host = "home"
dns_type = "proxied"
service_name = kubernetes_service.cache_proxy.metadata[0].name
tls_secret_name = var.tls_secret_name
extra_annotations = {
"gethomepage.dev/enabled" = "true"
"gethomepage.dev/name" = "Homepage"
"gethomepage.dev/description" = "Service dashboard"
"gethomepage.dev/group" = "Core Platform"
"gethomepage.dev/icon" = "homepage.png"
}
}