infra/stacks/osm_routing/main.tf
Viktor Barzin 8b43692af0 [infra] Suppress Goldilocks vpa-update-mode label drift on all namespaces [ci skip]
## Context

Wave 3B-continued: the Goldilocks VPA dashboard (stacks/vpa) runs a Kyverno
ClusterPolicy `goldilocks-vpa-auto-mode` that mutates every namespace with
`metadata.labels["goldilocks.fairwinds.com/vpa-update-mode"] = "off"`. This
is intentional — Terraform owns container resource limits, and Goldilocks
should only provide recommendations, never auto-update. The label is how
Goldilocks decides per-namespace whether to run its VPA in `off` mode.

Effect on Terraform: every `kubernetes_namespace` resource shows the label
as pending removal (`-> null`) on every `scripts/tg plan`. A survey of the
Dawarich stack on 2026-04-18 confirmed the drift. Cluster-side count: 88
namespaces carry the label (`kubectl get ns -o json | jq ... | wc -l`);
every TF-managed namespace is affected.
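
The jq filter above is elided; for reference, the same count can be taken with
a few lines of Python over the `kubectl get ns -o json` output (a sketch,
assuming only the standard NamespaceList shape):

```python
#!/usr/bin/env python3
# One way to reproduce the cluster-side count (the exact jq filter used in the
# survey is elided above); only assumes `kubectl get ns -o json` is available.
import json
import subprocess

LABEL = "goldilocks.fairwinds.com/vpa-update-mode"

namespaces = json.loads(subprocess.check_output(["kubectl", "get", "ns", "-o", "json"]))
carrying = [
    item["metadata"]["name"]
    for item in namespaces["items"]
    if LABEL in item["metadata"].get("labels", {})
]
print(len(carrying))  # survey on 2026-04-18 reported 88
```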

This commit brings the intentional admission drift under the same
`# KYVERNO_LIFECYCLE_V1` discoverability marker introduced in c9d221d5 for
the ndots dns_config pattern. The marker now stands generically for any
Kyverno admission-webhook drift suppression; the inline comment records
which specific policy stamps which specific field so future grep audits
show why each suppression exists.

## This change

107 `.tf` files touched — every `resource "kubernetes_namespace"` block in
every stack gets:

```hcl
lifecycle {
  # KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
  ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
}
```

Injection was done with a brace-depth-tracking Python pass (`/tmp/add_goldilocks_ignore.py`):
match `^resource "kubernetes_namespace" ` → track `{` / `}` until the
outermost closing brace → insert the lifecycle block before the closing
brace. The script is idempotent (skips any file that already mentions
`goldilocks.fairwinds.com/vpa-update-mode`) so re-running is safe.
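
The script itself is not committed; a minimal sketch of the same pass is below.
The lifecycle text is taken from the block above, while file discovery,
indentation, and the naive net brace counting (it ignores braces inside
strings) are assumptions:

```python
#!/usr/bin/env python3
"""Sketch of the injection pass; the real /tmp/add_goldilocks_ignore.py is not
in the repo, so details here are illustrative only."""
import re
from pathlib import Path

LABEL = "goldilocks.fairwinds.com/vpa-update-mode"
LIFECYCLE = (
    "  lifecycle {\n"
    "    # KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace\n"
    '    ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]\n'
    "  }\n"
)

def inject(path: Path) -> bool:
    text = path.read_text()
    if LABEL in text:  # idempotence: file already carries a suppression
        return False
    out, depth, in_ns = [], 0, False
    for line in text.splitlines(keepends=True):
        if not in_ns and re.match(r'^resource "kubernetes_namespace" ', line):
            in_ns = True
        if in_ns:
            depth += line.count("{") - line.count("}")
            if depth == 0:  # this line closes the resource's outermost brace
                out.append(LIFECYCLE)  # insert just before the closing brace
                in_ns = False
        out.append(line)
    new = "".join(out)
    if new == text:
        return False
    path.write_text(new)
    return True

changed = [
    p for p in Path("stacks").rglob("*.tf")
    if ".terraform" not in p.parts and inject(p)
]
print(f"injected into {len(changed)} files")
```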

The vault stack picked up two namespaces in the same file (one produced by
k8s-users, plus a second explicit namespace) — confirmed via the file diff
(+8 lines, i.e. two 4-line lifecycle blocks).

## What is NOT in this change

- `stacks/trading-bot/main.tf` — the entire file is commented out with `/* … */`
  (paused 2026-04-06 per user decision), so the script's edit there was
  reverted after the run.
- `stacks/_template/main.tf.example` — the per-stack skeleton, intentionally
  minimal (the user keeps it that way). Not touched by the script, since the
  file has no real `resource "kubernetes_namespace"` block, only a placeholder
  comment.
- `.terraform/` copies (e.g. `stacks/metallb/.terraform/modules/...`) —
  gitignored, so they won't be committed; the live path is what was edited.
- `terraform fmt` cleanup of adjacent pre-existing alignment issues in
  authentik, freedify, hermes-agent, nvidia, vault, meshcentral. Reverted to
  keep this commit scoped to the Goldilocks sweep; those files need a separate
  fmt-only commit or will get cleaned up on the next real apply to each stack.

## Verification

Dawarich (one of the hundred-plus touched stacks) showed the pattern
before and after:

```
$ cd stacks/dawarich && ../../scripts/tg plan

Before:
  Plan: 0 to add, 2 to change, 0 to destroy.
   # kubernetes_namespace.dawarich will be updated in-place
     (goldilocks.fairwinds.com/vpa-update-mode -> null)
   # module.tls_secret.kubernetes_secret.tls_secret will be updated in-place
     (Kyverno generate.* labels — fixed in 8d94688d)

After:
  No changes. Your infrastructure matches the configuration.
```

Injection count check:
```
$ rg -c 'KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode' stacks/ | awk -F: '{s+=$2} END {print s}'
108
```

## Reproduce locally
1. `git pull`
2. Pick any stack: `cd stacks/<name> && ../../scripts/tg plan`
3. Expect: no drift on the namespace's `goldilocks.fairwinds.com/vpa-update-mode` label.

Closes: code-dwx

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:15:27 +00:00


variable "tls_secret_name" {
type = string
sensitive = true
}
variable "nfs_server" { type = string }
resource "kubernetes_namespace" "osm-routing" {
metadata {
name = "osm-routing"
labels = {
"istio-injection" : "disabled"
tier = local.tiers.aux
"resource-governance/custom-quota" = "true"
}
}
lifecycle {
# KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
}
}
resource "kubernetes_resource_quota_v1" "osm_routing" {
metadata {
name = "tier-quota"
namespace = kubernetes_namespace.osm-routing.metadata[0].name
}
spec {
hard = {
"requests.cpu" = "4"
"requests.memory" = "6Gi"
"limits.cpu" = "16"
"limits.memory" = "16Gi"
pods = "20"
}
}
}
module "nfs_osrm_data_host" {
source = "../../modules/kubernetes/nfs_volume"
name = "osm-routing-osrm-data-host"
namespace = kubernetes_namespace.osm-routing.metadata[0].name
nfs_server = "192.168.1.127"
nfs_path = "/srv/nfs/osm-routing/osrm"
}
module "nfs_otp_data_host" {
source = "../../modules/kubernetes/nfs_volume"
name = "osm-routing-otp-data-host"
namespace = kubernetes_namespace.osm-routing.metadata[0].name
nfs_server = "192.168.1.127"
nfs_path = "/srv/nfs/osm-routing/otp"
}
# --- OSRM Foot ---
resource "kubernetes_deployment" "osrm-foot" {
metadata {
name = "osrm-foot"
namespace = kubernetes_namespace.osm-routing.metadata[0].name
labels = {
app = "osrm-foot"
tier = local.tiers.aux
}
}
spec {
# Disabled: reduce cluster memory pressure (2026-03-14 OOM incident)
replicas = 0
strategy {
type = "Recreate"
}
selector {
match_labels = {
app = "osrm-foot"
}
}
template {
metadata {
labels = {
app = "osrm-foot"
}
}
spec {
container {
name = "osrm-foot"
image = "ghcr.io/project-osrm/osrm-backend:latest"
command = ["osrm-routed", "--algorithm", "MLD", "/data/foot/greater-london-latest.osrm"]
port {
name = "http"
container_port = 5000
protocol = "TCP"
}
volume_mount {
name = "osrm-data"
mount_path = "/data"
}
resources {
requests = {
cpu = "15m"
memory = "256Mi"
}
limits = {
memory = "256Mi"
}
}
}
volume {
name = "osrm-data"
persistent_volume_claim {
claim_name = module.nfs_osrm_data_host.claim_name
}
}
}
}
}
}
resource "kubernetes_service" "osrm-foot" {
metadata {
name = "osrm-foot"
namespace = kubernetes_namespace.osm-routing.metadata[0].name
labels = {
app = "osrm-foot"
}
}
spec {
selector = {
app = "osrm-foot"
}
port {
port = 5000
target_port = 5000
}
}
}
# --- OSRM Bicycle ---
resource "kubernetes_deployment" "osrm-bicycle" {
metadata {
name = "osrm-bicycle"
namespace = kubernetes_namespace.osm-routing.metadata[0].name
labels = {
app = "osrm-bicycle"
tier = local.tiers.aux
}
}
spec {
# Disabled: reduce cluster memory pressure (2026-03-14 OOM incident)
replicas = 0
strategy {
type = "Recreate"
}
selector {
match_labels = {
app = "osrm-bicycle"
}
}
template {
metadata {
labels = {
app = "osrm-bicycle"
}
}
spec {
container {
name = "osrm-bicycle"
image = "ghcr.io/project-osrm/osrm-backend:latest"
command = ["osrm-routed", "--algorithm", "MLD", "/data/bicycle/greater-london-latest.osrm"]
port {
name = "http"
container_port = 5000
protocol = "TCP"
}
volume_mount {
name = "osrm-data"
mount_path = "/data"
}
resources {
requests = {
cpu = "15m"
memory = "256Mi"
}
limits = {
memory = "256Mi"
}
}
}
volume {
name = "osrm-data"
persistent_volume_claim {
claim_name = module.nfs_osrm_data_host.claim_name
}
}
}
}
}
}
resource "kubernetes_service" "osrm-bicycle" {
metadata {
name = "osrm-bicycle"
namespace = kubernetes_namespace.osm-routing.metadata[0].name
labels = {
app = "osrm-bicycle"
}
}
spec {
selector = {
app = "osrm-bicycle"
}
port {
port = 5000
target_port = 5000
}
}
}
# --- OTP (OpenTripPlanner) ---
resource "kubernetes_deployment" "otp" {
metadata {
name = "otp"
namespace = kubernetes_namespace.osm-routing.metadata[0].name
labels = {
app = "otp"
tier = local.tiers.aux
}
}
spec {
# Disabled: reduce cluster memory pressure (2026-03-14 OOM incident)
replicas = 0
strategy {
type = "Recreate"
}
selector {
match_labels = {
app = "otp"
}
}
template {
metadata {
labels = {
app = "otp"
}
}
spec {
container {
name = "otp"
image = "opentripplanner/opentripplanner:2.6.0"
args = ["--build", "--save"]
port {
name = "http"
container_port = 8080
protocol = "TCP"
}
volume_mount {
name = "otp-data"
mount_path = "/var/opentripplanner"
}
env {
name = "JAVA_TOOL_OPTIONS"
value = "-Xmx3g"
}
resources {
requests = {
cpu = "300m"
memory = "2Gi"
}
limits = {
memory = "2Gi"
}
}
}
volume {
name = "otp-data"
persistent_volume_claim {
claim_name = module.nfs_otp_data_host.claim_name
}
}
}
}
}
}
resource "kubernetes_service" "otp" {
metadata {
name = "otp"
namespace = kubernetes_namespace.osm-routing.metadata[0].name
labels = {
app = "otp"
}
}
spec {
selector = {
app = "otp"
}
port {
port = 8080
target_port = 8080
}
}
}