infra/stacks/frigate/main.tf
Viktor Barzin 8b43692af0 [infra] Suppress Goldilocks vpa-update-mode label drift on all namespaces [ci skip]
## Context

Wave 3B-continued: the Goldilocks VPA dashboard (stacks/vpa) runs a Kyverno
ClusterPolicy `goldilocks-vpa-auto-mode` that mutates every namespace with
`metadata.labels["goldilocks.fairwinds.com/vpa-update-mode"] = "off"`. This
is intentional — Terraform owns container resource limits, and Goldilocks
should only provide recommendations, never auto-update. The label is how
Goldilocks decides per-namespace whether to run its VPA in `off` mode.
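
A minimal sketch of what such a mutating policy looks like, written here as a Terraform `kubernetes_manifest` for consistency with the rest of the repo (illustrative only; the real policy lives in stacks/vpa, and the rule name and field layout below are assumptions):

```hcl
# Hypothetical reconstruction of the goldilocks-vpa-auto-mode ClusterPolicy.
resource "kubernetes_manifest" "goldilocks_vpa_auto_mode" {
  manifest = {
    apiVersion = "kyverno.io/v1"
    kind       = "ClusterPolicy"
    metadata   = { name = "goldilocks-vpa-auto-mode" }
    spec = {
      rules = [{
        name  = "set-vpa-update-mode-off" # assumed rule name
        match = { any = [{ resources = { kinds = ["Namespace"] } }] }
        mutate = {
          patchStrategicMerge = {
            metadata = {
              labels = { "goldilocks.fairwinds.com/vpa-update-mode" = "off" }
            }
          }
        }
      }]
    }
  }
}
```

With the label set to `off`, Goldilocks creates its per-namespace VPA objects in recommendation-only mode: it records suggested requests/limits but never evicts pods to apply them.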

Effect on Terraform: every `kubernetes_namespace` resource shows the label
as pending-removal (`-> null`) on every `scripts/tg plan`. Dawarich survey
2026-04-18 confirmed the drift. Cluster-side count: 88 namespaces carry the
label (`kubectl get ns -o json | jq ... | wc -l`). Every TF-managed namespace
is affected.

This commit brings the intentional admission drift under the same
`# KYVERNO_LIFECYCLE_V1` discoverability marker introduced in c9d221d5 for
the ndots dns_config pattern. The marker now stands generically for any
Kyverno admission-webhook drift suppression; the inline comment records
which specific policy stamps which specific field so future grep audits
show why each suppression exists.

## This change

107 `.tf` files touched; every stack's `resource "kubernetes_namespace"` block gets:

```hcl
lifecycle {
  # KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
  ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
}
```

Injection was done with a brace-depth-tracking Python pass (`/tmp/add_goldilocks_ignore.py`):
match `^resource "kubernetes_namespace" ` → track `{` / `}` until the
outermost closing brace → insert the lifecycle block before the closing
brace. The script is idempotent (skips any file that already mentions
`goldilocks.fairwinds.com/vpa-update-mode`) so re-running is safe.
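
A minimal sketch of that pass, reconstructed from the description above (the real `/tmp/add_goldilocks_ignore.py` may differ in detail):

```python
#!/usr/bin/env python3
# Sketch of the brace-depth-tracking injection pass. Assumes well-formed,
# fmt-style HCL; naive brace counting would miscount braces inside strings.
import re
import sys

LIFECYCLE = """  lifecycle {
    # KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
    ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
  }
"""

def inject(path: str) -> bool:
    with open(path) as f:
        src = f.read()
    if "goldilocks.fairwinds.com/vpa-update-mode" in src:
        return False  # idempotent: file already carries the suppression
    out, depth, in_ns = [], 0, False
    for line in src.splitlines(keepends=True):
        if not in_ns and re.match(r'^resource "kubernetes_namespace" ', line):
            in_ns, depth = True, 0
        if in_ns:
            depth += line.count("{") - line.count("}")
            if depth == 0 and "}" in line:
                # Outermost closing brace: insert the lifecycle block before it.
                out.append(LIFECYCLE)
                in_ns = False
        out.append(line)
    with open(path, "w") as f:
        f.write("".join(out))
    return True

if __name__ == "__main__":
    for p in sys.argv[1:]:
        if inject(p):
            print(f"patched {p}")
```

Brace counting rather than full HCL parsing keeps the pass dependency-free, and the early substring check is what makes re-runs safe; it also handles files with more than one namespace resource, since matching re-arms after each block closes.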

The vault stack picked up 2 namespaces in the same file (k8s-users produces
one, plus a second explicit namespace); confirmed via the file diff (+8 lines).

## What is NOT in this change

- `stacks/trading-bot/main.tf` — the entire file is commented out with
  `/* … */` (stack paused 2026-04-06 per user decision); the script's edit
  there was reverted after it ran.
- `stacks/_template/main.tf.example` — per-stack skeleton, intentionally
  minimal; the user keeps it that way. Not touched by the script anyway:
  the file has no real `resource "kubernetes_namespace"`, only a
  placeholder comment.
- `.terraform/` copies (e.g. `stacks/metallb/.terraform/modules/...`) —
  gitignored and never committed; only the live paths were edited.
- `terraform fmt` cleanup of adjacent pre-existing alignment issues in
  authentik, freedify, hermes-agent, nvidia, vault, and meshcentral.
  Reverted to keep the commit scoped to the Goldilocks sweep; those files
  need a separate fmt-only commit or will be cleaned up on the next real
  apply to each stack.

## Verification

Dawarich (one of the hundred-plus touched stacks) showed the pattern
before and after:

```
$ cd stacks/dawarich && ../../scripts/tg plan

Before:
  Plan: 0 to add, 2 to change, 0 to destroy.
   # kubernetes_namespace.dawarich will be updated in-place
     (goldilocks.fairwinds.com/vpa-update-mode -> null)
   # module.tls_secret.kubernetes_secret.tls_secret will be updated in-place
     (Kyverno generate.* labels — fixed in 8d94688d)

After:
  No changes. Your infrastructure matches the configuration.
```

Injection count check, expecting 108 hits across the 107 touched files
(vault's file carries two namespace blocks):
```
$ rg -c 'KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode' stacks/ | awk -F: '{s+=$2} END {print s}'
108
```

## Reproduce locally
1. `git pull`
2. Pick any stack: `cd stacks/<name> && ../../scripts/tg plan`
3. Expect: no drift on the namespace's `goldilocks.fairwinds.com/vpa-update-mode` label.

Closes: code-dwx

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:15:27 +00:00

variable "tls_secret_name" {
type = string
sensitive = true
}
variable "nfs_server" { type = string }
resource "kubernetes_namespace" "frigate" {
metadata {
name = "frigate"
labels = {
tier = local.tiers.gpu
}
# labels = {
# "istio-injection" : "enabled"
# }
}
lifecycle {
# KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
}
}
module "tls_secret" {
source = "../../modules/kubernetes/setup_tls_secret"
namespace = kubernetes_namespace.frigate.metadata[0].name
tls_secret_name = var.tls_secret_name
}
resource "kubernetes_persistent_volume_claim" "config_encrypted" {
wait_until_bound = false
metadata {
name = "frigate-config-encrypted"
namespace = kubernetes_namespace.frigate.metadata[0].name
annotations = {
"resize.topolvm.io/threshold" = "80%"
"resize.topolvm.io/increase" = "100%"
"resize.topolvm.io/storage_limit" = "5Gi"
}
}
spec {
access_modes = ["ReadWriteOnce"]
storage_class_name = "proxmox-lvm-encrypted"
resources {
requests = {
storage = "1Gi"
}
}
}
}
module "nfs_media_host" {
source = "../../modules/kubernetes/nfs_volume"
name = "frigate-media-host"
namespace = kubernetes_namespace.frigate.metadata[0].name
nfs_server = "192.168.1.127"
nfs_path = "/srv/nfs/frigate/media"
}
resource "kubernetes_deployment" "frigate" {
metadata {
name = "frigate"
namespace = kubernetes_namespace.frigate.metadata[0].name
labels = {
app = "frigate"
tier = local.tiers.gpu
}
annotations = {
"reloader.stakater.com/search" = "true"
}
}
spec {
replicas = 1 # Temporarily disabled due to high power consumption
strategy {
type = "Recreate"
}
selector {
match_labels = {
app = "frigate"
}
}
template {
metadata {
labels = {
app = "frigate"
}
}
spec {
node_selector = {
"gpu" : true
}
toleration {
key = "nvidia.com/gpu"
operator = "Equal"
value = "true"
effect = "NoSchedule"
}
        container {
          # image = "ghcr.io/blakeblackshear/frigate:stable"
          # image = "ghcr.io/blakeblackshear/frigate:stable-tensorrt"
          image = "ghcr.io/blakeblackshear/frigate:0.17.0-beta1-tensorrt"
          name  = "frigate"
          resources {
            requests = {
              cpu    = "1500m"
              memory = "5Gi"
            }
            limits = {
              memory           = "10Gi"
              "nvidia.com/gpu" = "1"
            }
          }
          env {
            name  = "FRIGATE_RTSP_PASSWORD"
            value = "password"
          }
          port {
            container_port = 5000 # web UI / API
          }
          port {
            container_port = 8554 # RTSP restreaming
          }
          port {
            container_port = 8555 # WebRTC
            protocol       = "TCP"
          }
          port {
            container_port = 8555 # WebRTC
            protocol       = "UDP"
          }
          volume_mount {
            name       = "config"
            mount_path = "/config"
          }
          volume_mount {
            name       = "dri"
            mount_path = "/dev/dri"
          }
          volume_mount {
            name       = "dshm"
            mount_path = "/dev/shm"
          }
          volume_mount {
            name       = "media"
            mount_path = "/media/frigate"
          }
          volume_mount {
            name       = "cache-tmpfs"
            mount_path = "/tmp/cache"
          }
          # Restart the pod if the GPU becomes unavailable, Frigate hangs, or the
          # detector falls back to CPU (inference time spikes from ~20ms to 200ms+).
          liveness_probe {
            exec {
              command = ["sh", "-c", <<-EOT
                nvidia-smi > /dev/null 2>&1 || exit 1
                STATS=$(curl -sf --max-time 5 http://localhost:5000/api/stats) || exit 1
                echo "$STATS" | python3 -c "
                import sys, json
                stats = json.load(sys.stdin)
                for name, det in stats.get('detectors', {}).items():
                    speed = det.get('inference_speed', 0)
                    if speed > 100:
                        print(f'UNHEALTHY: detector {name} inference {speed}ms > 100ms threshold')
                        sys.exit(1)
                "
              EOT
              ]
            }
            initial_delay_seconds = 120
            period_seconds        = 60
            timeout_seconds       = 10
            failure_threshold     = 3
          }
          # TensorRT model loading can take several minutes
          startup_probe {
            http_get {
              path = "/api/version"
              port = 5000
            }
            period_seconds    = 10
            failure_threshold = 30 # up to 5 minutes for startup
          }
          security_context {
            # Privileged for direct access to host devices (/dev/dri)
            privileged = true
          }
        }
        volume {
          name = "config"
          persistent_volume_claim {
            claim_name = kubernetes_persistent_volume_claim.config_encrypted.metadata[0].name
          }
        }
        volume {
          name = "dshm"
          empty_dir {
            medium     = "Memory"
            size_limit = "1Gi"
          }
        }
        volume {
          name = "media"
          persistent_volume_claim {
            claim_name = module.nfs_media_host.claim_name
          }
        }
        volume {
          name = "cache-tmpfs"
          empty_dir {
            medium     = "Memory"
            size_limit = "512Mi"
          }
        }
        volume {
          name = "dri"
          host_path {
            path = "/dev/dri"
            type = "Directory"
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "frigate" {
metadata {
name = "frigate"
namespace = kubernetes_namespace.frigate.metadata[0].name
labels = {
"app" = "frigate"
}
}
spec {
selector = {
app = "frigate"
}
port {
name = "http"
target_port = 5000
port = 80
protocol = "TCP"
}
}
}
resource "kubernetes_service" "frigate-rtsp" {
metadata {
name = "frigate-rtsp"
namespace = kubernetes_namespace.frigate.metadata[0].name
labels = {
"app" = "frigate"
}
}
spec {
type = "NodePort" # Should always live on node1 where the gpu is
selector = {
app = "frigate"
}
port {
name = "rtsp-tcp"
target_port = 8554
port = 8554
protocol = "TCP"
node_port = 30554
}
port {
name = "rtsp-udp"
target_port = 8554
port = 8554
protocol = "UDP"
node_port = 30554
}
}
}
module "ingress" {
source = "../../modules/kubernetes/ingress_factory"
dns_type = "proxied"
namespace = kubernetes_namespace.frigate.metadata[0].name
name = "frigate"
tls_secret_name = var.tls_secret_name
protected = true
extra_annotations = {
"gethomepage.dev/enabled" = "true"
"gethomepage.dev/name" = "Frigate"
"gethomepage.dev/description" = "NVR & object detection"
"gethomepage.dev/icon" = "frigate.png"
"gethomepage.dev/group" = "Media & Entertainment"
"gethomepage.dev/pod-selector" = ""
"gethomepage.dev/widget.type" = "frigate"
"gethomepage.dev/widget.url" = "http://frigate.frigate.svc.cluster.local"
}
}
module "ingress-internal" {
source = "../../modules/kubernetes/ingress_factory"
namespace = kubernetes_namespace.frigate.metadata[0].name
name = "frigate-lan"
host = "frigate-lan"
root_domain = "viktorbarzin.lan"
service_name = "frigate"
tls_secret_name = var.tls_secret_name
allow_local_access_only = true
ssl_redirect = false
extra_annotations = {
"gethomepage.dev/enabled" = "false"
}
}