infra/stacks/woodpecker/main.tf
Viktor Barzin 327ce215b9 [infra] Sweep dns_config ignore_changes across all pod-owning resources [ci skip]
## Context

Wave 3A (commit c9d221d5) added the `# KYVERNO_LIFECYCLE_V1` marker to the
27 pre-existing `ignore_changes = [...dns_config]` sites so they could be
grepped and audited. It did NOT address pod-owning resources that were
simply missing the suppression entirely. Post-Wave-3A sampling (2026-04-18)
found that navidrome, f1-stream, frigate, servarr, monitoring, crowdsec,
and many other stacks showed perpetual `dns_config` drift on every plan
because their `kubernetes_deployment` / `kubernetes_stateful_set` /
`kubernetes_cron_job_v1` resources had no `lifecycle {}` block at all.

Root cause (same as Wave 3A): Kyverno's admission webhook stamps
`dns_config { option { name = "ndots"; value = "2" } }` on every pod's
`spec.template.spec.dns_config` to prevent NxDomain search-domain flooding
(see `k8s-ndots-search-domain-nxdomain-flood` skill). Without `ignore_changes`
on every Terraform-managed pod-owner, Terraform repeatedly tries to strip
the injected field.
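
For reference, the mutation reads back through the Terraform kubernetes
provider as the following nested blocks (illustrative; the enclosing
pod-owner resource is elided):

```
dns_config {
  option {
    name  = "ndots"
    value = "2"
  }
}
```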

## This change

Extends the Wave 3A convention by sweeping EVERY `kubernetes_deployment`,
`kubernetes_stateful_set`, and `kubernetes_daemon_set` (plus their `_v1`
variants), `kubernetes_cron_job_v1`, and `kubernetes_job_v1` in the repo
and ensuring each carries the right `ignore_changes` path:

- **kubernetes_deployment / stateful_set / daemon_set / job_v1**:
  `spec[0].template[0].spec[0].dns_config`
- **kubernetes_cron_job_v1**:
  `spec[0].job_template[0].spec[0].template[0].spec[0].dns_config`
  (extra `job_template[0]` nesting — the CronJob's PodTemplateSpec is
  one level deeper)

Each injection / extension is tagged `# KYVERNO_LIFECYCLE_V1: Kyverno
admission webhook mutates dns_config with ndots=2` inline so the
suppression is discoverable via `rg 'KYVERNO_LIFECYCLE_V1' stacks/`.
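
For concreteness, a minimal sketch of the two shapes as they land in a
stack (resource bodies elided; the `example` names are placeholders):

```
resource "kubernetes_deployment" "example" {
  # ... metadata / spec elided ...
  lifecycle {
    # KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
    ignore_changes = [spec[0].template[0].spec[0].dns_config]
  }
}

resource "kubernetes_cron_job_v1" "example" {
  # ... metadata / spec elided ...
  lifecycle {
    # KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
    ignore_changes = [spec[0].job_template[0].spec[0].template[0].spec[0].dns_config]
  }
}
```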

Two insertion paths are handled by a Python pass (`/tmp/add_dns_config_ignore.py`):

1. **No existing `lifecycle {}`**: inject a brand-new block just before the
   resource's closing `}`. 108 new blocks across 93 files.
2. **Existing `lifecycle {}` (usually for `DRIFT_WORKAROUND: CI owns image tag`
   from Wave 4, commit a62b43d1)**: extend its `ignore_changes` list with the
   dns_config path. Handles both inline (`= [x]`) and multiline
   (`= [\n  x,\n]`) forms; ensures the last pre-existing list item carries
   a trailing comma so the extended list is valid HCL. 34 extensions.

The script skips anything already mentioning `dns_config` inside an
`ignore_changes`, so re-running is a no-op.
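
For the extension path, the before/after looks roughly like this
(hypothetical resource; the exact pre-existing entry varies per stack,
so `container[0].image` here is illustrative):

```
# Before (Wave 4 image-tag suppression only; note no trailing comma):
lifecycle {
  ignore_changes = [
    spec[0].template[0].spec[0].container[0].image # DRIFT_WORKAROUND: CI owns image tag
  ]
}

# After (trailing comma ensured on the last pre-existing item, then the
# dns_config path appended with the Wave 3B tag):
lifecycle {
  ignore_changes = [
    spec[0].template[0].spec[0].container[0].image, # DRIFT_WORKAROUND: CI owns image tag
    spec[0].template[0].spec[0].dns_config,         # KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
  ]
}
```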

## Scale

- 142 total lifecycle injections/extensions
- 93 `.tf` files touched
- 108 brand-new `lifecycle {}` blocks + 34 extensions of existing ones
- Every Tier 0 and Tier 1 stack with a pod-owning resource is covered
- Together with Wave 3A's 27 pre-existing markers → **169 greppable
  `KYVERNO_LIFECYCLE_V1` dns_config sites across the repo**

## What is NOT in this change

- `stacks/trading-bot/main.tf` — entirely commented-out block (`/* … */`).
  The Python script touched the file; that change was reverted manually.
- `_template/main.tf.example` skeleton — kept minimal on purpose; any
  future stack created from it should either inherit the Wave 3A one-line
  form or add its own on first `kubernetes_deployment`.
- `terraform fmt` fixes to pre-existing alignment issues in meshcentral,
  nvidia/modules/nvidia, vault — unrelated to this commit. Left for a
  separate fmt-only pass.
- Non-pod resources (`kubernetes_service`, `kubernetes_secret`,
  `kubernetes_manifest`, etc.) — they don't own pods so they don't get
  Kyverno dns_config mutation.

## Verification

Random sample post-commit:
```
$ cd stacks/navidrome && ../../scripts/tg plan  → No changes.
$ cd stacks/f1-stream && ../../scripts/tg plan  → No changes.
$ cd stacks/frigate && ../../scripts/tg plan    → No changes.

$ rg -c 'KYVERNO_LIFECYCLE_V1' stacks/ -g '*.tf' -g '*.tf.example' \
    | awk -F: '{s+=$2} END {print s}'
169
```

## Reproduce locally
1. `git pull`
2. `rg 'KYVERNO_LIFECYCLE_V1' stacks/ | wc -l` → 169+
3. `cd stacks/navidrome && ../../scripts/tg plan` → expect 0 drift on
   the deployment's dns_config field.

Refs: code-seq (Wave 3B dns_config class closed; kubernetes_manifest
annotation class handled separately in 8d94688d for tls_secret)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:19:48 +00:00


variable "tls_secret_name" {
type = string
sensitive = true
}
variable "nfs_server" { type = string }
variable "postgresql_host" { type = string }
variable "woodpecker_forgejo_url" { type = string }
data "vault_kv_secret_v2" "secrets" {
mount = "secret"
name = "woodpecker"
}
data "vault_kv_secret_v2" "platform" {
mount = "secret"
name = "platform"
}
locals {
k8s_users = jsondecode(data.vault_kv_secret_v2.platform.data["k8s_users"])
# Build admin list: existing admin + all namespace-owner usernames
woodpecker_admins = join(",", concat(
["ViktorBarzin"],
[for name, user in local.k8s_users : name if user.role == "namespace-owner"]
))
}
resource "kubernetes_namespace" "woodpecker" {
metadata {
name = "woodpecker"
labels = {
"resource-governance/custom-quota" = "true"
tier = local.tiers.edge
}
}
lifecycle {
# KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
}
}
resource "kubernetes_resource_quota" "woodpecker" {
metadata {
name = "tier-quota"
namespace = kubernetes_namespace.woodpecker.metadata[0].name
}
spec {
hard = {
"requests.cpu" = "16"
"requests.memory" = "16Gi"
"limits.memory" = "32Gi"
pods = "60"
}
}
}
module "tls_secret" {
source = "../../modules/kubernetes/setup_tls_secret"
namespace = kubernetes_namespace.woodpecker.metadata[0].name
tls_secret_name = var.tls_secret_name
}
resource "kubernetes_manifest" "external_secret" {
manifest = {
apiVersion = "external-secrets.io/v1beta1"
kind = "ExternalSecret"
metadata = {
name = "woodpecker-secrets"
namespace = "woodpecker"
}
spec = {
refreshInterval = "15m"
secretStoreRef = {
name = "vault-kv"
kind = "ClusterSecretStore"
}
target = {
name = "woodpecker-secrets"
}
dataFrom = [{
extract = {
key = "woodpecker"
}
}]
}
}
depends_on = [kubernetes_namespace.woodpecker]
}
# DB credentials from Vault database engine (rotated every 24h)
# ExternalSecret provides WOODPECKER_DATABASE_DATASOURCE injected via
# server.extraSecretNamesForEnvFrom — auto-updates when password rotates
resource "kubernetes_manifest" "db_external_secret" {
field_manager {
force_conflicts = true
}
manifest = {
apiVersion = "external-secrets.io/v1beta1"
kind = "ExternalSecret"
metadata = {
name = "woodpecker-db-creds"
namespace = "woodpecker"
}
spec = {
refreshInterval = "15m"
secretStoreRef = {
name = "vault-database"
kind = "ClusterSecretStore"
}
target = {
name = "woodpecker-db-creds"
template = {
data = {
WOODPECKER_DATABASE_DATASOURCE = "postgres://woodpecker:{{ .password }}@${var.postgresql_host}:5432/woodpecker?sslmode=disable"
}
}
}
data = [{
secretKey = "password"
remoteRef = {
key = "static-creds/pg-woodpecker"
property = "password"
}
}]
}
}
depends_on = [kubernetes_namespace.woodpecker]
}
resource "kubernetes_config_map" "git_crypt_key" {
metadata {
name = "git-crypt-key"
namespace = kubernetes_namespace.woodpecker.metadata[0].name
}
data = {
"key" = filebase64("${path.root}/../../.git/git-crypt/keys/default")
}
}
# Database init job - REMOVED: database and user already exist.
# The job used -U root which doesn't work with CNPG (superuser is 'postgres').
# Vault DB engine manages the woodpecker credentials via rotation.
# Woodpecker server data is on local-path (node-local storage), NOT NFS.
# The old NFS PV was unused — PVC was already bound to local-path PV.
# No PV management needed here.
# Helm release for Woodpecker CI
# Database datasource is now injected from ExternalSecret via envFrom
resource "helm_release" "woodpecker" {
name = "woodpecker"
namespace = kubernetes_namespace.woodpecker.metadata[0].name
repository = "oci://ghcr.io/woodpecker-ci/helm"
chart = "woodpecker"
version = "3.5.1"
values = [
templatefile("${path.module}/values.yaml", {
github_client_id = data.vault_kv_secret_v2.secrets.data["github_client_id"]
github_client_secret = data.vault_kv_secret_v2.secrets.data["github_client_secret"]
agent_secret = data.vault_kv_secret_v2.secrets.data["agent_secret"]
forgejo_client_id = data.vault_kv_secret_v2.secrets.data["forgejo_client_id"]
forgejo_client_secret = data.vault_kv_secret_v2.secrets.data["forgejo_client_secret"]
forgejo_url = var.woodpecker_forgejo_url
woodpecker_admins = local.woodpecker_admins
})
]
timeout = 600
depends_on = [kubernetes_manifest.db_external_secret]
}
# ClusterRoleBinding - build pods need cluster-admin to PATCH deployments across namespaces
resource "kubernetes_cluster_role_binding" "woodpecker" {
metadata {
name = "woodpecker"
}
subject {
kind = "ServiceAccount"
name = "woodpecker-agent"
namespace = kubernetes_namespace.woodpecker.metadata[0].name
}
role_ref {
kind = "ClusterRole"
name = "cluster-admin"
api_group = "rbac.authorization.k8s.io"
}
}
# Also bind the default SA (pipeline pods run as default)
resource "kubernetes_cluster_role_binding" "woodpecker_default" {
metadata {
name = "woodpecker-default"
}
subject {
kind = "ServiceAccount"
name = "default"
namespace = kubernetes_namespace.woodpecker.metadata[0].name
}
role_ref {
kind = "ClusterRole"
name = "cluster-admin"
api_group = "rbac.authorization.k8s.io"
}
}
# --- Vault → Woodpecker Secret Sync ---
# Syncs secrets from Vault KV (secret/ci/global) to Woodpecker global secrets via API.
# Runs every 6 hours. Secrets are created/updated via Woodpecker REST API.
resource "kubernetes_config_map" "vault_woodpecker_sync" {
metadata {
name = "vault-woodpecker-sync"
namespace = kubernetes_namespace.woodpecker.metadata[0].name
}
data = {
"sync.sh" = <<-SCRIPT
#!/bin/sh
set -e
VAULT_ADDR="http://vault-active.vault.svc.cluster.local:8200"
WP_API="http://woodpecker-server.woodpecker.svc.cluster.local/api"
# Authenticate to Vault via K8s SA
SA_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
VAULT_TOKEN=$(curl -sf -X POST "$VAULT_ADDR/v1/auth/kubernetes/login" \
-d "{\"role\":\"woodpecker-sync\",\"jwt\":\"$SA_TOKEN\"}" | jq -r .auth.client_token)
if [ -z "$VAULT_TOKEN" ] || [ "$VAULT_TOKEN" = "null" ]; then
echo "ERROR: Failed to authenticate to Vault"
exit 1
fi
# Get Woodpecker API token from Vault
WP_TOKEN=$(curl -sf -H "X-Vault-Token: $VAULT_TOKEN" \
"$VAULT_ADDR/v1/secret/data/ci/global" | jq -r '.data.data.woodpecker_api_token // empty')
if [ -z "$WP_TOKEN" ]; then
echo "ERROR: No woodpecker_api_token in secret/ci/global"
exit 1
fi
# Sync global secrets
SECRETS=$(curl -sf -H "X-Vault-Token: $VAULT_TOKEN" \
"$VAULT_ADDR/v1/secret/data/ci/global" | jq -r '.data.data | to_entries[] | select(.key != "woodpecker_api_token") | @base64')
synced=0
for entry in $SECRETS; do
NAME=$(echo "$entry" | base64 -d | jq -r .key)
VALUE=$(echo "$entry" | base64 -d | jq -r .value)
# Try PATCH first (update), fall back to POST (create)
# Include all event types so secrets work for manual/cron-triggered pipelines too
STATUS=$(curl -sf -o /dev/null -w "%%{http_code}" -X PATCH "$WP_API/secrets/$NAME" \
-H "Authorization: Bearer $WP_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"name\":\"$NAME\",\"value\":\"$VALUE\",\"events\":[\"cron\",\"deployment\",\"manual\",\"push\",\"tag\"]}" 2>/dev/null || echo "000")
if [ "$STATUS" != "200" ]; then
curl -sf -X POST "$WP_API/secrets" \
-H "Authorization: Bearer $WP_TOKEN" \
-H "Content-Type: application/json" \
-d "{\"name\":\"$NAME\",\"value\":\"$VALUE\",\"events\":[\"cron\",\"deployment\",\"manual\",\"push\",\"tag\"]}" > /dev/null
fi
synced=$((synced + 1))
done
echo "Synced $synced global secrets from Vault to Woodpecker"
SCRIPT
}
}
resource "kubernetes_cron_job_v1" "vault_secret_sync" {
metadata {
name = "vault-woodpecker-sync"
namespace = kubernetes_namespace.woodpecker.metadata[0].name
}
spec {
schedule = "0 */6 * * *"
successful_jobs_history_limit = 3
failed_jobs_history_limit = 3
concurrency_policy = "Forbid"
job_template {
metadata {}
spec {
template {
metadata {}
spec {
container {
name = "sync"
image = "alpine"
command = ["/bin/sh", "-c", "apk add --no-cache curl jq && /bin/sh /scripts/sync.sh"]
volume_mount {
name = "sync-script"
mount_path = "/scripts"
}
resources {
requests = {
cpu = "10m"
memory = "32Mi"
}
limits = {
memory = "64Mi"
}
}
}
volume {
name = "sync-script"
config_map {
name = kubernetes_config_map.vault_woodpecker_sync.metadata[0].name
}
}
restart_policy = "OnFailure"
}
}
}
}
}
lifecycle {
# KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
ignore_changes = [spec[0].job_template[0].spec[0].template[0].spec[0].dns_config]
}
}
module "ingress" {
source = "../../modules/kubernetes/ingress_factory"
dns_type = "non-proxied"
namespace = kubernetes_namespace.woodpecker.metadata[0].name
name = "ci"
service_name = "woodpecker-server"
tls_secret_name = var.tls_secret_name
extra_annotations = {
"gethomepage.dev/enabled" = "true"
"gethomepage.dev/name" = "Woodpecker CI"
"gethomepage.dev/description" = "CI/CD pipelines"
"gethomepage.dev/icon" = "woodpecker-ci.png"
"gethomepage.dev/group" = "Development & CI"
"gethomepage.dev/pod-selector" = ""
}
}