infra/stacks/webhook_handler/main.tf
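# Webhook handler stack: runs viktorbarzin/webhook-handler, a webhook relay
# (Facebook Messenger, Woodpecker CI, Authentik and git-related hooks), published
# under host "webhook" through the shared ingress_factory module.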


variable "tls_secret_name" {
type = string
sensitive = true
}
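
# Vault KV entry holding this stack's secrets; ssh_key is consumed directly below,
# while the remaining keys are mirrored into a Kubernetes Secret by the
# ExternalSecret at the end of this file.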
data "vault_kv_secret_v2" "secrets" {
mount = "secret"
name = "webhook-handler"
}
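
# Dedicated namespace. The goldilocks.fairwinds.com/vpa-update-mode label is ignored
# in the lifecycle block because a Kyverno ClusterPolicy stamps it onto every
# namespace; without the suppression Terraform reports permanent drift.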
resource "kubernetes_namespace" "webhook-handler" {
metadata {
name = "webhook-handler"
labels = {
tier = local.tiers.aux
}
}

  lifecycle {
    # KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
    ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
  }
}
module "tls_secret" {
source = "../../modules/kubernetes/setup_tls_secret"
namespace = kubernetes_namespace.webhook-handler.metadata[0].name
tls_secret_name = var.tls_secret_name
}
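
# Cluster-wide RBAC: the role below is bound to the namespace's default ServiceAccount
# so the handler can create/update/patch deployments, namespaces, pods and services.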
resource "kubernetes_cluster_role" "deployment_updater" {
metadata {
name = "deployment-updater"
}
rule {
verbs = ["create", "update", "get", "patch", "list"]
api_groups = ["extensions", "apps", ""]
resources = ["deployments", "namespaces", "pods", "services"]
}
}
resource "kubernetes_cluster_role_binding" "update_deployment_binding" {
metadata {
name = "update-deployment-binding"
}
subject {
kind = "ServiceAccount"
name = "default"
namespace = kubernetes_namespace.webhook-handler.metadata[0].name
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "deployment-updater"
}
}
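
# SSH private key from Vault, mounted into the pod at /opt/id_rsa (see the volume and
# volume_mount below). The reloader.stakater.com/match annotation lets Reloader restart
# the deployment when the key changes.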
resource "kubernetes_secret" "ssh-key" {
metadata {
name = "ssh-key"
namespace = kubernetes_namespace.webhook-handler.metadata[0].name
annotations = {
"reloader.stakater.com/match" = "true"
}
}
data = {
"id_rsa" = data.vault_kv_secret_v2.secrets.data["ssh_key"]
}
type = "generic"
}
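
# Single-replica deployment. reloader.stakater.com/auto = "true" restarts it whenever
# a referenced Secret changes (ssh-key or webhook-handler-secrets).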
resource "kubernetes_deployment" "webhook_handler" {
metadata {
name = "webhook-handler"
namespace = kubernetes_namespace.webhook-handler.metadata[0].name
labels = {
app = "webhook-handler"
tier = local.tiers.aux
}
annotations = {
"reloader.stakater.com/auto" = "true"
}
}
spec {
replicas = 1
selector {
match_labels = {
app = "webhook-handler"
}
}
template {
metadata {
labels = {
app = "webhook-handler"
}
}
spec {
container {
          # security_context {
          #   run_as_user = 1000
          # }
          # lifecycle {
          #   post_start {
          #     exec {
          #       # Must be kept in sync with the webhook_handler Dockerfile.
          #       # Note: exec does not go through a shell, so ">" and "&&" would be
          #       # passed to echo as literal arguments rather than interpreted.
          #       command = ["echo", "\"$SSH_KEY\"", ">", "/opt/id_rsa", "&&", "chown", "appuser", "/opt/id_rsa", "&&", "chmod", "600", "/opt/id_rsa"]
          #     }
          #   }
          # }
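          # If this hook is ever re-enabled, a minimal sketch that performs the intended
          # write-and-chmod would have to wrap the pipeline in a shell (assuming /bin/sh
          # exists in the image):
          # command = ["/bin/sh", "-c", "echo \"$SSH_KEY\" > /opt/id_rsa && chown appuser /opt/id_rsa && chmod 600 /opt/id_rsa"]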
image = "viktorbarzin/webhook-handler:latest"
name = "webhook-handler"
resources {
limits = {
memory = "64Mi"
}
requests = {
cpu = "10m"
memory = "64Mi"
}
}
port {
container_port = 80
}
volume_mount {
name = "id-rsa"
mount_path = "/opt/id_rsa"
sub_path = "id_rsa"
}
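
          # All secret-valued environment variables are read from the webhook-handler-secrets
          # Secret, which the ExternalSecret at the bottom of this file syncs from Vault.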
          env {
            name = "WEBHOOKSECRET"
            value_from {
              secret_key_ref {
                name = "webhook-handler-secrets"
                key  = "secret"
              }
            }
          }
          env {
            name = "FB_APP_SECRET"
            value_from {
              secret_key_ref {
                name = "webhook-handler-secrets"
                key  = "fb_app_secret"
              }
            }
          }
          env {
            name = "FB_VERIFY_TOKEN"
            value_from {
              secret_key_ref {
                name = "webhook-handler-secrets"
                key  = "fb_verify_token"
              }
            }
          }
          env {
            name = "FB_PAGE_TOKEN"
            value_from {
              secret_key_ref {
                name = "webhook-handler-secrets"
                key  = "fb_page_token"
              }
            }
          }
          env {
            name  = "CONFIG"
            value = "./chatbot/config/viktorwebservices.yaml"
          }
          env {
            name = "GIT_USER"
            value_from {
              secret_key_ref {
                name = "webhook-handler-secrets"
                key  = "git_user"
              }
            }
          }
          env {
            name = "GIT_TOKEN"
            value_from {
              secret_key_ref {
                name = "webhook-handler-secrets"
                key  = "git_token"
              }
            }
          }
          env {
            name  = "SSH_KEY"
            value = "/opt/id_rsa"
          }
          env {
            name  = "WOODPECKER_API_URL"
            value = "https://ci.viktorbarzin.me"
          }
          env {
            name = "WOODPECKER_TOKEN"
            value_from {
              secret_key_ref {
                name = "webhook-handler-secrets"
                key  = "woodpecker_token"
              }
            }
          }
          env {
            name = "WOODPECKER_INFRA_REPO_ID"
            value_from {
              secret_key_ref {
                name = "webhook-handler-secrets"
                key  = "woodpecker_infra_repo_id"
              }
            }
          }
          env {
            name = "AUTHENTIK_WEBHOOK_SECRET"
            value_from {
              secret_key_ref {
                name = "webhook-handler-secrets"
                key  = "authentik_webhook_secret"
              }
            }
          }
        }
        volume {
          name = "id-rsa"
          secret {
            secret_name = "ssh-key"
          }
        }
      }
    }
  }

  lifecycle {
    ignore_changes = [spec[0].template[0].spec[0].dns_config] # KYVERNO_LIFECYCLE_V1
  }
}
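
# ClusterIP Service: exposes the handler on port 80 inside the cluster and forwards
# traffic to the application's listen port 3000.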
resource "kubernetes_service" "webhook_handler" {
metadata {
name = "webhook-handler"
namespace = kubernetes_namespace.webhook-handler.metadata[0].name
labels = {
"app" = "webhook-handler"
}
}
spec {
selector = {
app = "webhook-handler"
}
port {
port = "80"
target_port = "3000"
}
}
}
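
# Public ingress via the shared ingress_factory module; host "webhook" is routed by
# Traefik, and the gethomepage.dev annotations register the service on the homepage
# dashboard.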
module "ingress" {
source = "../../modules/kubernetes/ingress_factory"
namespace = kubernetes_namespace.webhook-handler.metadata[0].name
name = "webhook-handler"
host = "webhook"
dns_type = "non-proxied"
tls_secret_name = var.tls_secret_name
extra_annotations = {
"gethomepage.dev/enabled" = "true"
"gethomepage.dev/name" = "Webhook Handler"
"gethomepage.dev/description" = "Webhook relay"
"gethomepage.dev/icon" = "webhook.png"
"gethomepage.dev/group" = "Automation"
"gethomepage.dev/pod-selector" = ""
}
}
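
# External Secrets Operator syncs the Vault "webhook-handler" KV entry into the
# webhook-handler-secrets Secret consumed by the env blocks above, refreshing every 15m.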
resource "kubernetes_manifest" "external_secret" {
manifest = {
apiVersion = "external-secrets.io/v1beta1"
kind = "ExternalSecret"
metadata = {
name = "webhook-handler-secrets"
namespace = "webhook-handler"
}
spec = {
refreshInterval = "15m"
secretStoreRef = {
name = "vault-kv"
kind = "ClusterSecretStore"
}
target = {
name = "webhook-handler-secrets"
}
dataFrom = [{
extract = {
key = "webhook-handler"
}
}]
}
}
depends_on = [kubernetes_namespace.webhook-handler]
}