Viktor Barzin b28c76e371 [infra] Wire drift detection to Pushgateway + alert on stale/unaddressed drift
## Context

Wave 7 of the state-drift consolidation plan. The drift-detection pipeline
(`.woodpecker/drift-detection.yml`) already ran `terragrunt plan` against every
stack daily and posted a summary to Slack, but its output was ephemeral:
nothing persisted in Prometheus, so there was no historical view of which
stacks drift, when, or for how long. Following the convergence work in
waves 1–6 (168 KYVERNO_LIFECYCLE_V1 markers, 4 stacks adopted, Phase 4
mysql cleanup), the baseline is clean enough that *new* drift should
stand out. That only works if we have observability.

## This change

### `.woodpecker/drift-detection.yml`

Enhances the existing cron pipeline to push a batched set of metrics to
the in-cluster Pushgateway (`prometheus-prometheus-pushgateway.monitoring:9091`)
after each run:

| Metric | Kind | Purpose |
|---|---|---|
| `drift_stack_state{stack}` | gauge, 0/1/2 | 0=clean, 1=drift, 2=error |
| `drift_stack_first_seen{stack}` | gauge (unix seconds) | Preserved across runs for drift-age tracking |
| `drift_stack_age_hours{stack}` | gauge (hours) | Computed from `first_seen` |
| `drift_stack_count` | gauge (count) | Total drifted stacks this run |
| `drift_error_count` | gauge (count) | Total plan-errored stacks |
| `drift_clean_count` | gauge (count) | Total clean stacks |
| `drift_detection_last_run_timestamp` | gauge (unix seconds) | Pipeline heartbeat |

First-seen preservation: on each drift hit, the pipeline queries the
Pushgateway for an existing `drift_stack_first_seen{stack=<stack>}`
sample. If one is present and non-zero it is reused; otherwise the
metric is stamped with the current time. Age-hours therefore grows
monotonically while the stack stays drifted; once the stack goes clean
(state=0), the series is simply not pushed again, which resets
first_seen by omission.
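The preservation step can be sketched in shell. Function names and the exposition parsing below are illustrative, not the pipeline's exact code:

```shell
# Illustrative sketch of first-seen preservation. extract_first_seen
# reads Prometheus exposition text on stdin and prints the existing
# drift_stack_first_seen value for one stack, if any.
extract_first_seen() {
  awk -v want="stack=\"$1\"" \
    '/^drift_stack_first_seen[{]/ && index($0, want) { print $NF }'
}

# first_seen_for STACK NOW: reuse a prior non-zero timestamp found in
# the scraped metrics on stdin, otherwise stamp with NOW.
first_seen_for() {
  prev=$(extract_first_seen "$1")
  if [ -n "$prev" ] && [ "$prev" != "0" ]; then
    echo "$prev"
  else
    echo "$2"
  fi
}

# In the pipeline this would be fed from the Pushgateway, e.g.:
#   curl -sf http://prometheus-prometheus-pushgateway.monitoring:9091/metrics \
#     | first_seen_for vault "$(date +%s)"
```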

Atomic batched push: all metrics for a run are POSTed in a single
HTTP request to one grouping key. Splitting the push across several
curl calls could leave half-updated state if the run died between
calls; with one request the push either lands completely or fails
outright, in which case the previous run's metrics stay in place and
`DriftDetectionStale` eventually fires.
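A minimal sketch of that batched push, assuming the grouping key `job/drift-detection` and a two-stack run (both assumptions; the real pipeline assembles the payload from its plan results):

```shell
# build_payload assembles the whole run's metrics as one
# exposition-format document; push sends it in a single request so the
# Pushgateway group is never half-updated.
PUSHGW="http://prometheus-prometheus-pushgateway.monitoring:9091"

build_payload() {
  now=$(date +%s)
  cat <<EOF
# TYPE drift_stack_state gauge
drift_stack_state{stack="vault"} 1
drift_stack_state{stack="monitoring"} 0
# TYPE drift_stack_count gauge
drift_stack_count 1
# TYPE drift_detection_last_run_timestamp gauge
drift_detection_last_run_timestamp $now
EOF
}

# One curl; -f makes an HTTP error fail the step (and hence the run).
push() {
  build_payload \
    | curl -sf --data-binary @- "$PUSHGW/metrics/job/drift-detection"
}
```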

### `stacks/monitoring/.../prometheus_chart_values.tpl`

New `Infrastructure Drift` alert group with three rules:

- **DriftDetectionStale** (warning, 30m): fires if
  `drift_detection_last_run_timestamp` is older than 26h. Gives a 2h
  grace window on top of the 24h cron so transient Pushgateway or
  cluster unavailability doesn't false-alarm. Guards against the
  pipeline silently failing or the cron not firing.
- **DriftUnaddressed** (warning, 1h): fires if any stack has
  `drift_stack_age_hours > 72` — three days of unacknowledged drift.
  Three days is long enough to absorb weekends + typical review cycles
  but short enough to force follow-up before drift compounds.
- **DriftStacksMany** (warning, 30m): fires if `drift_stack_count > 10`
  in a single run. Sudden wide drift usually signals systemic causes
  (new admission webhook, provider version bump, cluster-wide CRD
  upgrade) rather than individual configuration errors, and the alert
  body nudges toward that diagnosis.
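The rule group's shape, as a hedged sketch (the authoritative definitions live in `prometheus_chart_values.tpl`; labels and annotations here are illustrative):

```yaml
- name: Infrastructure Drift
  rules:
    - alert: DriftDetectionStale
      expr: time() - drift_detection_last_run_timestamp > 26 * 3600
      for: 30m
      labels:
        severity: warning
    - alert: DriftUnaddressed
      expr: drift_stack_age_hours > 72
      for: 1h
      labels:
        severity: warning
    - alert: DriftStacksMany
      expr: drift_stack_count > 10
      for: 30m
      labels:
        severity: warning
```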

Applied to `stacks/monitoring` this session — 1 helm_release changed,
no other drift surfaced.

## What is NOT in this change

- The Wave 7 **GitHub issue auto-filer** — the full plan included
  filing a `drift-detected` issue per drifted stack. Deferred because
  it requires wiring the `file-issue` skill's convention + a gh token
  exposed to Woodpecker, both of which need separate setup. The Slack
  alert covers the same need at lower fidelity in the meantime.
- The Wave 7 **PG drift_history table** — would provide the richest
  historical view but adds a new DB schema dependency for a CI
  pipeline. Pushgateway + Prometheus handle the 72h window we care
  about; PG history is nice-to-have for quarterly reviews.
- Auto-apply marker (`# DRIFT_AUTO_APPLY_OK`) — premature until the
  baseline has been stable for a few cycles.

Follow-ups tracked: file dedicated beads items for GH-issue filer + PG
drift_history.

## Verification

```
$ cd stacks/monitoring && ../../scripts/tg apply --non-interactive
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

# After the next cron run (the "drift-detection" cron in the Woodpecker UI):
$ curl -s http://prometheus-prometheus-pushgateway.monitoring:9091/metrics \
    | grep -c '^drift_'
# expect a positive number
```

## Reproduce locally
1. `git pull`
2. Check Prometheus rules: `curl -sk https://prometheus.viktorbarzin.lan/api/v1/rules | jq '.data.groups[] | select(.name == "Infrastructure Drift")'`
3. Manually trigger the Woodpecker cron and watch Pushgateway populate.

Refs: Wave 7 umbrella (code-hl1)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 22:42:51 +00:00