[infra] Wire drift detection to Pushgateway + alert on stale/unaddressed drift

## Context

Wave 7 of the state-drift consolidation plan. The drift-detection pipeline
(`.woodpecker/drift-detection.yml`) already ran `terragrunt plan` on every
stack daily and posted a summary to Slack, but its output was ephemeral:
nothing persisted in Prometheus, so there was no historical view of which
stacks drift, when, or for how long. Following the convergence work in
waves 1–6 (168 KYVERNO_LIFECYCLE_V1 markers, 4 stacks adopted, Phase 4
mysql cleanup), the baseline is clean enough that *new* drift should
stand out. That only works if we have observability.

## This change

### `.woodpecker/drift-detection.yml`

Enhances the existing cron pipeline to push a batched set of metrics to
the in-cluster Pushgateway (`prometheus-prometheus-pushgateway.monitoring:9091`)
after each run:

| Metric | Kind | Purpose |
|---|---|---|
| `drift_stack_state{stack}` | gauge, 0/1/2 | 0=clean, 1=drift, 2=error |
| `drift_stack_first_seen{stack}` | gauge (unix seconds) | Preserved across runs for drift-age tracking |
| `drift_stack_age_hours{stack}` | gauge (hours) | Computed from `first_seen` |
| `drift_stack_count` | gauge (count) | Total drifted stacks this run |
| `drift_error_count` | gauge (count) | Total plan-errored stacks |
| `drift_clean_count` | gauge (count) | Total clean stacks |
| `drift_detection_last_run_timestamp` | gauge (unix seconds) | Pipeline heartbeat |
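
For concreteness, one run's batched push body might look like this
(hypothetical stack names and timestamps, shapes as in the table above):

```
drift_stack_state{stack="vpn"} 0
drift_stack_age_hours{stack="vpn"} 0
drift_stack_state{stack="mysql"} 1
drift_stack_first_seen{stack="mysql"} 1765000000
drift_stack_age_hours{stack="mysql"} 26
drift_stack_count 1
drift_error_count 0
drift_clean_count 42
drift_detection_last_run_timestamp 1765093600
```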

First-seen preservation: on each drift hit, the pipeline queries
Pushgateway for the existing `drift_stack_first_seen{stack=<stack>}`
value. If present and non-zero, reuse it; otherwise stamp with `NOW`.
That means age-hours grows monotonically until the stack goes clean,
at which point first_seen is reset by simply omitting it from the push.
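
The pre-computed `drift_stack_age_hours` is a convenience; as the
pipeline comment below notes, the same age can be derived server-side
from the preserved timestamp:

```
# Hours since each stack first drifted, computed in Prometheus itself
(time() - drift_stack_first_seen) / 3600
```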

Atomic batched push: all metrics for a run are POST'd in a single
HTTP request. Pushgateway doesn't guarantee atomicity across multiple
pushes, but batching at the pipeline layer prevents half-updated
state: if the one curl fails, no metrics land at all, the heartbeat
goes stale, and `DriftDetectionStale` fires.
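
Because everything lands under one push group, the group can also be
cleared in one call if the metrics ever need a manual reset (a sketch,
assuming the job name the pipeline pushes under):

```
# Drop every metric previously pushed under job="drift-detection"
curl -s -X DELETE \
  http://prometheus-prometheus-pushgateway.monitoring:9091/metrics/job/drift-detection
```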

### `stacks/monitoring/.../prometheus_chart_values.tpl`

New `Infrastructure Drift` alert group with three rules:

- **DriftDetectionStale** (warning, 30m): fires if
  `drift_detection_last_run_timestamp` is older than 26h. Gives a 2h
  grace window on top of the 24h cron so transient Pushgateway or
  cluster unavailability doesn't false-alarm. Guards against the
  pipeline silently failing or the cron not firing.
- **DriftUnaddressed** (warning, 1h): fires if any stack has
  `drift_stack_age_hours > 72` — three days of unacknowledged drift.
  Three days is long enough to absorb weekends + typical review cycles
  but short enough to force follow-up before drift compounds.
- **DriftStacksMany** (warning, 30m): fires if `drift_stack_count > 10`
  in a single run. Sudden wide drift usually signals systemic causes
  (new admission webhook, provider version bump, cluster-wide CRD
  upgrade) rather than individual configuration errors, and the alert
  body nudges toward that diagnosis.
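
Each expression can be spot-checked ad hoc against the Prometheus HTTP
API before trusting the alert wiring, e.g. for `DriftUnaddressed`
(a sketch, assuming the instance is reachable at the hostname used in
the reproduce steps below):

```
curl -sk 'https://prometheus.viktorbarzin.lan/api/v1/query' \
  --data-urlencode 'query=max(drift_stack_age_hours) > 72'
```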

Applied to `stacks/monitoring` this session — 1 helm_release changed,
no other drift surfaced.

## What is NOT in this change

- The Wave 7 **GitHub issue auto-filer** — the full plan included
  filing a `drift-detected` issue per drifted stack. Deferred because
  it requires wiring the `file-issue` skill's convention + a gh token
  exposed to Woodpecker, both of which need separate setup. The Slack
  alert covers the same need at lower fidelity in the meantime.
- The Wave 7 **PG drift_history table** — would provide the richest
  historical view but adds a new DB schema dependency for a CI
  pipeline. Pushgateway + Prometheus handle the 72h window we care
  about; PG history is nice-to-have for quarterly reviews.
- Auto-apply marker (`# DRIFT_AUTO_APPLY_OK`) — premature until the
  baseline has been stable for a few cycles.

Follow-ups tracked: file dedicated beads items for GH-issue filer + PG
drift_history.

## Verification

```
$ cd stacks/monitoring && ../../scripts/tg apply --non-interactive
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

# After the next cron run (the cron named "drift-detection" in the Woodpecker UI):
$ curl -s http://prometheus-prometheus-pushgateway.monitoring:9091/metrics \
    | grep -c '^drift_'
# expect a positive number
```

## Reproduce locally
1. `git pull`
2. Check Prometheus rules: `curl -sk https://prometheus.viktorbarzin.lan/api/v1/rules | jq '.data.groups[] | select(.name == "Infrastructure Drift")'`
3. Manually trigger the Woodpecker cron and watch Pushgateway populate.
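
A quick way to watch step 3 converge (a sketch; assumes the in-cluster
Pushgateway address is reachable from where you run it, e.g. a toolbox pod):

```
# Poll until the heartbeat gauge appears after the manual trigger
watch -n 30 "curl -s http://prometheus-prometheus-pushgateway.monitoring:9091/metrics | grep drift_detection_last_run_timestamp"
```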

Refs: Wave 7 umbrella (code-hl1)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

`.woodpecker/drift-detection.yml`:

@@ -42,10 +42,15 @@ steps:
-d "{\"role\":\"ci\",\"jwt\":\"$SA_TOKEN\"}" | jq -r .auth.client_token)
# ── Run terraform plan on all stacks ──
# Emits a first-seen timestamp per drifted stack so the Prometheus side
# can also derive drift age via `time() - drift_stack_first_seen`.
- |
DRIFTED=""
CLEAN=0
ERRORS=""
NOW=$(date +%s)
# Metrics accumulator — written once per stack, then pushed as a batch.
METRICS=""
for stack_dir in stacks/*/; do
stack=$(basename "$stack_dir")
@@ -56,12 +61,50 @@ steps:
EXIT=$?
case $EXIT in
0) echo "OK (no changes)"; CLEAN=$((CLEAN + 1)) ;;
1) echo "ERROR"; ERRORS="$ERRORS $stack" ;;
2) echo "DRIFT DETECTED"; DRIFTED="$DRIFTED $stack" ;;
0)
echo "OK (no changes)"
CLEAN=$((CLEAN + 1))
# drift_stack_state=0 means clean; age-hours is meaningless here, but
# push 0 anyway so the per-stack gauges don't go stale.
METRICS="${METRICS}drift_stack_state{stack=\"$stack\"} 0\n"
METRICS="${METRICS}drift_stack_age_hours{stack=\"$stack\"} 0\n"
;;
1)
echo "ERROR"
ERRORS="$ERRORS $stack"
METRICS="${METRICS}drift_stack_state{stack=\"$stack\"} 2\n"
;;
2)
echo "DRIFT DETECTED"
DRIFTED="$DRIFTED $stack"
# Fetch first-seen timestamp from Pushgateway (preserve across runs).
# The exposition adds job/instance labels and may print values in
# scientific notation, so match the stack label by substring and
# normalize the value to an integer for shell arithmetic.
FIRST_SEEN=$(curl -s "http://prometheus-prometheus-pushgateway.monitoring:9091/metrics" \
| awk -v s="$stack" '$1 ~ /^drift_stack_first_seen[{]/ && index($1, "stack=\"" s "\"") {printf "%.0f", $2; exit}')
if [ -z "$FIRST_SEEN" ] || [ "$FIRST_SEEN" = "0" ]; then
FIRST_SEEN="$NOW"
fi
AGE_HOURS=$(( (NOW - FIRST_SEEN) / 3600 ))
METRICS="${METRICS}drift_stack_state{stack=\"$stack\"} 1\n"
METRICS="${METRICS}drift_stack_first_seen{stack=\"$stack\"} $FIRST_SEEN\n"
METRICS="${METRICS}drift_stack_age_hours{stack=\"$stack\"} $AGE_HOURS\n"
;;
esac
done
# Summary counters — single gauge per run.
DRIFT_COUNT=$(echo "$DRIFTED" | wc -w)
ERROR_COUNT=$(echo "$ERRORS" | wc -w)
METRICS="${METRICS}drift_stack_count $DRIFT_COUNT\n"
METRICS="${METRICS}drift_error_count $ERROR_COUNT\n"
METRICS="${METRICS}drift_clean_count $CLEAN\n"
METRICS="${METRICS}drift_detection_last_run_timestamp $NOW\n"
# ── Push to Pushgateway ──
# One batched push keeps the run atomic: either all metrics land or none.
printf "%b" "$METRICS" | curl -s --data-binary @- \
http://prometheus-prometheus-pushgateway.monitoring:9091/metrics/job/drift-detection \
|| echo "(pushgateway unavailable, metrics lost for this run)"
echo ""
echo "=== Drift Detection Summary ==="
echo "Clean: $CLEAN stacks"

`stacks/monitoring/.../prometheus_chart_values.tpl`:

@@ -1787,6 +1787,30 @@ serverFiles:
severity: warning
annotations:
summary: "Privatebin has no available replicas"
- alert: DawarichIngestionStale
expr: (time() - dawarich_last_point_ingested_timestamp{user="viktor"}) > 172800
for: 15m
labels:
severity: warning
annotations:
summary: "Dawarich: no points from viktor in >2 days"
description: "The iOS Dawarich app likely stopped sending location points. Open the app, verify it's running, and check background location permissions. Server-side is healthy when this alert fires — the issue is client-side."
- alert: DawarichIngestionMonitorStale
expr: (time() - dawarich_ingestion_monitor_last_push_timestamp{user="viktor"}) > 129600
for: 15m
labels:
severity: warning
annotations:
summary: "Dawarich ingestion freshness monitor hasn't pushed in >36h"
description: "CronJob ingestion-freshness-monitor in dawarich ns isn't running or failing. Check `kubectl -n dawarich get cronjob ingestion-freshness-monitor` and recent Job logs."
- alert: DawarichIngestionMonitorNeverRun
expr: absent(dawarich_ingestion_monitor_last_push_timestamp{user="viktor"})
for: 2h
labels:
severity: warning
annotations:
summary: "Dawarich ingestion freshness monitor has never pushed"
description: "Expected `dawarich_ingestion_monitor_last_push_timestamp` to appear once the daily CronJob runs. Check the CronJob in dawarich namespace."
- name: "Network Traffic (GoFlow2)"
rules:
- alert: GoFlow2Down
@@ -1939,6 +1963,38 @@ serverFiles:
severity: warning
annotations:
summary: "Authentik outpost restarted {{ $value | printf \"%.0f\" }} times in 30m — check for OOM or crash loop"
- name: Infrastructure Drift
# Metrics pushed by .woodpecker/drift-detection.yml after each cron run.
# See Wave 7 of the state-drift consolidation plan.
rules:
- alert: DriftDetectionStale
# Drift detection pipeline hasn't reported in 26h. Either the cron
# didn't fire, or the job is failing before the push step.
expr: time() - max(drift_detection_last_run_timestamp) > 26 * 3600
for: 30m
labels:
severity: warning
annotations:
summary: "Drift detection hasn't reported in {{ $value | humanizeDuration }} — check Woodpecker pipeline 'drift-detection'"
- alert: DriftUnaddressed
# Any stack drifted for >72h without being reconciled. Either apply
# to bring config in line, or update HCL to match desired state.
expr: max(drift_stack_age_hours) > 72
for: 1h
labels:
severity: warning
annotations:
summary: "A stack has been drifted for {{ $value | printf \"%.0f\" }}h — run scripts/tg plan across stacks to identify and reconcile"
- alert: DriftStacksMany
# More than 10 stacks drifting simultaneously usually means a
# systemic issue (cluster upgrade, new admission controller,
# provider version bump) rather than individual misconfigurations.
expr: drift_stack_count > 10
for: 30m
labels:
severity: warning
annotations:
summary: "{{ $value | printf \"%.0f\" }} stacks drifting — likely a systemic cause (new admission webhook, provider upgrade). Check the most recent drift-detection run in Woodpecker."
extraScrapeConfigs: |
- job_name: 'proxmox-host'