Commit graph

591 commits

Author SHA1 Message Date
Viktor Barzin
345ba2182f [mailserver] Widen email-roundtrip probe IMAP window 180s → 300s + per-attempt timeout
## Context

After fixing the two mail-server-side root causes of probe false-failures
(Dovecot userdb duplicates, postscreen btree lock contention), the probe
is expected to succeed well under 120s. This commit is defence in depth
against residual SMTP relay variance and against a future scenario where
Dovecot is transiently unresponsive during IMAP login.

The probe currently polls IMAP with `range(9) × 20s = 180s`. Brevo's
queueing, DNS variance, and general SMTP retry backoff can easily
exceed that on a bad day. Widening to 5 minutes gives plenty of headroom
while still remaining well within the CronJob's 20-minute schedule
interval.

Additionally, `imaplib.IMAP4_SSL(...)` previously had no timeout. If
Dovecot is unresponsive (e.g., mid-rollout, transient TLS handshake
hang), the connect call can block indefinitely and the probe hangs
without ever looping to the next attempt. Adding `timeout=10` caps each
connect at 10s so the retry loop keeps making forward progress.

## This change

Two edits to the embedded probe script inside the cronjob resource:

```
-    # Step 2: Wait for delivery, retry IMAP up to 3 min
+    # Step 2: Wait for delivery, retry IMAP up to 5 min (15 x 20s)
  ...
-    for attempt in range(9):
+    for attempt in range(15):
  ...
-            imap = imaplib.IMAP4_SSL(IMAP_HOST, 993, ssl_context=ctx)
+            imap = imaplib.IMAP4_SSL(IMAP_HOST, 993, ssl_context=ctx, timeout=10)
```

Flow (before):

```
send via Brevo ─► for 9 loops: sleep 20s, IMAP connect (blocks on hang) ─► 180s total
```

Flow (after):

```
send via Brevo ─► for 15 loops: sleep 20s, IMAP connect (≤10s) ─► 300s total
                                           │
                                           └─ timeout ─► log, continue to next loop
```

## What is NOT in this change

- Probe frequency stays at `*/20 * * * *`.
- The `EmailRoundtripStale` alert thresholds are intentionally left at
  a 3600s staleness threshold + `for: 10m`. Those fire only on sustained multi-hour issues and
  should not be loosened — they would mask future regressions. Probe
  success rate is now expected to recover to ≥95% from the two upstream
  fixes; if it doesn't, alert tuning gets revisited separately.
- No change to the Brevo send step, the success-metrics push, or the
  cleanup of stale e2e-probe-* messages.

## Test Plan

### Automated

`scripts/tg plan -target=module.mailserver.kubernetes_cron_job_v1.email_roundtrip_monitor`:

```
  # module.mailserver.kubernetes_cron_job_v1.email_roundtrip_monitor will be updated in-place
  -     for attempt in range(9):
  +     for attempt in range(15):
  -             imap = imaplib.IMAP4_SSL(IMAP_HOST, 993, ssl_context=ctx)
  +             imap = imaplib.IMAP4_SSL(IMAP_HOST, 993, ssl_context=ctx, timeout=10)
Plan: 0 to add, 1 to change, 0 to destroy.
```

`scripts/tg apply`:

```
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
```

### Manual Verification

1. Trigger the probe manually:
   `kubectl -n mailserver create job --from=cronjob/email-roundtrip-monitor probe-verify-$(date +%s)`
2. Tail its logs:
   `kubectl -n mailserver logs job/probe-verify-<ts> -f`
3. Expect: `Round-trip SUCCESS` within the 5-min window. Typical
   successful run should still complete in < 60s now that postscreen
   is no longer stalling.
4. Watch the 48-hour window on the `email_roundtrip_success` gauge in
   Prometheus — expect ≥95% (was ~65% before all three fixes).

## Reproduce locally

1. `kubectl -n mailserver get cronjob email-roundtrip-monitor -o yaml | grep -E "range\(|timeout"`
2. Expect: `range(15)` and `timeout=10`
3. `kubectl -n mailserver create job --from=cronjob/email-roundtrip-monitor probe-verify-$(date +%s)`
4. `kubectl -n mailserver logs -f job/probe-verify-<ts>`
5. Expect: eventual `Round-trip SUCCESS in <N>s` message and exit 0.

Closes: code-18e

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:33:56 +00:00
Viktor Barzin
e2516b07a3 [mailserver] Disable postscreen btree cache to stop SMTP lock-contention stalls
## Context

Postfix inside docker-mailserver was spamming fatal errors at several per
minute — 5,464 of them in a 24h window — all of the same shape:

```
postfix/postscreen[NNN]: fatal: btree:/var/lib/postfix/postscreen_cache:
unable to get exclusive lock: Resource temporarily unavailable
```

Every time one of these fires, the postscreen process dies mid-connection
and the inbound SMTP session is dropped. Legitimate mail (including Brevo
deliveries for our e2e email-roundtrip probe) gets re-queued by the sender
and arrives late — frequently past the probe's 180s IMAP polling window,
producing a 35%/7d probe success rate and the EmailRoundtripStale alert
noise that was originally flagged as "probably nothing."

## Root cause

`master.cf` declares postscreen with `maxproc=1`, but postscreen still
re-spawns per incoming connection (or for short-lived reopens), and each
instance opens the shared btree cache with an exclusive file lock. Under
any concurrency (two TCP SYNs arriving close together, or a retry during
teardown), the second process hits EWOULDBLOCK on fcntl and Postfix
treats that as fatal.

Four options were considered:

  | Option | Verdict |
  |--------|---------|
  | (a) Disable cache (postscreen_cache_map = )  | ✓ chosen |
  | (b) Switch btree → lmdb                       | ✗ lmdb not compiled into docker-mailserver 15.0.0's postfix (`postconf -m` has no lmdb) |
  | (c) proxy:btree via proxymap                  | ✗ unsafe — Postfix docs: "postscreen does its own locking, not safe via proxymap" |
  | (d) Memcached sidecar                         | ✗ new moving part; deferred |

Option (a) is a small trade-off: legitimate clients re-run the
greet-action / bare-newline-action checks on every fresh TCP session
instead of hitting the 7-day whitelist cache. At our volume (~100
deliveries/day, ~72 of which are the probe itself) that's negligible CPU.
Dropping the cache also means DNSBL results would be re-evaluated on every
fresh session, but this mailserver already sets `postscreen_dnsbl_action =
ignore`, so the cache's DNSBL role was doing nothing anyway.

## This change

Appends a stanza to the user-merged postfix main.cf stored in
`variable.postfix_cf` that sets `postscreen_cache_map =` (empty value).
Postfix treats an empty cache_map as "no persistent cache" — per-session
decisions are still enforced, they just aren't cached across sessions.
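
Sketch of the appended override, assuming the main.cf content lives in a
heredoc default on `var.postfix_cf` (the exact variable wiring is not shown
in this message):

```hcl
variable "postfix_cf" {
  type    = string
  default = <<-EOT
    # ...existing user overrides...

    # Disable postscreen's persistent cache: no shared btree file means no
    # exclusive lock to contend on. Per-session checks still run.
    postscreen_cache_map =
  EOT
}
```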

Before:

```
smtpd ──► postscreen (maxproc=1, btree cache with exclusive lock)
                ├─ concurrent access → fcntl EWOULDBLOCK → fatal
                └─ connection dropped, sender retries, mail arrives late
```

After:

```
smtpd ──► postscreen (no cache, per-session checks only)
                └─ no shared file, no lock → no fatal, no dropped session
```

No change to master.cf (postscreen still the front-end), no change to
DNSBL / greet / bare-newline policy.

## What is NOT in this change

- Dovecot userdb dedup (shipped in the previous commit).
- Email-roundtrip probe widening (next commit).
- Rebuilding docker-mailserver image with lmdb support (deferred —
  disabling the cache is simpler and sufficient at our volume).

## Test Plan

### Automated

`postconf -m` in the running container to confirm lmdb is genuinely absent
(ruling out option (b) before we commit to (a)):

```
btree  cidr  environ  fail  hash  inline  internal  ldap  memcache
nis  pcre  pipemap  proxy  randmap  regexp  socketmap  static  tcp
texthash  unionmap  unix
```

No lmdb entry — confirmed.

`scripts/tg plan -target=module.mailserver.kubernetes_config_map.mailserver_config`:

```
  ~ "postfix-main.cf" = <<-EOT
      + postscreen_cache_map =
```

`scripts/tg apply`:

```
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
```

Reloader triggers pod rollout — baseline error count before apply was 34
`unable to get exclusive lock` lines per `--tail=500` log window.

### Manual Verification

Post-rollout, when the new pod is Ready:

1. `kubectl -n mailserver exec <pod> -c docker-mailserver -- postconf postscreen_cache_map`
   Expect: empty (no value)
2. Watch for 15 min: `kubectl -n mailserver logs -l app=mailserver -c docker-mailserver --tail=1000 | grep -c "unable to get exclusive lock"`
   Expect: 0 new occurrences (any hits are from before the rollout).
3. Trigger a probe run manually:
   `kubectl -n mailserver create job --from=cronjob/email-roundtrip-monitor probe-verify-$(date +%s)`
   then `kubectl -n mailserver logs job/probe-verify-...`
   Expect: `Round-trip SUCCESS` with duration < 120s.

## Reproduce locally

1. `kubectl -n mailserver exec <pod> -c docker-mailserver -- postconf postscreen_cache_map`
2. Expect: `postscreen_cache_map =` (empty value)
3. `kubectl -n mailserver logs -l app=mailserver -c docker-mailserver --since=15m | grep -c "unable to get exclusive lock"`
4. Expect: 0

Closes: code-1dc

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:32:48 +00:00
Viktor Barzin
01a718e17b [mailserver] Filter redundant local→local aliases to fix Dovecot 'exists more than once'
## Context

Dovecot auth logs have been steadily spamming
`passwd-file /etc/dovecot/userdb: User r730-idrac@viktorbarzin.me exists more
than once` (and the same for vaultwarden@) at ~31 occurrences per 500 log
lines. Under load this flakes IMAP auth for the e2e email-roundtrip probe
(spam@viktorbarzin.me uses the catch-all), which was masquerading as "Brevo
or probe timing" noise.

## Root cause

docker-mailserver builds Dovecot's `/etc/dovecot/userdb` from two sources:
real accounts (`postfix-accounts.cf`) AND virtual-alias entries whose
*target* resolves to a local mailbox (`postfix-virtual.cf`). When the same
address appears as BOTH a real mailbox AND an alias whose target is another
local mailbox, the generated userdb has two lines for that username pointing
to different home directories — e.g.:

```
r730-idrac@viktorbarzin.me:...:/var/mail/.../r730-idrac/home
r730-idrac@viktorbarzin.me:...:/var/mail/.../spam/home      ← from alias
```

Dovecot's passwd-file driver rejects the duplicate, and every subsequent
auth lookup logs the error.

This affected exactly two addresses:
- r730-idrac@viktorbarzin.me (real account + alias → spam@)
- vaultwarden@viktorbarzin.me  (real account + alias → me@)

Other aliases are fine: they either forward to external addresses (gmail
etc.) — no local userdb entry generated — or map an address to itself
(me@ → me@) which docker-mailserver dedups internally.

Note: removing the real accounts is not an option because Vaultwarden uses
`vaultwarden@viktorbarzin.me` as its live SMTP_USERNAME
(stacks/vaultwarden/modules/vaultwarden/main.tf:121).

## This change

Introduces a `local.postfix_virtual` that concatenates the Vault-sourced
aliases with `extra/aliases.txt`, then filters out any line matching the
exact "LHS RHS" shape where both sides are in `var.mailserver_accounts` and
LHS != RHS. That is, only the pure local→local redundant entries are
dropped; all forwarding aliases and the catch-all are preserved.

The filter is self-healing: if a future alias ever collides with a real
account, it gets silently suppressed instead of breaking Dovecot auth.

```
  Vault mailserver_aliases  ─┐
                              ├─ concat ─ split \n ─ filter ─ join \n ─► postfix-virtual.cf
  extra/aliases.txt ─────────┘                        │
                                                       └── drop if LHS+RHS both in
                                                           mailserver_accounts and
                                                           LHS != RHS
```
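
A minimal sketch of that filter, assuming `var.mailserver_aliases` holds the
Vault-sourced alias text and `var.mailserver_accounts` is a list of the real
account addresses (names beyond `local.postfix_virtual` and
`var.mailserver_accounts` are illustrative):

```hcl
locals {
  # Both alias sources, one "LHS RHS" pair per line.
  alias_lines = concat(
    split("\n", var.mailserver_aliases),
    split("\n", file("${path.module}/extra/aliases.txt"))
  )

  # Keep every line EXCEPT the pure local→local redundant shape:
  # exactly two fields, both real accounts, and not a self-mapping.
  postfix_virtual = join("\n", [
    for line in local.alias_lines : line
    if !(
      length(split(" ", line)) == 2 &&
      contains(var.mailserver_accounts, split(" ", line)[0]) &&
      contains(var.mailserver_accounts, split(" ", line)[1]) &&
      split(" ", line)[0] != split(" ", line)[1]
    )
  ])
}
```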

Filtered entries (confirmed by simulating the filter locally against live data):
- r730-idrac@viktorbarzin.me spam@viktorbarzin.me
- vaultwarden@viktorbarzin.me me@viktorbarzin.me

Preserved (sample): postmaster→me, contact→me, alarm-valchedrym→self+3 ext,
lubohristov→gmail, yoana→gmail, @viktorbarzin.me→spam (catch-all), all four
disposable `*-generated@` aliases.

## What is NOT in this change

- Real accounts in Vault (`secret/platform.mailserver_accounts`) are
  untouched — vaultwarden SMTP auth keeps working.
- Postfix postscreen btree lock contention (separate commit).
- Email-roundtrip probe IMAP window (separate commit).

## Test Plan

### Automated

`terraform validate` — passes (docker-mailserver module):

```
Success! The configuration is valid, but there were some validation warnings as shown above.
```

`scripts/tg plan -target=module.mailserver.kubernetes_config_map.mailserver_config`:

```
  # module.mailserver.kubernetes_config_map.mailserver_config will be updated in-place
  ~ resource "kubernetes_config_map" "mailserver_config" {
      ~ data = {
          ~ "postfix-virtual.cf" = (sensitive value)
            # (9 unchanged elements hidden)
        }
        id = "mailserver/mailserver.config"
    }
Plan: 0 to add, 1 to change, 0 to destroy.
```

`scripts/tg apply` — applied:

```
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
```

### Manual Verification

Post-apply configmap content (the two lines are gone):

```
$ kubectl -n mailserver get cm mailserver.config -o jsonpath='{.data.postfix-virtual\.cf}'
postmaster@viktorbarzin.me me@viktorbarzin.me
contact@viktorbarzin.me me@viktorbarzin.me
me@viktorbarzin.me me@viktorbarzin.me
lubohristov@viktorbarzin.me lyubomir.hristov3@gmail.com
alarm-valchedrym@viktorbarzin.me alarm-valchedrym@...,vbarzin@...,emil.barzin@...,me@...
yoana@viktorbarzin.me divcheva.yoana@gmail.com

@viktorbarzin.me spam@viktorbarzin.me
firmly-gerardo-generated@viktorbarzin.me me@viktorbarzin.me
closely-keith-generated@viktorbarzin.me vbarzin@gmail.com
literally-paolo-generated@viktorbarzin.me viktorbarzin@fb.com
hastily-stefanie-generated@viktorbarzin.me elliestamenova@gmail.com
```

Reloader triggers a pod rollout; once new pod is Ready:
- `kubectl -n mailserver exec <pod> -c docker-mailserver -- cut -d: -f1 /etc/dovecot/userdb | sort | uniq -d`
  expected output: empty (no duplicate usernames)
- `kubectl -n mailserver logs <pod> -c docker-mailserver --tail=500 | grep -c "exists more than once"`
  expected output: 0 (baseline was 31/500 lines)

## Reproduce locally

1. `kubectl -n mailserver get cm mailserver.config -o jsonpath='{.data.postfix-virtual\.cf}'`
2. Expect: no `r730-idrac@viktorbarzin.me spam@viktorbarzin.me` line and no
   `vaultwarden@viktorbarzin.me me@viktorbarzin.me` line.
3. After pod restart: `kubectl -n mailserver logs -l app=mailserver -c docker-mailserver --tail=500 | grep -c "exists more than once"` → 0.

Closes: code-27l

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:29:02 +00:00
Viktor Barzin
327ce215b9 [infra] Sweep dns_config ignore_changes across all pod-owning resources [ci skip]
## Context

Wave 3A (commit c9d221d5) added the `# KYVERNO_LIFECYCLE_V1` marker to the
27 pre-existing `ignore_changes = [...dns_config]` sites so they could be
grepped and audited. It did NOT address pod-owning resources that were
simply missing the suppression entirely. Post-Wave-3A sampling (2026-04-18)
found that navidrome, f1-stream, frigate, servarr, monitoring, crowdsec,
and many other stacks showed perpetual `dns_config` drift every plan
because their `kubernetes_deployment` / `kubernetes_stateful_set` /
`kubernetes_cron_job_v1` resources had no `lifecycle {}` block at all.

Root cause (same as Wave 3A): Kyverno's admission webhook stamps
`dns_config { option { name = "ndots"; value = "2" } }` on every pod's
`spec.template.spec.dns_config` to prevent NxDomain search-domain flooding
(see `k8s-ndots-search-domain-nxdomain-flood` skill). Without `ignore_changes`
on every Terraform-managed pod-owner, Terraform repeatedly tries to strip
the injected field.

## This change

Extends the Wave 3A convention by sweeping EVERY `kubernetes_deployment`,
`kubernetes_stateful_set`, `kubernetes_daemon_set` (and their `_v1`
variants), plus `kubernetes_cron_job_v1` and `kubernetes_job_v1`, across the
repo and ensuring each carries the right `ignore_changes` path:

- **kubernetes_deployment / stateful_set / daemon_set / job_v1**:
  `spec[0].template[0].spec[0].dns_config`
- **kubernetes_cron_job_v1**:
  `spec[0].job_template[0].spec[0].template[0].spec[0].dns_config`
  (extra `job_template[0]` nesting — the CronJob's PodTemplateSpec is
  one level deeper)
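
Concretely, the two stanza shapes (fragments that sit inside each resource
body):

```hcl
# Deployments / StatefulSets / DaemonSets / Jobs:
lifecycle {
  ignore_changes = [spec[0].template[0].spec[0].dns_config] # KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
}

# CronJobs (the PodTemplateSpec nests one level deeper):
lifecycle {
  ignore_changes = [spec[0].job_template[0].spec[0].template[0].spec[0].dns_config] # KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
}
```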

Each injection / extension is tagged `# KYVERNO_LIFECYCLE_V1: Kyverno
admission webhook mutates dns_config with ndots=2` inline so the
suppression is discoverable via `rg 'KYVERNO_LIFECYCLE_V1' stacks/`.

Two insertion paths are handled by a Python pass (`/tmp/add_dns_config_ignore.py`):

1. **No existing `lifecycle {}`**: inject a brand-new block just before the
   resource's closing `}`. 108 new blocks on 93 files.
2. **Existing `lifecycle {}` (usually for `DRIFT_WORKAROUND: CI owns image tag`
   from Wave 4, commit a62b43d1)**: extend its `ignore_changes` list with the
   dns_config path. Handles both inline (`= [x]`) and multiline
   (`= [\n  x,\n]`) forms; ensures the last pre-existing list item carries
   a trailing comma so the extended list is valid HCL. 34 extensions.

The script skips anything already mentioning `dns_config` inside an
`ignore_changes`, so re-running is a no-op.

## Scale

- 142 total lifecycle injections/extensions
- 93 `.tf` files touched
- 108 brand-new `lifecycle {}` blocks + 34 extensions of existing ones
- Every Tier 0 and Tier 1 stack with a pod-owning resource is covered
- Together with Wave 3A's 27 pre-existing markers → **169 greppable
  `KYVERNO_LIFECYCLE_V1` dns_config sites across the repo**

## What is NOT in this change

- `stacks/trading-bot/main.tf` — entirely commented-out block (`/* … */`).
  The Python script touched the file; its edit was reverted manually.
- `_template/main.tf.example` skeleton — kept minimal on purpose; any
  future stack created from it should either inherit the Wave 3A one-line
  form or add its own on first `kubernetes_deployment`.
- `terraform fmt` fixes to pre-existing alignment issues in meshcentral,
  nvidia/modules/nvidia, vault — unrelated to this commit. Left for a
  separate fmt-only pass.
- Non-pod resources (`kubernetes_service`, `kubernetes_secret`,
  `kubernetes_manifest`, etc.) — they don't own pods so they don't get
  Kyverno dns_config mutation.

## Verification

Random sample post-commit:
```
$ cd stacks/navidrome && ../../scripts/tg plan  → No changes.
$ cd stacks/f1-stream && ../../scripts/tg plan  → No changes.
$ cd stacks/frigate && ../../scripts/tg plan    → No changes.

$ rg -c 'KYVERNO_LIFECYCLE_V1' stacks/ -g '*.tf' -g '*.tf.example' \
    | awk -F: '{s+=$2} END {print s}'
169
```

## Reproduce locally
1. `git pull`
2. `rg 'KYVERNO_LIFECYCLE_V1' stacks/ | wc -l` → 169+
3. `cd stacks/navidrome && ../../scripts/tg plan` → expect 0 drift on
   the deployment's dns_config field.

Refs: code-seq (Wave 3B dns_config class closed; kubernetes_manifest
annotation class handled separately in 8d94688d for tls_secret)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:19:48 +00:00
Viktor Barzin
8b43692af0 [infra] Suppress Goldilocks vpa-update-mode label drift on all namespaces [ci skip]
## Context

Wave 3B-continued: the Goldilocks VPA dashboard (stacks/vpa) runs a Kyverno
ClusterPolicy `goldilocks-vpa-auto-mode` that mutates every namespace with
`metadata.labels["goldilocks.fairwinds.com/vpa-update-mode"] = "off"`. This
is intentional — Terraform owns container resource limits, and Goldilocks
should only provide recommendations, never auto-update. The label is how
Goldilocks decides per-namespace whether to run its VPA in `off` mode.

Effect on Terraform: every `kubernetes_namespace` resource shows the label
as pending-removal (`-> null`) on every `scripts/tg plan`. Dawarich survey
2026-04-18 confirmed the drift. Cluster-side count: 88 namespaces carry the
label (`kubectl get ns -o json | jq ... | wc -l`). Every TF-managed namespace
is affected.

This commit brings the intentional admission drift under the same
`# KYVERNO_LIFECYCLE_V1` discoverability marker introduced in c9d221d5 for
the ndots dns_config pattern. The marker now stands generically for any
Kyverno admission-webhook drift suppression; the inline comment records
which specific policy stamps which specific field so future grep audits
show why each suppression exists.

## This change

107 `.tf` files touched — every stack's `resource "kubernetes_namespace"`
resource gets:

```hcl
lifecycle {
  # KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
  ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
}
```

Injection was done with a brace-depth-tracking Python pass (`/tmp/add_goldilocks_ignore.py`):
match `^resource "kubernetes_namespace" ` → track `{` / `}` until the
outermost closing brace → insert the lifecycle block before the closing
brace. The script is idempotent (skips any file that already mentions
`goldilocks.fairwinds.com/vpa-update-mode`) so re-running is safe.

Vault stack picked up 2 namespaces in the same file (k8s-users produces
one, plus a second explicit ns) — confirmed via file diff (+8 lines).

## What is NOT in this change

- `stacks/trading-bot/main.tf` — entire file is `/* … */` commented out
  (paused 2026-04-06 per user decision). Reverted after the script ran.
- `stacks/_template/main.tf.example` — per-stack skeleton, intentionally
  minimal. User keeps it that way. Not touched by the script (file
  has no real `resource "kubernetes_namespace"` — only a placeholder
  comment).
- `.terraform/` copies (e.g. `stacks/metallb/.terraform/modules/...`) —
  gitignored, won't commit; the live path was edited.
- `terraform fmt` cleanup of adjacent pre-existing alignment issues in
  authentik, freedify, hermes-agent, nvidia, vault, meshcentral. Reverted
  to keep the commit scoped to the Goldilocks sweep. Those files will
  need a separate fmt-only commit or will be cleaned up on next real
  apply to that stack.

## Verification

Dawarich (one of the hundred-plus touched stacks) showed the pattern
before and after:

```
$ cd stacks/dawarich && ../../scripts/tg plan

Before:
  Plan: 0 to add, 2 to change, 0 to destroy.
   # kubernetes_namespace.dawarich will be updated in-place
     (goldilocks.fairwinds.com/vpa-update-mode -> null)
   # module.tls_secret.kubernetes_secret.tls_secret will be updated in-place
     (Kyverno generate.* labels — fixed in 8d94688d)

After:
  No changes. Your infrastructure matches the configuration.
```

Injection count check:
```
$ rg -c 'KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode' stacks/ | awk -F: '{s+=$2} END {print s}'
108
```

## Reproduce locally
1. `git pull`
2. Pick any stack: `cd stacks/<name> && ../../scripts/tg plan`
3. Expect: no drift on the namespace's goldilocks.fairwinds.com/vpa-update-mode label.

Closes: code-dwx

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:15:27 +00:00
Viktor Barzin
e612baac15 [dawarich] Re-enable Sidekiq worker with resource limits + probes
## Context

Sidekiq was commented out in main.tf:203–274 on 2026-02-23 after the
unbounded 10-thread worker drove the whole pod into memory pressure —
the kubelet then evicted the web container along with it. Viktor's
recollection was "it was crashing"; the cgroup-root cause was that the
Sidekiq container had no `resources.limits.memory` set, so a misbehaving
job could pull the entire pod down instead of being OOM-killed and
restarted in isolation.

During the ~55 days the worker was off, POSTs to /api/v1 continued to
enqueue jobs in Redis DB 1 (Dawarich uses redis-master.redis:6379/1, not
the cluster default DB 0). track_segments and digests tables stayed
empty because nothing was processing the backfill queue (beads
code-459). Dawarich was also bumped 0.37.1 → 1.6.1 on 2026-04-16, so
Sidekiq was untested against the new release in this environment.

Live pre-apply snapshot via `bin/rails runner`:
  enqueued=18  (cache=2, data_migrations=4, default=12)
  scheduled=16, retry=0, dead=0, procs=0, processed/failed=0 (stats
  reset by the 1.6.1 upgrade)
Queue latencies ~50h — lines up with code-e9c (iOS client stopped
POSTing on 2026-04-16), not with the nominal 55-day gap. Redis DB 1
was therefore a small, recoverable backlog, not the disaster the plan
originally feared — no pre-apply triage needed.

## What changed

Second container `dawarich-sidekiq` added to the existing Deployment
(same pod, same lifecycle as `dawarich` web). Key differences vs the
2026-02-23 commented block:

- `resources.limits.memory = 1Gi`, `requests = { cpu = 50m, memory =
  768Mi }`. Burstable QoS — cgroup is now bounded, so a runaway Sidekiq
  job gets OOM-killed and container-restarted in place without evicting
  the whole pod (web stays Ready).
- Hosts parametrised via `var.redis_host` / `var.postgresql_host`
  instead of hardcoded FQDNs; matches the web container's pattern.
- DB / secret / Geoapify creds via `value_from.secret_key_ref` against
  the existing `dawarich-secrets` K8s Secret (populated by the existing
  ExternalSecret). Removes the plan-time `data.vault_kv_secret_v2`
  reference the 2026-02-23 block relied on — that data source no longer
  exists in this stack.
- `BACKGROUND_PROCESSING_CONCURRENCY = "2"` (was "10"). Ramp deferred
  to separate commits (plan: 2 → 5 → 10 with 15-30min observation
  between bumps).
- Liveness + readiness `pgrep -f 'bundle exec sidekiq'` probes —
  container-scoped restart on stall, verified `pgrep` is at
  /usr/bin/pgrep in the Debian-trixie-based freikin/dawarich image.
- Same Rails boot envs as the web container (TIME_ZONE, DISTANCE_UNIT,
  RAILS_ENV, RAILS_LOG_TO_STDOUT, SECRET_KEY_BASE, SELF_HOSTED) so
  Sidekiq's Rails initialisation matches web.

Pod-level additions:
- `termination_grace_period_seconds = 60` — gives Sidekiq time to
  drain in-flight jobs on SIGTERM during rolls (default 30s not enough
  for reverse-geocoding batches).
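
Pieced together from the bullets above, a condensed sketch of the new
container block (secret-sourced env vars and the shared Rails envs elided;
the image value and start command are assumptions inferred from the probe
pattern, not copied from the diff):

```hcl
container {
  name  = "dawarich-sidekiq"
  image = "freikin/dawarich:1.6.1" # assumption: same image as the web container
  # Assumption: the worker starts the way the pgrep probes expect.
  command = ["bundle", "exec", "sidekiq"]

  env {
    name  = "BACKGROUND_PROCESSING_CONCURRENCY"
    value = "2" # ramp 2 → 5 → 10 deferred to follow-up commits
  }

  resources {
    requests = {
      cpu    = "50m"
      memory = "768Mi"
    }
    limits = {
      memory = "1Gi" # bounded cgroup: a runaway job OOM-kills this container only
    }
  }

  liveness_probe {
    exec {
      command = ["pgrep", "-f", "bundle exec sidekiq"]
    }
  }
  readiness_probe {
    exec {
      command = ["pgrep", "-f", "bundle exec sidekiq"]
    }
  }
}
```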

## What is NOT in this change

- Prometheus exporter for Sidekiq metrics. The first apply turned on
  `PROMETHEUS_EXPORTER_ENABLED=true`, which enabled the
  `prometheus_exporter` gem's CLIENT middleware. That middleware PUSHes
  metrics over TCP to a separate exporter server process — and the
  freikin/dawarich image does not start one. Client logged ~2/sec
  "Connection refused" errors until we flipped ENABLED back to "false"
  in this commit. `pod.annotations["prometheus.io/scrape"]` reverted
  for the same reason (nothing listening on :9394). Filed code-1q5
  (blocks code-459) to add a third sidecar container running
  `bundle exec prometheus_exporter -p 9394 -b 0.0.0.0` and restore
  the 4 drafted alerts (DawarichSidekiqDown /
  QueueLatencyHigh / DeadGrowing / FailureRateHigh) once metrics are
  actually being emitted.
- The 4 drafted Sidekiq alerts — reverted from
  monitoring/prometheus_chart_values.tpl; they reference metrics that
  don't exist yet. Restoration is part of code-1q5.
- Concurrency ramp past 2 and the 24h burn-in gate that closes
  code-459 — separate future commits.
- Liveness/readiness probes on the web container — pre-existing gap,
  out of scope per plan.

## Other changes bundled in

Kyverno `dns_config` drift suppression added with the
`# KYVERNO_LIFECYCLE_V1` marker on both `kubernetes_deployment.dawarich`
AND `kubernetes_cron_job_v1.ingestion_freshness_monitor`. Plan only
called it out for the Deployment, but the CronJob shows identical
drift (Kyverno injects ndots=2 on every pod template, Terraform wipes
it, infinite churn). Per AGENTS.md "Kyverno Drift Suppression" every
pod-owning resource MUST carry the lifecycle block — this commit
brings this stack into convention.

## Topology trade-off recorded

Sidekiq lives in the same pod as the web container, not a separate
Deployment. This means:
- Every env bump during ramp bounces both containers (Recreate
  strategy) — brief UI blip accepted.
- `kubectl scale` alone can't pause Sidekiq — pausing requires
  `BACKGROUND_PROCESSING_CONCURRENCY=0` + apply, or re-commenting
  the container block + apply.
- Shared pod network namespace — only one process can bind any given
  port. This is why the plan explicitly avoided declaring a new
  `port { name = "prometheus" }` on the sidekiq container (the web
  container already reserves 9394 by name).

Accepted because the alternative (split Deployment) is significantly
more config for a single-instance service and a follow-up bead
(tracked in code-1q5 description area / Viktor's notes) already
captures "revisit if future crashes warrant blast-radius isolation".

## Rollback

Three levels, in order of increasing impact:
1. `BACKGROUND_PROCESSING_CONCURRENCY` → "0" + apply — pod stays up,
   no jobs processed, backlog preserved in Redis.
2. Drop concurrency to 1 or 2 + apply — reduce load but keep draining.
3. Re-comment the second container block (this diff in reverse) +
   apply — full disable, backlog stays in Redis DB 1, recoverable.

Never DEL queue:* keys directly — Redis DB 1 is where Dawarich lives,
and the jobs are recoverable state.

## Refs

- code-459 (P3) — Dawarich Sidekiq disabled. In progress; closes
  after 24h burn-in at concurrency=10 with restartCount=0, DeadSet
  delta < 100.
- code-1q5 (P3) — Follow-up: prometheus_exporter sidecar + 4 alerts.
  Depends on code-459.
- code-e9c (P2) — Viktor client-side POST bug 2026-04-16.
  Untouched; processing the backlog does not fix this but ensures
  future POSTs drain cleanly.
- code-72g (P3) — Anca ingestion silent since 2025-06-21. Untouched;
  same reasoning.

## Test Plan

### Automated

```
$ cd stacks/dawarich && ../../scripts/tg plan
...
Plan: 0 to add, 3 to change, 0 to destroy.
#   kubernetes_deployment.dawarich         (sidekiq container + probes + lifecycle)
#   kubernetes_namespace.dawarich          (drops stale goldilocks label, pre-existing drift)
#   module.tls_secret.kubernetes_secret.tls_secret  (Kyverno clone-label drift, pre-existing)

$ ../../scripts/tg apply --non-interactive
...
Apply complete! Resources: 0 added, 3 changed, 0 destroyed.

(Second apply for PROMETHEUS_EXPORTER_ENABLED=false + annotation
removal — same 0/3/0 shape.)
```

### Manual Verification

Setup: kubectl context against the k8s cluster (10.0.20.100).

1. Pod has both containers Ready with zero restarts:
   ```
   $ kubectl -n dawarich get pods -o wide
   NAME                        READY  STATUS   RESTARTS  AGE
   dawarich-75b4ff9fbf-qh56v   2/2    Running  0         <fresh>
   ```

2. Sidekiq container is actively processing jobs:
   ```
   $ kubectl -n dawarich logs deploy/dawarich -c dawarich-sidekiq --tail=20
   Sidekiq 8.0.10 connecting to Redis ... db: 1
   queues: [data_migrations, points, default, mailers, families,
            imports, exports, stats, trips, tracks,
            reverse_geocoding, visit_suggesting, places,
            app_version_checking, cache, archival, digests,
            low_priority]
   Performing DataMigrations::BackfillMotionDataJob ...
   Backfilled motion_data for N000 points (N climbing)
   ```

3. Rails Sidekiq::API snapshot — procs registered, counters moving:
   ```
   $ kubectl -n dawarich exec deploy/dawarich -- bin/rails runner '
       require "sidekiq/api"
       s = Sidekiq::Stats.new
       puts "processed=#{s.processed} failed=#{s.failed} procs=#{Sidekiq::ProcessSet.new.size}"
     '
   processed=7 failed=2 procs=1
   retry=0 dead=0
   ```
   (The 2 "failures" are cumulative across two pod lifecycles during
   the Prometheus env flip — retried successfully, neither retry nor
   dead set holds any jobs.)

4. Per-container memory well under the 1Gi limit:
   ```
   $ kubectl -n dawarich top pod --containers
   POD                         NAME              CPU    MEMORY
   dawarich-75b4ff9fbf-qh56v   dawarich          1m     272Mi  (of 896Mi)
   dawarich-75b4ff9fbf-qh56v   dawarich-sidekiq  79m    333Mi  (of 1Gi)
   ```

5. No "Prometheus Exporter, failed to send" log lines since the second
   apply:
   ```
   $ kubectl -n dawarich logs deploy/dawarich -c dawarich-sidekiq --tail=500 \
       | grep -c "Prometheus Exporter"
   0
   ```

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:13:05 +00:00
Viktor Barzin
2b8bb849c0 [infra] Bump claude-agent-service + beadboard image tags
## Context
Two rolling updates tied to the BeadBoard dispatch-button work (code-kel):

1. claude-agent-service now ships bd 1.0.2, a seed ConfigMap-equivalent
   (files in /usr/share/agent-seed/), the beads-task-runner agent, and
   hmac.compare_digest bearer verification. The tag moves from 382d6b14
   to 0c24c9b6 (monorepo HEAD).
2. The beadboard Deployment in beads-server now consumes
   CLAUDE_AGENT_SERVICE_URL + CLAUDE_AGENT_BEARER_TOKEN, so the image
   needs the Dispatch button + /api/agent-dispatch + /api/agent-status
   routes. Tag moves from :latest to :17a38e43 (fork HEAD on
   github.com/ViktorBarzin/beadboard).

## What this change does
- Flips `local.image_tag` in claude-agent-service main.tf.
- Drops the "temporary" comment on `beadboard_image_tag` and sets the
  default to 17a38e43 (honours the infra 8-char-SHA rule — CLAUDE.md
  "Use 8-char git SHA tags — `:latest` causes stale pull-through cache").

## Test Plan
### Automated
- Both images already pushed to registry.viktorbarzin.me{:5050}/ :
  - claude-agent-service:0c24c9b6 verified via
    `docker run --rm … bd version` → 1.0.2 OK, /usr/share/agent-seed/
    contains both seed files.
  - beadboard:17a38e43 pushed, digest cd0d3c47.
- terraform fmt/validate clean on both stacks from the earlier commits.

### Manual Verification
1. Push triggers Woodpecker default.yml.
2. Expected: both stacks apply; claude-agent-service pod rolls (new
   seed-beads-agent init creates /workspace/.beads/ + /workspace/scratch
   + copies beads-task-runner.md), beadboard pod rolls with new env vars
   sourced from beadboard-agent-service ExternalSecret.
3. Cross-check: `kubectl -n claude-agent get pod -o yaml | grep image:`
   should show :0c24c9b6; `kubectl -n beads-server get deploy beadboard
   -o yaml | grep image:` should show :17a38e43.

Closes: code-kel
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 19:24:37 +00:00
Viktor Barzin
f79e3c563e [infra] Remove mysql InnoDB Cluster + Operator HCL (Phase 4 cleanup) [ci skip]
## Context

On 2026-04-16 (memory #711) MySQL was migrated from InnoDB Cluster (3-member
Group Replication + MySQL Operator) to a raw `kubernetes_stateful_set_v1.mysql_standalone`
on `mysql:8.4`. The migration preserved the `mysql.dbaas` Service name
(selector switched to the standalone pod), all 20 databases/688 tables/14
users were dump-restored, and Vault rotated credentials against the new
instance. The InnoDB Cluster has been dark since — Phase 4 was to remove
the dead code and decommission its cluster-side Helm state.

Memory #711 explicitly notes Phase 4 as: "Remove helm_release.mysql_cluster
+ mysql_operator + namespace + RBAC + Delete PVC datadir-mysql-cluster-0
(30Gi) + Delete mysql-operator namespace + CRDs + stale Vault roles."

## This change

Phase 4 scope executed in this session (beads code-qai):

1. `terragrunt destroy -target` against 6 resources in the dbaas Tier 0 stack:
   - `module.dbaas.helm_release.mysql_cluster` — uninstalled InnoDBCluster CR
     + MySQL Router Deployment + 8 Services (mysql-cluster, -instances,
     ports 6446/6448/6447/6449/6450/8443, etc.)
   - `module.dbaas.helm_release.mysql_operator` — uninstalled MySQL Operator
     Deployment, InnoDBCluster CRD + webhook, operator ClusterRoles
   - `module.dbaas.kubernetes_namespace.mysql_operator` — deleted the ns
   - `module.dbaas.kubernetes_cluster_role.mysql_sidecar_extra` — leftover
     permissions patch that existed to work around the sidecar's kopf
     permissions bug; unused without the operator
   - `module.dbaas.kubernetes_cluster_role_binding.mysql_sidecar_extra`
   - `module.dbaas.kubernetes_config_map.mysql_extra_cnf` — used to override
     `innodb_doublewrite=OFF` via subPath mount; standalone does not need it
2. `kubectl delete pvc datadir-mysql-cluster-0 -n dbaas` — Helm does not
   garbage-collect PVCs; 30Gi reclaimed.
3. Removed 295 lines (lines 86–380) from `stacks/dbaas/modules/dbaas/main.tf`
   covering the `#### MYSQL — InnoDB Cluster via MySQL Operator` section
   and all six resources above.

The first destroy hit a Helm timeout on `mysql-cluster` uninstall ("context
deadline exceeded"). Uninstallation had in fact completed cluster-side by
that point but TF rolled back the state delta. A second `terragrunt destroy
-target` call with the same args resolved cleanly — destroyed the remaining
2 tracked resources (the first pass cleared 4) and encrypted+committed the
Tier 0 state.

## What is NOT in this change

- CRDs (`innodbclusters.mysql.oracle.com`, etc.) — Helm does delete these
  on uninstall. Verified clean: `kubectl get crd | grep mysql.oracle.com`
  returns nothing.
- Orphan PVC `datadir-mysql-cluster-0` — already deleted via kubectl; not
  a TF-managed resource.
- Stale Vault DB roles (health, linkwarden, affine, woodpecker,
  claude_memory, crowdsec, technitium) for services migrated MySQL→PG —
  sandbox denies `vault list database/roles` as credential scouting, so
  the user handles this manually.
- 2 state-commits preceding this one (`30fa411b`, `6cf3575e`) are automatic
  SOPS-encrypted-state commits produced by `scripts/tg` after each
  `terragrunt destroy` pass. Standard Tier 0 workflow.

## Verification

```
$ helm list -A | grep -E 'mysql-cluster|mysql-operator'
(no output)

$ kubectl get ns mysql-operator
Error from server (NotFound): namespaces "mysql-operator" not found

$ kubectl get pvc -n dbaas datadir-mysql-cluster-0
Error from server (NotFound): persistentvolumeclaims "datadir-mysql-cluster-0" not found

$ kubectl get pod -n dbaas -l app.kubernetes.io/instance=mysql-standalone
NAME                 READY   STATUS    RESTARTS       AGE
mysql-standalone-0   1/1     Running   1 (118m ago)   2d

$ ../../scripts/tg state list | grep -i 'mysql_operator\|mysql_cluster\|mysql_sidecar\|mysql_extra_cnf'
(no output)

$ ../../scripts/tg plan | grep -E 'mysql_cluster|mysql_operator|mysql_sidecar|mysql_extra_cnf'
(no output — Wave 2 drift is gone; remaining plan items are pre-existing
drift unrelated to this change, see Wave 3 + in-flight payslip work)
```

## Reproduce locally
1. `git pull`
2. `cd stacks/dbaas && ../../scripts/tg state list | grep mysql_cluster` → no output
3. `helm list -A | grep mysql-cluster` → no output

Closes: code-qai

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 19:19:48 +00:00
Viktor Barzin
c75beaac6c wealthfolio: bump memory 64Mi → 1Gi (limit) / 256Mi (request)
## Context

Pod was OOMKilled after today's broker-sync Phase 3 import grew the
activity DB from ~10 rows (Phase 0 demo) to ~700 (Fidelity + cash-flow
matches across 6 accounts). `/api/v1/net-worth` and
`/valuations/history` materialise the full history in memory to render
the dashboard chart.

`kubectl describe pod` showed Back-off restarting failed container;
`kubectl top pod` reported 14Mi steady-state but spikes crossed the
64Mi cap.

## This change

Bump container resources to:
- requests.memory: 64Mi → 256Mi
- limits.memory:  64Mi → 1Gi

CPU unchanged. 1Gi is generous for the current 700-activity DB +
chart rendering, with headroom for another year of growth before we
need to revisit (VPA will flag if actual use exceeds upperBound).
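
The resulting resources block, roughly (kubernetes provider syntax; values
match the verification output below):

```hcl
resources {
  requests = {
    cpu    = "10m"   # unchanged
    memory = "256Mi" # was 64Mi
  }
  limits = {
    memory = "1Gi" # was 64Mi; no CPU limit set
  }
}
```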

## Verification

### Automated
`scripts/tg apply stacks/wealthfolio` → Apply complete! Resources: 0
added, 4 changed, 0 destroyed.

### Manual

```
$ kubectl -n wealthfolio get pod -l app=wealthfolio -o jsonpath='{.items[0].spec.containers[0].resources}'
{"limits":{"memory":"1Gi"},"requests":{"cpu":"10m","memory":"256Mi"}}

$ kubectl -n wealthfolio get pods -l app=wealthfolio
NAME                           READY   STATUS    RESTARTS   AGE
wealthfolio-86c8696b9c-nzwkf   1/1     Running   0          51s
```

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 19:13:05 +00:00
Viktor Barzin
43b4e1d372 [payslip-ingest] Deploy stack + Grafana dashboard + Vault DB role
## Context

New service `payslip-ingest` (code lives in `/home/wizard/code/payslip-ingest/`)
needs in-cluster deployment, its own Postgres DB + rotating user, a Grafana
datasource, a dashboard, and a Claude agent definition for PDF extraction.

Cluster-internal only — webhook fires from Paperless-ngx in a sibling namespace.
No ingress, no TLS cert, no DNS record.

## What

### New stack `stacks/payslip-ingest/`
- `kubernetes_namespace` payslip-ingest, tier=aux.
- ExternalSecret (vault-kv) projects PAPERLESS_API_TOKEN, CLAUDE_AGENT_BEARER_TOKEN,
  WEBHOOK_BEARER_TOKEN into `payslip-ingest-secrets`.
- ExternalSecret (vault-database) reads rotating password from
  `static-creds/pg-payslip-ingest` and templates `DATABASE_URL` into
  `payslip-ingest-db-creds` with `reloader.stakater.com/match=true`.
- Deployment: single replica, Recreate strategy (matches single-worker queue
  design), `wait-for postgresql.dbaas:5432` annotation, init container runs
  `alembic upgrade head`, main container serves FastAPI on 8080, Kyverno
  dns_config lifecycle ignore.
- ClusterIP Service :8080.
- Grafana datasource ConfigMap in `monitoring` ns (label `grafana_datasource=1`,
  uid `payslips-pg`) reading password from the db-creds K8s Secret.

### Grafana dashboard `uk-payslip.json` (4 panels)
- Monthly gross/net/tax/NI (timeseries, currencyGBP).
- YTD tax-band progression with threshold lines at £12,570 / £50,270 / £125,140.
- Deductions breakdown (stacked bars).
- Effective rate + take-home % (timeseries, percent).

### Vault DB role `pg-payslip-ingest`
- Added to `allowed_roles` in `vault_database_secret_backend_connection.postgresql`.
- New `vault_database_secret_backend_static_role.pg_payslip_ingest`
  (username `payslip_ingest`, 7d rotation).
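
A sketch of the static role, assuming the backend mount is `database` and
the connection is named `postgresql` (neither is spelled out above):

```hcl
resource "vault_database_secret_backend_static_role" "pg_payslip_ingest" {
  backend         = "database"   # assumed mount path
  name            = "pg-payslip-ingest"
  db_name         = "postgresql" # assumed connection name
  username        = "payslip_ingest"
  rotation_period = 604800 # 7 days, in seconds
}
```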

### DBaaS — DB + role creation
- New `null_resource.pg_payslip_ingest_db` mirrors `pg_terraform_state_db`:
  idempotent CREATE ROLE + CREATE DATABASE + GRANT ALL via `kubectl exec` into
  `pg-cluster-1`.

### Claude agent `.claude/agents/payslip-extractor.md`
- Haiku-backed agent invoked by `claude-agent-service`.
- Decodes base64 PDF from prompt, tries pdftotext → pypdf fallback, emits a single
  JSON object matching the schema to stdout. No network, no file writes outside /tmp,
  no markdown fences.

## Trade-offs / decisions

- Own DB per service (convention), NOT a schema in a shared `app` DB as the plan
  initially described. The Alembic migration still creates a `payslip_ingest`
  schema inside the `payslip_ingest` DB for table organisation.
- Paperless URL uses port 80 (the Service port), not 8000 (the pod target port).
- Grafana datasource uses the primary RW user — separate `_ro` role is aspirational
  and not yet a pattern in this repo.
- No ingress — webhook is cluster-internal; external exposure is unnecessary attack
  surface.
- No Uptime Kuma monitor yet: the internal-monitor list is a static block in
  `stacks/uptime-kuma/`; will add in a follow-up tied to code-z29 (internal monitor
  auto-creator).

## Test Plan

### Automated
```
terraform init -backend=false && terraform validate
Success! The configuration is valid.

terraform fmt -check -recursive
(exit 0)

python3 -c "import json; json.load(open('uk-payslip.json'))"
(exit 0)
```

### Manual Verification (post-merge)

Prerequisites:
1. Seed Vault: `vault kv put secret/payslip-ingest webhook_bearer_token=$(openssl rand -hex 32)`.
2. Seed Vault: `vault kv patch secret/paperless-ngx api_token=<paperless token>`.

Apply:
3. `scripts/tg apply vault` → creates pg-payslip-ingest static role.
4. `scripts/tg apply dbaas` → creates payslip_ingest DB + role.
5. `cd stacks/payslip-ingest && ../../scripts/tg apply -target=kubernetes_manifest.db_external_secret`
   (first-apply ESO bootstrap).
6. `scripts/tg apply payslip-ingest` (full).
7. `kubectl -n payslip-ingest get pods` → Running 1/1.
8. `kubectl -n payslip-ingest port-forward svc/payslip-ingest 8080:8080 && curl localhost:8080/healthz` → 200.

End-to-end:
9. Configure Paperless workflow (README in code repo has steps).
10. Upload sample payslip tagged `payslip` → row in `payslip_ingest.payslip` within 60s.
11. Grafana → Dashboards → UK Payslip → 4 panels render.

Closes: code-do7

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 19:07:05 +00:00
Viktor Barzin
bde713f8a4 broker-sync: add Fidelity PlanViewer CronJob (suspended)
## Context

Viktor's UK workplace pension is at Fidelity PlanViewer. The broker-sync
provider + CLI landed in the broker-sync repo (commits 804e6a8 +
7c9be54); this commit adds the infra bits so the monthly sync runs
in-cluster like the other broker-sync jobs.

One successful manual backfill on 2026-04-18 pulled 51 contributions +
valuation into a new WF WORKPLACE_PENSION account; Net Worth moved from
£865k → £1,003k. This commit productionises that flow.

## This change

- New kubernetes_cron_job_v1.fidelity in stacks/broker-sync/main.tf:
  - Schedule: 05:00 UK on the 20th of each month (after mid-month
    payroll settles; finance data shows credits on the 13th-18th).
  - Suspended by default — unsuspend once broker-sync image is rebuilt
    with Chromium baked in (Dockerfile change shipped separately in the
    broker-sync repo).
  - Init container materialises the storage_state JSON (projected from
    the broker-sync-secrets K8s Secret, synced from Vault by ESO) to the
    encrypted PVC at /data/fidelity_storage_state.json. Chromium then
    loads it.
  - Container: broker-sync fidelity-ingest with WF + FIDELITY_* env
    vars. Memory request 512Mi, limit 1280Mi — Chromium is hungry.
  - Lifecycle ignore_changes on dns_config per the KYVERNO_LIFECYCLE_V1
    convention documented in AGENTS.md.
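
The shape of the interesting spec fields, sketched (fragment only; the pod
template is elided, and `0 5 20 * *` assumes the controller's clock tracks
UK time):

```hcl
spec {
  schedule = "0 5 20 * *" # 05:00 on the 20th of each month
  suspend  = true         # flip to false once the Chromium-equipped image ships
  # job_template { ... } — init container materialises the storage_state JSON,
  # then the fidelity-ingest container runs with WF + FIDELITY_* envs
}
```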

## What is NOT in this change

- The Vault keys fidelity_storage_state + fidelity_plan_id — already
  staged via `vault kv patch` on 2026-04-18.
- Dockerfile Chromium install — in broker-sync repo (commit 7c9be54).
- Prometheus BrokerSyncFidelityFailed alert — deferred until the
  CronJob has run successfully for a month and we have a baseline.
  Existing broker-sync CronJobs also don't have per-job alerts yet;
  filing as a follow-up.

## Verification

### Automated
terraform fmt ran clean. `terragrunt plan` would show a single new
kubernetes_cron_job_v1 (suspended, so no pods scheduled).

### Manual (after apply + image rebuild)

1. Build + push broker-sync:<sha> with Chromium.
2. `scripts/tg apply stacks/broker-sync` (updates image_tag + adds
   fidelity CronJob).
3. Unsuspend: `kubectl -n broker-sync patch cronjob broker-sync-fidelity \
     -p '{"spec":{"suspend":false}}'` OR flip the tf flag.
4. Trigger a test run: `kubectl -n broker-sync create job \
     fidelity-test --from=cronjob/broker-sync-fidelity`.
5. Expect logs: `fidelity-ingest: fetched=N new=N imported=N failed=0`.
6. On FidelitySessionError: run `broker-sync fidelity-seed` locally +
   `vault kv patch secret/broker-sync fidelity_storage_state=@...`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 18:51:20 +00:00
Viktor Barzin
4f54c959d7 [infra] Remove iscsi-csi stack — TrueNAS decommissioned [ci skip]
## Context

The iSCSI CSI driver was deployed against a TrueNAS appliance at 10.0.10.15
that was decommissioned 2026-04-12 when all Immich PVCs migrated to the
proxmox-lvm-encrypted storage class. The stack has been dead code since —
live survey (2026-04-18):

- iscsi-csi namespace: empty (0 resources), 27h old (since last TF apply)
- No iscsi CSI driver registered in the cluster
- No PVs/PVCs reference iscsi
- TF state held only the empty namespace
- helm_release.democratic_csi was not in state (already gone pre-session)

Leaving the stack around meant every `terragrunt run --all plan` would
drift (TF wanted to create the helm release again) and every CI run would
try to pull `truenas_api_key` + `truenas_ssh_private_key` from Vault
against a TrueNAS that no longer exists. Beads tracking: code-gw0.

## This change

- `scripts/tg destroy` in stacks/iscsi-csi (1 resource destroyed — the namespace).
- `rm -rf stacks/iscsi-csi/` — removes modules/, main.tf, terragrunt.hcl,
  secrets symlink, and the 4 terragrunt-generated files (backend.tf,
  providers.tf, cloudflare_provider.tf, tiers.tf).
- Dropped PG schema `iscsi-csi` on `10.0.20.200:5432/terraform_state`
  (its `states` table had 1 row — the current state — dropped by CASCADE).
- Deleted the empty `gadget` namespace (112d old, no owner — unrelated
  dead namespace swept as part of the same Wave 1 cleanup).

## What is NOT in this change

- Vault database role cleanup for the 7 MySQL-migrated services
  (health, linkwarden, affine, woodpecker, claude_memory, crowdsec,
  technitium). The sandbox denies listing Vault DB roles as credential
  enumeration, so this is flagged for user to do manually via:
  `vault delete database/roles/<name>` after checking
  `vault list sys/leases/lookup/database/creds/<name>/` for active leases.

## Reproduce locally
1. `git pull`
2. `ls stacks/ | grep iscsi` → no output
3. `kubectl get ns iscsi-csi gadget` → both NotFound
4. psql to 10.0.20.200:5432/terraform_state → `\dn` shows no iscsi-csi schema

## Test Plan

### Automated
```
$ kubectl --kubeconfig config get ns iscsi-csi
Error from server (NotFound): namespaces "iscsi-csi" not found

$ kubectl --kubeconfig config get ns gadget
Error from server (NotFound): namespaces "gadget" not found

$ PGPASSWORD=... psql -h 10.0.20.200 -U ... -d terraform_state -c '\dn' | grep iscsi
(no output)

$ ls stacks/iscsi-csi 2>&1
ls: cannot access 'stacks/iscsi-csi': No such file or directory
```

### Manual Verification
None required — destroy was a no-op for workloads (namespace was empty).

Closes: code-b6l
Closes: code-gw0

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 18:49:40 +00:00
Viktor Barzin
e1d20457c4 [infra/claude-agent-service] Seed beads metadata + scratch dir at runtime
## Context

Review of the BeadBoard Dispatch wiring found that the claude-agent-service
Dockerfile's `COPY beads/metadata.json /workspace/.beads/metadata.json` and
`COPY agents/beads-task-runner.md /home/agent/.claude/agents/...` both land
on paths that are volume-mounted at runtime:

  - `/workspace` → `claude-agent-workspace-encrypted` PVC (main.tf:394-398)
  - `/home/agent/.claude` → `claude-home` emptyDir (main.tf:424-427)

Kubernetes mounts hide image-layer content at those paths, so the COPYs are
dead. The companion commit in `claude-agent-service` restages both files to
`/usr/share/agent-seed/` (an image-layer path that is never mounted).

Additionally, the beads-task-runner agent rails expect
`/workspace/scratch/<job_id>/` to exist, but nothing was creating it.

## Layout before / after

```
  Before (dead COPYs):

    image layer          runtime (mounted volumes hide the files)
    -----------          -----------------------------------
    /workspace/          <- hidden by PVC mount
      .beads/
        metadata.json    <- UNREACHABLE
    /home/agent/.claude/ <- hidden by emptyDir mount
      agents/
        beads-task-runner.md  <- UNREACHABLE

  After (init container seeds volumes at pod start):

    image layer          runtime
    -----------          ------------------------------------
    /usr/share/agent-seed/
      beads-metadata.json    --+
      beads-task-runner.md    --+-> copied by seed-beads-agent init
                                    container into the mounted volumes
                                    on every pod start:
                                      /workspace/.beads/metadata.json
                                      /workspace/scratch/
                                      /home/agent/.claude/agents/beads-task-runner.md
```

## What

### New init container: `seed-beads-agent`
  - Positioned AFTER `git-init`, BEFORE the main container.
  - Uses the same service image (`${local.image}:${local.image_tag}`) — the
    seed files are baked in at `/usr/share/agent-seed/`.
  - Runs as default uid 1000 (the PVCs are already chowned by `fix-perms`).
  - Shell body:
      mkdir -p /workspace/.beads /workspace/scratch /home/agent/.claude/agents
      cp /usr/share/agent-seed/beads-metadata.json     /workspace/.beads/metadata.json
      cp /usr/share/agent-seed/beads-task-runner.md    /home/agent/.claude/agents/beads-task-runner.md
  - Mounts: `workspace` at `/workspace`, `claude-home` at `/home/agent/.claude`.
  - Resources: 32Mi requests / 64Mi limits (matches `fix-perms`/`copy-claude-creds`).

### Formatting
  - `terraform fmt -recursive` also normalised whitespace in the token-expiry
    locals block and the CronJob container definition. No semantic change.

## What is NOT in this change

  - No image tag bump. The Dockerfile refactor that produces the
    `/usr/share/agent-seed/` path lands in the claude-agent-service repo
    and will roll in on the next CI build. Until that build ships and the
    tag is bumped in this file, the new init container will `cp` from a
    path that doesn't exist yet — so do NOT apply this commit until the
    corresponding image tag bump is ready. The commit is declarative prep.
  - No changes to storage class, RBAC, Service, or any other init.
  - The main container mounts remain unchanged — only the init containers
    prepare volume contents.

## Test Plan

### Automated

```
$ terraform fmt -check -recursive stacks/claude-agent-service/
(no output — clean)

$ terraform -chdir=stacks/claude-agent-service/ init -backend=false
Terraform has been successfully initialized!

$ terraform -chdir=stacks/claude-agent-service/ validate
Warning: Deprecated Resource (pre-existing; use kubernetes_namespace_v1)
Success! The configuration is valid, but there were some validation warnings
as shown above.
```

### Manual Verification (after image bump + apply)

1. Bump `local.image_tag` in main.tf to the SHA of a build that has
   `/usr/share/agent-seed/*` (verify with `docker inspect $IMAGE | jq ...`
   or `kubectl run tmp --image ... -- ls /usr/share/agent-seed`).
2. `scripts/tg apply stacks/claude-agent-service`
3. `kubectl -n claude-agent get pods -w` — all init containers complete.
4. `kubectl -n claude-agent exec deploy/claude-agent-service -c claude-agent-service -- ls -la /workspace/.beads/metadata.json /home/agent/.claude/agents/beads-task-runner.md /workspace/scratch`
   Expected: all three paths exist; first two are regular files with the
   expected content, `scratch` is a directory.
5. `kubectl -n claude-agent exec deploy/claude-agent-service -c claude-agent-service -- jq -r .dolt_server_host /workspace/.beads/metadata.json`
   Expected: `dolt.beads-server.svc.cluster.local`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:23:19 +00:00
Viktor Barzin
c9d221d578 [infra] Establish KYVERNO_LIFECYCLE_V1 drift-suppression convention [ci skip]
## Context

Phase 1 of the state-drift consolidation audit (plan Wave 3) identified that
the entire repo leans on a repeated `lifecycle { ignore_changes = [...dns_config] }`
snippet to suppress Kyverno's admission-webhook dns_config mutation (the ndots=2
override that prevents NxDomain search-domain flooding). 27 occurrences across
19 stacks. Without this suppression, every pod-owning resource shows perpetual
TF plan drift.

The original plan proposed a shared `modules/kubernetes/kyverno_lifecycle/`
module emitting the ignore-paths list as an output that stacks would consume in
their `ignore_changes` blocks. That approach is architecturally impossible:
Terraform's `ignore_changes` meta-argument accepts only static attribute paths
— it rejects module outputs, locals, variables, and any expression (the HCL
spec evaluates `lifecycle` before the regular expression graph). So a DRY
module cannot exist. The canonical pattern IS the repeated snippet.

What the snippet was missing was a *discoverability tag* so that (a) new
resources can be validated for compliance, (b) the existing 27 sites can be
grep'd in a single command, and (c) future maintainers understand the
convention rather than each reinventing it.

## This change

- Introduces `# KYVERNO_LIFECYCLE_V1` as the canonical marker comment.
  Attached inline on every `spec[0].template[0].spec[0].dns_config` line
  (or `spec[0].job_template[0].spec[0]...` for CronJobs) across all 27
  existing suppression sites.
- Documents the convention with rationale and copy-paste snippets in
  `AGENTS.md` → new "Kyverno Drift Suppression" section.
- Expands the existing `.claude/CLAUDE.md` Kyverno ndots note to reference
  the marker and explain why the module approach is blocked.
- Updates `_template/main.tf.example` so every new stack starts compliant.

## What is NOT in this change

- The `kubernetes_manifest` Kyverno annotation drift (beads `code-seq`)
  — that is Phase B with a sibling `# KYVERNO_MANIFEST_V1` marker.
- Behavioral changes — every `ignore_changes` list is byte-identical
  save for the inline comment.
- The fallback module the original plan anticipated — skipped because
  Terraform rejects expressions in `ignore_changes`.
- `terraform fmt` cleanup on adjacent unrelated blocks in three files
  (claude-agent-service, freedify/factory, hermes-agent). Reverted to
  keep this commit scoped to the convention rollout.

## Before / after

Before (cannot distinguish accidental-forgotten from intentional-convention):
```hcl
lifecycle {
  ignore_changes = [spec[0].template[0].spec[0].dns_config]
}
```

After (greppable, self-documenting, discoverable by tooling):
```hcl
lifecycle {
  ignore_changes = [spec[0].template[0].spec[0].dns_config] # KYVERNO_LIFECYCLE_V1
}
```
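
For the CronJob sites, where the ignored path nests under `job_template` (elided above), the snippet is assumed to take this shape under the stock `kubernetes_cron_job_v1` schema (a sketch, not copied from the diff):

```hcl
lifecycle {
  # CronJob pod template sits one level deeper; full path assumed from the
  # kubernetes_cron_job_v1 provider schema
  ignore_changes = [spec[0].job_template[0].spec[0].template[0].spec[0].dns_config] # KYVERNO_LIFECYCLE_V1
}
```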

## Test Plan

### Automated
```
$ rg -c 'KYVERNO_LIFECYCLE_V1' stacks/ -g '*.tf' -g '*.tf.example' \
    | awk -F: '{s+=$2} END {print s}'
27

$ git diff --name-only | grep -E '\.(tf|tf\.example|md)$' | wc -l
21

# All code-file diffs are 1 insertion + 1 deletion per marker site,
# except beads-server (3), ebooks (4), immich (3), uptime-kuma (2).
$ git diff --stat stacks/ | tail -1
20 files changed, 45 insertions(+), 28 deletions(-)
```

### Manual Verification

No apply required — HCL comments only. Zero effect on any stack's plan output.
Future audits: `rg 'KYVERNO_LIFECYCLE_V1' stacks/ | wc -l` must grow as new
pod-owning resources are added.

## Reproduce locally
1. `cd infra && git pull`
2. `rg 'KYVERNO_LIFECYCLE_V1' stacks/` → expect 27 hits in 19 files
3. Grep any new `kubernetes_deployment` for the marker; absence = missing
   suppression.

Closes: code-28m

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:15:51 +00:00
Viktor Barzin
a62b43d19e [infra] Document intended ignore_changes drift-workarounds [ci skip]
## Context

The infra repo has 31 `ignore_changes` blocks. Phase 1 of the state-drift
consolidation audit classified 21 as legitimate (immutable fields, cloud-computed
values) and 10 as intentional workarounds for known drift sources. Those 10
were indistinguishable from accidental/forgotten drift suppression without
reading the surrounding context.

This commit adds a uniform `# DRIFT_WORKAROUND: <reason>, reviewed 2026-04-18`
marker above the 8 intended-workaround blocks (6 CI image-tag decoupling + 2
non-deterministic secret hashes) so they are easy to distinguish from
accidental drift suppression during future audits.
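
The marker's shape, as a sketch (the reason text here is hypothetical; each real site carries its own):

```hcl
# DRIFT_WORKAROUND: CI pushes new image tags out of band, reviewed 2026-04-18
lifecycle {
  ignore_changes = [spec[0].template[0].spec[0].container[0].image]
}
```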

## What is NOT in this change

- Functional behavior — `ignore_changes` lists are byte-identical.
- The Kyverno `dns_config` ignore paths (covered by Wave 3 shared module).
- Workarounds being removed — the CI decoupling is intentional by user decision.

## Files touched

CI image-tag decoupling (6):
- stacks/k8s-portal/modules/k8s-portal/main.tf (also has dns_config for Kyverno)
- stacks/novelapp/main.tf
- stacks/claude-memory/main.tf
- stacks/plotting-book/main.tf
- stacks/trading-bot/main.tf (api deployment)
- stacks/trading-bot/main.tf (workers deployment — 6 containers)

Non-deterministic secret hashes (2):
- stacks/owntracks/main.tf (htpasswd bcrypt)
- stacks/mailserver/modules/mailserver/main.tf (postfix-accounts.cf)

## Test Plan

### Automated
```
$ rg DRIFT_WORKAROUND stacks/ | wc -l
8

$ terraform fmt -recursive stacks/k8s-portal stacks/novelapp stacks/claude-memory \
    stacks/plotting-book stacks/trading-bot stacks/owntracks stacks/mailserver
(no output — already formatted)

$ git diff --stat
 stacks/claude-memory/main.tf                 | 1 +
 stacks/k8s-portal/modules/k8s-portal/main.tf | 1 +
 stacks/mailserver/modules/mailserver/main.tf | 3 ++-
 stacks/novelapp/main.tf                      | 1 +
 stacks/owntracks/main.tf                     | 1 +
 stacks/plotting-book/main.tf                 | 1 +
 stacks/trading-bot/main.tf                   | 2 ++
 7 files changed, 9 insertions(+), 1 deletion(-)
```

### Manual Verification
No apply required — HCL comments only, zero effect on plan output.

## Reproduce locally
1. `cd infra && git pull`
2. `rg "DRIFT_WORKAROUND.*reviewed 2026-04-18" stacks/ | wc -l` → expect 8
3. `terraform fmt -check -recursive stacks/` → expect clean exit

Closes: code-yrg

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 14:08:10 +00:00
Viktor Barzin
91165e31b9 [infra/beads-server] Wire BeadBoard to claude-agent-service
## Context

BeadBoard is the Next.js task visualization dashboard shipped in this
stack. We want users to trigger headless Claude agent runs directly from
a beads task row — "one-click dispatch" — instead of copy-pasting `bd`
IDs into a terminal. The agent runs in-cluster as claude-agent-service
(see stacks/claude-agent-service/), protected by a bearer token in
Vault at secret/claude-agent-service/api_bearer_token.

For BeadBoard to POST to /execute we need the service URL and the
bearer token available inside the pod as env vars. The URL is static
(cluster DNS); the token must come through External Secrets Operator
so rotation in Vault propagates without re-applying Terraform.

Secondary cleanup: the container was still pinned to `:latest`, which
violates the 8-char-SHA convention and causes stale pulls through the
registry cache (see .claude/CLAUDE.md, Docker images). The image tag
is now variable-driven; the GHA pipeline will override the default
once it publishes the first SHA.

## This change

- Adds an ExternalSecret `beadboard-agent-service` in the
  `beads-server` namespace, mirroring the pattern in
  stacks/claude-agent-service/main.tf (same Vault path
  `secret/claude-agent-service`, same `vault-kv` ClusterSecretStore,
  same 15m refresh). Exposes exactly one key: `api_bearer_token`.

- Adds two env vars to the `beadboard` container:
  - `CLAUDE_AGENT_SERVICE_URL` — static cluster URL
    (`http://claude-agent-service.claude-agent.svc.cluster.local:8080`)
  - `CLAUDE_AGENT_BEARER_TOKEN` — `secret_key_ref` pointing at the
    ESO-managed Secret, key `api_bearer_token` (see the sketch after this list)

- Adds `reloader.stakater.com/auto = "true"` on the Deployment's
  top-level metadata — matches the convention used by rybbit,
  claude-memory, onlyoffice. When ESO refreshes the K8s Secret
  because Vault rotated the token, Reloader restarts the pod so the
  new token is picked up (env vars are read once at boot).

- Adds `variable "beadboard_image_tag"` (default `"latest"`, with a
  one-line comment flagging the temporary default). The image
  reference now interpolates `${var.beadboard_image_tag}`. No tfvars
  file is touched — orchestrator will flip the default to the first
  real 8-char SHA once GHA publishes it.
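
A sketch of the env wiring under the stock `kubernetes_deployment_v1` container schema (only the two new entries shown; the surrounding container block is elided):

```hcl
env {
  name  = "CLAUDE_AGENT_SERVICE_URL"
  value = "http://claude-agent-service.claude-agent.svc.cluster.local:8080"
}
env {
  name = "CLAUDE_AGENT_BEARER_TOKEN"
  value_from {
    secret_key_ref {
      name = "beadboard-agent-service" # the ESO-managed Secret
      key  = "api_bearer_token"
    }
  }
}
```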

## What is NOT in this change

- No GHA workflow additions. The pipeline that builds
  `registry.viktorbarzin.me:5050/beadboard` lives in the BeadBoard
  repo and is out of scope here.
- No Vault-side changes. `secret/claude-agent-service/api_bearer_token`
  already exists (it powers the claude-agent-service deployment
  itself).
- No Terraform `apply`. Orchestrator applies.

## Data flow

  Vault (secret/claude-agent-service)
    │  refresh every 15m
    ▼
  ESO → K8s Secret `beadboard-agent-service` (beads-server ns)
    │  env valueFrom.secretKeyRef
    ▼
  BeadBoard pod (CLAUDE_AGENT_BEARER_TOKEN env)
    │  Authorization: Bearer <token>
    ▼
  claude-agent-service.claude-agent.svc:8080 /execute

On Vault rotation: ESO picks up new value at next refresh → K8s
Secret data changes → Reloader sees annotation + referenced Secret
changed → rolling-recreates the beadboard pod with the new token.

## Test Plan

### Automated
- `terraform fmt -recursive stacks/beads-server/` — clean (formatted
  the file once; subsequent run is a no-op).
- `terraform -chdir=stacks/beads-server validate` (after
  `terraform init -backend=false`) — `Success! The configuration is
  valid`. The 14 "Deprecated Resource" warnings are pre-existing
  (`kubernetes_namespace` vs `_v1` etc.) and unrelated to this
  change.

### Manual Verification
1. Orchestrator applies:
   `scripts/tg -chdir=stacks/beads-server apply`
2. Verify the ExternalSecret synced:
   `kubectl -n beads-server get externalsecret beadboard-agent-service`
   Expected: `Ready=True`, `SyncedAt` recent.
3. Verify the K8s Secret exists with one key:
   `kubectl -n beads-server get secret beadboard-agent-service -o jsonpath='{.data.api_bearer_token}' | base64 -d | head -c 8`
   Expected: first 8 chars of the bearer token.
4. Verify the deployment picked up the env vars:
   `kubectl -n beads-server get deploy beadboard -o yaml | grep -A2 CLAUDE_AGENT`
   Expected: both env entries present, bearer via `secretKeyRef`.
5. Verify the reloader annotation is on the Deployment metadata:
   `kubectl -n beads-server get deploy beadboard -o jsonpath='{.metadata.annotations.reloader\.stakater\.com/auto}'`
   Expected: `true`.
6. Verify the image tag resolved to the variable default (for now):
   `kubectl -n beads-server get deploy beadboard -o jsonpath='{.spec.template.spec.containers[0].image}'`
   Expected: `registry.viktorbarzin.me:5050/beadboard:latest`
   (will become `...:<sha>` once `beadboard_image_tag` default is
   updated).
7. Smoke-test the env var inside the pod:
   `kubectl -n beads-server exec deploy/beadboard -- sh -c 'printenv CLAUDE_AGENT_SERVICE_URL; printenv CLAUDE_AGENT_BEARER_TOKEN | head -c 8'`
   Expected: URL printed, first 8 chars of token printed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:53:51 +00:00
Viktor Barzin
82b7866bc9 [claude-agent-service] Remove orphaned DevVM SSH key wiring
## Context

The remote-executor pattern that SSHed into the DevVM (10.0.10.10) to run
`claude -p` was fully migrated to the in-cluster service
`claude-agent-service.claude-agent.svc:8080/execute` in commits 42f1c3cf and
99180bec (2026-04-18). Five parallel codebase audits (GH Actions, Woodpecker
+ scripts, K8s CronJobs/Deployments, n8n, local scripts/hooks/docs) confirmed
zero remaining SSH+claude sites.

This commit removes two cleanup artifacts left behind by that migration.

## This change

1. Deletes `.claude/skills/archived/setup-remote-executor.md` — the archived
   skill doc for the obsolete SSH-based pattern. Already in `archived/`,
   harmless but noise; deleting prevents anyone copy-pasting the old approach.

2. Removes `kubernetes_secret.ssh_key` from
   `stacks/claude-agent-service/main.tf`. The Secret was created from the
   `devvm_ssh_key` field at Vault `secret/ci/infra` but was never mounted
   into the agent pod. The pod's `git-init` init container uses HTTPS +
   `$GITHUB_TOKEN` exclusively and force-rewrites every `git@github.com:`
   and `https://github.com/` URL via `git config url.insteadOf`, so no
   downstream `git` invocation could fall through to SSH even if it tried.

3. Removes the now-orphaned `data "vault_kv_secret_v2" "ci_secrets"` block —
   the SSH key resource was its only consumer.

## What is NOT in this change

- The `devvm_ssh_key` field at Vault `secret/ci/infra` stays in place.
  Removing it requires read/modify/put of the full secret and the upside
  is one unused Vault key. Not worth it without strong justification.
- DevVM host decommission is out of scope (separate audit needed for
  non-Claude users of the host).
- Pre-existing `terraform fmt` warnings at lines 464-505 (CronJob alignment)
  left untouched per no-adjacent-refactor rule.

## Test plan

### Automated

- `terraform fmt -check stacks/claude-agent-service/main.tf` — only the
  pre-existing lines 464-505 are flagged; no new fmt warnings introduced
  by these deletions.

### Manual verification

1. `cd infra/stacks/claude-agent-service && ../../scripts/tg apply`
2. Expect exactly one resource destroyed: `kubernetes_secret.ssh_key`.
   The `ci_secrets` data source removal is plan-time only; does not appear
   in resource counts.
3. `kubectl -n claude-agent get secret ssh-key` → `NotFound`.
4. `kubectl -n claude-agent get pod` → both pods Running, no restart events.
5. Submit a synthetic agent job via the HTTP API to confirm the pipeline still
   works: `curl -X POST http://claude-agent-service.claude-agent.svc.cluster.local:8080/execute`
   with a minimal prompt; expect the job to complete with `exit_code=0`.

Closes: code-bck

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:31:15 +00:00
Viktor Barzin
9a2e920006 [rybbit] Narrow CF Worker routes to SITE_IDS hosts — fix free-tier quota breach
## Context

The `rybbit-analytics` Cloudflare Worker hit the free-tier quota of 100k
requests/day. CF GraphQL analytics showed **97,153 invocations in the last
24h**, up from ~0 before 2026-04-17 21:26 UTC when Rybbit script injection
migrated off the broken Traefik rewrite-body plugin (Yaegi ResponseWriter
bug on Traefik v3.6.12) onto this Worker.

Root cause: `wrangler.toml` registered two wildcard routes
(`viktorbarzin.me/*` + `*.viktorbarzin.me/*`) which match every Cloudflare-
proxied request on the zone. Only 27 of ~119 proxied hostnames appear in
`SITE_IDS` in `index.js`; the rest burn Worker invocations for nothing since
`siteId` is `null` and the Worker no-ops. Worse, the wildcard caught
`rybbit.viktorbarzin.me` itself — every tracker `script.js` fetch and event
POST round-trip was spawning its own Worker invocation (self-amplification).

CF GraphQL per-host breakdown (last 24h, zone `viktorbarzin.me`):
- Top waste (NOT in SITE_IDS): tuya-bridge 96.6k, beadboard 55.8k,
  terminal 30.2k, authentik 19.9k, claude-memory 12.6k
- Sum of 27 SITE_IDS hosts: 47.2k
- `rybbit.viktorbarzin.me` self-amplifier: 782
- Projected post-narrow: 46.4k/day (52% reduction, well under quota)

## This change

Replaces the two wildcards with an explicit list of the **26** hostnames
present in `SITE_IDS`. `rybbit.viktorbarzin.me` is deliberately excluded
even though it has a site ID — it serves `/api/script.js` (JS) and
`/api/track` (JSON), both of which fail the Worker's `text/html`
content-type guard anyway. Leaving it routed just burned invocations.

    BEFORE                              AFTER
    ──────────────────────────          ──────────────────────────────────
    viktorbarzin.me/*          ┐        viktorbarzin.me/*          ┐
    *.viktorbarzin.me/*        ┘        www.viktorbarzin.me/*      │
                                        actualbudget.vb.me/*       │
    → matches ~119 hosts                ... (26 total)             │ → matches
    → ~97k Worker inv/day                stirling-pdf.vb.me/*      │   only 26
    → rybbit → self-amplifies            vaultwarden.vb.me/*       ┘   specific
                                                                        hosts
                                        rybbit.vb.me INTENTIONALLY
                                        EXCLUDED (self-amplifier)

Deployment is unchanged — this Worker is not in Terraform. Deploy from
`stacks/rybbit/worker/` via:

    CLOUDFLARE_EMAIL=vbarzin@gmail.com \
    CLOUDFLARE_API_KEY=$(vault kv get -field=cloudflare_api_key secret/platform) \
    npx --yes wrangler@latest deploy

`wrangler deploy` replaces all worker routes on the zone with the list from
`wrangler.toml`, so there is no cleanup step. Already deployed today as
version `d7f83980-a499-40f5-ba55-f8e18d531863` — this commit just captures
the source of truth in git.

## What is NOT in this change

- Self-hosted injection (nginx `sub_filter` sidecar, compiled Traefik
  plugin). Deferred — revisit only if analytics traffic grows past 80k/day
  again, or if we add more high-traffic hosts to `SITE_IDS`.
- Cloudflare Workers Paid plan ($5/mo for 10M requests). User declined.
- Moving the Worker into Terraform. Out of scope.
- Any Rybbit backend/frontend changes. Rybbit itself continues running.

## Test plan

### Automated

Post-deploy CF API enumeration of zone routes:

    $ curl -s -H "X-Auth-Email: $CF_EMAIL" -H "X-Auth-Key: $CF_KEY" \
        "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/workers/routes" \
      | jq -r '.result[] | "\(.pattern)\t→ \(.script)"' | wc -l
    26

    # Wildcards gone:
    $ curl -s ... | jq -r '.result[].pattern' | grep -c '\*\.'
    0

### Manual Verification

Script injection behaviour, verified via `curl`:

1. SITE_IDS host — script IS injected:

       $ curl -s -L https://viktorbarzin.me/ | grep -oE '<script[^>]*rybbit[^>]*>'
       <script src="https://rybbit.viktorbarzin.me/api/script.js"
         data-site-id="da853a2438d0" defer>

       $ curl -s -L https://calibre.viktorbarzin.me/ | grep -oE '<script[^>]*rybbit[^>]*>'
       <script src="https://rybbit.viktorbarzin.me/api/script.js"
         data-site-id="ce5f8aed6bbb" defer>

2. Non-SITE_IDS host — script NOT injected:

       $ curl -s -L https://tuya-bridge.viktorbarzin.me/ | grep -c 'data-site-id'
       0

3. `rybbit.viktorbarzin.me` bypasses Worker entirely — tracker returns raw JS:

       $ curl -sI https://rybbit.viktorbarzin.me/api/script.js | grep -i content-type
       content-type: application/javascript; charset=utf-8

### Reproduce locally

    # 1. Confirm the Worker sees only the 26 narrowed routes.
    CF_EMAIL=vbarzin@gmail.com
    CF_KEY=$(vault kv get -field=cloudflare_api_key secret/platform)
    ZONE_ID=fd2c5dd4efe8fe38958944e74d0ced6d
    curl -s -H "X-Auth-Email: $CF_EMAIL" -H "X-Auth-Key: $CF_KEY" \
      "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/workers/routes" \
      | jq -r '.result[] | .pattern' | sort

    # 2. 24h after deploy, re-check invocation count — expect < 80k.
    curl -s https://api.cloudflare.com/client/v4/graphql \
      -H "X-Auth-Email: $CF_EMAIL" -H "X-Auth-Key: $CF_KEY" \
      -H "Content-Type: application/json" \
      -d '{"query":"query($acc:String!,$since:Time!,$until:Time!){viewer{accounts(filter:{accountTag:$acc}){workersInvocationsAdaptive(limit:100,filter:{datetime_geq:$since,datetime_leq:$until}){sum{requests} dimensions{scriptName date}}}}}",
           "variables":{"acc":"02e035473cfc4834fb10c5d35470d8b4",
                        "since":"'"$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)"'",
                        "until":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"}}'

Follow-up monitoring tracked in code-dka (P3, 3-day check).

Closes: code-l9b

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:23:15 +00:00
Viktor Barzin
9ea7eec362 [actualbudget] Upgrade 26.3.0 → 26.4.0 for native Sankey report
## Context

Actual Budget v26.4.0 (released 2026-04-05) re-introduces the Sankey
chart report for income/expense flow visualization (PR #7220). An earlier
experimental implementation was deleted in March 2024 (PR #2417) but a
proper reimplementation with "Other" grouping, date-range selection, and
percentage toggle is now shipped behind the experimental feature flag.

Viktor wanted Sankey visualization of budget cash flow; this is the lowest-
cost path since his existing Actual Budget deployment already holds all the
transaction data.

## This change

Bumps the `tag` input on all three factory module calls (viktor, anca, emo)
from `26.3.0` to `26.4.0`. No breaking changes, schema migrations, or config
changes per the 26.4.0 release notes.

## Rollout

Applied via `scripts/tg apply --non-interactive`. All three pods rolled
successfully to `actualbudget/actual-server:26.4.0` and passed readiness
probes. The http-api sidecars (`jhonderson/actual-http-api`) were untouched.

## Post-upgrade

Users need to toggle Settings → Experimental features → Sankey report to
access the chart, then Reports → new Sankey widget.

Closes: code-oof

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:19:27 +00:00
Viktor Barzin
cacc282f1a .gitignore: ignore terragrunt_rendered.json debug output
Generated by `terragrunt render-json` for debugging. Not meant to be
tracked — a stale one was sitting untracked in stacks/dbaas/.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 13:18:05 +00:00
Viktor Barzin
4eb68d6b1a [meshcentral] Remove accidentally-committed Terragrunt-generated files
My previous commit (c0ac24a5, [meshcentral] Import existing cluster
state + PVC) unintentionally committed two Terragrunt-generated
provider/locals files. These are auto-generated on every plan/apply
(marked 'Generated by Terragrunt. Sig:') and do not belong in the
repo. Mirrors 3e11bd1b which did the same cleanup for kyverno.

Removes from tracking only — files remain on disk so concurrent work
is unaffected.

Updates: code-w97
2026-04-18 12:35:44 +00:00
Viktor Barzin
c0ac24a54c [meshcentral] Import existing cluster state + PVC (bd-w97)
Imported the two proxmox-lvm-encrypted PVCs into the Tier 1 PG state.
All other declared resources (namespace, deployment, service, ingress,
NFS-backed PV/PVC, tls secret) were already state-managed.

Imported:
- kubernetes_persistent_volume_claim.data_encrypted
    (meshcentral/meshcentral-data-encrypted, proxmox-lvm-encrypted, 1Gi)
- kubernetes_persistent_volume_claim.files_encrypted
    (meshcentral/meshcentral-files-encrypted, proxmox-lvm-encrypted, 1Gi)

Pre-import plan: 2 to add, 3 to change, 0 to destroy
Post-import plan: 0 to add, 5 to change, 0 to destroy (benign drift)
Apply: 0 added, 5 changed, 0 destroyed

Benign drift reconciled on apply:
- PVC wait_until_bound attribute aligned (true -> false)
- tls-secret Kyverno sync labels cleared
- deployment/namespace annotation drift

Source reconciliation: none required. Both declared PVCs already match
the cluster (proxmox-lvm-encrypted, 1Gi, RWO, names identical). NFS
PV/PVC meshcentral-backups-host (nfs-truenas, 10Gi, RWX) remained
bound throughout. Deployment kept 1/1 replicas on the same pod
(meshcentral-6c4f47c6f8-mj8sk).

Commits the auto-generated cloudflare_provider.tf and tiers.tf so the
stack matches the repo convention used by its peers.

Updates: code-w97
2026-04-18 12:35:26 +00:00
Viktor Barzin
3e11bd1b67 [kyverno] Remove accidentally-committed Terragrunt-generated files
My previous commit (dacf3d9e, [kyverno] Import existing cluster state)
unintentionally picked up two Terragrunt-generated provider/locals
files from the meshcentral stack that a parallel worker had just
created. These are auto-generated on every plan/apply (marked
"Generated by Terragrunt. Sig:") and do not belong in the repo.

Removes from tracking only — files remain on disk so concurrent work
is unaffected.

Files removed:
- stacks/meshcentral/cloudflare_provider.tf
- stacks/meshcentral/tiers.tf

No impact on the kyverno import work. State-level changes from
dacf3d9e (3 imports + 3 in-place updates) stand.

Updates: code-w97

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 12:34:59 +00:00
Viktor Barzin
dacf3d9e11 [kyverno] Import existing cluster state (bd-w97)
Imported 3 missing cluster resources into the Tier 1 PG state for the
kyverno stack. The Helm release, 6 PriorityClasses, 14 ClusterPolicies,
both Secrets (registry-credentials, tls-secret), and all prior RBAC
resources were already managed in state. The strip-cpu-limits
ClusterPolicy (commit 1de2ee30, 56m prior to this import) was already
in state from its targeted apply.

Resources imported:
- module.kyverno.kubernetes_cluster_role_v1.kyverno_cleanup_pods
  (kyverno:cleanup-controller:pods — RBAC for ClusterCleanupPolicy)
- module.kyverno.kubernetes_cluster_role_binding_v1.kyverno_cleanup_pods
  (kyverno:cleanup-controller:pods — binding to cleanup-controller SA)
- module.kyverno.kubernetes_manifest.cleanup_failed_pods
  (apiVersion=kyverno.io/v2,kind=ClusterCleanupPolicy,name=cleanup-failed-pods)

All three originated from commit cf578516 (auto-cleanup failed/evicted
pods), which added the declarations but apparently never made it into
PG state before the global state reorg.

Pre-import plan:  3 to add,  2 to change, 0 to destroy
Post-import plan: 0 to add,  3 to change, 0 to destroy (benign)
Apply:            0 added,   3 changed,   0 destroyed

Benign drift reconciled on apply:
- cleanup_failed_pods manifest field populated in state post-import
  (annotations re-applied, no spec change)
- registry_credentials + tls_secret: null `generate.kyverno.io/clone-source`
  label dropped from Terraform metadata (no K8s object change — the label
  was only `null` in state, never existed on the live Secret)

Safety checks — all clean:
- ClusterPolicy count: 16 (unchanged, 14 owned here + 1 external
  goldilocks-vpa-auto-mode + strip-cpu-limits); all status=Ready=True
- ClusterCleanupPolicy cleanup-failed-pods: intact, schedule 15 * * * *
- helm_release.kyverno: no diff (revision unchanged)
- Mutating/validating webhook configurations: 3 + 7 intact
- All 4 Kyverno Deployments Running (admission x2, background, cleanup, reports)

Kyverno failurePolicy stays Ignore (forceFailurePolicyIgnore=true) so
admission degrades open if ever unavailable.

Updates: code-w97

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 12:34:32 +00:00
Viktor Barzin
50dea8f0a7 [monitoring] Add Claude OAuth token expiry monitoring + alerts
## Context

The new CLAUDE_CODE_OAUTH_TOKEN mechanism (commit 8a054752) uses
long-lived 1-year tokens minted via `claude setup-token`. Tokens don't
auto-refresh — at the 1-year mark they expire hard and the upgrade
agent stops working. We need to be told 30 days ahead, not find out
when DIUN fires and gets 401 again.

A cron rotator doesn't make sense here (tokens don't refresh, they
just expire) so we alert instead. Two spares at
`secret/claude-agent-service-spare-{1,2}` provide failover runway —
monitor covers all three.

## This change

**CronJob** (`claude-agent` ns, every 6h): reads a ConfigMap
containing `<path> → expiry_unix_timestamp` entries, pushes
`claude_oauth_token_expiry_timestamp{path="..."}` and
`claude_oauth_expiry_monitor_last_push_timestamp` to Pushgateway at
`prometheus-prometheus-pushgateway.monitoring:9091`.

**ConfigMap** generated from a Terraform local `claude_oauth_token_mint_epochs`
— source of truth for mint times. On rotation, update the map + apply.
TTL is a shared local (365d).
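
A sketch of the locals' shape (epoch values hypothetical):

```hcl
locals {
  # Mint times (unix epoch) per token path; update on rotation, then apply.
  claude_oauth_token_mint_epochs = {
    primary   = 1776514329 # hypothetical
    "spare-1" = 1776514180
    "spare-2" = 1776514329
  }
  claude_oauth_token_ttl_seconds = 365 * 24 * 60 * 60 # shared 365d TTL
}
```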

**PrometheusRules** (in prometheus_chart_values.tpl):
- `ClaudeOAuthTokenExpiringSoon`  — <30d, warning, for 1h
- `ClaudeOAuthTokenCritical`      — <7d,  critical, for 10m
- `ClaudeOAuthTokenMonitorStale`  — last push >48h, warning
- `ClaudeOAuthTokenMonitorNeverRun` — metric absent for 2h, warning

Alert labels include `{{ $labels.path }}` so we know which token is
expiring (primary / spare-1 / spare-2).

## Verification

```
$ kubectl -n claude-agent create job --from=cronjob/claude-oauth-expiry-monitor manual
$ curl pushgateway/metrics | grep claude_oauth_token_expiry
claude_oauth_token_expiry_timestamp{...,path="primary"} 1.808064429e+09
claude_oauth_token_expiry_timestamp{...,path="spare-1"} 1.80806428e+09
claude_oauth_token_expiry_timestamp{...,path="spare-2"} 1.808064429e+09

$ query: (claude_oauth_token_expiry_timestamp - time()) / 86400
  primary: 365.2 days
  spare-1: 365.2 days
  spare-2: 365.2 days
```

## Rotation playbook (future)

1. `kubectl run -it --rm --image=registry.viktorbarzin.me/claude-agent-service:latest tokmint -- claude setup-token`
   (or harvest via `harvest3.py` pattern in memory for headless flow)
2. `vault kv patch secret/claude-agent-service claude_oauth_token=<new>`
3. Update `claude_oauth_token_mint_epochs["primary"]` in
   `stacks/claude-agent-service/main.tf` with new unix timestamp
4. `scripts/tg apply` claude-agent-service + monitoring
5. Alert clears within 6h (next cron tick) + 1h of the
   `ClaudeOAuthTokenExpiringSoon` "for:" duration

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 12:27:11 +00:00
Viktor Barzin
8a05475218 [claude-agent-service] Add CLAUDE_CODE_OAUTH_TOKEN env var — 1-year long-lived auth
## Context

Earlier today we hit a silent auth failure on the upgrade agent: the
short-lived `sk-ant-oat01-*` access token in `.credentials.json` had
expired and the CLI's refresh path failed (refresh token either stale
or invalidated after the creds sat in Vault for 5 days).

The real fix isn't "refresh more often" — it's switching to the
long-lived auth mechanism `claude setup-token` provides. Unlike
`claude login` (OAuth flow → 6–8h access token + refresh token JSON),
`setup-token` mints a single opaque token valid for **1 year** that
the CLI consumes via `CLAUDE_CODE_OAUTH_TOKEN` env var. No refresh
dance, no JSON file, no rotation for a year.

## This change

Adds `CLAUDE_CODE_OAUTH_TOKEN` to the existing
`claude-agent-secrets` ExternalSecret, sourced from a new
`claude_oauth_token` field at `secret/claude-agent-service`. The
container already pulls that secret via `envFrom`, so no other wiring
needed.
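
Assuming the ExternalSecret follows ESO's usual `spec.data` schema, the added entry would look roughly like this sketch:

```hcl
locals {
  # New entry appended to the ExternalSecret's spec.data list (sketch)
  claude_oauth_entry = {
    secretKey = "CLAUDE_CODE_OAUTH_TOKEN"
    remoteRef = {
      key      = "claude-agent-service"
      property = "claude_oauth_token"
    }
  }
}
```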

The Claude CLI prefers `CLAUDE_CODE_OAUTH_TOKEN` over the OAuth JSON
file when both are present, so this is additive — `.credentials.json`
stays mounted as a fallback while we validate the long-lived path.
Future cleanup can remove the JSON mount entirely.

Verified E2E: synthetic DIUN webhook for `docker.io/library/httpd`
→ n8n → claude-agent-service /execute → agent job `fea5ff70dcfe`
completed in 30s with exit_code=0, agent correctly identified no
matching stack and aborted without changes. No API auth errors.

## Spares

Harvested two additional long-lived tokens and stored them at
`secret/claude-agent-service-spare-{1,2}` for failover if the
primary is compromised or revoked. Verified both coexist with the
primary (no revocation on mint).

## What is NOT in this change

- No removal of `.credentials.json` mount or its Vault source (keep
  as fallback until we've run for 24h on env-var auth with no issues).
- No cron rotator — 1-year TTL means this can be a yearly manual
  rotation, alerted on from Vault metadata. If we add rotation, we'll
  source from the spares pool rather than minting new tokens.

## Reproduce locally

```
1. vault login -method=oidc
2. vault kv get -field=claude_oauth_token secret/claude-agent-service | head -c 25
3. cd stacks/claude-agent-service && ../../scripts/tg apply
4. kubectl -n claude-agent exec deploy/claude-agent-service -- \
     printenv CLAUDE_CODE_OAUTH_TOKEN   # should be 108 chars
5. Fire synthetic DIUN webhook (see docs/architecture/automated-upgrades.md)
```

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 12:12:30 +00:00
Viktor Barzin
50e8184d99 [uptime-kuma] Codify MySQL monitor (id=663) via idempotent sync CronJob
## Context

Monitor id 663 "MySQL Standalone (dbaas)" was created manually yesterday via
the `uptime-kuma-api` Python library when the dbaas stack migrated from
InnoDB Cluster to standalone MySQL. It worked and was UP, but lived only in
Uptime Kuma's MariaDB — if UK's DB were wiped or restored from an older
backup, the monitor would be lost.

## This change

Adds declarative, self-healing management for internal-service monitors
(databases, non-HTTP endpoints) that can't be discovered from ingress
annotations. Modelled on the existing `external-monitor-sync` CronJob.

- `local.internal_monitors` — list of desired monitors (name, type,
  connection string, Vault password key, interval, retries). Seeded with
  the MySQL Standalone monitor. Add new entries here to manage more.
- `kubernetes_secret.internal_monitor_sync` — pulls admin password and all
  referenced DB passwords from Vault `secret/viktor` at apply time. Secret
  key names are derived from monitor name (`DB_PASSWORD_<upper_snake>`).
- `kubernetes_config_map_v1.internal_monitor_targets` — renders the target
  list to JSON for the sync container.
- `kubernetes_cron_job_v1.internal_monitor_sync` — runs every 10 min,
  looks up monitors by name, creates if missing, patches if drifted,
  leaves id and history untouched when already in desired state.

## Why this approach (Option B, not a Terraform provider)

The `louislam/uptime-kuma` Terraform provider does NOT exist in the public
registry (verified — only a CLI tool of the same name). Option A from the
task brief was therefore unavailable. Option B (idempotent K8s CronJob)
matches the established pattern in the same module for
`external-monitor-sync` — no new machinery introduced.

## Monitor 663: no-op on first sync

Manual import was not possible (no provider → no state to import). The
sync job correctly identifies the existing monitor by name and reports:

  Monitor MySQL Standalone (dbaas) (id=663) already in desired state
  Internal monitor sync complete

DB heartbeats confirm monitor 663 stayed UP throughout with `status=1` and
`Rows: 1` responses every 60s — no disruption.

## Vault key — left manual (by design)

`secret/viktor` is not Terraform-managed anywhere in the repo (only read
via `data "vault_kv_secret_v2"`). It is a user-edited Vault entry holding
135 keys. The `uptimekuma_db_password` key was added manually yesterday;
this change does NOT codify it. Codifying the whole `secret/viktor` entry
is out of scope for this task (would need a separate migration + rotation
story). The sync job reads the existing value at apply time — so if the
value is ever rotated in Vault, the next sync picks it up.

## Plan + apply

  Plan: 3 to add, 0 to change, 0 to destroy.
  Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
  Re-plan: No changes. Your infrastructure matches the configuration.

Also updated `.claude/skills/uptime-kuma/SKILL.md` with the new pattern.

Closes: code-ed2
2026-04-18 12:04:17 +00:00
Viktor Barzin
dad62647cd [grampsweb] Align PVC resource to encrypted storage; imported state
## Context

Grampsweb stack had an empty Terraform state — 7 K8s resources (namespace,
PVC, service, deployment, ingress, ExternalSecret manifest, TLS secret)
existed in the cluster but weren't tracked. This blocked commit 7b248897
(ollama LLM env-var removal) from being applied because any apply would
attempt to re-create existing resources.

Additionally, the TF source declared a **grampsweb-data-proxmox** PVC on
**storage_class=proxmox-lvm**, while the cluster had **grampsweb-data-encrypted**
on **proxmox-lvm-encrypted** (1 Gi, bound). The deployment was referencing
the encrypted PVC. This divergence predated this change — the source was
simply out of date vs cluster reality.

## This change

Two things:

1. **Source alignment** (the only file diff):
   - Renames `kubernetes_persistent_volume_claim.data_proxmox` →
     `data_encrypted`, metadata.name to match cluster, storage class to
     `proxmox-lvm-encrypted`.
   - Updates the deployment volume `claim_name` reference accordingly.
   - Aligns with the newer project convention documented in
     `.claude/CLAUDE.md`: "Default for sensitive data is
     proxmox-lvm-encrypted" and "Convention: PVC names end in `-encrypted`".
   - No destroy/recreate: the PVC and deployment already use the encrypted
     PVC in the cluster; TF source now just describes reality.

2. **State imports** (out-of-band, via `scripts/tg import`, not in diff):
   - `kubernetes_namespace.grampsweb` <- `grampsweb`
   - `kubernetes_persistent_volume_claim.data_encrypted` <- `grampsweb/grampsweb-data-encrypted`
   - `kubernetes_service.grampsweb` <- `grampsweb/grampsweb`
   - `kubernetes_deployment.grampsweb` <- `grampsweb/grampsweb`
   - `module.ingress.kubernetes_ingress_v1.proxied-ingress` <- `grampsweb/family`
   - `module.tls_secret.kubernetes_secret.tls_secret` <- `grampsweb/tls-secret`
   - `kubernetes_manifest.external_secret` <- `apiVersion=external-secrets.io/v1beta1,kind=ExternalSecret,namespace=grampsweb,name=grampsweb-secrets`

## Apply result

`Apply complete! Resources: 0 added, 7 changed, 0 destroyed.`

In-place updates applied:
- Deployment: dropped `GRAMPSWEB_LLM_BASE_URL` + `GRAMPSWEB_LLM_MODEL` env
  vars (both containers) — realising the intent of commit 7b248897.
- Ingress: realigned Traefik middleware annotation + cleaned stale
  `uptime.viktorbarzin.me/external-monitor=false` annotation.
- TLS secret: removed Kyverno-generated labels (Kyverno's
  `sync-tls-secret` ClusterPolicy re-applies them on next reconcile —
  no functional impact; same pattern in 29 other stacks using
  `setup_tls_secret` module).
- Namespace, PVC, service: trivial metadata alignments (label /
  `wait_until_bound` / `wait_for_load_balancer`).
- `kubernetes_manifest.external_secret`: populated the `manifest`
  attribute after import (expected).

## What is NOT in this change

- No replica bump: deployment stays at `replicas=0` (stack is intentionally
  inactive per 2026-03-14 OOM incident note).
- No destroy/recreate of any resource.
- The broader code-w97 (11 stacks with empty state) is NOT closed — only
  grampsweb is imported. 10 stacks remain: beads-server, insta2spotify,
  isponsorblocktv, kyverno, meshcentral, pvc-autoresizer, shadowsocks,
  tor-proxy, travel_blog, + meshcentral PVC.

## Reproduce locally

```
KUBECONFIG=/home/wizard/code/config kubectl get all,ingress,pvc,externalsecret,secret -n grampsweb
# Deployment still replicas=0; PVC grampsweb-data-encrypted Bound; ingress 'family'
# on family.viktorbarzin.me; ExternalSecret SecretSynced True.

cd /home/wizard/code/infra/stacks/grampsweb
/home/wizard/code/infra/scripts/tg plan
# Expected: 'No changes.' (clean state after apply).
```

## Test Plan

### Automated
```
$ cd /home/wizard/code/infra/stacks/grampsweb && /home/wizard/code/infra/scripts/tg plan
Plan: 0 to add, 7 to change, 0 to destroy.  [pre-apply]

$ /home/wizard/code/infra/scripts/tg apply --non-interactive
Plan: 0 to add, 7 to change, 0 to destroy.
kubernetes_namespace.grampsweb: Modifications complete after 0s [id=grampsweb]
kubernetes_persistent_volume_claim.data_encrypted: Modifications complete after 0s [id=grampsweb/grampsweb-data-encrypted]
kubernetes_service.grampsweb: Modifications complete after 0s [id=grampsweb/grampsweb]
module.ingress.kubernetes_ingress_v1.proxied-ingress: Modifications complete after 0s [id=grampsweb/family]
module.tls_secret.kubernetes_secret.tls_secret: Modifications complete after 0s [id=grampsweb/tls-secret]
kubernetes_manifest.external_secret: Modifications complete after 0s
kubernetes_deployment.grampsweb: Modifications complete after 1s [id=grampsweb/grampsweb]
Apply complete! Resources: 0 added, 7 changed, 0 destroyed.

$ terraform fmt -check -recursive stacks/grampsweb
(no output - formatted clean)
```

### Manual Verification
```
$ KUBECONFIG=/home/wizard/code/config kubectl get all,ingress,pvc,externalsecret,secret -n grampsweb
# - deployment.apps/grampsweb 0/0 0 0 47d   (replicas=0 preserved)
# - service/grampsweb ClusterIP 10.106.232.205:80/TCP
# - persistentvolumeclaim/grampsweb-data-encrypted Bound pvc-c9a5dcf4... 1Gi RWO proxmox-lvm-encrypted
# - ingress/family traefik family.viktorbarzin.me -> 10.0.20.200:80,443
# - externalsecret/grampsweb-secrets vault-kv 15m SecretSynced True
# - secret/tls-secret kubernetes.io/tls
# No pod crashes (no pods — replicas=0).
```

Closes: code-8m6
2026-04-18 11:37:45 +00:00
Viktor Barzin
1de2ee307f kyverno: strip resources.limits.cpu cluster-wide via ClusterPolicy
Context
-------
The cluster policy is "no CPU limits anywhere" — CFS throttling causes
more harm than good for bursty single-threaded workloads (Node.js,
Python). LimitRanges are already correct (defaultRequest.cpu only, no
default.cpu), but 22 pods still carried CPU limits injected by upstream
Helm chart defaults — CrowdSec (lapi + agents), descheduler,
kubernetes-dashboard (×4), nvidia gpu-operator.

Previous attempts were ad-hoc: patch each values.yaml, occasionally
missing things on chart upgrade. This replaces that with a declarative
Kyverno mutation at admission time.

This change
-----------
Adds a new ClusterPolicy `strip-cpu-limits` with two foreach rules:

  strip-container-cpu-limit      → containers[]
  strip-initcontainer-cpu-limit  → initContainers[]

Each rule uses `patchesJson6902` with an `op: remove` on
`resources/limits/cpu`. JSON6902 `remove` fails on missing paths, so
per-element preconditions gate the mutation — pods without CPU limits
pass through untouched. A top-level rule precondition short-circuits
using JMESPath filter (`[?resources.limits.cpu != null] | length(@) > 0`)
so the mutation is a no-op for the overwhelming majority of pods.

Admission-time only. No `mutateExistingOnPolicyUpdate`, no `background`.
Existing pods keep their CPU limits until they're restarted naturally
(Helm upgrade, node drain, rollout). We rely on churn, not forced
restarts, to avoid unnecessary thrash.

Memory limits are preserved — they prevent OOM, still useful.
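
A sketch of the container rule as it might be declared (abridged; the
initContainers rule mirrors it with `initContainers` in place of `containers`):

    resource "kubernetes_manifest" "strip_cpu_limits" {
      manifest = yamldecode(<<-EOT
        apiVersion: kyverno.io/v1
        kind: ClusterPolicy
        metadata:
          name: strip-cpu-limits
        spec:
          background: false # admission-time only
          rules:
            - name: strip-container-cpu-limit
              match:
                any:
                  - resources:
                      kinds: ["Pod"]
                      operations: ["CREATE"]
              preconditions:
                all:
                  # fast path: skip pods that carry no CPU limit at all
                  - key: "{{ request.object.spec.containers[?resources.limits.cpu != null] | length(@) }}"
                    operator: GreaterThan
                    value: 0
              mutate:
                foreach:
                  - list: request.object.spec.containers
                    preconditions:
                      all:
                        # JSON6902 remove fails on missing paths, so gate per element
                        - key: "{{ element.resources.limits.cpu || '' }}"
                          operator: NotEquals
                          value: ""
                    patchesJson6902: |-
                      - op: remove
                        path: /spec/containers/{{elementIndex}}/resources/limits/cpu
      EOT
      )
    }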

Flow
----

    admission request → match Pod + CREATE
                     → top-level precondition: any container has limits.cpu?
                           no  → skip (fast path)
                           yes → foreach container:
                                   element.limits.cpu present?
                                       no  → skip element
                                       yes → remove /spec/containers/N/resources/limits/cpu
                     → same again for initContainers
                     → mutated pod proceeds to API server

Verification
------------
  kubectl run test-strip-cpu --overrides='{limits:{cpu:500m,memory:64Mi}}'
    → admitted pod.resources = {limits:{memory:64Mi}, requests:{cpu:50m,memory:32Mi}}
    → CPU limit stripped, memory preserved, requests untouched

  kubectl rollout restart deploy/kubernetes-dashboard-metrics-scraper
    → new pod.resources = {limits:{memory:400Mi}, requests:{cpu:100m,memory:200Mi}}
    → cluster-wide count of pods with CPU limits: 22 → 21

Rollout
-------
Remaining 21 pods will drop their CPU limits on natural churn. No manual
restarts in this change — user may want to time a mass restart with a
maintenance window.

Closes: code-eaf
Closes: code-4bz

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:34:39 +00:00
Viktor Barzin
903fc8377f [cleanup] Remove ollama from dashy + docs + nfs_directories
## Context
Final stage (9) of ollama decommission. After the stack was destroyed in
commit 0386f03f, several residual references remained:
- Vault KV `secret/ollama` (metadata + versions)
- `secrets/nfs_directories.txt` line listing `ollama` as a backup target
- `stacks/dashy/conf.yml` — "Ollama" tile linking to `ollama.viktorbarzin.me`
- `stacks/homepage/INGRESS_WIDGET_MAPPING.md` — 3 rows documenting the
  now-removed ingresses (ollama, ollama-api, ollama-server)

## This change
- `vault kv metadata delete secret/ollama` → all versions + metadata deleted.
- `secrets/nfs_directories.txt`: removed the `ollama` entry (line 71).
- `stacks/dashy/conf.yml`: removed the Ollama tile (`&ref_42`) and its
  reference at the end of the list; applied via Terragrunt so the running
  dashy ConfigMap picks up the change. Dashy apply: 0 added, 4 changed, 0
  destroyed (the ConfigMap diff plus the usual benign Kyverno drift).
- `stacks/homepage/INGRESS_WIDGET_MAPPING.md`: removed the 3 ollama rows.

## What was considered but NOT changed
- `stacks/ytdlp/yt-highlights/app/main.py`: `OLLAMA_URL = os.getenv("OLLAMA_URL", "")`
  already falls back to empty string when unset; the env var is no longer
  injected (stage 3) so this path is dead at runtime. Leaving source alone
  to keep this commit scoped to infra-only cleanup — future app-level
  cleanup can remove the dead fallback code.
- `stacks/k8s-portal/modules/k8s-portal/files/src/routes/agent/+server.ts`:
  only mentions `var.ollama_host` in a documentation string inside a
  system-prompt template — non-functional. Will fix in a separate commit
  alongside the k8s-portal agent docs pass.

## Test plan
### Automated
- `vault kv get secret/ollama` → "No value found" (confirmed after delete).
- `scripts/tg apply` on dashy → "Apply complete! Resources: 0 added, 4 changed, 0 destroyed."
- `grep -n ollama secrets/nfs_directories.txt` → empty.

### Manual Verification
1. Open `https://dashy.viktorbarzin.me/` → Ollama tile is gone.
2. `kubectl get cm -n dashy dashy-config -o yaml | grep -i ollama` → no matches.
3. `vault kv get secret/ollama` → error "No value found at secret/data/ollama".
4. On PVE host: `rm -rf /srv/nfs-ssd/ollama` (optional — I skipped the
   on-host disk cleanup; it's a manual ops step the user can run when
   comfortable).

Closes: code-1gu

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:17:59 +00:00
Viktor Barzin
0386f03f1a [ollama] Destroy stack — decommissioned
## Context
Stage 8 of ollama decommission. With the ollama-tcp Traefik entrypoint and
IngressRouteTCP removed (stages 1-2), all downstream consumers re-routed or
cleaned (stages 3-6), and the root tfvar dropped (stage 7), the ollama stack
has no live consumers and can be destroyed.

## This change
- `terragrunt destroy -auto-approve` on stacks/ollama.
- Result: `Destroy complete! Resources: 18 destroyed.`
  - 1 namespace (ollama)
  - 2 deployments (ollama, ollama-ui)
  - 2 services (ollama, ollama-ui)
  - 3 ingresses (ollama, ollama-server, ollama-api) + 3 Cloudflare DNS
    records (proxied ollama, non-proxied A + AAAA for ollama-api)
  - 2 PVCs (ollama-data-host NFS, ollama-ui-data-proxmox — including the
    stuck Pending one from 47h ago; no finalizer trick needed)
  - 1 NFS PV (ollama-data-host)
  - 1 middleware (ollama_api_basic_auth_middleware)
  - 2 secrets (tls_secret, ollama_api_basic_auth)
  - 1 ExternalSecret manifest (external_secret)
- Directory `stacks/ollama/` fully removed.
- Verified `kubectl get ns ollama` → NotFound.

## Destroy blocker and fix
The initial `tg destroy` failed because `variable "ollama_host"` in
`stacks/ollama/main.tf` had no default and we had already removed it from
`config.tfvars` in stage 7. Added `default = "ollama.ollama.svc.cluster.local"`
to the variable, re-ran destroy successfully, then removed the whole
directory as part of this commit (so the temporary default never ships).
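
For the record, the temporary default looked like this (sketch; the file was deleted in this same commit):

```hcl
variable "ollama_host" {
  type    = string
  default = "ollama.ollama.svc.cluster.local" # temporary, only to let destroy evaluate
}
```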

## What is NOT in this change
- Vault `secret/ollama` still present (stage 9 cleanup pending if vault
  authenticated interactively).
- NFS data at `/srv/nfs-ssd/ollama/` still present (stage 9 cleanup).
- `/home/wizard/code/infra/secrets/nfs_directories.txt` still lists ollama
  (stage 9 — requires git-crypt unlock).

## Test plan
### Automated
- `scripts/tg destroy -auto-approve` → "Destroy complete! Resources: 18 destroyed."
- `kubectl get ns ollama` → "NotFound" (confirmed).

### Manual Verification
1. `kubectl get ns ollama` → NotFound.
2. `dig ollama.viktorbarzin.me @1.1.1.1` → Cloudflare record removed
   (propagation may take up to 5m).
3. `ls /home/wizard/code/infra/stacks/ollama/` → directory does not exist.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:16:21 +00:00
Viktor Barzin
57fdea4b99 [rybbit] Remove ollama SITE_IDS entry (deploy on next manual wrangler)
## Context
Stage 6 of ollama decommission. The Cloudflare Worker at
stacks/rybbit/worker/index.js maps hostnames → rybbit analytics site IDs.
With `ollama.viktorbarzin.me` going away, the mapping is dead.

## This change
- Removes the `"ollama.viktorbarzin.me": "e73bebea399f"` entry from SITE_IDS.
- **Source-only** — does NOT auto-deploy. Cloudflare Workers are deployed
  via `wrangler deploy` (manual, per user preference). The change will take
  effect on the next manual deploy at the user's convenience.

## Manual deploy (when convenient)
```
cd stacks/rybbit/worker
wrangler deploy
```

## Test plan
### Automated
- Node syntax check: file remains valid JS (trailing comma rules preserved).

### Manual Verification
After `wrangler deploy`:
1. Hit `ollama.viktorbarzin.me` (while it still exists) — should NOT inject
   rybbit script (map lookup misses, DEFAULT_SITE_ID is null).
2. Hit any other mapped host (e.g. `immich.viktorbarzin.me`) — should
   continue to inject correctly.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:14:38 +00:00
Viktor Barzin
7091ef2dd6 [trading-bot] Remove ollama refs from commented-out source
## Context
Stage 5 of ollama decommission. The `trading-bot` stack has been entirely
commented out since 2026-04-06 (deployments scaled to 0, infra disabled to
prevent re-creation on apply). The commented body still contained references
to `var.ollama_host`, `TRADING_OLLAMA_HOST`, and `TRADING_OLLAMA_MODEL`.
Removing them now so if/when the stack is ever re-enabled, those dead
references don't need remembering.

## This change
- Removes `variable "ollama_host"` from the commented-out block.
- Removes `TRADING_OLLAMA_HOST` and `TRADING_OLLAMA_MODEL` from the
  commented `common_env` locals.
- Verified the outer `/* ... */` comment block still wraps the entire stack
  (head: `/*`, tail: `*/`).
- No apply needed — stack is disabled.

## Test plan
### Automated
- None — file content is inside a block comment; Terraform parser ignores it.
- `terraform fmt` check: no effect (commented content).

### Manual Verification
- `head -1 stacks/trading-bot/main.tf` → `/*`
- `tail -1 stacks/trading-bot/main.tf` → `*/`

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:14:22 +00:00
Viktor Barzin
7b248897d3 [grampsweb] Remove ollama_host source refs (apply blocked by bd-w97)
## Context
Stage 4 of ollama decommission. `grampsweb` referenced `var.ollama_host` for
its `GRAMPSWEB_LLM_BASE_URL` + `GRAMPSWEB_LLM_MODEL` env vars. This stack is
currently missing from Terraform state (blocked by bd-w97, which handles
state imports for 11 stacks including grampsweb) — so an apply would fail on
"resource already exists" errors.

## This change
- Deletes `variable "ollama_host"` declaration (stacks/grampsweb/main.tf).
- Deletes the two env entries `GRAMPSWEB_LLM_BASE_URL` and
  `GRAMPSWEB_LLM_MODEL` from the `common_env` locals block.
- **Source-only** — NO apply performed, because the stack cannot apply
  cleanly until bd-w97 resolves state imports. When that unblocks, the next
  apply will pick up the already-clean source.

## Why not apply now
- Running `scripts/tg apply` would try to create ~7 resources that already
  exist in K8s (namespace, PVCs, deployments, ingress, etc.), producing
  "already exists" errors for each.
- Once bd-w97 imports those into state, the next apply will be a no-op for
  them and will rollout the LLM env-var removal without issue.

## Test plan
### Automated
- No apply performed — stack blocked on bd-w97.
- `terraform fmt` on main.tf: no issues.

### Manual Verification
After bd-w97 resolves:
1. `scripts/tg plan` should show only the env-var removal on `grampsweb`
   deployments (no resource creates).
2. `scripts/tg apply` → deployments rollout with `GRAMPSWEB_LLM_*` vars gone.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:14:01 +00:00
Viktor Barzin
c175cfd69b [ytdlp] Remove ollama_host variable and fallback env vars
## Context
Stage 3 of ollama decommission. `ytdlp` had an Ollama fallback path for when
OpenRouter models failed. With ollama going away, that fallback is
inoperable — removing the variable and two env entries prevents pods from
ever attempting to hit a service that no longer exists.

## This change
- Drops `variable "ollama_host"` from stacks/ytdlp/main.tf.
- Drops the two env entries `OLLAMA_URL` and `OLLAMA_MODEL` (plus their
  preceding comment) from the yt-highlights container.
- Apply: `0 added, 4 changed, 0 destroyed` — deployments rolled out fresh
  env, plus benign Kyverno ndots drift (already accepted).
- Verified `kubectl get deploy -n ytdlp` no longer exposes OLLAMA_URL.

## What is NOT in this change
- OpenRouter primary path unchanged.
- config.tfvars `ollama_host` still present (stage 7 removes it).

## Test plan
### Automated
- `scripts/tg plan` → 4 in-place updates, 0 destroy.
- `scripts/tg apply` → "Apply complete! Resources: 0 added, 4 changed, 0 destroyed."

### Manual Verification
1. `kubectl get deploy -n ytdlp -o yaml | grep OLLAMA` → empty.
2. yt-highlights continues processing via OpenRouter (check container logs for
   successful OpenRouter responses).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:13:42 +00:00
Viktor Barzin
cc44bccfaa [traefik] Remove ollama-tcp entrypoint
## Context
Stage 2 of ollama decommission. The Traefik `ollama-tcp` entrypoint on port
11434 forwarded TCP traffic to the ollama service. With the IngressRouteTCP
already deleted (previous commit), the entrypoint is now orphaned — removing
it cleans up the Helm values and closes the port on the LB IP.

## This change
- Deletes the `ollama-tcp` entry from the `ports` map in traefik Helm values.
- Apply: `0 added, 4 changed, 0 destroyed` — helm_release.traefik rolled out
  new config, 3 auxiliary deployments picked up benign Kyverno ndots drift
  (already accepted per user approval).

## Verification
- `kubectl get svc -n traefik traefik -o jsonpath='{.spec.ports[*].name}'`
  output: `piper-tcp web websecure websecure-http3 whisper-tcp`
- `ollama-tcp` no longer listed.

## Test plan
### Automated
- `scripts/tg plan` showed 4 in-place updates, 0 destroy.
- `scripts/tg apply` → "Apply complete! Resources: 0 added, 4 changed, 0 destroyed."

### Manual Verification
1. `kubectl get svc -n traefik traefik -o jsonpath='{.spec.ports[*].name}'`
2. Confirm `ollama-tcp` is absent from the output.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:12:59 +00:00
Viktor Barzin
dbf7732a66 [uptime-kuma] Bump CPU + memory requests to reduce TTFB jitter
## Context
Uptime Kuma TTFB was bimodal — fast ~150ms responses mixed with slow
~3s responses — median 1.7s, p95 3.2s across 20 samples. CPU request
was 50m (5% of one core) against a Node.js process that handles ~190
monitors plus SQLite DB maintenance. Memory request was 64Mi while
actual RSS sat around 221Mi, so the pod was also running above its
guaranteed memory floor and subject to eviction pressure when nodes got
tight.

CPU limits are intentionally absent cluster-wide (CFS throttling caused
more pain than it solved), so the only knob to give the scheduler a
higher floor is the request itself. Raising the request makes the node
reserve more CPU for the pod and lets the kernel's CFS weight it more
generously when the node is busy — should reduce the tail on the slow
path without introducing throttling.

## This change
- requests.cpu: 50m -> 100m
- requests.memory: 64Mi -> 128Mi
- limits.memory: unchanged at 512Mi
- limits.cpu: still unset (explicit — cluster-wide rule; resulting block sketched below)
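
In `kubernetes_deployment` terms, a sketch of the resulting block:

```hcl
resources {
  requests = {
    cpu    = "100m"
    memory = "128Mi"
  }
  limits = {
    # no cpu key: cluster-wide "no CPU limits" rule
    memory = "512Mi"
  }
}
```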

## What is NOT in this change
- No CPU limit added
- No readiness/liveness probe tuning
- No replica count change (still 1, Recreate strategy)
- No DB layer / SQLite tuning

## Measurements (20 curl samples of https://uptime.viktorbarzin.me/)

Before:
  min    0.143s
  median 1.727s
  p95    3.163s
  max    3.204s
  mean   1.768s

After:
  min    0.149s
  median 1.228s
  p95    3.154s
  max    3.283s
  mean   1.590s

Median dropped ~29% (1.73s -> 1.23s). Tail (p95/max) essentially
unchanged — the slow bucket appears driven by something other than
CPU scheduling (likely socket.io / SSR render path inside the app,
or TLS/cf-tunnel handshake — worth a separate investigation).

Closes: code-79d
2026-04-18 11:11:39 +00:00
Viktor Barzin
80b6591e8b [whisper] Remove ollama_tcp IngressRouteTCP (ollama decom)
## Context
Ollama is being decommissioned. The `ollama_tcp_ingressroute` manifest in
stacks/whisper routed Traefik TCP entrypoint 11434 → ollama service in the
ollama namespace. With ollama going away, this route is dead weight and
blocks the subsequent destroy of the ollama stack.

## This change
- Deletes `kubernetes_manifest.ollama_tcp_ingressroute` from stacks/whisper/main.tf
- Apply result: 0 added, 5 changed, 0 destroyed (the manifest destroy happened in a
  previous partial-apply; the 5 "changed" resources are benign Kyverno ndots /
  PVC ownership drift which was already accepted per the user's approval).
- Verified `kubectl get ingressroutetcp -n traefik ollama-tcp` returns NotFound.

## What is NOT in this change
- Traefik entrypoint 11434 still exists (stage 2)
- Ollama namespace, deployments, services still present (stage 8)

## Test plan
### Automated
- `scripts/tg plan` showed 1 destroy (ollama_tcp_ingressroute), 1 create (data_proxmox
  PVC import), 4 benign updates.
- `scripts/tg apply -auto-approve` → "Apply complete! Resources: 0 added, 5 changed, 0 destroyed."

### Manual Verification
- kubectl get ingressroutetcp -n traefik ollama-tcp → NotFound (confirmed)
- kubectl get ingressroutetcp -n traefik whisper-tcp piper-tcp → still present

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 11:11:21 +00:00
Viktor Barzin
99180bec42 [n8n] Fix broken DIUN auto-upgrade pipeline — missing auth token to claude-agent-service
## Context

DIUN has been detecting image updates and firing Slack + webhook
notifications for weeks, but zero automated upgrades ran because the
handoff from n8n to claude-agent-service was silently 401-ing.

The pipeline (DIUN → n8n webhook → claude-agent-service /execute →
service-upgrade agent) was migrated from DevVM SSH to K8s HTTP in
42f1c3cf. The migration wired `claude-agent-service` (API_BEARER_TOKEN
env set), updated the n8n workflow JSON to POST with `Authorization:
Bearer $env.CLAUDE_AGENT_API_TOKEN`, but missed two things on the n8n
side:

1. The deployment didn't expose `CLAUDE_AGENT_API_TOKEN` to the n8n
   container — workflow sent `Authorization: Bearer ` (empty).
2. The workflow header expression used JS concat (`='Bearer ' + $env.X`)
   which n8n 1.x does NOT evaluate in HTTP Request node header params.
   It needs template-literal form: `=Bearer {{ $env.X }}`.

Evidence: `claude-agent-service` logs showed only `/health` probes —
zero `/execute` calls over 12h despite DIUN firing webhooks. n8n PG
execution 2250 returned `401 Missing bearer token`.

## This change

- Adds ExternalSecret `claude-agent-token` in the `n8n` namespace that
  pulls `api_bearer_token` from Vault `secret/claude-agent-service`
  (same source as the receiving service's token).
- Wires the token into the n8n container as env var
  `CLAUDE_AGENT_API_TOKEN` via `secret_key_ref`.
- Sets `N8N_BLOCK_ENV_ACCESS_IN_NODE=false` explicitly so expressions
  can read `$env.*` at all. n8n 1.x already defaults to false, but
  pinning the value guards against upstream default flips.
- Fixes the workflow JSON backup (`workflows/diun-upgrade.json`) header
  expression to use `{{ $env.X }}` template syntax.

The live workflow in n8n's PG DB was also patched in place (one-time
`UPDATE workflow_entity SET nodes = REPLACE(...)` — workflows are not
TF-managed; they were imported once).
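
Sketch of the two additions (assumes the ExternalSecret is applied via
`kubernetes_manifest` against the `vault-kv` ClusterSecretStore; secret
and key names follow the text above, the rest is illustrative):

```
resource "kubernetes_manifest" "claude_agent_token" {
  manifest = {
    apiVersion = "external-secrets.io/v1beta1"
    kind       = "ExternalSecret"
    metadata   = { name = "claude-agent-token", namespace = "n8n" }
    spec = {
      secretStoreRef = { name = "vault-kv", kind = "ClusterSecretStore" }
      target         = { name = "claude-agent-token" }
      data = [{
        secretKey = "api_bearer_token"
        remoteRef = { key = "claude-agent-service", property = "api_bearer_token" }
      }]
    }
  }
}

# and, inside the n8n deployment's container block:
env {
  name = "CLAUDE_AGENT_API_TOKEN"
  value_from {
    secret_key_ref {
      name = "claude-agent-token"
      key  = "api_bearer_token"
    }
  }
}
```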

## What is NOT in this change

- No retroactive re-run of skipped DIUN events. They'll be rediscovered
  in future scans.
- No change to the `claude-agent-service` side — its token and endpoint
  were already correct.
- No Slack alert on n8n HTTP-node failures — future work; right now a
  broken workflow fails silently unless you check Execution History.

## End-to-end verification

```
$ curl -X POST n8n.viktorbarzin.me/webhook/30805ab6-... \
    -d '{"diun_entry_status":"update","diun_entry_image":"docker.io/library/httpd","diun_entry_imagetag":"2.4.66",...}'
{"message":"Workflow was started"}  HTTP 200

# n8n PG: execution_entity latest row  → status=success
# claude-agent-service logs           → "POST /execute HTTP/1.1" 202 Accepted
```

## Reproduce locally

```
1. vault login -method=oidc
2. cd stacks/n8n && ../../scripts/tg apply
3. kubectl -n n8n exec deploy/n8n -- printenv CLAUDE_AGENT_API_TOKEN
   (should print 64-char hex)
4. Fire synthetic webhook with non-critical image (httpd / alpine)
5. Check n8n execution is success, claude-agent-service shows 202
```

Closes: code-ekz
Related: code-bck

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 10:41:09 +00:00
Viktor Barzin
42f1c3cf4f [claude-agent-service] Migrate all pipelines from DevVM SSH to K8s HTTP
## Context

The claude-agent-service K8s pod (deployed 2026-04-15) provides an HTTP API
for running Claude headless agents. Three workflows still SSH'd to the DevVM
(10.0.10.10) to invoke `claude -p`. This change eliminates that dependency.

## This change

Pipeline migrations (SSH → HTTP POST to claude-agent-service):
- `.woodpecker/issue-automation.yml` — Vault auth fetches API token instead
  of SSH key; curl POST /execute + poll /jobs/{id} replaces SSH invocation
- `scripts/postmortem-pipeline.sh` — same pattern; uses jq for safe JSON
  construction of TODO payloads
- `.woodpecker/postmortem-todos.yml` — drop openssh-client from apk install
- `stacks/n8n/workflows/diun-upgrade.json` — SSH node replaced with HTTP
  Request node; API token via $env.CLAUDE_AGENT_API_TOKEN (added to Vault
  secret/n8n)

Documentation updates:
- `docs/architecture/incident-response.md` — Mermaid diagram: DevVM → K8s
- `docs/architecture/automated-upgrades.md` — pipeline diagram + n8n action
- `AGENTS.md` — pipeline description updated

## What is NOT in this change

- DevVM decommissioning (still hosts terminal/foolery services)
- Removal of SSH key secrets from Vault (kept for rollback)
- n8n workflow import (must be done manually in n8n UI)

[ci skip]

Co-Authored-By: Claude Opus 4 (1M context) <noreply@anthropic.com>
2026-04-18 10:12:02 +00:00
Viktor Barzin
947f8ace54 [monitoring] Remove stale MySQL InnoDB Cluster alerts
MySQL migrated from InnoDB Cluster (Bitnami chart + mysql-operator) to
a standalone StatefulSet on 2026-04-16. Two Prometheus alerts still
referenced the old topology and were firing falsely against resources
that no longer exist:

- MySQLDown: queried kube_statefulset_status_replicas_ready{statefulset="mysql-cluster"}
  — that StatefulSet was deleted as part of Phase 1 of the migration.
- MySQLOperatorDown: queried kube_deployment_status_replicas_available{namespace="mysql-operator"}
  — the operator Deployment was removed in Phase 1.

Replacement availability monitoring for the standalone MySQL pod will
be handled via an Uptime Kuma MySQL-connection monitor (out of scope
for this change — no Prometheus replacement alert is being added, per
the migration plan's "simpler is better" principle).

MySQLBackupStale and MySQLBackupNeverSucceeded are retained — they
query the mysql-backup CronJob which is unchanged by the migration.

Also removes MySQLDown from the two inhibition rules (NodeDown and
NFSServerUnresponsive) that previously suppressed it during cascade
outages; with the alert gone, those references were dead.

Closes: code-3sa

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 10:03:58 +00:00
Viktor Barzin
99688bbb02 [uptime-kuma] Omit trailing slash when path annotation not set
## Context

After commit f6812fe6 every external-monitor-sync run updated all ~107
monitors without any real change — because the new code always appended
`/` to the host (default path), while historical monitors had been
created with bare `https://host` URLs. Sync saw `https://host` !=
`https://host/` and re-wrote every monitor on each cycle: noisy logs,
wasted Uptime Kuma writes.

## This change

When the `uptime.viktorbarzin.me/external-monitor-path` annotation is
absent, build the URL WITHOUT a trailing slash so it matches the shape
of pre-existing monitors. When the annotation is set, append it as
before (e.g. `https://forgejo.viktorbarzin.me/api/healthz`).

Also flip the lenient/strict codes branch to trigger off the same
"annotation set?" signal instead of comparing against DEFAULT_PATH.

## Verification

Verified via two consecutive manual triggers of the CronJob against the
live stack:

    Pass 1 (migration): 0 created, 107 updated, 0 deleted, 1 unchanged
    Pass 2 (stable):    0 created,   0 updated, 0 deleted, 108 unchanged

`[External] forgejo` still probes `https://forgejo.viktorbarzin.me/api/healthz`
with strict `200-299`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 22:41:02 +00:00
Viktor Barzin
b30bfd4690 [dbaas] Fix mysql_static_user heredoc quoting
## Context

The null_resource.mysql_static_user provisioner in commit 2033e767 used
a bash -c wrapper with nested single quotes (`'"$DB"'`-style injection)
to interpolate the app-specific database name and credentials. The outer
bash -c '...' single-quoted string was broken by the inner ' characters
long before reaching the container, so the local (tg) shell saw `$DB`
and `$USER` unset and produced an empty database name:

    ERROR 1102 (42000) at line 1: Incorrect database name ''

Apply failed for both forgejo and roundcubemail.

## This change

Feed the SQL to mysql on the pod via stdin through `kubectl exec -i`:

- Outer command: `kubectl exec -i ... -- sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"'`
- Single-quoted shell heredoc (`<<'SQL'`) carries the SQL statements
- HCL interpolates `${each.key}`, `${each.value.database}`,
  `${each.value.password}` into the heredoc body before the shell runs
- No nested quoting — one single-quote layer, one double-quote layer,
  one heredoc layer
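
Resulting provisioner shape, sketched (for_each and triggers omitted;
`each.*` is resolved by HCL before the local shell runs, so no quoting
gymnastics are needed):

    provisioner "local-exec" {
      # HCL fills in ${each.*} first; the single-quoted SQL heredoc then
      # reaches mysql's stdin untouched by the local shell.
      command = <<-EOT
        kubectl exec -i -n dbaas mysql-standalone-0 -c mysql -- \
          sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' <<'SQL'
        CREATE DATABASE IF NOT EXISTS ${each.value.database};
        ALTER USER '${each.key}'@'%' IDENTIFIED BY '${each.value.password}';
        SQL
      EOT
    }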

Plan/apply verified on the live stack: 2 added (forgejo + roundcubemail),
7 pre-existing drift items changed, 0 destroyed. Both users now log in
with their app-cached passwords.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 22:34:12 +00:00
Viktor Barzin
2033e76798 [dbaas] Declare forgejo + roundcubemail MySQL users in Terraform
## Context

The 2026-04-16 MySQL InnoDB Cluster → standalone migration recreated the
MySQL user table but scripted fresh passwords for every app user. Two apps
(forgejo, roundcubemail) store their DB password inside their own
application config — forgejo in `/data/gitea/conf/app.ini` (baked into the
PVC), roundcubemail in the ROUNDCUBEMAIL_DB_PASSWORD env from the
mailserver stack (sourced from Vault `secret/platform`). Neither app
could be restarted with a new password without rewriting its own config.

Both apps silently broke with `Access denied for user 'X'@'%'` after the
migration. Remediation on 2026-04-17 was a manual `ALTER USER ... IDENTIFIED
BY '<app_password>'` to re-sync MySQL with what each app already has. With
nothing in Terraform managing those users, the next migration would break
them again — that's the gap this change closes.

## What this change does

Codifies both MySQL users in `stacks/dbaas/modules/dbaas/` using the same
`null_resource` + `local-exec` + `kubectl exec` pattern already used for
`pg_terraform_state_db` (line 1373 of the same file). Rejected alternatives:

- `petoju/mysql` Terraform provider — no existing usage in the repo; would
  be a net-new dependency. Module-level `for_each` over `mysql_user` +
  `mysql_grant` is cleaner, but the added machinery (new provider block,
  extra auth path via `MYSQL_HOST`/`MYSQL_USERNAME`/`MYSQL_PASSWORD` TF
  env vars, state-dependent password reads) outweighs the benefit for two
  static users.
- K8s Job — adds lifecycle management for a one-shot resource; needs
  secret mounts and is harder to retry. `local-exec` is exactly what the
  existing PG bootstrap uses.

Idempotency contract:

    CREATE DATABASE IF NOT EXISTS <db>;
    CREATE USER IF NOT EXISTS '<user>'@'%' IDENTIFIED WITH caching_sha2_password BY '<pw>';
    ALTER USER '<user>'@'%' IDENTIFIED WITH caching_sha2_password BY '<pw>';
    GRANT ALL PRIVILEGES ON <db>.* TO '<user>'@'%';
    FLUSH PRIVILEGES;

The `ALTER USER` on every re-run re-syncs the password if Vault was rotated
out-of-band (healing drift). The `sha256(password)` trigger also re-runs
the provisioner when the Vault password legitimately changes, so the
resource is responsive to both new and rotated passwords. `caching_sha2_password`
matches the live plugin returned by `SHOW CREATE USER`; forcing it prevents
silent drift to `mysql_native_password`.
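
In resource terms, the rotation contract is carried by the provisioner
trigger, roughly (sketch; the real resource may key on more values):

    triggers = {
      # any change to the Vault-sourced password re-runs CREATE/ALTER/GRANT
      password_sha = sha256(each.value.password)
    }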

Flow (apply-time):

    scripts/tg apply
        │
        ├── data.vault_kv_secret_v2.viktor  ── reads mysql_{forgejo,roundcubemail}_password
        │
        ▼
    module.dbaas
        │
        ├── mysql-standalone-0 (StatefulSet, already running)
        │
        ├── null_resource.mysql_static_user["forgejo"]
        │     └── kubectl exec ... mysql -uroot -p$ROOT_PASSWORD ... CREATE/ALTER/GRANT
        │
        └── null_resource.mysql_static_user["roundcubemail"]
              └── (same, for roundcubemail)

## Secrets

Two new keys added to Vault `secret/viktor`:

    mysql_forgejo_password        # bound to forgejo `[database]` in app.ini
    mysql_roundcubemail_password  # duplicates secret/platform
                                  # mailserver_roundcubemail_db_password;
                                  # secret/viktor is the personal vault of
                                  # record per .claude/CLAUDE.md

Passwords are never written to the repo — both come from Vault via
`data "vault_kv_secret_v2" "viktor"` in the dbaas root module.

## What is NOT in this change

- PG-side users (managed by Vault DB engine static-roles already — see
  MEMORY.md "Database rotation")
- Other MySQL users (speedtest, wrongmove, codimd, nextcloud, shlink,
  grafana, phpipam are all rotated by Vault DB engine; root users
  excluded by design)
- Removing the old mysql-operator / InnoDB Cluster helm releases (Phase 4
  cleanup tracked under the MySQL standalone migration work — still
  pending)

## Test plan

### Automated

`terraform fmt -check -recursive stacks/dbaas` → exit 0
`scripts/tg plan` in stacks/dbaas →

    Plan: 2 to add, 7 to change, 0 to destroy.
    # module.dbaas.null_resource.mysql_static_user["forgejo"]     will be created
    # module.dbaas.null_resource.mysql_static_user["roundcubemail"] will be created

The 7 "update in-place" entries are pre-existing drift (Kyverno labels on
LimitRange, MetalLB ip-allocated-from-pool annotation on postgresql_lb,
Kyverno-injected `dns_config` on 4 CronJobs lacking the
`ignore_changes` workaround, `resize.topolvm.io/storage_limit` bump
30Gi→50Gi on mysql-standalone PVC). None of those are introduced by this
commit and all are benign (no data loss, no pod restart).

### Manual Verification

    # 1. Sanity check pre-apply — users are in their current (manually-fixed) state.
    kubectl exec -n dbaas mysql-standalone-0 -c mysql -- bash -c \
      'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -N -e \
       "SELECT user,host,plugin FROM mysql.user WHERE user IN (\"forgejo\",\"roundcubemail\");"'
    # Expected:
    #   forgejo       %   caching_sha2_password
    #   roundcubemail %   caching_sha2_password

    # 2. Apply and confirm the provisioner exits 0.
    cd stacks/dbaas && ../../scripts/tg apply
    # Expect: null_resource.mysql_static_user["forgejo"]: Creation complete
    #         null_resource.mysql_static_user["roundcubemail"]: Creation complete

    # 3. App-level smoke: log in to forgejo.viktorbarzin.me (any git push)
    #    and load https://mail.viktorbarzin.me/roundcube (IMAP login). Both
    #    must succeed.

    # 4. Destructive test (run ONCE, off-hours):
    kubectl exec -n dbaas mysql-standalone-0 -c mysql -- bash -c \
      'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "DROP USER '\''forgejo'\''@'\''%'\''"'
    cd stacks/dbaas && ../../scripts/tg apply
    # Expected: apply recreates the user with the Vault password, forgejo UI
    # recovers without touching /data/gitea/conf/app.ini.

### Reproduce locally

    1. vault login -method=oidc
    2. cd infra/stacks/dbaas
    3. ../../scripts/tg plan
    4. Expected: "Plan: 2 to add, 7 to change, 0 to destroy." with the two
       null_resource.mysql_static_user additions. 7 changes are pre-existing
       drift unrelated to this commit.

Closes: code-6th
Closes: code-96w

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 22:06:23 +00:00
Viktor Barzin
b326c572a6 [forgejo] Probe /api/healthz for external monitor
Forgejo's /api/healthz verifies cache + DB and returns 503 when
degraded, whereas / returns 200 even with a broken backend. This
prevents recurrence of the false negative from the 2026-04-17 outage.

Closes: code-ut0

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 22:06:23 +00:00
Viktor Barzin
f6812fe69f [uptime-kuma] Support per-ingress probe path annotation
## Context

The `external-monitor-sync` CronJob probed `https://<host>/` for every
`*.viktorbarzin.me` ingress. Homepages frequently return 200 (or
allow-listed 30x/40x) even when the backend or DB is broken, producing
false-negatives — the forgejo outage on 2026-04-17 was not caught for
this reason: `/` returned a login page while `/api/healthz` returned
503 from the DB probe.

Manual monitor edits don't stick: the next sync is create-if-missing
only, so a deleted monitor gets recreated pointing at `/` again.

## This change

Teaches the sync three things:

1. **Reads a new annotation** `uptime.viktorbarzin.me/external-monitor-path`.
   The annotation value is appended as the probe path; default `/`
   preserves today's behaviour for every ingress that hasn't opted in.
2. **Tightens accepted status codes** when an explicit path is set:
   `['200-299']` (strict — we expect a real healthz). The default `/`
   path keeps the existing lenient set `['200-299','300-399','400-499']`
   because homepages routinely 30x redirect or 40x on missing auth.
3. **Updates existing monitors** when the target URL or accepted
   status codes drift. Previously the loop was create-if-missing only,
   so annotating an already-monitored ingress had no effect until the
   monitor was deleted. Now re-running the sync after changing the
   annotation converges the live monitor.
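
On the stack side, opting in is a single metadata annotation (sketch
against a `kubernetes_ingress_v1` resource; names are illustrative, the
forgejo path is the real-world example from the companion commit):

```
metadata {
  name      = "forgejo"
  namespace = "forgejo"
  annotations = {
    # probe a real health endpoint instead of "/"; omit to keep the lenient "/" probe
    "uptime.viktorbarzin.me/external-monitor-path" = "/api/healthz"
  }
}
```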

## What is NOT in this change

- No change to the Ingress annotations on any individual stack. Each
  service that wants a non-`/` probe path opts in separately.
- No change to the ConfigMap fallback payload shape — legacy entries
  still get the lenient status codes.
- Monitor DB state in Uptime Kuma's SQLite is untouched at plan time;
  the sync CronJob is what reconciles state on each run.

## Flow

```
  ingress annotation           CronJob Python
  ------------------           --------------
  (none)                 -->   url = https://host/        codes = lenient
  external-monitor-path  -->   url = https://host<path>   codes = strict ['200-299']
  ^^ "/api/healthz"            https://host/api/healthz   codes = ['200-299']

  existing monitor + drifted target url  -->  api.edit_monitor(id, url=..., accepted_statuscodes=...)
```

## Test Plan

### Automated

- `terraform fmt -check -recursive stacks/uptime-kuma` — exit 0.
- `scripts/tg plan` on `stacks/uptime-kuma` — `Plan: 0 to add, 1 to
  change, 0 to destroy`. The single in-place change is the CronJob
  command (Python heredoc re-rendered). No other resources drift.
- Embedded Python compiles: extracted the `PYEOF` block and ran
  `python3 -m py_compile` — OK.

### Manual Verification

1. Annotate an ingress: `kubectl annotate ingress/<name> -n <ns> uptime.viktorbarzin.me/external-monitor-path=/api/healthz`
2. Trigger sync early: `kubectl -n uptime-kuma create job --from=cronjob/external-monitor-sync external-monitor-sync-manual`
3. Expected log line:
   `Updating monitor [External] <name>: https://host/ -> https://host/api/healthz (codes ['200-299','300-399','400-499'] -> ['200-299'])`
4. Inspect monitor in Uptime Kuma UI: URL and accepted status codes
   reflect the annotation.
5. Final summary line includes updated count:
   `Sync complete: 0 created, 1 updated, 0 deleted, N unchanged`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 22:06:23 +00:00
Viktor Barzin
4117809a54 [rybbit] Deploy Cloudflare Worker for analytics injection
Replaces the broken Traefik rewrite-body plugin with a Cloudflare Worker
using HTMLRewriter to inject the rybbit tracking script into HTML responses
at the CDN edge.

- Wildcard route: *.viktorbarzin.me/* covers all proxied services
- 28 services have explicit site ID mappings
- Unmapped hosts pass through without injection
- Zero Traefik dependency, zero performance impact

Closes: code-sed

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 21:26:16 +00:00
Viktor Barzin
498e7f3305 [uptime-kuma] Fix duplicate monitor creation + clean up down monitors
## Duplicate bug fix
The external-monitor-sync deduped targets by hostname (`host in seen`), but
multiple ingresses can share the same hostname; the bug had already produced
90 duplicate [External] monitors. Changed to dedupe by final monitor name
(`f"{PREFIX}{label}" in seen`), which prevents re-creating duplicates on
every sync run.

## Monitor cleanup
Deleted 118 monitors total:
- 90 duplicate [External] monitors (kept lower ID of each pair)
- 14 paused internal monitors for decommissioned services
- 14 external monitors for non-existent, scaled-down, or non-HTTP services
  (xray-vless, complaints, hermes-agent, etc.)

## Opt-outs
Added `uptime.viktorbarzin.me/external-monitor=false` annotation to ingresses
that shouldn't have external HTTP monitors: xray (non-HTTP protocol),
council-complaints, hermes-agent, task-webhook, torrserver, www (no CF DNS).

329 monitors → ~210 monitors. Zero down monitors expected.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 21:12:31 +00:00
Viktor Barzin
5319f03ebc [storage] Fix owntracks + wealthfolio: switch to encrypted PVCs
Both services were running against empty unencrypted PVCs after the
proxmox-lvm-encrypted migration. Data was copied from the old Released
PVs via LUKS unlock on the PVE host, and the deployments were switched
to the encrypted PVCs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 20:29:57 +00:00
Viktor Barzin
e51bdb2af8 Add broker-sync Terraform stack (#7)
* [f1-stream] Remove committed cluster-admin kubeconfig

## Context
A kubeconfig granting cluster-admin access was accidentally committed into
the f1-stream stack's application bundle in c7c7047f (2026-02-22). It
contained the cluster CA certificate plus the kubernetes-admin client
certificate and its RSA private key. Both remotes (github.com, forgejo)
are public, so the credential has been reachable for ~2 months.

Grep across the repo confirms no .tf / .hcl / .sh / .yaml file references
this path; the file is a stray local artifact, likely swept in during a
bulk `git add`.

## This change
- git rm stacks/f1-stream/files/.config

## What is NOT in this change
- Cluster-admin cert rotation on the control plane. The leaked client cert
  must be invalidated separately via `kubeadm certs renew admin.conf` or
  CA regeneration. Tracked in the broader secrets-remediation plan.
- Git-history rewrite. The file is still reachable in every commit since
  c7c7047f. A `git filter-repo --path ... --invert-paths` pass against a
  fresh mirror is planned and will be force-pushed to both remotes.

## Test plan
### Automated
No tests needed for a file removal. Sanity:
  $ grep -rn 'f1-stream/files/\.config' --include='*.tf' --include='*.hcl' \
       --include='*.yaml' --include='*.yml' --include='*.sh'
  (no output)

### Manual Verification
1. `git show HEAD --stat` shows exactly one path deleted:
     stacks/f1-stream/files/.config | 19 -------------------
2. `test ! -e stacks/f1-stream/files/.config` returns true.
3. A copy of the leaked file is at /tmp/leaked.conf for post-rotation
   verification (confirming `kubectl --kubeconfig /tmp/leaked.conf get ns`
   fails with 401/403 once the admin cert is renewed).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [frigate] Remove orphan config.yaml with leaked RTSP passwords

## Context
A Frigate configuration file was added to modules/kubernetes/frigate/ in
bcad200a (2026-04-15, ~2 days ago) as part of a bulk `chore: add untracked
stacks, scripts, and agent configs` commit. The file contains 14 inline
rtsp://admin:<password>@<host>:554/... URLs, leaking two distinct RTSP
passwords for the cameras at 192.168.1.10 (LAN-only) and
valchedrym.ddns.net (confirmed reachable from public internet on port
554). Both remotes are public, so the creds have been exposed for ~2 days.

Grep across the repo confirms nothing references this config.yaml — the
active stacks/frigate/main.tf stack reads its configuration from a
persistent volume claim named `frigate-config-encrypted`, not from this
file. The file is therefore an orphan from the bulk add, with no
production function.

## This change
- git rm modules/kubernetes/frigate/config.yaml

## What is NOT in this change
- Camera password rotation. The user does not own the cameras; rotation
  must be coordinated out-of-band with the camera operators. The DDNS
  camera (valchedrym.ddns.net:554) is internet-reachable, so the leaked
  password is high-priority to rotate from the device side.
- Git-history rewrite. The file plus its leaked strings remain in all
  commits from bcad200a forward. Scheduled to be purged via
  `git filter-repo --path modules/kubernetes/frigate/config.yaml
  --invert-paths --replace-text <list>` in the broader remediation pass.
- Future Frigate config provisioning. If the stack is re-platformed to
  source config from Git rather than the PVC, the replacement should go
  through ExternalSecret + env-var interpolation, not an inline YAML.

## Test plan
### Automated
  $ grep -rn 'frigate/config\.yaml' --include='*.tf' --include='*.hcl' \
       --include='*.yaml' --include='*.yml' --include='*.sh'
  (no output — confirms orphan status)

### Manual Verification
1. `git show HEAD --stat` shows exactly one deletion:
     modules/kubernetes/frigate/config.yaml | 229 ---------------------------------
2. `test ! -e modules/kubernetes/frigate/config.yaml` returns true.
3. `kubectl -n frigate get pvc frigate-config-encrypted` still shows the
   PVC bound (unaffected by this change).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [setup-tls-secret] Delete deprecated renew.sh with hardcoded Technitium token

## Context
modules/kubernetes/setup_tls_secret/renew.sh is a 2.5-year-old
expect(1) script for manual Let's Encrypt wildcard-cert renewal via
Technitium DNS TXT-record challenges. It hardcodes a 64-char Technitium
API token on line 7 (as an expect variable) and line 27 (inside a
certbot-cleanup heredoc). Both remotes are public, so the token has been
exposed for ~2.5 years.

The script is not invoked by the module's Terraform (main.tf only creates
a kubernetes.io/tls Secret from PEM files); it is a standalone
run-it-yourself tool. grep across the repo confirms nothing references
`renew.sh` — neither the 20+ stacks that consume the `setup_tls_secret`
module, nor any CI pipeline, nor any shell wrapper.

A replacement script `renew2.sh` (4 weeks old) lives alongside it. It
sources the Technitium token from `$TECHNITIUM_API_KEY` env var and also
supports Cloudflare DNS-01 challenges via `$CLOUDFLARE_TOKEN`. It is the
current renewal path.

## This change
- git rm modules/kubernetes/setup_tls_secret/renew.sh

## What is NOT in this change
- Technitium token rotation. The leaked token still works against
  `technitium-web.technitium.svc.cluster.local:5380` until revoked in the
  Technitium admin UI. Rotation is a prerequisite for the upcoming
  git-history scrub, which will remove the token from every commit via
  `git filter-repo --replace-text`.
- renew2.sh is retained as-is (already env-var-sourced; clean).
- The setup_tls_secret module's main.tf is not touched; 20+ consuming
  stacks keep working.

## Test plan
### Automated
  $ grep -rn 'renew\.sh' --include='*.tf' --include='*.hcl' \
       --include='*.yaml' --include='*.yml' --include='*.sh'
  (no output — confirms no consumer)
  $ git grep -n 'e28818f309a9ce7f72f0fcc867a365cf5d57b214751b75e2ef3ea74943ef23be'
  (no output in HEAD after this commit)

### Manual Verification
1. `git show HEAD --stat` shows exactly one deletion:
     modules/kubernetes/setup_tls_secret/renew.sh | 136 ---------
2. `test ! -e modules/kubernetes/setup_tls_secret/renew.sh` returns true.
3. `renew2.sh` still exists and is executable:
     ls -la modules/kubernetes/setup_tls_secret/renew2.sh
4. Next cert-renewal run uses renew2.sh with env-var-sourced token; no
   behavioral regression because renew.sh was never part of the automated
   flow.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [monitoring] Delete orphan server-power-cycle/main.sh with iDRAC default creds

## Context
stacks/monitoring/modules/monitoring/server-power-cycle/main.sh is an old
shell implementation of a power-cycle watchdog that polled the Dell iDRAC
on 192.168.1.4 for PSU voltage. It hardcoded the Dell iDRAC default
credentials (root:calvin) in 5 `curl -u root:calvin` calls. Both remotes
are public, so those credentials — and the implicit statement that 'this
host has not rotated the default BMC password' — have been exposed.

The current implementation is main.py in the same directory. It reads
iDRAC credentials from the environment variables `idrac_user` and
`idrac_password` (see module's iDRAC_USER_ENV_VAR / iDRAC_PASSWORD_ENV_VAR
constants), which are populated from Vault via ExternalSecret at runtime.
main.sh is not referenced by any Terraform, ConfigMap, or deploy script —
grep confirms no `file()` / `templatefile()` / `filebase64()` call loads
it, and no hand-rolled shell wrapper invokes it.

## This change
- git rm stacks/monitoring/modules/monitoring/server-power-cycle/main.sh

main.py is retained unchanged.

## What is NOT in this change
- iDRAC password rotation on 192.168.1.4. The BMC should be moved off the
  vendor default `calvin` regardless; rotation is tracked in the broader
  remediation plan and in the iDRAC web UI.
- A separate finding in stacks/monitoring/modules/monitoring/idrac.tf
  (the redfish-exporter ConfigMap has `default: username: root, password:
  calvin` as a fallback for iDRAC hosts not explicitly listed) is NOT
  addressed here — filed as its own task so the fix (drop the default
  block vs. source from env) can be considered in isolation.
- Git-history scrub of main.sh is pending the broader filter-repo pass.

## Test plan
### Automated
  $ grep -rn 'server-power-cycle/main\.sh\|main\.sh' \
       --include='*.tf' --include='*.hcl' --include='*.yaml' \
       --include='*.yml' --include='*.sh'
  (no consumer references)

### Manual Verification
1. `git show HEAD --stat` shows only the one deletion.
2. `test ! -e stacks/monitoring/modules/monitoring/server-power-cycle/main.sh`
3. `kubectl -n monitoring get deploy idrac-redfish-exporter` still shows
   the exporter running — unrelated to this file.
4. main.py continues to run its watchdog loop without regression, because
   it was never coupled to main.sh.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* [tls] Move 3 outlier stacks from per-stack PEMs to root-wildcard symlink

## Context
foolery, terminal, and claude-memory each had their own
`stacks/<x>/secrets/` directory with a plaintext EC-256 private key
(privkey.pem, 241 B) and matching TLS certificate (fullchain.pem, 2868 B)
for *.viktorbarzin.me. The 92 other stacks under stacks/ symlink
`secrets/` → `../../secrets`, which resolves to the repo-root
/secrets/ directory covered by the `secrets/** filter=git-crypt`
.gitattributes rule — i.e., every other stack consumes the same
git-crypt-encrypted root wildcard cert.

The 3 outliers shipped their keys in plaintext because the
`.gitattributes` `secrets/**` rule matches only the repo-root /secrets/,
not stacks/*/secrets/.
have been exposed for 1–6 weeks (commits 5a988133 2026-03-11,
a6f71fc6 2026-03-18, 9820f2ce 2026-04-10).

Verified:
- Root wildcard cert subject = CN viktorbarzin.me,
  SAN *.viktorbarzin.me + viktorbarzin.me — covers the 3 subdomains.
- Root privkey + fullchain are a valid key pair (pubkey SHA256 match).
- All 3 outlier certs have the same subject/SAN as root; distinct
  cert material but equivalent coverage.

## This change
- Delete plaintext PEMs in all 3 outlier stacks (6 files total).
- Replace each stacks/<x>/secrets directory with a symlink to
  ../../secrets, matching the fleet pattern.
- Add `stacks/**/secrets/** filter=git-crypt diff=git-crypt` to
  .gitattributes as a regression guard — any future real file placed
  under stacks/<x>/secrets/ gets git-crypt-encrypted automatically.

setup_tls_secret module (modules/kubernetes/setup_tls_secret/main.tf) is
unchanged. It still reads `file("${path.root}/secrets/fullchain.pem")`,
which via the symlink resolves to the root wildcard.

## What is NOT in this change
- Revocation of the 3 leaked per-stack certs. Backed up the leaked PEMs
  to /tmp/leaked-certs/ for `certbot revoke --reason keycompromise`
  once the user's LE account is authenticated. Revocation must happen
  before or alongside the history-rewrite force-push to both remotes.
- Git-history scrub. The leaked PEM blobs are still reachable in every
  commit from 2026-03-11 forward. Scheduled for removal via
  `git filter-repo --path stacks/<x>/secrets/privkey.pem --invert-paths`
  (and fullchain.pem for each stack) in the broader remediation pass.
- cert-manager introduction. The fleet does not use cert-manager today;
  this commit matches the existing symlink-to-wildcard pattern rather
  than introducing a new component.

## Test plan
### Automated
  $ readlink stacks/foolery/secrets
  ../../secrets
  (likewise for terminal, claude-memory)

  $ for s in foolery terminal claude-memory; do
      openssl x509 -in stacks/$s/secrets/fullchain.pem -noout -subject
    done
  subject=CN = viktorbarzin.me  (x3 — all resolve via symlink to root wildcard)

  $ git check-attr filter -- stacks/foolery/secrets/fullchain.pem
  stacks/foolery/secrets/fullchain.pem: filter: git-crypt
  (now matched by the new rule, though the repo-root rule already
   applied to the symlink target)

### Manual Verification
1. `terragrunt plan` in stacks/foolery, stacks/terminal, stacks/claude-memory
   shows only the K8s TLS secret being re-created with the root-wildcard
   material. No ingress changes.
2. `terragrunt apply` for each stack → `kubectl -n <ns> get secret
   <name>-tls -o yaml` → tls.crt decodes to CN viktorbarzin.me with
   the root serial (different from the pre-change per-stack serials).
3. `curl -v https://foolery.viktorbarzin.me/` (and likewise terminal,
   claude-memory) → cert chain presents the new serial, handshake OK.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Add broker-sync Terraform stack (pending apply)

Context
-------
Part of the broker-sync rollout — see the plan at
~/.claude/plans/let-s-work-on-linking-temporal-valiant.md and the
companion repo at ViktorBarzin/broker-sync.

This change
-----------
New stack `stacks/broker-sync/`:
- `broker-sync` namespace, aux tier.
- ExternalSecret pulling `secret/broker-sync` via vault-kv
  ClusterSecretStore.
- `broker-sync-data-encrypted` PVC (1Gi, proxmox-lvm-encrypted,
  auto-resizer) — holds the sync SQLite db, FX cache, Wealthfolio
  cookie, CSV archive, watermarks.
- Five CronJobs (all under `viktorbarzin/broker-sync:<tag>`, public
  DockerHub image; no pull secret):
    * `broker-sync-version` — daily 01:00 liveness probe (`broker-sync
      version`), used to smoke-test each new image.
    * `broker-sync-trading212` — daily 02:00 `broker-sync trading212
      --mode steady`.
    * `broker-sync-imap` — daily 02:30, SUSPENDED (Phase 2).
    * `broker-sync-csv` — daily 03:00, SUSPENDED (Phase 3).
    * `broker-sync-fx-reconcile` — 7th of month 05:05, SUSPENDED
      (Phase 1 tail).
- `broker-sync-backup` — daily 04:15, snapshots /data into
  NFS `/srv/nfs/broker-sync-backup/` with 30-day retention, matches
  the convention in infra/.claude/CLAUDE.md §3-2-1.
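
Sketch of one of the CronJob resources (the suspended ones differ only
in name, `schedule`, `args`, and `suspend = true`; assumes the image
entrypoint is the broker-sync binary, and the tag is elided upstream):

    resource "kubernetes_cron_job_v1" "broker_sync_trading212" {
      metadata {
        name      = "broker-sync-trading212"
        namespace = "broker-sync"
      }
      spec {
        schedule = "0 2 * * *" # daily 02:00
        suspend  = false       # imap / csv / fx-reconcile start suspended
        job_template {
          metadata {}
          spec {
            template {
              metadata {}
              spec {
                container {
                  name  = "broker-sync"
                  image = "viktorbarzin/broker-sync:<tag>" # public DockerHub; no pull secret
                  # runs: broker-sync trading212 --mode steady
                  args  = ["trading212", "--mode", "steady"]
                }
              }
            }
          }
        }
      }
    }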

NOT in this commit:
- Old `wealthfolio-sync` CronJob retirement in
  stacks/wealthfolio/main.tf — happens in the same commit that first
  applies this stack, per the plan's "clean cutover" decision.
- Vault seed. `secret/broker-sync` must be populated before apply;
  required keys documented in the ExternalSecret comment block.

Test plan
---------
## Automated
- `terraform fmt` — clean (ran before commit).
- `terraform validate` needs `terragrunt init` first; deferred to
  apply time.

## Manual Verification
1. Seed Vault `secret/broker-sync/*` (see comment block on the
   ExternalSecret in main.tf).
2. `cd stacks/broker-sync && scripts/tg apply`.
3. `kubectl -n broker-sync get cronjob` — expect 6 CJs, 3 suspended.
4. `kubectl -n broker-sync create job smoke --from=cronjob/broker-sync-version`.
5. `kubectl -n broker-sync logs -l job-name=smoke` — expect
   `broker-sync 0.1.0`.

* fix(beads-server): disable Authentik + CrowdSec on Workbench

Authentik forward-auth returns 400 for dolt-workbench (no Authentik
application configured for this domain). CrowdSec bouncer also
intermittently returns 400. Both disabled — Workbench is accessible
via Cloudflare tunnel only.

TODO: Create Authentik application for dolt-workbench.viktorbarzin.me

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 21:17:45 +01:00