Document scans (receipts, contracts, IDs) are unambiguously sensitive
PII. Storage decision rule defaults sensitive data to
`proxmox-lvm-encrypted`, but paperless-ngx was still on plain
`proxmox-lvm`: an abandoned migration attempt had stalled there and left a
dormant, non-Terraform-managed encrypted PVC sitting unbound for 11 days.
Cleaned up the orphan, added the encrypted PVC properly via Terraform,
rsynced data with deployment scaled to 0, swapped claim_name. Plain
`proxmox-lvm` PVC retained for a 7-day soak before removal.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
New panel 16 (barchart, h=11, y=179): one stacked bar per tax year showing
total comp split into net pay (bank deposit), cash income tax, RSU tax
(band-aware marginal: PAYE+NI), cash NI, student loan, pension salary-
sacrifice, and RSU offset (Variant A only).
X-axis = tax_year (categorical); Y-axis unit = currencyGBP. Bar height ≈
gross_pay + pension_sacrifice (small over-attribution in Variant A years
where the band-aware model exceeds recorded payslip PAYE).
Phase 2 of the NFS-hostile migration: data + audit storageClass on
the vault helm release switches from nfs-proxmox to
proxmox-lvm-encrypted, then per-pod rolling swap (24h soak between).
vault-0 swap done. vault-1 + vault-2 still on NFS — the rolling part
is what makes this safe (raft quorum maintained by 2 healthy pods
while one is replaced).
Also restores chart-default pod securityContext fields. The previous
`statefulSet.securityContext.pod = {fsGroupChangePolicy = "..."}`
block REPLACED (not merged) the chart's defaults — fsGroup,
runAsGroup, runAsUser, runAsNonRoot were all silently dropped. NFS
exports were permissive enough to mask the missing fsGroup; ext4 LV
volume root is root:root and the vault user (UID 100) couldn't open
vault.db and went into CrashLoopBackOff. Fix: provide all five fields
explicitly so they survive future chart bumps. vault-1 and vault-2
retained their
correct securityContext from when their pod specs were written to
etcd, before the partial customization landed — the bug only surfaces
when a pod is recreated.
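A quick way to confirm the rendered pod spec carries all five fields
(namespace and object names are assumptions, adjust to the actual release):

  kubectl -n vault get statefulset vault \
    -o jsonpath='{.spec.template.spec.securityContext}'
  # expect fsGroup, fsGroupChangePolicy, runAsGroup, runAsNonRoot and
  # runAsUser all present; a single-field map means the chart defaults
  # were replaced again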
Pre-flight raft snapshot saved at /tmp/vault-pre-migration-*.snap
(recovery anchor).
Refs: code-gy7h
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Replaces the flat 47% (45% PAYE + 2% NI) RSU marginal across panels 3, 7, 8, 11,
and 12 with an exact piecewise band-aware computation. Each row computes
ani_prior/ani_pre/ani_post over the tax-year YTD (chronological model — the
RSU is taxed at the band its YTD ANI position occupies at the vest date,
mirroring PAYE withholding behaviour).
Bands (2024/25+ figures, applied to all years):
IT: 0% to £12,570, 20% to £50,270, 40% to £100k, 60% (PA taper) to
£125,140, 45% above.
NI: 0% to £12,570, 8% to £50,270, 2% above.
PA taper modelled as a 60% effective IT marginal in £100k–£125,140
(40% on the £1 earned + 40% on the £0.50 of lost PA = 60%).
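A minimal psql sketch of the band-overlap arithmetic (ani_pre/ani_post are
the YTD ANI just before and just after the vest, per the model above; the
row source rsu_ytd_rows is a placeholder, not the dashboard's real query):

  psql -c "
    SELECT
        -- income tax on the RSU slice: rate x overlap of [ani_pre, ani_post] with each band
        0.20 * GREATEST(0, LEAST(ani_post,  50270) - GREATEST(ani_pre,  12570))
      + 0.40 * GREATEST(0, LEAST(ani_post, 100000) - GREATEST(ani_pre,  50270))
      + 0.60 * GREATEST(0, LEAST(ani_post, 125140) - GREATEST(ani_pre, 100000))
      + 0.45 * GREATEST(0, ani_post - GREATEST(ani_pre, 125140))  AS rsu_income_tax,
        -- NI on the same slice
        0.08 * GREATEST(0, LEAST(ani_post,  50270) - GREATEST(ani_pre,  12570))
      + 0.02 * GREATEST(0, ani_post - GREATEST(ani_pre,  50270))  AS rsu_ni
    FROM rsu_ytd_rows;"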
Spot-checked per tax-year totals via psql; numbers diverge from the flat
47% baseline most for years where vests cross PA-taper or basic-rate bands
(2020/21 ~35%, 2024/25 ~41%, 2025/26 ~43%).
Live PG data moves to a 10Gi LUKS-encrypted RWO PVC. WAL fsync per
commit on NFS contributed to the 2026-04-22 NFS writeback storm
(2h43m recovery, 3 of 4 nodes hard-reset). Backups remain on NFS
(append-only, NFS-tolerant).
The init container that writes postgresql.override.conf is now gated
on PG_VERSION presence — on a fresh PVC the file would otherwise make
initdb refuse the non-empty PGDATA. First boot skips the override so
initdb runs cleanly; the second boot (after a forced restart) writes the
override so vchord/vectors/pg_prewarm load before the dump restore.
Idempotent on initialised PVCs.
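The gate itself is tiny; a sketch of the shape (the source and destination
paths are assumptions, the PG_VERSION check is the actual change):

  # write the preload override only once the cluster exists; on a fresh
  # PVC initdb must see an empty PGDATA or it refuses to run
  if [ -f "${PGDATA}/PG_VERSION" ]; then
    cat /overrides/postgresql.override.conf > "${PGDATA}/postgresql.override.conf"
  fi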
Migration executed: pg_dumpall (1.9GB) → restore on encrypted PVC →
REINDEX clip_index/face_index → 111,843 assets verified, external
HTTP 200, all 10 extensions present (vector minor 0.8.0→0.8.1 only).
LV created on PVE host, picked up by lvm-pvc-snapshot.
See docs/plans/2026-04-25-nfs-hostile-migration-{design,plan}.md.
Phase 2 (Vault Raft) follows under code-gy7h.
Closes: code-ahr7
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Drop the two misleading series in "Effective rate & take-home % (YTD
cumulative)" — both used SUM(gross_pay) as denominator while only
counting cash deductions/net in the numerator, which understated
take-home by 25-30 pp because RSU shares are absent from the cash
deposit but present in gross. Replaced with three semantically clean
angles:
- ytd_paye_rate_pct: SUM(income_tax) / SUM(taxable_pay) — HMRC audit
rate (~41-42% in additional-rate band), kept as before.
- ytd_cash_take_home_pct: SUM(net_pay) / SUM(gross_pay - rsu_vest) —
what fraction of cash earnings hits the bank (~62-65%).
- ytd_total_keep_pct: (SUM(net_pay) + 0.53 × SUM(rsu_vest)) /
SUM(gross_pay) — true "what I actually keep" including post-tax RSU
shares (47% marginal applied to vest value), ~55-60%.
Added field overrides for clear color-coding (red/green/blue).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Same reasoning as panel 2: cash-side income_tax and NI are inherently
bumpy in vest months due to UK cumulative PAYE catching up on YTD,
and the flat-47% strip can't fix it. Panel now shows only the
explicit RSU vest tax (orange, 47% × rsu_vest), student loan, and
pensions. The smooth view of total cash deductions stays available on
panel 12 (YTD cumulative).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Vest months still bumped 4-5x in this panel after the flat-47% strip
because UK cumulative PAYE genuinely catches up YTD tax in vest
months, on top of the marginal RSU portion — no arithmetic split can
make that line flat without distorting the data. The cash-flow
question this panel answers (what hits the bank, RSU aside) is
already covered cleanly by cash_gross + net_pay; the tax detail lives
on Panel 11 where the RSU split is now linear.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Replace the implicit pro-rata RSU/cash split with an explicit flat
47% marginal (45% PAYE + 2% NI) for the RSU vest tax stack. The orange
slice now scales linearly with rsu_vest instead of wobbling around the
month's effective PAYE rate; cash PAYE/NI slices have those amounts
subtracted out so the stack still totals to actual deductions.
Affects panel 7 (monthly), panel 12 (YTD cumulative), panel 7
(YTD uses), and the Sankey panel. Verified on 35 months of live data:
sum invariant holds exactly (cash + rsu_marginal + cash_ni ==
income_tax + national_insurance), no negatives in cash slices.
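The invariant can be re-derived directly from the recorded columns; a psql
sketch, not the panel SQL (rsu_vest, income_tax, national_insurance and the
payslip table are the names used elsewhere in the dashboard):

  psql -c "
    SELECT bool_and(
             abs( (income_tax         - 0.45 * rsu_vest)   -- cash PAYE slice
                + (national_insurance - 0.02 * rsu_vest)   -- cash NI slice
                +  0.47 * rsu_vest                         -- RSU marginal slice
                - (income_tax + national_insurance)) < 0.01)
    FROM payslip;"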
Out of scope (left raw): effective-rate %, data-integrity, payslip
table, P60/HMRC reconciliation — those are audit views that use
unmodified income_tax / cash_income_tax columns.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
cmd_prune_count's `log " Pruned: ..."` wrote to stdout, which the
caller captures via `pruned=$(cmd_prune_count)`. From 2026-04-16 onward
(when the 7d retention first started pruning), the log lines polluted the
captured value with multi-line text, breaking the Prometheus exposition format
on the metric push (`lvm_snapshot_pruned_total ${pruned}` → 400 from
Pushgateway). Snapshots themselves were always fine; only the metric
push silently failed for ~9 nights, eventually triggering
LVMSnapshotNeverRun (alert has 48h `for:`).
Fix: redirect the inner log call to stderr so cmd_prune_count's stdout
contains only the count. Also adopts `infra/scripts/lvm-pvc-snapshot.sh`
as the source-of-truth (was edited only on the PVE host) and updates
backup-dr.md to point at the .sh and document the scp deploy.
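The shape of the fix (function body abridged and illustrative, only the
stderr redirect is the actual change; expired_snapshots is a hypothetical
helper, log is the script's existing logger):

  cmd_prune_count() {
    local pruned=0
    for snap in $(expired_snapshots); do
      lvremove -y "${snap}"
      log "  Pruned: ${snap}" >&2   # was stdout, which polluted $(cmd_prune_count)
      pruned=$((pruned + 1))
    done
    echo "${pruned}"                # stdout now carries only the count
  }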
Deploy: scp infra/scripts/lvm-pvc-snapshot.sh root@192.168.1.127:/usr/local/bin/lvm-pvc-snapshot
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The tuya-bridge exporter reports `fuse_main_voltage` and
`fuse_garage_voltage` as raw uint16 from the Tuya protocol, which
encodes voltage in deciVolts (e.g. 2352 = 235.2V). The 200/260V
thresholds were comparing against the raw integer, so both
FuseMainVoltageAbnormal and FuseGarageVoltageAbnormal fired
continuously during normal mains conditions.
Dividing the raw value by 10 in the alert expression also makes
`{{ $value }}V` render the correct human-readable value in the alert summary.
Root fix would be in tuya-bridge `_decode_value()` where
`name.startswith("voltage")` returns `int.from_bytes(...)` without the
/10 scaling that `decode_voltage_threshold` applies. Leaving that
alone to avoid breaking the automatic_transfer_switch scrape which
uses a different code path (`parse_voltage_string`).
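A quick sanity check of the corrected expression shape (the Prometheus URL
is an assumption; metric names and thresholds as above):

  curl -sG http://prometheus.monitoring:9090/api/v1/query \
    --data-urlencode 'query=fuse_main_voltage / 10 < 200 or fuse_main_voltage / 10 > 260'
  # empty result under normal mains; the old expr matched constantly
  # because the raw deciVolt reading (~2352) always exceeded 260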
New alert TuyaCloudDown fires when any *_tuya_cloud_up gauge == 0
(i.e., the Tuya Cloud API rejects scrape calls — the symptom during
last night's iot.tuya.com trial expiry, code=28841002). 5m for-duration
beats the 15m window of the seven downstream *MetricsMissing alerts, so
the new Alertmanager inhibit rule suppresses the per-device noise and
only TuyaCloudDown pages.
Also flips helm_release.prometheus.force_update from true to false:
force_update was tripping on the pushgateway PVC added in rev 188
(commit e51c104) — Helm's --force path tried to reset spec.volumeName
on a bound PVC. Disabled here; re-enable temporarily when a
StatefulSet volumeClaimTemplate change actually needs --force.
Bundled with pre-existing working-tree additions for Fuse/Thermostat
threshold alerts and expanded PowerOutage inhibit regex (landed in the
same Helm revision 190).
Verified: rule loaded, value=7 (all 7 tuya-bridge devices report
cloud_up=0 right now), TuyaCloudDown moved pending→firing after 5m,
3 *MetricsMissing alerts currently suppressed in Alertmanager with
inhibitedBy=1 (thermostat alerts still pending their 15m window, will
be suppressed on transition).
The bridge was down 10h 40m on 2026-04-22 without being restarted —
the liveness probe hit `/` (trivial Flask handler) which passed while
the actual Tuya-cloud call path was stuck. /health now reports Tuya
cloud reachability via a background probe in the app; point both
probes at it. Liveness: 60s grace + 6x30s = 3min of 503s before
restart; readiness: 2x15s = 30s before removal from service.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Deployment is scaled to replicas=0 to silence ExternalAccessDivergence,
but the ingress at poison.viktorbarzin.me was still auto-annotated
`external-monitor=true` by ingress_factory (dns_type=non-proxied path),
so external-monitor-sync kept creating `[External] poison` which
probed a backend with no endpoints and flagged DOWN.
Setting `external_monitor = false` emits the explicit opt-out
annotation; next sync run deleted the orphaned monitor.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two independent root-cause fixes surfaced by the 2026-04-22 cluster
health check:
1. Pushgateway lost all in-memory metrics when node3 kubelet hiccuped
at 11:42 UTC, hiding backup_last_success_timestamp{job="offsite-
backup-sync"} until the next 06:01 UTC push — a ~18h false-negative
window. Enable persistence on a 2Gi proxmox-lvm-encrypted PVC with
--persistence.interval=1m. Chart note: values key is
`prometheus-pushgateway:` (subchart alias), not `pushgateway:`.
2. poison-fountain-fetcher CronJob runs curlimages/curl as UID 100
but the NFS mount /srv/nfs/poison-fountain is root:root 755 and
the main Deployment runs as root, so mkdir /data/cache fails
every 6h. Set run_as_user=0 on the CronJob container (no_root_squash
is set on the export).
Closes the backup_offsite_sync FAIL on the next 06:01 UTC offsite
sync; closes the recurring poison-fountain evicted-pod noise on the
next 00:00 UTC cron tick.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When a zone is created against a stale primary IP (e.g. the old primary
pod IP 10.10.36.189 before the technitium-primary ClusterIP service
existed), AXFR refresh keeps failing forever while every other zone on
the same replica refreshes fine from 10.110.37.186. The resync-only
branch didn't touch zone options, so the bad IP was pinned indefinitely.
This surfaced as rpi-sofia.viktorbarzin.lan returning 192.168.1.16
(pre-move) on secondaries while primary had the correct .10 from
2026-04-22 morning — Uptime Kuma Sofia RPI monitor DOWN,
cluster_healthcheck FAIL.
The sync loop now re-applies primaryNameServerAddresses on every run
for existing zones. Idempotent — Technitium accepts identical values
— and self-heals any drift within 30 min. Env renamed PRIMARY_IP →
PRIMARY_HOST for consistency with the reconcile semantics.
Hostname form (technitium-primary.technitium.svc.cluster.local) was
tried but Technitium's own resolver doesn't forward svc.cluster.local,
so the field must stay a literal IP. Terraform tracks the ClusterIP on
every apply and the reconcile loop propagates it to replicas.
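The reconcile step is just an options re-write per zone; a rough sketch
(the endpoint path follows Technitium's HTTP API and is an assumption to
verify against the running version; zones_on_replica is a hypothetical
helper, PRIMARY_HOST comes from the env):

  for zone in $(zones_on_replica); do
    curl -fsS "http://localhost:5380/api/zones/options/set" \
      --data-urlencode "token=${API_TOKEN}" \
      --data-urlencode "zone=${zone}" \
      --data-urlencode "primaryNameServerAddresses=${PRIMARY_HOST}"
  done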
Main container crashes with "mkdir: cannot create directory '/opt/data':
Permission denied". Init container writes fine but main container runs
with different fsGroup/runAsUser. Scaling to 0 until the PVC permission
model is reworked.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
After k8s-node1 was silently cordoned, breaking Frigate camera streams,
the existing alerts (NvidiaExporterDown, PodUnschedulable) didn't catch the
root cause proactively. The new alert fires within 5m of the GPU node being
cordoned, before any restarted pod tries to schedule and fails.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Remove every hardcoded reference to k8s-node1 that pinned GPU
scheduling to a specific host:
- GPU workload nodeSelectors: gpu=true -> nvidia.com/gpu.present=true
(frigate, immich, whisper, piper, ytdlp, ebook2audiobook, audiblez,
audiblez-web, nvidia-exporter, gpu-pod-exporter). The NFD label is
auto-applied by gpu-feature-discovery on any node carrying an
NVIDIA PCI device, so the selector follows the card.
- null_resource.gpu_node_config: rewrite to enumerate NFD-labeled
nodes (feature.node.kubernetes.io/pci-10de.present=true) and taint
each with nvidia.com/gpu=true:PreferNoSchedule. Drop the manual
'kubectl label gpu=true' since NFD handles labeling.
- MySQL anti-affinity: kubernetes.io/hostname NotIn [k8s-node1] ->
nvidia.com/gpu.present NotIn [true]. Same intent (keep MySQL off
the GPU node) but portable when the card relocates.
Net effect: moving the GPU card between nodes no longer requires any
Terraform edit. Verified no-op for current scheduling — both old and
new labels resolve to node1 today.
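Quick check of where the selectors land now (and after any future card
move), straight from kubectl:

  kubectl get nodes -l nvidia.com/gpu.present=true
  kubectl get nodes -l feature.node.kubernetes.io/pci-10de.present=true
  # both should list the same single node (currently k8s-node1); after a
  # card move they follow the new host with no Terraform edit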
Docs updated to match: AGENTS.md, compute.md, overview.md,
proxmox-inventory.md, k8s-portal agent-guidance string.
The initial recovery at 11:03 was premature; vault-1's audit writes over
NFS started hanging ~15 min later and the cluster regressed to 503.
Full recovery required rebooting node4 (to free vault-0's stuck NFS
mount and shed PVE NFS thread contention) and a second reboot of node3
(to clear another round of kernel NFS client degradation). Final
recovery at 11:43:28 UTC with vault-2 as active leader on the quorum
vault-0 + vault-2.
vault-1 remains stuck in ContainerCreating on node2 — a third reboot
(this time of node2) is required for full 3/3 quorum, but 2/3 is
operationally sufficient, so that's deferred.
The 2026-04-22 Vault outage caught kubelet in a 2-minute chown loop that
never exited because the default fsGroupChangePolicy (Always) walks every
file on the NFS-backed data PVC. With retrans=3,timeo=30 NFS options and
a 1GB audit log, the recursive chown outlasted the deadline and restarted
forever — blocking raft quorum recovery. OnRootMismatch makes chown a
no-op when the volume root is already correct, which it always is after
initial setup.
The breakglass fix was applied live via kubectl patch at 10:54 UTC; this
commit persists it in Terraform so the next apply doesn't revert.
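For reference, the breakglass patch had roughly this shape (namespace and
object names are assumptions; Terraform now carries the same field):

  kubectl -n vault patch statefulset vault --type merge -p \
    '{"spec":{"template":{"spec":{"securityContext":{"fsGroupChangePolicy":"OnRootMismatch"}}}}}'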
The post-mortem also documents the upstream raft stuck-leader pattern,
NFS kernel client corruption after force-kill, and the path to migrate
Vault off NFS to proxmox-lvm-encrypted.
RPi now uses USB Ethernet (eth1) as primary uplink at .10 instead of
the old wlan1 address at .16. Camera namespace and DNAT updated to
use eth1 with systemd persistence.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Threshold was 48h plus a 30m `for:` on a job that runs daily. We don't need
to wait 2.5 days to detect a broken timer — bring it down to 30h
+ 30m (just over a day of cadence + minor drift/retry grace). Also
add a description pointing to the restore runbook so the alert
text surfaces the fix path directly.
Threshold change: 172800s → 108000s. Docs in backup-dr.md synced.
Re-triggers default.yml apply now that ci/Dockerfile is rebuilt
with the vault CLI — this is the first commit touching a stack whose
apply will actually succeed since the e80b2f02 regression.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The Woodpecker CI pipeline has been silently failing to apply Tier 1
stacks since the state-migration commit e80b2f02 because the Alpine
CI image never had the vault CLI. `scripts/tg` swallowed stderr with
`2>/dev/null` and surfaced a misleading "Cannot read PG credentials
from Vault" message — the real error was `sh: vault: not found`.
Verified with an in-cluster probe: woodpecker/default SA + role=ci
already gets the terraform-state policy and has read capability on
database/static-creds/pg-terraform-state. Auth was never the problem;
the vault binary just wasn't there.
- ci/Dockerfile: pin vault v1.18.1 (matches server) and install
- scripts/tg: pre-flight check + surface real vault output on failure
  (sketch below)
- Next build-ci-image.yml run rebuilds :latest with vault included;
subsequent default.yml runs unblock monitoring apply (code-aoxk)
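Sketch of the scripts/tg pre-flight (wording in the actual script may
differ):

  # fail loudly if the vault binary is missing; the old 2>/dev/null turned
  # "sh: vault: not found" into a misleading credentials error
  if ! command -v vault >/dev/null 2>&1; then
    echo "ERROR: vault CLI not found in CI image, rebuild ci/Dockerfile" >&2
    exit 1
  fi
  # read creds without discarding stderr so real vault errors surface
  vault read -format=json database/static-creds/pg-terraform-state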
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Doc claimed >40m; actual fire time is 80m (60m last-success threshold +
20m 'for'). The doc was already stale against the pre-existing config and
went further out of date when 'for' was raised from 10m to 20m in
9b4970da. The files are out of sync only on this one alert row.
- HighPowerUsage: add subsystem:gpu (line 724) + subsystem:r730 (line 775)
labels so the two same-named alerts are distinguishable in routing.
- HeadscaleDown (deployment-replicas flavor, line 1414) → rename to
HeadscaleReplicasMismatch. Line 2039 keeps HeadscaleDown as the real
up-metric critical check. NodeDown inhibit rule updated to suppress
the renamed alert too.
- EmailRoundtripStale (line 1816): for 10m → 20m. Survives one missed
20-min probe cycle before firing, cuts flapping (12 short-burst fires
over last 24h).
ATSOverload tuning skipped: its 24h fire-count is 0 because it is
continuously firing rather than flapping — the sustained 83% ATS load is
already known, and tuning would not change behavior.
Audited the 8 backup *NeverSucceeded rules: the 7 that use
kube_cronjob_status_last_successful_time all target real K8s CronJobs with
active metrics (not Pushgateway-sourced). PrometheusBackupNeverRun
already uses absent() correctly. No fixes needed.
Registry integrity probe surfaced 38 broken manifest references
(34 unique repo:tag pairs, same OCI-index orphan pattern as the 04-19
infra-ci incident). Deleted all via registry HTTP API + ran GC;
reclaimed ~3GB blob storage.
beads-server CronJobs were stuck ImagePullBackOff on
claude-agent-service:0c24c9b6 for >6h — bumped variable default to
2fd7670d (canonical tag in claude-agent-service stack, already healthy
in registry) so new ticks can fire.
Rebuilt in-use broken tags: freedify:{latest,c803de02} and
beadboard:{17a38e43,latest} on registry VM; priority-pass via
Woodpecker pipeline #8. wealthfolio-sync:latest deferred (monthly
CronJob, next run 2026-05-01).
Probe now reports 0/39 failures. RegistryManifestIntegrityFailure
alert cleared.
Closes: code-8hk
Closes: code-jh3c
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Primary has only the Primary-type zones it owns (10). Replicas have those
+ built-in zones (localhost, in-addr.arpa reverse, etc.), so their count
(14) can never match primary. The alert expr compared max-min across all
instances, so it fired chronically.
Fix: instance!="primary" filter. The real signal this alert wants is
"did one replica drift from the others" — replica-to-replica comparison
captures that; primary was never comparable.
- HA Sofia token: auto-bootstrap from Vault secret/viktor/haos_api_token when
HOME_ASSISTANT_SOFIA_{URL,TOKEN} env vars are unset. Default URL =
https://ha-sofia.viktorbarzin.me.
- cert-manager: add cert_manager_installed() probe (kubectl get crd
certificates.cert-manager.io). When not installed — which is our current
state — report PASS "N/A" instead of noisy WARN "CRDs unavailable".
- LVM snapshot freshness: the grep pattern was `-snap` (invoked as
`grep -- -snap`) but actual LV names use an underscore (`foo_snap_YYYY...`),
so the grep matched nothing and the check always WARN'd. Fixed to `grep _snap`.
After fix: PASS 36→40, WARN 9→6, FAIL 1→1 (new ha_entities FAIL is a real
HA issue, not a script bug — 400/1401 sensors stale on ha-sofia).
Both alerts fired as expected noise while the MAM account is in new-member
Mouse class — tracker refuses announces and the 72h seed-gate can't be met
until ratio recovers. Keeping the rest of the MAM rules (cookie expiry,
ratio, farming/janitor stalls, qbt disconnect) which still signal real
pipeline failures.
Firing count drops from 7 → 3 in healthcheck.
"Monthly cash flow — tax impact (RSU excluded)" was already stacking
group A in normal mode but rendered as 70%-opacity filled lines — the
overlap made the total-per-month figure visually inaccessible.
Switch drawStyle to bars (100% fill, 0-width lineWidth, no per-point
markers) so each month reads as a single stacked bar whose top edge is
the total cash-side deduction. Add "sum" to legend.calcs so the
tax-year totals per series show in the legend table alongside last and
max.
Panel 11 (Tax & pension — monthly, RSU-inclusive) retains the line/
area style so the two panels remain visually distinct.
Activates the Schwab/InvestEngine IMAP ingest CronJob that's been
scaffolded-but-suspended since Phase 2 of broker-sync, now that the
Schwab parser can detect vest-confirmation emails. Runs nightly 02:30 UK.
Current behaviour once deployed:
- Trade confirmations (Schwab sell-to-cover, InvestEngine orders) →
Activity rows posted to Wealthfolio. Unchanged.
- Release Confirmations (Schwab RSU vests) → parser returns gross-vest
BUY + sell-to-cover SELL Activities (to Wealthfolio) and a VestEvent
object (NOT YET persisted — Postgres sink + DB grant pending; see
follow-up under code-860). Vest detection uses a subject/body
heuristic that will need tightening against a real email fixture.
Panel 15 of the UK payslip dashboard added: per-vest-month join of
payslip.rsu_vest vs rsu_vest_events (gross_value_gbp, tax_withheld_gbp)
with delta columns. Tax-delta-percent coloured green/orange/red at
0/2%/5% thresholds. Table is empty until broker-sync starts persisting
VestEvents — harmless until then.
Before applying:
- Verify IMAP creds in Vault (secret/broker-sync: imap_host,
imap_user, imap_password, imap_directory) are still valid.
- Empty vest-event table is expected; delta columns show NULL until
the postgres sink lands.
Part of: code-860
Wires the daily ActualBudget deposit sync from the payslip-ingest app into
K8s as a CronJob, and adds dashboard Panel 14 to overlay bank deposits
against payslip net_pay.
CronJob: actualbudget-payroll-sync in payslip-ingest namespace, runs
02:00 UTC. Calls `python -m payslip_ingest sync-meta-deposits`, which
hits budget-http-api-viktor in the actualbudget namespace and upserts
matching Meta payroll deposits into payslip_ingest.external_meta_deposits.
ExternalSecret extended with three new Vault keys:
- ACTUALBUDGET_API_KEY (same as actualbudget-http-api-viktor's env API_KEY)
- ACTUALBUDGET_ENCRYPTION_PASSWORD (Viktor's budget password)
- ACTUALBUDGET_BUDGET_SYNC_ID (Viktor's sync_id)
These must be seeded at secret/payslip-ingest in Vault before the
CronJob will run — it'll CrashLoop on missing env vars otherwise. First
run can be triggered on demand via `kubectl -n payslip-ingest create
job --from=cronjob/actualbudget-payroll-sync initial-sync`.
Panel 14 plots monthly SUM(external_meta_deposits.amount) vs
SUM(payslip.net_pay), plus a delta bar series — |delta| > £50 flags
likely parser drift on net_pay.
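The panel query is roughly this shape (a sketch: the deposit_date/pay_date
column names are assumptions; each side is aggregated before the join so
the monthly sums don't multiply):

  psql -c "
    WITH dep AS (SELECT date_trunc('month', deposit_date) AS m, SUM(amount) AS deposits
                 FROM payslip_ingest.external_meta_deposits GROUP BY 1),
         pay AS (SELECT date_trunc('month', pay_date) AS m, SUM(net_pay) AS net_pay
                 FROM payslip GROUP BY 1)
    SELECT m, deposits, net_pay, deposits - net_pay AS delta
    FROM dep JOIN pay USING (m) ORDER BY m;"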
Part of: code-860
Phase A of RSU tax spike fix. Two changes:
1. Panel 2 "Monthly cash flow (RSU stripped)" plotted raw income_tax despite
the title. Switch to COALESCE(cash_income_tax, income_tax) so the chart
is honest once the Phase B back-fill populates cash_income_tax on
variant-A slips. For slips where cash_income_tax is already populated
(variant B, 2024+) the spike is removed immediately.
2. Panel 4 "Data integrity" now surfaces rows where cash_income_tax is NULL
on vest months (rsu_vest > 0). New status value NULL_CASH_TAX (orange)
highlights the remaining back-fill population — expected to drop to 0
after Phase B lands.
Part of: code-860
Bring the architecture doc set in line with what's actually deployed after
today's registry reliability work (commits 7cb44d72 → 42961a5f):
- docs/architecture/ci-cd.md: expand Infra Pipelines table with
build-ci-image (+ verify-integrity step), registry-config-sync,
pve-nfs-exports-sync, postmortem-todos, drift-detection,
issue-automation, provision-user. Note registry:2.8.3 pin +
integrity probe in the image-registry flow section.
- docs/architecture/monitoring.md: add Registry Integrity Probe to
components table; add 3-alert section (Manifest Integrity Failure /
Probe Stale / Catalog Inaccessible).
- .claude/CLAUDE.md: a one-line note on the pin, the auto-sync pipeline, and the
revision-link-not-blob rule so the next agent knows the right check.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Locally-built image pushed to registry.viktorbarzin.me/job-hunter:9c42eac9
after a Woodpecker v3.13 Forgejo webhook-parsing bug left CI unable to build
the initial image (server/forge/forgejo/helper.go:57 nil pointer panic on
parse — repaired webhooks still don't trigger pipelines).
Unblocks code-97n (TF apply) without waiting for CI recovery.
Refs: code-snp, code-0c6
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The original index-child scan checked if the child's blob data file
existed under /blobs/sha256/<child>/data. That's wrong in a subtle
way: registry:2 serves a per-repo manifest via the link file at
<repo>/_manifests/revisions/sha256/<child-digest>/link, NOT by blob
presence. When cleanup-tags.sh rmtrees a tag, the per-repo revision
links for its index's children also disappear — but the blob data
survives (GC owns that, and runs weekly). Result: blob present,
link absent, API 404 on HEAD — the exact 2026-04-19 failure mode.
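The corrected check, roughly (the repository root is the stock registry:2
filesystem layout; adjust to the volume mount on 10.0.20.10; repo/child are
illustrative variables):

  root=/var/lib/registry/docker/registry/v2
  link="${root}/repositories/${repo}/_manifests/revisions/sha256/${child}/link"
  if [ ! -f "${link}" ]; then
    echo "orphan child: ${repo}@sha256:${child} (blob presence is irrelevant)"
  fi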
Live proof: the registry-integrity-probe CronJob just found 38 real
orphan children (including 98f718c8 from the original incident) while
the previous fix-broken-blobs.sh scan reported 0. After the fix, both
tools agree. The probe had been authoritative all along; the scan was
producing false negatives because it was asking the wrong question.
Post-mortem updated to reflect the true mechanism (link-file absence,
not blob deletion).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Vertical orange markers at every vest month added more visual noise
than signal. Panel 13 (cash-only) already conveys the "no spike on
vest months" story without needing markers across panels 1/2/3/7/11/12.
Replaces the manual scp+bounce sequence that landed registry:2.8.3 on
10.0.20.10 today (see commit 7cb44d72 + nginx-DNS-trap in runbook).
Addresses the "no repeat manual fixes" preference — future changes to
docker-compose.yml / fix-broken-blobs.sh / nginx_registry.conf /
config-private.yml / cleanup-tags.sh now deploy through CI.
Pipeline (.woodpecker/registry-config-sync.yml) mirrors
pve-nfs-exports-sync.yml: ssh-keyscan pin, scp the whole managed set,
bounce compose only when compose-visible files changed, always restart
nginx after a compose bounce (critical — nginx caches upstream DNS), end
with a dry-run fix-broken-blobs.sh to catch regressions.
Credentials:
- Woodpecker repo-secret `registry_ssh_key` (events: push, manual)
- Mirror at Vault `secret/woodpecker/registry_ssh_key`
(private_key / public_key / known_hosts_entry)
- Public key on /root/.ssh/authorized_keys on 10.0.20.10
- Key label: woodpecker-registry-config-sync
Runbook updated with "Auto-sync pipeline" section pointing at the new
flow + manual override command.
Closes: code-3vl
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Panel 11 stacks RSU-attributed income tax on top of cash PAYE, which
is mathematically correct but emotionally misleading since RSU tax is
withheld at source via sell-to-cover and never hits the bank. Adopts
the two-view convention: Panel 11 keeps the full PAYE picture; new
Panel 13 shows cash-only deductions. Dashboard-level "RSU vests"
annotation paints orange markers on every vest month across all
timeseries panels, with tooltips like "RSU vest: £31232 gross /
£15257 tax withheld".
Shifts Panels 4/5/6/8/9/10 down by 9 rows to make room for Panel 13
at y=29.