Commit graph

126 commits

Viktor Barzin
a04335d0f3 right-size 14 services and scale down GPU-heavy workloads [ci skip]
Memory right-sizing based on VPA upperBound analysis:
- Increases: stirling-pdf 1200→1536Mi, claude-memory 64→128Mi,
  dawarich 512→768Mi, kyverno-cleanup 128→192Mi, linkwarden 768→1Gi,
  navidrome 64→128Mi, listenarr 768→896Mi, privatebin 64→128Mi,
  ntfy 64→128Mi, health 128→256Mi, dbaas quota 16→20Gi,
  mysql-operator 384→512Mi
- Decreases: rybbit 768→384Mi, nvidia-exporter added explicit 192Mi,
  dcgm-exporter 2560→1536Mi
- Scale to 0: ebook2audiobook/audiblez-web, whisper (GPU node pressure)

Net effect: -496Mi cluster-wide, 13 ContainerNearOOM alerts resolved,
all ResourceQuota pressures cleared, GPU health green.
2026-03-15 23:00:49 +00:00
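
A right-sizing pass like this is usually driven by recommendation-only VPA objects. A minimal sketch of one (target and namespace hypothetical); with updateMode "Off" the VPA only publishes recommendations, including the upperBound values cited above, in its status:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: stirling-pdf
  namespace: stirling-pdf
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stirling-pdf
  updatePolicy:
    updateMode: "Off"   # recommend only; never evict or resize pods
```

The recommendations then appear under .status.recommendation.containerRecommendations.
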
Viktor Barzin
50620e6047 add generic multi-user cluster onboarding system
Data-driven user onboarding: add a JSON entry to Vault KV k8s_users,
apply vault + platform + woodpecker stacks, and everything is auto-generated.

Vault stack: namespace creation, per-user Vault policies with secret isolation
via identity entities/aliases, K8s deployer roles, CI policy update.

Platform stack: domains field in k8s_users type, TLS secrets per user namespace,
user domains merged into Cloudflare DNS, user-roles ConfigMap mounted in portal.

Woodpecker stack: admin list auto-generated from k8s_users, WOODPECKER_OPEN=true.

K8s-portal: dual-track onboarding (general/namespace-owner), namespace-owner
dashboard with Vault/kubectl commands, setup script adds Vault+Terraform+Terragrunt,
contributing page with CI pipeline template, versioned image tags in CI pipeline.

New: stacks/_template/ with copyable stack template for namespace-owners.
2026-03-15 22:23:36 +00:00
Viktor Barzin
0f262ceda3 add pod dependency management via Kyverno init container injection
Kyverno ClusterPolicy reads dependency.kyverno.io/wait-for annotation
and injects busybox init containers that block until each dependency
is reachable (nc -z). Annotations added to 18 stacks (24 deployments).

Includes graceful-db-maintenance.sh script for planned DB maintenance
(scales dependents to 0, saves replica counts, restores on startup).
2026-03-15 19:17:57 +00:00
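
A simplified sketch of the mutate policy described above, assuming a single host:port value in the annotation (per the commit, the real policy injects one init container per dependency):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-dependency-wait
spec:
  rules:
    - name: add-wait-init-container
      match:
        any:
          - resources:
              kinds:
                - Pod
      preconditions:
        all:
          - key: '{{ request.object.metadata.annotations."dependency.kyverno.io/wait-for" || '''' }}'
            operator: NotEquals
            value: ""
      mutate:
        patchStrategicMerge:
          spec:
            initContainers:
              - name: wait-for-dependency
                image: busybox
                command:
                  - sh
                  - -c
                  - |
                    # block pod startup until the dependency answers on its port
                    DEP='{{ request.object.metadata.annotations."dependency.kyverno.io/wait-for" }}'
                    until nc -z "${DEP%:*}" "${DEP#*:}"; do echo "waiting for ${DEP}"; sleep 2; done
```
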
Viktor Barzin
f0312df2be fix gpu-workload Kyverno policy: use replace with explicit priority value
The API server doesn't re-resolve priority from priorityClassName after
webhook mutation. Changed the patch from remove+add to a replace with
explicit priority=1200000 and preemptionPolicy=PreemptLowerPriority.
2026-03-15 19:11:44 +00:00
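
In Kyverno terms that is a JSON patch with replace ops; a sketch of the relevant mutate block:

```yaml
mutate:
  patchesJson6902: |-
    - op: replace
      path: /spec/priority
      value: 1200000
    - op: replace
      path: /spec/preemptionPolicy
      value: PreemptLowerPriority
```
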
Viktor Barzin
1acf8cc4e8 migrate consuming stacks to ESO + remove k8s-dashboard static token
Phase 9: ExternalSecret migration across 26 stacks:

Fully migrated (vault data source removed, ESO delivers secrets):
- speedtest, shadowsocks, wealthfolio, plotting-book, f1-stream, tandoor
- n8n, dawarich, diun, netbox, onlyoffice, tuya-bridge
- hackmd (ESO template for DB URL), health (ESO template for DB URL)
- trading-bot (ESO template for DATABASE_URL + 7 secret env vars)
- forgejo (removed unused vault data source)

Partially migrated (vault kept for plan-time, ESO added for runtime):
- immich, linkwarden, nextcloud, paperless-ngx (jsondecode for homepage)
- claude-memory, rybbit, url, webhook_handler (plan-time in locals/jobs)
- woodpecker, openclaw, resume (plan-time in helm values/jobs/modules)

17 stacks unchanged (all plan-time: homepage annotations, configmaps,
module inputs) — vault data source works with OIDC auth.

Phase 17a: Remove k8s-dashboard static admin token secret.
Users now get tokens via: vault write kubernetes/creds/dashboard-admin
2026-03-15 19:05:04 +00:00
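
For reference, the ESO template pattern used above for the DB URLs looks roughly like this (store, key, and property names hypothetical):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: hackmd-db
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend
  target:
    name: hackmd-db
    template:
      data:
        # assemble the full connection URL from individual Vault keys
        CMD_DB_URL: "mysql://{{ .username }}:{{ .password }}@mysql.dbaas:3306/hackmd"
  data:
    - secretKey: username
      remoteRef:
        key: hackmd
        property: db_username
    - secretKey: password
      remoteRef:
        key: hackmd
        property: db_password
```
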
Viktor Barzin
90b7d6ebb5 etcd defrag cronjob: add --command-timeout=60s
Default 5s timeout causes defrag to fail on fragmented DBs.
Discovered during manual defrag that took ~7s.
2026-03-15 17:24:24 +00:00
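
The resulting job, sketched; the weekly schedule comes from the commit below, and the image and cert paths assume a kubeadm control plane:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-defrag
  namespace: kube-system
spec:
  schedule: "0 3 * * 0"   # Sunday 3 AM, per the next commit
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true   # reach etcd on 127.0.0.1
          nodeSelector:
            node-role.kubernetes.io/control-plane: ""
          tolerations:
            - key: node-role.kubernetes.io/control-plane
              effect: NoSchedule
          restartPolicy: OnFailure
          containers:
            - name: defrag
              image: quay.io/coreos/etcd:v3.5.9   # version assumed
              command:
                - etcdctl
                - defrag
                - --endpoints=https://127.0.0.1:2379
                - --cacert=/etc/kubernetes/pki/etcd/ca.crt
                - --cert=/etc/kubernetes/pki/etcd/server.crt
                - --key=/etc/kubernetes/pki/etcd/server.key
                - --command-timeout=60s   # default 5s fails on fragmented DBs
              volumeMounts:
                - name: etcd-pki
                  mountPath: /etc/kubernetes/pki/etcd
                  readOnly: true
          volumes:
            - name: etcd-pki
              hostPath:
                path: /etc/kubernetes/pki/etcd
```
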
Viktor Barzin
c034adab5f mitigate cluster instability during terraform applies
- Recreate strategy for heavy single-replica deployments (onlyoffice, stirling-pdf)
- Reduce maxSurge on multi-replica deployments (traefik, authentik, grafana, kyverno)
  to prevent memory request surge overwhelming scheduler
- Weekly etcd defrag CronJob (Sunday 3 AM) to prevent fragmentation buildup
- Disable Kyverno policy reports (ephemeral report cleanup)
- Cloud-init: journald persistence + 4Gi swap for worker nodes
- Kubelet: LimitedSwap behavior for memory pressure relief
2026-03-15 17:23:39 +00:00
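
The two rollout-strategy changes, as deployment spec fragments (maxSurge/maxUnavailable values illustrative):

```yaml
# heavy single-replica deployments (onlyoffice, stirling-pdf):
strategy:
  type: Recreate   # old pod is gone before the new one starts,
                   # so memory is never requested twice
---
# multi-replica deployments (traefik, authentik, grafana, kyverno):
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 0          # fewer surge pods during rollout
    maxUnavailable: 1
```
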
Viktor Barzin
5beb481dc4 fix immich TF drift from Kyverno ndots injection, right-size nvidia GPU operator
- immich: add lifecycle ignore_changes for dns_config on all 3 deployments
  to prevent perpetual plan drift from Kyverno ndots:2 mutation policy
- nvidia dcgm-exporter: 768Mi → 2560Mi (VPA upper 2091Mi, was under-provisioned)
- nvidia cuda-validator: 1024Mi → 256Mi (one-shot job, vastly over-provisioned)
2026-03-15 15:36:19 +00:00
Viktor Barzin
a6d281dbc6 vaultwarden: upgrade to 1.35.4, use Recreate strategy
- Upgrade from 1.35.2 to 1.35.4 (fixes API key login userDecryptionOptions bug)
- Switch deployment strategy from RollingUpdate to Recreate (iSCSI PVC can't multi-attach)
2026-03-15 15:35:09 +00:00
Viktor Barzin
194281e527 right-size cluster memory: reduce overprovisioned, fix under-provisioned services
Phase 1 - Quick wins (~4.5 Gi saved):
- democratic-csi: add explicit sidecar resources (64-80Mi vs 256Mi LimitRange default)
- caretta: 768Mi → 600Mi (VPA upper 485Mi)
- immich-ml: 4Gi → 3584Mi (VPA upper 2.95Gi, GPU margin)
- onlyoffice: 3Gi → 2304Mi (VPA upper 1.82Gi)

Phase 2 - Safety fixes (prevent OOMKills):
- frigate: 2Gi/8Gi → 5Gi/10Gi (VPA upper 7.7Gi, was 4% headroom)
- openclaw: 1280Mi req → 2Gi req=limit (documented 2Gi requirement)

Phase 3 - Additional right-sizing:
- authentik workers: 1Gi → 896Mi x3 (VPA upper 722Mi)
- shlink: 512Mi/768Mi → 960Mi req=limit (VPA upper 780Mi, safety increase)

Phase 4 - Burstable QoS for lower tiers:
- tier-3-edge: 128Mi/128Mi → 96Mi req / 192Mi limit
- tier-4-aux: 128Mi/128Mi → 64Mi req / 256Mi limit

Phase 5 - Monitoring:
- Add ClusterMemoryRequestsHigh alert (>85% allocatable, 15m)
- Add ContainerNearOOM alert (>85% limit, 30m)
- Add PodUnschedulable alert (5m, critical)

Cluster: 92.7% → 90.8% memory requests. Stirling-pdf now schedulable.
2026-03-15 15:30:18 +00:00
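
A sketch of the ContainerNearOOM rule added in Phase 5, assuming standard cAdvisor and kube-state-metrics series:

```yaml
- alert: ContainerNearOOM
  expr: |
    max by (namespace, pod, container) (
      container_memory_working_set_bytes{container!=""}
    )
      /
    max by (namespace, pod, container) (
      kube_pod_container_resource_limits{resource="memory"}
    ) > 0.85
  for: 30m
  labels:
    severity: warning
```
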
Viktor Barzin
b219961bd8 fix: MySQL memory overcommit + shlink OOMKill
- dbaas: MySQL requests 4Gi -> 2Gi (limits stay 4Gi) to free 6Gi
  of request capacity. Actual usage is 1-1.5Gi per instance.
- url/shlink: increase memory limit 512Mi -> 768Mi (OOMKilled)
2026-03-15 03:22:07 +00:00
Viktor Barzin
52bce849d7 fix cluster health: resolve 21/23 failures from healthcheck
- nvidia: change GPU taint NoSchedule -> PreferNoSchedule to allow
  overflow scheduling on k8s-node1 (frees ~7Gi capacity)
- kyverno: increase reports-controller memory 256Mi -> 512Mi (OOMKilled)
- speedtest: add missing DB_PORT=3306 env var (nc: service "" unknown)
- realestate-crawler: increase API memory 64Mi -> 256Mi (OOMKilled)
- calibre: increase liveness probe timeout 1s -> 5s (false restarts)
2026-03-15 02:33:46 +00:00
Viktor Barzin
fca4e02c54 prometheus: increase memory to 4Gi and probe delays for TSDB compaction
Compaction of 5 years of TSDB blocks was OOM-killing Prometheus at 3Gi
(18 restarts in 8h), causing sustained IO pressure on the PVE host's
spinning disk. Increase the liveness probe delay to 300s so WAL replay
completes before the probe kills the pod.
2026-03-15 01:44:52 +00:00
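
On the Prometheus container that amounts to something like (timeout value illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /-/healthy   # Prometheus health endpoint
    port: 9090
  initialDelaySeconds: 300   # let WAL replay finish before the probe can kill the pod
  timeoutSeconds: 10
```
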
Viktor Barzin
43b49f7f6c cluster recovery: fix resource limits and node1 memory
- nvidia quota: requests.memory 8Gi → 12Gi (unblock cuda-validator)
- calibre: startup probe initial_delay 60→120s, timeout 1→5s,
  wait_for_rollout=false (DOCKER_MODS install takes 10+ min)
- immich ML: memory 2Gi → 4Gi (OOMKilled loading CLIP models)

Also done outside TF (not in this commit):
- node1 VM: 16 GiB → 24 GiB RAM (Proxmox)
- tigera-operator: kubectl patch 128→256Mi
- nvidia-driver-daemonset: kubectl patch 1→4Gi memory
- kyverno reports-controller: kubectl patch 128→256Mi
- CNPG operator: kubectl rollout restart
2026-03-15 01:44:28 +00:00
Viktor Barzin
29032e0b6b fix vaultwarden backup image: use docker.io/library/alpine for Kyverno 2026-03-15 00:16:31 +00:00
Viktor Barzin
6f562b5da6 add vaultwarden daily backup CronJob to NFS
SQLite backup via Online Backup API + copy of RSA keys,
attachments, sends, and config. 30-day retention with rotation.
Pod affinity ensures co-scheduling with vaultwarden for RWO PVC access.
2026-03-15 00:03:59 +00:00
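
A sketch of such a backup job, with hypothetical volume names and NFS details; the podAffinity that co-schedules it with vaultwarden is omitted for brevity, and the image matches the fix in the commit above:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vaultwarden-backup
spec:
  schedule: "0 2 * * *"   # illustrative daily schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: docker.io/library/alpine
              command:
                - sh
                - -c
                - |
                  apk add --no-cache sqlite
                  # consistent snapshot via the SQLite Online Backup API
                  sqlite3 /data/db.sqlite3 ".backup '/backup/db-$(date +%F).sqlite3'"
                  cp -a /data/rsa_key* /data/attachments /data/sends /data/config.json /backup/
                  # 30-day retention
                  find /backup -name 'db-*.sqlite3' -mtime +30 -delete
              volumeMounts:
                - name: data
                  mountPath: /data
                  readOnly: true
                - name: backup
                  mountPath: /backup
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: vaultwarden-data
            - name: backup
              nfs:
                server: nas.example.com
                path: /backups/vaultwarden
```
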
Viktor Barzin
92cc3f01c1 migrate vaultwarden storage from NFS to iSCSI
SQLite on NFS causes DB corruption due to unreliable POSIX fcntl locking.
iSCSI provides a block device with a local filesystem where locking works
correctly. Same approach used for Redis, MySQL, PostgreSQL, etc.
2026-03-14 23:42:17 +00:00
Viktor Barzin
23019da8e5 equalize memory req=lim across 70+ containers using Prometheus 7d max data
After node2 OOM incident, right-size memory across the cluster by setting
requests=limits based on max_over_time(container_memory_working_set_bytes[7d])
with 1.3x headroom. Eliminates ~37Gi overcommit gap.

Categories:
- Safe equalization (50 containers): set req=lim where max7d well within target
- Limit increases (8 containers): raise limits for services spiking above current
- No Prometheus data (12 containers): conservatively set lim=req
- Exception: nextcloud keeps req=256Mi/lim=8Gi due to Apache memory spikes

Also increased dbaas namespace quota from 12Gi to 16Gi to accommodate mysql
4Gi limits across 3 replicas.
2026-03-14 21:46:49 +00:00
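
The sizing query behind this pass, expressed as a hypothetical recording rule:

```yaml
groups:
  - name: memory-right-sizing
    rules:
      - record: container:memory_max_7d_with_headroom:bytes
        expr: |
          max_over_time(container_memory_working_set_bytes{container!=""}[7d]) * 1.3
```
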
Viktor Barzin
2be858f616 fix: eliminate memory overcommit to prevent node OOM crashes
Set requests = limits (Guaranteed QoS) across LimitRange defaults and
explicit pod resources. Node2 crashed on 2026-03-14 from 250% memory
overcommit (61GB of limits on a 24GB node).

Changes:
- LimitRange: default = defaultRequest for all 6 tiers
- Grafana: 3 → 2 replicas
- Grampsweb: document why replicas=0
- Prometheus: 1Gi/4Gi → 3Gi/3Gi
- OpenClaw: 512Mi/2Gi → 768Mi/768Mi
- Immich server: 256Mi/2Gi → 512Mi/512Mi
- Immich postgresql: 256Mi/1Gi → 512Mi/512Mi
- Calibre: 256Mi/1536Mi → 256Mi/256Mi
- Linkwarden: 256Mi/1536Mi → 768Mi/768Mi
- N8N: 256Mi/1Gi → 512Mi/512Mi
- MySQL cluster: 1Gi/3-4Gi → 2Gi/2Gi
- pg-cluster (CNPG): 512Mi/4Gi → 512Mi/512Mi
- DBaaS ResourceQuota limits.memory: 64Gi → 12Gi

[ci skip]
2026-03-14 16:01:41 +00:00
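
The LimitRange pattern for one tier (memory values illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: tier-defaults
spec:
  limits:
    - type: Container
      defaultRequest:
        memory: 256Mi
      default:
        memory: 256Mi   # equal to defaultRequest, so defaulted pods get Guaranteed QoS for memory
```
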
Viktor Barzin
44aa6d61c2 Reduce downtime during platform stack applies
CrowdSec fixes:
- Increase ResourceQuota requests.cpu 1→4 (was at 302%, blocking upgrades)
- Add LAPI startupProbe: 30 attempts × 10s = 5min startup window
  (LAPI pods were failing default startup probe during rolling upgrades)
- Reduce Helm timeout 3600s→900s with wait=true, wait_for_jobs=true

Prometheus startup guard on 8 rate-based alerts:
- PodCrashLooping, ContainerOOMKilled, CoreDNSErrors,
  HighServiceErrorRate, HighService4xxRate, HighServiceLatency,
  SSDHighWriteRate, HDDHighWriteRate
- Suppresses false positives for 15m after Prometheus restart
2026-03-14 12:47:56 +00:00
Viktor Barzin
4ea3ffe9d3 Reduce downtime during platform stack applies
CrowdSec Helm fix:
- Increase ResourceQuota requests.cpu from 1 to 4 — pods were at 302%
  of quota, preventing scheduling during rolling upgrades
- Reduce Helm timeout from 3600s to 600s — 1 hour hang is excessive
- Add wait=true and wait_for_jobs=true for proper readiness checking

Prometheus startup guard:
- Add startup guard to 8 rate/increase-based alerts that false-fire
  after Prometheus restarts (needs 2 scrapes for rate() to work):
  PodCrashLooping, ContainerOOMKilled, CoreDNSErrors,
  HighServiceErrorRate, HighService4xxRate, HighServiceLatency,
  SSDHighWriteRate, HDDHighWriteRate
- Guard: and on() (time() - process_start_time_seconds) > 900
  suppresses alerts for 15m after Prometheus startup
2026-03-14 12:09:09 +00:00
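
Applied to one of the eight alerts, the guard looks like this (the job selector on the right-hand side is an assumption):

```yaml
- alert: PodCrashLooping
  expr: |
    rate(kube_pod_container_status_restarts_total[15m]) > 0
    and on ()
    (time() - process_start_time_seconds{job="prometheus"}) > 900
  for: 15m   # illustrative
```
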
Viktor Barzin
17065304dc Fix NFSServerUnresponsive false positives
Root cause: sum(rate(node_nfs_requests_total[5m])) == 0 was too fragile:
- rate() returns nothing after Prometheus restarts (needs 2 scrapes)
- Individual nodes show zero NFS rate during scrape gaps or low activity
- The sum() could hit zero during quiet hours + scrape gaps

New expression uses:
- changes() instead of rate() — works with a single scrape
- Per-instance aggregation: count nodes with any NFS counter change
- Threshold < 2 nodes: single-node restarts won't trigger, real NFS
  outage (all nodes affected) will
- Prometheus startup guard: skip first 15m after restart to avoid
  false positives from empty TSDB
- Wider 15m changes() window to smooth out scrape gaps
2026-03-14 11:28:17 +00:00
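
Putting those pieces together, the new expression is along these lines (a sketch; the `or vector(0)` matters because count() over an empty vector returns nothing rather than 0, which is exactly the all-nodes-down case):

```yaml
- alert: NFSServerUnresponsive
  expr: |
    (
      # nodes whose NFS request counters changed in the last 15m
      count(sum by (instance) (changes(node_nfs_requests_total[15m])) > 0)
      or vector(0)
    ) < 2
    and on ()
    (time() - process_start_time_seconds{job="prometheus"}) > 900
  for: 5m   # illustrative
```
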
Viktor Barzin
6377a8b85b Monitoring overhaul: reduce noise, add coverage gaps, auto-load dashboards
Noise reduction (8 alerts tuned):
- PoisonFountainDown: 2m→5m, critical→warning (fail-open service)
- NodeExporterDown: 2m→5m (flaps during node restarts)
- PowerOutage: add for:1m (debounce transient voltage dips)
- New Tailscale client: add for:5m (debounce headscale reauths)
- NoNodeLoadData: use absent() instead of OR vector(0)==0
- NodeHighCPUUsage: 30%→60% (normal for 70+ services)
- HighMemoryUsage GPU: 12GB/5m→14GB/15m (T4=16GB, model loading)
- PrometheusStorageFull: 50GiB→150GiB (TSDB cap is 180GB)

Alert regrouping:
- Move MailServerDown, HackmdDown, PrivatebinDown → new "Application Health"
- Move New Tailscale client → "Infrastructure Health"

New alerts (14):
- Networking: Cloudflared (2), MetalLB (2), Technitium DNS
- Storage: NFS CSI, iSCSI CSI controllers
- Critical Services: PgBouncer, CNPG operator, MySQL operator
- Infra Health: CrowdSec, Kyverno, Sealed Secrets, Woodpecker

Inhibit rules:
- Consolidate 3 NodeDown rules into 1 comprehensive rule
- Extend NFS rule to suppress NFS-dependent services
- Add PowerOutage → downstream suppression

Dashboard loading:
- Add for_each ConfigMap in grafana.tf to auto-load all 18 dashboards
- Remove duplicate caretta dashboard ConfigMap from caretta.tf
2026-03-14 10:25:31 +00:00
Viktor Barzin
2102cb2d73 Right-size CPU requests cluster-wide and remove missed CPU limits
Increase requests for under-requested pods (dashy 50m→250m, frigate 500m→1500m,
clickhouse 100m→500m, otp 100m→300m, linkwarden 25m→50m, authentik worker 50m→100m).

Reduce requests for over-requested pods (crowdsec agent/lapi 500m→25m each,
prometheus 200m→100m, dbaas mysql 1800m→100m, pg-cluster 250m→50m,
shlink-web 250m→10m, gpu-pod-exporter 50m→10m, stirling-pdf 100m→25m,
technitium 100m→25m, celery 50m→15m). Reduce crowdsec quota from 8→1 CPU.

Remove missed CPU limits in prometheus (cpu: "2") and dbaas (cpu: "3600m") tpl files.
2026-03-14 09:22:24 +00:00
Viktor Barzin
b00f810d3d Remove all CPU limits cluster-wide to eliminate CFS throttling
CPU limits cause CFS throttling even when nodes have idle capacity.
Move to a request-only CPU model: keep CPU requests for scheduling
fairness but remove all CPU limits. Memory limits stay (incompressible).

Changes across 108 files:
- Kyverno LimitRange policy: remove cpu from default/max in all 6 tiers
- Kyverno ResourceQuota policy: remove limits.cpu from all 5 tiers
- Custom ResourceQuotas: remove limits.cpu from 8 namespace quotas
- Custom LimitRanges: remove cpu from default/max (nextcloud, onlyoffice)
- RBAC module: remove cpu_limits variable and quota reference
- Freedify factory: remove cpu_limit variable and limits reference
- 86 deployment files: remove cpu from all limits blocks
- 6 Helm values files: remove cpu under limits sections
2026-03-14 08:51:45 +00:00
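
After this change a typical container resources block looks like:

```yaml
resources:
  requests:
    cpu: 100m       # kept for scheduling fairness
    memory: 256Mi
  limits:
    memory: 256Mi   # memory limit kept (incompressible);
                    # no cpu limit, so no CFS throttling
```
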
Viktor Barzin
c8d15adc16 Remove LokiDown alert rule and inhibit reference
Loki has been turned off — remove the orphaned alert rule and its
reference in the NodeDown inhibit configuration.
2026-03-13 22:21:11 +00:00
Viktor Barzin
b323e567e4 Add HAProxy for Redis HA master-only routing
The Redis K8s Service was load-balancing across both master and replica
nodes, causing READONLY errors when clients hit the replica. This broke
Nextcloud (DAV 500s, liveness probe timeouts, crash loops) and
potentially other services.

Replace the direct Service with HAProxy (2 replicas) that health-checks
each Redis node via INFO replication and only routes to role:master.
On Sentinel failover, HAProxy detects the new master within ~9 seconds.
2026-03-13 22:21:10 +00:00
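
The classic HAProxy recipe for Redis master-only routing, sketched as a ConfigMap (addresses hypothetical; assumes no Redis AUTH):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-haproxy-config
data:
  haproxy.cfg: |
    defaults
        mode tcp
        timeout connect 3s
        timeout client 30s
        timeout server 30s
    frontend redis
        bind *:6379
        default_backend redis_master
    backend redis_master
        option tcp-check
        tcp-check send info\ replication\r\n
        tcp-check expect string role:master
        # inter 3s fall 3: the old master is marked down roughly 9s
        # after failover, matching the detection time above
        server redis-0 redis-node-0.redis:6379 check inter 3s fall 3 rise 2
        server redis-1 redis-node-1.redis:6379 check inter 3s fall 3 rise 2
```
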
Viktor Barzin
ce79bd5c04 Add node hang instrumentation and scale down chromium services
- Add journald collection to Alloy (loki.source.journal) for kernel OOM,
  panic, hung task, and soft lockup detection — ships system logs off-node
  so they survive hard resets
- Add 5 Loki alerting rules (KernelOOMKiller, KernelPanic, KernelHungTask,
  KernelSoftLockup, ContainerdDown) evaluating against node-journal logs
- Fix Loki ruler config: correct rules mount path (/var/loki/rules/fake),
  add alertmanager_url and enable_api
- Add Prometheus alerts: NodeMemoryPressureTrending (>85%), NodeExporterDown,
  NodeHighIOWait (>30%)
- Add caretta tolerations for control-plane and GPU nodes
- Scale down chromium-based services to 0 for cluster stability:
  f1-stream, flaresolverr, changedetection, resume/printer
2026-03-13 22:20:28 +00:00
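
One of the five Loki rules might look like this; the job and instance labels depend on the Alloy relabeling, so treat it as a sketch:

```yaml
groups:
  - name: node-journal
    rules:
      - alert: KernelOOMKiller
        expr: |
          sum by (instance) (
            count_over_time({job="node-journal"} |~ "Out of memory: Killed process" [10m])
          ) > 0
        labels:
          severity: critical
        annotations:
          summary: "kernel OOM killer fired on {{ $labels.instance }}"
```
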
OpenClaw
50d539908c feat(monitoring): Disable Loki centralized logging while preserving configuration
DECISION: Disable Loki; its operational overhead outweighs its benefit

EVIDENCE FROM NODE2 INCIDENT:
- Loki was the root cause of major cluster outage (PVC storage exhaustion)
- Centralized logging was unavailable when needed most (Loki was down)
- All debugging was accomplished with simpler tools (kubectl logs, events, describe)
- Prometheus metrics proved more valuable than centralized logs

OPERATIONAL OVERHEAD ELIMINATED:
- 50GB iSCSI storage freed up (expensive)
- ~3.5GB memory freed up (Loki + Alloy agents across cluster)
- 2+ CPU cores freed up for actual workloads
- Reduced complexity: fewer services to maintain and troubleshoot
- Eliminated a single point of failure that can cascade cluster-wide

CONFIGURATION PRESERVED:
- All Terraform resources commented out (not deleted)
- loki.yaml preserved with 50GB configuration
- alloy.yaml preserved with log shipping configuration
- Alert rules and Grafana datasource preserved (commented)
- Easy re-enabling: just uncomment resources and apply

ALTERNATIVE DEBUGGING APPROACH:
- kubectl logs (always works, no storage dependency)
- kubectl get events (built-in Kubernetes events)
- Prometheus metrics (more reliable for monitoring)
- Enhanced health check scripts (direct status verification)

RE-ENABLING:
To restore Loki later, uncomment all /* ... */ blocks in loki.tf
and apply via Terraform. All configuration is preserved.

[ci skip] - Infrastructure changes applied locally first due to resource cleanup
2026-03-13 08:41:23 +00:00
OpenClaw
523f0ba7eb fix(monitoring): Expand Loki PVC from 15GB to 50GB to resolve storage exhaustion
ISSUE RESOLVED:
- Root cause: Loki's 15GB iSCSI PVC was completely full
- Symptom: 'no space left on device' errors during TSDB operations
- Impact: Loki service completely down, logging unavailable
- Side effects: Contributed to node2 containerd corruption incident

SOLUTION APPLIED:
- Expanded PVC storage: 15Gi → 50Gi via direct kubectl patch
- Triggered pod restart to complete filesystem resize
- Verified successful expansion and service recovery

CURRENT STATUS:
- PVC: 50Gi capacity (iscsi-truenas storage class)
- Loki StatefulSet: 1/1 ready
- Loki pod: 2/2 containers running
- Service: successfully processing log streams
- No storage errors in recent logs

TERRAFORM ALIGNED:
- Updated loki.yaml persistence.size to match actual PVC
- Infrastructure code now reflects deployed state

[ci skip] - Emergency fix applied locally first due to service outage
2026-03-13 08:13:05 +00:00
Viktor Barzin
81bfccaefc fix OOM kills: tune MySQL memory, reduce Nextcloud workers, increase Uptime Kuma limit
MySQL (3 OOM kills):
- Cap group_replication_message_cache_size to 128MB (default 1GB caused OOM)
- Reduce innodb_log_buffer_size from 64MB to 16MB
- Lower max_connections from 151 to 80 (peak usage ~40)
- Increase memory limit from 3Gi to 4Gi for headroom

Nextcloud (30+ apache2 OOM kills per incident):
- Reduce MaxRequestWorkers from 50 to 10 to prevent a fork bomb
  when SQLite locks cause request pileup
- Lower StartServers/MinSpare/MaxSpare proportionally

Uptime Kuma (Node.js memory leak):
- Increase memory limit from 256Mi to 512Mi
- Increase CPU limit from 200m to 500m

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-12 07:26:08 +00:00
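
With the mysql-operator these server settings land in the cluster's mycnf block, roughly (secret name hypothetical; the loose_ prefix lets mysqld start even before the GR plugin is loaded):

```yaml
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql
spec:
  instances: 3
  secretName: mysql-root   # hypothetical credentials Secret
  tlsUseSelfSigned: true
  mycnf: |
    [mysqld]
    loose_group_replication_message_cache_size = 128M
    innodb_log_buffer_size = 16M
    max_connections = 80
```
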
Viktor Barzin
f2c7444159 fix nvidia quota: use custom quota (32 CPU) instead of Kyverno-generated (16 CPU)
The GPU operator needs ~19 CPU in limits across its 16 pods (NFD, device plugin,
driver, validators, exporters). The Kyverno auto-generated quota of 16 CPU
was insufficient, blocking NFD worker and GC pods from scheduling.

- Add custom-quota label to nvidia namespace to exempt from Kyverno generation
- Add explicit ResourceQuota with limits.cpu=32, limits.memory=48Gi
- Fix: nvidia namespace tier label was missing after CI re-apply, causing
  Kyverno to use fallback LimitRange instead of tier-2-gpu specific one

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-12 07:04:34 +00:00
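
The explicit quota, with the values from this commit (resource name hypothetical):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: custom-quota
  namespace: nvidia
spec:
  hard:
    limits.cpu: "32"
    limits.memory: 48Gi
```
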
Viktor Barzin
d7953322dd fix cluster health: pin actualbudget, spread MySQL, scale grampsweb, fix GPU toleration
- Pin actualbudget/actual-server from edge to 26.3.0 (all 3 instances) to
  prevent recurring migration breakage from rolling nightly builds
- Add podAntiAffinity to MySQL InnoDB Cluster to spread replicas across nodes,
  relieving memory pressure on k8s-node4
- Scale grampsweb to 0 replicas (unused, consuming 1.7Gi memory)
- Add the GPU toleration Kyverno policy to Terraform using patchesJson6902 instead
  of patchStrategicMerge, fixing the tolerations array being overwritten (which
  had left the caretta DaemonSet pod unable to schedule on k8s-master)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-11 11:43:34 +00:00
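
The MySQL spread above is the standard preferred anti-affinity on hostname; the pod label selector here is an assumption about what the operator applies:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              mysql.oracle.com/cluster: mysql   # assumed operator-applied label
```
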
Viktor Barzin
d8bcdfef2e revert MaxRequestWorkers to 50, exclude nextcloud from 5xx alert
- MaxRequestWorkers 25→50: too few workers caused ALL workers to block
  on SQLite locks, making liveness probes fail even faster (131 restarts
  vs 50 before). 50 is a compromise — enough workers for probes.
- Excluded nextcloud from HighServiceErrorRate alert (chronic SQLite issue)
- MySQL migration attempted but hit: GR error 3100 (fixed with GIPK),
  emoji in calendar/filecache (stripped), SQLite corruption (pre-existing
  from crash-looping). Migration rolled back, Nextcloud restored to SQLite.
2026-03-09 22:05:20 +00:00
Viktor Barzin
eed991a27b exclude nextcloud from HighServiceErrorRate alert
Nextcloud has chronic 5xx errors due to SQLite lock contention causing
Apache worker exhaustion. Excluding from alert until MySQL migration.
2026-03-09 20:26:30 +00:00
Viktor Barzin
ad8b90575e fix noisy JobFailed and duplicate mail server alerts
- JobFailed: only alert on jobs started within the last hour, so stale
  failed CronJob runs don't keep firing after subsequent runs succeed
- Mail server alert: renamed to MailServerDown, now targets the specific
  mailserver deployment instead of all deployments in the namespace
  (was falsely triggering on roundcubemail going down)
- Updated inhibition rule to use new MailServerDown alert name
2026-03-08 21:22:43 +00:00
Viktor Barzin
33c7976630 reduce alert noise: add cascade inhibitions, increase for durations, drop Loki alerts
- NodeDown now suppresses workload and service alerts (PodCrashLooping,
  DeploymentReplicasMismatch, StatefulSetReplicasMismatch, etc.)
- NFSServerUnresponsive suppresses pod-level alerts
- Increased for durations on transient alerts (e.g. 15m→30m for replica mismatches)
- NodeDown for: 1m→3m to avoid flapping
- Removed all 3 Loki log-based alerts (duplicated Prometheus alerts)
- Downgraded HeadscaleDown critical→warning, mail server page→warning
2026-03-08 21:13:16 +00:00
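
The cascade inhibitions are Alertmanager inhibit_rules; the NodeDown rule sketched below assumes both source and target alerts carry a node label:

```yaml
inhibit_rules:
  - source_matchers:
      - 'alertname = NodeDown'
    target_matchers:
      - 'alertname =~ "PodCrashLooping|DeploymentReplicasMismatch|StatefulSetReplicasMismatch"'
    equal:
      - node   # suppress only alerts from the node that is down
```
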
Viktor Barzin
6b3e84f465 deploy Sealed Secrets controller for encrypted secret management
Adds Sealed Secrets (Bitnami) to the platform stack so cluster users can
encrypt secrets with a public key and commit SealedSecret YAMLs to git.
The in-cluster controller decrypts them into regular K8s Secrets.

- New module: sealed-secrets (namespace + Helm chart v2.18.3, cluster tier)
- k8s-portal setup script: adds kubeseal CLI install for Linux and Mac
2026-03-08 19:49:48 +00:00
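
The user-facing workflow produces objects like this (names hypothetical; the ciphertext is a placeholder standing in for real kubeseal output):

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-app-credentials
  namespace: my-namespace
spec:
  encryptedData:
    password: AgBPx...   # placeholder; output of `kubeseal --format yaml`
  template:
    metadata:
      name: my-app-credentials
      namespace: my-namespace
```
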
Viktor Barzin
d352d6e7f8 resource quota review: fix OOM risks, close quota gaps, add HA protections
Phase 1 - OOM fixes:
- dashy: increase memory limit 512Mi→1Gi (was at 99% utilization)
- caretta DaemonSet: set explicit resources 300Mi/512Mi (was at 85-98%)
- mysql-operator: add Helm resource values 256Mi/512Mi, create namespace
  with tier label (was at 92% of LimitRange default)
- prowlarr, flaresolverr, annas-archive-stacks: add explicit resources
  (outgrowing 256Mi LimitRange defaults)
- real-estate-crawler celery: add resources 512Mi/3Gi (608Mi actual, no
  explicit resources)

Phase 2 - Close quota gaps:
- nvidia, real-estate-crawler, trading-bot: remove custom-quota=true
  labels so Kyverno generates tier-appropriate quotas
- descheduler: add tier=1-cluster label for proper classification

Phase 3 - Reduce excessive quotas:
- monitoring: limits.memory 240Gi→64Gi, limits.cpu 120→64
- woodpecker: limits.memory 128Gi→32Gi, limits.cpu 64→16
- GPU tier default: limits.memory 96Gi→32Gi, limits.cpu 48→16

Phase 4 - Kubelet protection:
- Add cpu: 200m to systemReserved and kubeReserved in kubelet template

Phase 5 - HA improvements:
- cloudflared: add topology spread (ScheduleAnyway) + PDB (maxUnavailable:1)
- grafana: add topology spread + PDB via Helm values
- crowdsec LAPI: add topology spread + PDB via Helm values
- authentik server: add topology spread via Helm values
- authentik worker: add topology spread + PDB via Helm values
2026-03-08 18:17:46 +00:00
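
The Phase 5 HA pattern for cloudflared, sketched (labels hypothetical):

```yaml
# in the Deployment pod template:
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway   # prefer spreading, never block scheduling
    labelSelector:
      matchLabels:
        app: cloudflared
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: cloudflared
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: cloudflared
```
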
Viktor Barzin
ead33b23dd enable MySQL InnoDB Cluster auto-recovery after crashes
Previously manualStartOnBoot=true and exitStateAction=ABORT_SERVER meant
any ungraceful shutdown required manual rebootClusterFromCompleteOutage().

New settings:
- group_replication_start_on_boot=ON: auto-start GR after crash
- autorejoin_tries=2016: retry rejoining for ~28 minutes
- exit_state_action=OFFLINE_MODE: stay alive on expulsion (don't abort)
- member_expel_timeout=30s: tolerate brief unresponsiveness
- unreachable_majority_timeout=60s: leave group cleanly if majority lost
2026-03-08 17:13:03 +00:00
Viktor Barzin
fffc2ed0ab fix node OOM: reduce memory overcommit ratio and add kubelet eviction thresholds
LimitRange defaults had a 4-8x limit/request ratio causing the scheduler
to overpack nodes. When pods burst, nodes OOM-thrashed and became
unresponsive (k8s-node3 and k8s-node4 both went down today).

Changes:
- Increase default memory requests across all tiers (ratio now 2x):
  - core/cluster: 64Mi → 256Mi request (512Mi limit)
  - gpu: 256Mi → 1Gi request (2Gi limit)
  - edge/aux/fallback: 64Mi → 128Mi request (256Mi limit)
- Add kubelet memory reservation and eviction thresholds:
  - systemReserved: 512Mi, kubeReserved: 512Mi
  - evictionHard: 500Mi (was 100Mi), evictionSoft: 1Gi (was unset)
  - Applied to all nodes and future node template
2026-03-08 10:33:38 +00:00
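
In KubeletConfiguration terms (the soft-eviction grace period is an assumption; the commit doesn't state one):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  memory: 512Mi
kubeReserved:
  memory: 512Mi
evictionHard:
  memory.available: "500Mi"
evictionSoft:
  memory.available: "1Gi"
evictionSoftGracePeriod:
  memory.available: "90s"   # assumed; required whenever evictionSoft is set
```
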
Viktor Barzin
9d031290cc [ci skip] fix Homepage icons for Tandoor, Listenarr, Networking Toolbox, Goldilocks
tandoor.png → tandoor-recipes.png (dashboard-icons), podcast.png →
mdi-podcast, networking.png → mdi-lan, goldilocks.png → mdi-scale-balance
2026-03-07 21:29:51 +00:00
Viktor Barzin
7af0024473 [ci skip] fix pfSense widget: wan interface is vtnet0 not vmx0 2026-03-07 20:39:56 +00:00
Viktor Barzin
3ec643a897 [ci skip] fix pfSense widget: remove wanStatus (API v2 missing gateway endpoint)
Replace wanStatus with temp field. Remove wan interface param since
the pfSense REST API v2 package doesn't expose /status/gateway.
2026-03-07 20:39:56 +00:00
Viktor Barzin
f3042f318e [ci skip] fix widget issues: ports, Immich v2 API, Nextcloud trusted domains
- qBittorrent: use service port 80 (not container port 8080)
- Immich: add version=2 for new API endpoints (/api/server/*)
- Nextcloud: use external URL (internal rejects untrusted Host header)
- HA London: remove widget (token expired, needs manual regeneration)
- Headscale: remove widget (requires nodeId param, not overview)
2026-03-07 20:39:56 +00:00
Viktor Barzin
17256c8f76 [ci skip] fix widget URLs: use correct k8s service ports
Services expose port 80 via ClusterIP but widgets were using container
target ports (5000, 3001, 4533, 3000). Calibre was using external URL
through Authentik. All now use correct internal service URLs.
2026-03-07 20:39:56 +00:00
Viktor Barzin
57eed07370 [ci skip] add widgets for qbittorrent, navidrome, nextcloud, freshrss, linkwarden, uptime-kuma
Add API credentials to SOPS and wire homepage_credentials through
stacks. Re-add Uptime Kuma widget with new "infra" status page slug.
2026-03-07 20:39:55 +00:00
Viktor Barzin
10acdcd5a2 [ci skip] add widgets for audiobookshelf, changedetection, prowlarr, headscale
Wire homepage_credentials through servarr parent stack for prowlarr.
Fix paperless-ngx widget to use internal service URL.
2026-03-07 20:39:55 +00:00
Viktor Barzin
1f1700c4ff [ci skip] fix broken Homepage widgets + add service API tokens to SOPS
- Grafana: fix service URL (grafana not monitoring-grafana)
- Uptime Kuma: remove widget (no status page configured)
- Speedtest/Frigate/Immich: use internal k8s service URLs (external
  goes through Authentik forward auth, blocking API calls)
- pfSense: clean up annotations
- SOPS: add headscale, prowlarr, changedetection, audiobookshelf tokens
2026-03-07 20:39:55 +00:00
Viktor Barzin
a9daf50142 [ci skip] add Homepage widget credentials for Authentik, Shlink, Home Assistant
Wire homepage_credentials tokens through platform stack to enable
live widgets for Authentik, Shlink (URL shortener), and Home Assistant
London. Update SOPS with new credential entries.
2026-03-07 20:39:54 +00:00