Commit graph

200 commits

Viktor Barzin
ce79bd5c04 Add node hang instrumentation and scale down chromium services
- Add journald collection to Alloy (loki.source.journal) for kernel OOM,
  panic, hung task, and soft lockup detection — ships system logs off-node
  so they survive hard resets
- Add 5 Loki alerting rules (KernelOOMKiller, KernelPanic, KernelHungTask,
  KernelSoftLockup, ContainerdDown) evaluating against node-journal logs
- Fix Loki ruler config: correct rules mount path (/var/loki/rules/fake),
  add alertmanager_url and enable_api
- Add Prometheus alerts: NodeMemoryPressureTrending (>85%), NodeExporterDown,
  NodeHighIOWait (>30%)
- Add caretta tolerations for control-plane and GPU nodes
- Scale down chromium-based services to 0 for cluster stability:
  f1-stream, flaresolverr, changedetection, resume/printer
2026-03-13 22:20:28 +00:00
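The journald-based kernel alerts above can be sketched as a Loki ruler rule. Only the alert name and the node-journal source come from the commit; the label selector, match pattern, and threshold below are assumptions:

```yaml
# Sketch of one of the five Loki alerting rules (KernelOOMKiller);
# the {job="node-journal"} selector and kernel log pattern are assumptions.
groups:
  - name: node-kernel
    rules:
      - alert: KernelOOMKiller
        expr: |
          sum by (hostname) (
            count_over_time({job="node-journal"} |~ "Out of memory: Killed process" [5m])
          ) > 0
        labels:
          severity: critical
        annotations:
          summary: "OOM killer fired on {{ $labels.hostname }}"
```

Because the rule evaluates logs already shipped off-node, it can still fire after the node itself hard-resets.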
Viktor Barzin
dfcef89c35 fix Frigate GPU stall: add inference speed check to liveness probe
The existing probe only checked nvidia-smi + API availability, which passes
even when the detector falls back to CPU. Now also checks /api/stats and
restarts the pod if inference speed exceeds 100ms (normal GPU: ~20ms,
CPU fallback: 200ms+).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-13 10:25:46 +00:00
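The extended probe described above could look roughly like this. The detector name (`tensorrt`), port, and jq path are assumptions; the 100ms threshold and nvidia-smi check come from the commit:

```yaml
# Sketch of the liveness probe: fail (and restart) when inference
# falls back to CPU. Detector name and stats path are assumptions.
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - |
        nvidia-smi > /dev/null 2>&1 || exit 1
        speed=$(curl -sf http://localhost:5000/api/stats \
          | jq '.detectors.tensorrt.inference_speed')
        # Normal GPU: ~20ms; CPU fallback: 200ms+. Exit non-zero above 100ms.
        awk -v s="$speed" 'BEGIN { exit (s > 100) }'
  periodSeconds: 60
  failureThreshold: 3
```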
OpenClaw
50d539908c feat(monitoring): Disable Loki centralized logging while preserving configuration
DECISION: Disable Loki due to operational overhead vs benefit analysis

EVIDENCE FROM NODE2 INCIDENT:
- Loki was the root cause of major cluster outage (PVC storage exhaustion)
- Centralized logging was unavailable when needed most (Loki was down)
- All debugging was accomplished with simpler tools (kubectl logs, events, describe)
- Prometheus metrics proved more valuable than centralized logs

OPERATIONAL OVERHEAD ELIMINATED:
- 50GB iSCSI storage freed up (expensive)
- ~3.5GB memory freed up (Loki + Alloy agents across cluster)
- ~2+ CPU cores freed up for actual workloads
- Reduced complexity: fewer services to maintain and troubleshoot
- Eliminated a single point of failure that could cascade cluster-wide

CONFIGURATION PRESERVED:
- All Terraform resources commented out (not deleted)
- loki.yaml preserved with 50GB configuration
- alloy.yaml preserved with log shipping configuration
- Alert rules and Grafana datasource preserved (commented)
- Easy re-enabling: just uncomment resources and apply

ALTERNATIVE DEBUGGING APPROACH:
- kubectl logs (always works, no storage dependency)
- kubectl get events (built-in Kubernetes events)
- Prometheus metrics (more reliable for monitoring)
- Enhanced health check scripts (direct status verification)

RE-ENABLING:
To restore Loki later, uncomment all /* ... */ blocks in loki.tf
and apply via Terraform. All configuration is preserved.

[ci skip] - Infrastructure changes applied locally first due to resource cleanup
2026-03-13 08:41:23 +00:00
OpenClaw
523f0ba7eb fix(monitoring): Expand Loki PVC from 15GB to 50GB to resolve storage exhaustion
ISSUE RESOLVED:
- Root cause: Loki's 15GB iSCSI PVC was completely full
- Symptom: 'no space left on device' errors during TSDB operations
- Impact: Loki service completely down, logging unavailable
- Side effects: Contributed to node2 containerd corruption incident

SOLUTION APPLIED:
- Expanded PVC storage: 15Gi → 50Gi via direct kubectl patch
- Triggered pod restart to complete filesystem resize
- Verified successful expansion and service recovery

CURRENT STATUS:
- PVC: 50Gi capacity (iscsi-truenas storage class)
- Loki StatefulSet: 1/1 ready
- Loki pod: 2/2 containers running
- Service: successfully processing log streams
- No storage errors in recent logs

TERRAFORM ALIGNED:
- Updated loki.yaml persistence.size to match actual PVC
- Infrastructure code now reflects deployed state

[ci skip] - Emergency fix applied locally first due to service outage
2026-03-13 08:13:05 +00:00
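The kubectl patch above amounts to bumping the PVC's storage request; a sketch of the resulting spec (the PVC name is an assumption, and in-place expansion requires `allowVolumeExpansion: true` on the StorageClass):

```yaml
# Resulting PVC after the 15Gi -> 50Gi expansion; name is an assumption.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-loki-0
spec:
  storageClassName: iscsi-truenas
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```

The pod restart mentioned in the commit is what lets the node-side filesystem resize complete.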
OpenClaw
a09967e098 feat(monitoring): Enhance disk monitoring and containerd GC after node2 incident
IMMEDIATE CHANGES (Active Now):
- Lower disk warning threshold: 70% → WARN, 85% → FAIL (was 80%/90%)
- More aggressive alerting to prevent containerd corruption
- Enhanced cluster health check disk monitoring

INFRASTRUCTURE CHANGES (Requires Terraform Apply):
- Add containerd garbage collection configuration (30min intervals)
- More aggressive kubelet eviction policies (15%/20% vs 10%/15%)
- Enhanced disk space protection to prevent node2-type failures

Root Cause: node2 disk exhaustion corrupted containerd image store
Prevention: Proactive monitoring + aggressive cleanup policies

[ci skip] - Infrastructure changes require SOPS access for apply
2026-03-13 07:16:56 +00:00
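The eviction-policy change above could be sketched as a KubeletConfiguration fragment. The commit only gives "15%/20%"; which signal gets which threshold, and the grace period, are assumptions:

```yaml
# Sketch of the more aggressive kubelet eviction policy; the mapping
# of 15%/20% to hard/soft thresholds is an assumption.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  nodefs.available: "15%"
  imagefs.available: "15%"
evictionSoft:
  nodefs.available: "20%"
evictionSoftGracePeriod:
  nodefs.available: "2m"
```

Evicting pods earlier keeps containerd from filling the disk to the point where its image store corrupts, as happened on node2.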
OpenClaw
fd6c1cca93 fix(nextcloud): Database corruption recovery and conservative Apache tuning
- Restored clean SQLite database from pre-migration backup
- Fixed severe database corruption (rowid ordering, page corruption, index issues)
- Applied conservative MaxRequestWorkers=15 for SQLite stability
- Database integrity now perfect, all health checks passing
- Ready for future MySQL migration with clean data

[ci skip]
2026-03-12 13:38:37 +00:00
OpenClaw
db1e301eea fix(nextcloud): Increase Apache MaxRequestWorkers to resolve health check timeouts
- Increase MaxRequestWorkers from 10 to 25 for 4 CPU + 3Gi memory container
- Update Apache tuning for Redis + SQLite backend (not pure SQLite)
- Resolves CrashLoopBackOff caused by health probe timeouts
- Allows handling concurrent users without MaxRequestWorkers limit errors

[ci skip]
2026-03-12 13:14:20 +00:00
Viktor Barzin
3f0cf4ff4d stabilize Nextcloud: relax probes, reduce resources for 2-client SQLite workload
SQLite locks cause slow responses under concurrent access, triggering
liveness probe failures and restarts. With only 2 sync clients:

- Liveness: period 30→60s, timeout 10→30s, failures 6→10 (tolerates 10min)
- Readiness: period 30→60s, timeout 10→30s, failures 3→5
- Startup: timeout 10→30s
- Resources: CPU 16→4, memory 6Gi→3Gi (10 workers × 200MB = 2GB max)

[ci skip]

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-12 10:01:20 +00:00
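The probe and resource values above, expressed as a Helm-style fragment (values come from the commit; field layout is the standard probe schema):

```yaml
# Relaxed probes for the 2-client SQLite workload.
livenessProbe:
  periodSeconds: 60      # was 30
  timeoutSeconds: 30     # was 10
  failureThreshold: 10   # tolerates ~10 minutes of slow responses
readinessProbe:
  periodSeconds: 60
  timeoutSeconds: 30
  failureThreshold: 5
startupProbe:
  timeoutSeconds: 30
resources:
  limits:
    cpu: "4"             # was 16
    memory: 3Gi          # was 6Gi; 10 workers x 200MB = 2GB max
```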
Viktor Barzin
81bfccaefc fix OOM kills: tune MySQL memory, reduce Nextcloud workers, increase Uptime Kuma limit
MySQL (3 OOM kills):
- Cap group_replication_message_cache_size to 128MB (default 1GB caused OOM)
- Reduce innodb_log_buffer_size from 64MB to 16MB
- Lower max_connections from 151 to 80 (peak usage ~40)
- Increase memory limit from 3Gi to 4Gi for headroom

Nextcloud (30+ apache2 OOM kills per incident):
- Reduce MaxRequestWorkers from 50 to 10 to prevent fork bomb
  when SQLite locks cause request pileup
- Lower StartServers/MinSpare/MaxSpare proportionally

Uptime Kuma (Node.js memory leak):
- Increase memory limit from 256Mi to 512Mi
- Increase CPU limit from 200m to 500m

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-12 07:26:08 +00:00
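The MySQL settings above might be injected via the operator's `mycnf` field; this is a sketch, and the resource name and overlay path are assumptions:

```yaml
# Sketch of the MySQL tuning as an InnoDBCluster fragment; the memory
# limit (3Gi -> 4Gi) would be applied via the pod resource overlay.
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mysql
spec:
  mycnf: |
    [mysqld]
    group_replication_message_cache_size = 128M
    innodb_log_buffer_size = 16M
    max_connections = 80
```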
Viktor Barzin
f2c7444159 fix nvidia quota: use custom quota (32 CPU) instead of Kyverno-generated (16 CPU)
The GPU operator needs ~19 CPU limits across 16 pods (NFD, device plugin,
driver, validators, exporters). The Kyverno auto-generated quota of 16 CPU
was insufficient, blocking NFD worker and GC pods from scheduling.

- Add custom-quota label to nvidia namespace to exempt from Kyverno generation
- Add explicit ResourceQuota with limits.cpu=32, limits.memory=48Gi
- Fix: nvidia namespace tier label was missing after CI re-apply, causing
  Kyverno to use fallback LimitRange instead of tier-2-gpu specific one

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-12 07:04:34 +00:00
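The explicit quota from this commit as a manifest sketch (the quota values come from the commit; the object name and the custom-quota label key are assumptions):

```yaml
# Explicit quota replacing the Kyverno-generated 16-CPU one; the
# namespace also carries a custom-quota label to exempt it.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: nvidia-quota
  namespace: nvidia
spec:
  hard:
    limits.cpu: "32"
    limits.memory: 48Gi
```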
Viktor Barzin
f07f05f9bb migrate Nextcloud data volume from NFS to iSCSI for fsync support
SQLite on NFS caused persistent 500 errors on WebDAV PROPFIND due to
missing fsync guarantees and database locking under concurrent access.
iSCSI (ext4) provides proper fsync and block-level I/O.

- Replace nfs_volume module with iscsi-truenas PVC (20Gi)
- Update Helm chart to use nextcloud-data-iscsi claim
- Excluded 12.5GB nextcloud.log and corrupted DB from migration

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-11 23:24:03 +00:00
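A sketch of the replacement claim (the claim name and size come from the commit; the rest is the standard PVC schema):

```yaml
# iSCSI-backed PVC replacing the nfs_volume module; ext4 on block
# storage gives SQLite the fsync guarantees NFS lacked.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data-iscsi
spec:
  storageClassName: iscsi-truenas
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```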
Viktor Barzin
d7953322dd fix cluster health: pin actualbudget, spread MySQL, scale grampsweb, fix GPU toleration
- Pin actualbudget/actual-server from edge to 26.3.0 (all 3 instances) to
  prevent recurring migration breakage from rolling nightly builds
- Add podAntiAffinity to MySQL InnoDB Cluster to spread replicas across nodes,
  relieving memory pressure on k8s-node4
- Scale grampsweb to 0 replicas (unused, consuming 1.7Gi memory)
- Add GPU toleration Kyverno policy to Terraform using patchesJson6902 instead
  of patchStrategicMerge to fix toleration array being overwritten (caused
  caretta DaemonSet pod to be unable to schedule on k8s-master)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-11 11:43:34 +00:00
Viktor Barzin
6bdcd88d25 set Recreate strategy for plotting-book deployment
iSCSI volumes are ReadWriteOnce and cannot multi-attach, so the old
pod must terminate before the new one starts.
2026-03-10 23:47:30 +00:00
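The fix above is a one-field Deployment change; a fragment sketch (deployment name from the commit, selector/template omitted):

```yaml
# Recreate terminates the old pod before starting the new one, so the
# ReadWriteOnce iSCSI volume never needs to multi-attach.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plotting-book
spec:
  strategy:
    type: Recreate
```

The default RollingUpdate strategy would leave the new pod stuck in ContainerCreating with a Multi-Attach error until the old one is deleted manually.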
Viktor Barzin
5a9881337d Add terminal stack - reverse proxy to ttyd behind authentik
Exposes ttyd at 10.0.10.10:7681 via terminal.viktorbarzin.me with
Cloudflare DNS and Authentik forward-auth protection.
2026-03-10 23:46:01 +00:00
Viktor Barzin
d8bcdfef2e revert MaxRequestWorkers to 50, exclude nextcloud from 5xx alert
- MaxRequestWorkers 25→50: too few workers caused ALL workers to block
  on SQLite locks, making liveness probes fail even faster (131 restarts
  vs 50 before). 50 is a compromise — enough workers for probes.
- Excluded nextcloud from HighServiceErrorRate alert (chronic SQLite issue)
- MySQL migration attempted but hit: GR error 3100 (fixed with GIPK),
  emoji in calendar/filecache (stripped), SQLite corruption (pre-existing
  from crash-looping). Migration rolled back, Nextcloud restored to SQLite.
2026-03-09 22:05:20 +00:00
Viktor Barzin
eed991a27b exclude nextcloud from HighServiceErrorRate alert
Nextcloud has chronic 5xx errors due to SQLite lock contention causing
Apache worker exhaustion. Excluding from alert until MySQL migration.
2026-03-09 20:26:30 +00:00
Viktor Barzin
0ca81a6112 fix: mount Apache MPM config under nextcloud.extraVolumes (not top-level)
The Nextcloud Helm chart expects extraVolumes/extraVolumeMounts nested
under the nextcloud: key. Also mount to mods-available/ (the actual file)
not mods-enabled/ (which is a symlink).

Verified: MaxRequestWorkers 150→25, workers dropped from 49 to 6.
2026-03-08 21:37:39 +00:00
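The mounted MPM config could look like this (MaxRequestWorkers, MaxConnectionsPerChild, and StartServers come from the commits; the ConfigMap name and the MinSpare/MaxSpare values are assumptions):

```yaml
# ConfigMap mounted over mods-available/mpm_prefork.conf (the real
# file, not the mods-enabled/ symlink).
apiVersion: v1
kind: ConfigMap
metadata:
  name: nextcloud-apache-mpm
data:
  mpm_prefork.conf: |
    <IfModule mpm_prefork_module>
        StartServers             3
        MinSpareServers          2
        MaxSpareServers          5
        MaxRequestWorkers        25
        MaxConnectionsPerChild   200
    </IfModule>
```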
Viktor Barzin
ff03f2b99f tune Nextcloud Apache/PHP to fix constant crash-looping (50 restarts/6d)
Root cause: Apache prefork with 150 MaxRequestWorkers (each ~220MB RSS)
on SQLite DB causes worker exhaustion + lock contention → Apache hangs →
aggressive liveness probe (3 failures × 10s) kills container.

Fixes:
- Apache: MaxRequestWorkers 150→25, MaxConnectionsPerChild 0→200,
  StartServers 5→3 (via ConfigMap mount over mpm_prefork.conf)
- PHP: max_execution_time 0→300s, max_input_time 300s (prevent zombie workers)
- Liveness probe: period 10s→30s, failureThreshold 3→6, timeout 5s→10s
  (180s tolerance vs 30s before)
- Readiness probe: period 10s→30s, timeout 5s→10s
2026-03-08 21:33:27 +00:00
Viktor Barzin
ad8b90575e fix noisy JobFailed and duplicate mail server alerts
- JobFailed: only alert on jobs started within the last hour, so stale
  failed CronJob runs don't keep firing after subsequent runs succeed
- Mail server alert: renamed to MailServerDown, now targets the specific
  mailserver deployment instead of all deployments in the namespace
  (was falsely triggering on roundcubemail going down)
- Updated inhibition rule to use new MailServerDown alert name
2026-03-08 21:22:43 +00:00
Viktor Barzin
33c7976630 reduce alert noise: add cascade inhibitions, increase for durations, drop Loki alerts
- NodeDown now suppresses workload and service alerts (PodCrashLooping,
  DeploymentReplicasMismatch, StatefulSetReplicasMismatch, etc.)
- NFSServerUnresponsive suppresses pod-level alerts
- Increased for durations on transient alerts (e.g. 15m→30m for replica mismatches)
- NodeDown for: 1m→3m to avoid flapping
- Removed all 3 Loki log-based alerts (duplicated Prometheus alerts)
- Downgraded HeadscaleDown critical→warning, mail server page→warning
2026-03-08 21:13:16 +00:00
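One of the cascade inhibitions above, sketched as Alertmanager config (alert names come from the commit; the `equal` label is an assumption):

```yaml
# When NodeDown fires, suppress the workload alerts it would cascade into.
inhibit_rules:
  - source_matchers:
      - alertname = NodeDown
    target_matchers:
      - alertname =~ "PodCrashLooping|DeploymentReplicasMismatch|StatefulSetReplicasMismatch"
    equal:
      - node
```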
Viktor Barzin
2fa8ba2038 [ci skip] add sealed secrets convention: fileset + kubernetes_manifest pattern
- Document sealed secrets workflow in AGENTS.md and CLAUDE.md
- Add kubernetes_manifest + fileset(sealed-*.yaml) block to plotting-book as reference
- Users: kubeseal encrypt → commit sealed-*.yaml → CI applies via Terraform
- E2E tested: seal/commit/plan/apply/decrypt cycle verified
2026-03-08 20:03:50 +00:00
Viktor Barzin
6b3e84f465 deploy Sealed Secrets controller for encrypted secret management
Adds Sealed Secrets (Bitnami) to the platform stack so cluster users can
encrypt secrets with a public key and commit SealedSecret YAMLs to git.
The in-cluster controller decrypts them into regular K8s Secrets.

- New module: sealed-secrets (namespace + Helm chart v2.18.3, cluster tier)
- k8s-portal setup script: adds kubeseal CLI install for Linux and Mac
2026-03-08 19:49:48 +00:00
Viktor Barzin
d352d6e7f8 resource quota review: fix OOM risks, close quota gaps, add HA protections
Phase 1 - OOM fixes:
- dashy: increase memory limit 512Mi→1Gi (was at 99% utilization)
- caretta DaemonSet: set explicit resources 300Mi/512Mi (was at 85-98%)
- mysql-operator: add Helm resource values 256Mi/512Mi, create namespace
  with tier label (was at 92% of LimitRange default)
- prowlarr, flaresolverr, annas-archive-stacks: add explicit resources
  (outgrowing 256Mi LimitRange defaults)
- real-estate-crawler celery: add resources 512Mi/3Gi (608Mi actual, no
  explicit resources)

Phase 2 - Close quota gaps:
- nvidia, real-estate-crawler, trading-bot: remove custom-quota=true
  labels so Kyverno generates tier-appropriate quotas
- descheduler: add tier=1-cluster label for proper classification

Phase 3 - Reduce excessive quotas:
- monitoring: limits.memory 240Gi→64Gi, limits.cpu 120→64
- woodpecker: limits.memory 128Gi→32Gi, limits.cpu 64→16
- GPU tier default: limits.memory 96Gi→32Gi, limits.cpu 48→16

Phase 4 - Kubelet protection:
- Add cpu: 200m to systemReserved and kubeReserved in kubelet template

Phase 5 - HA improvements:
- cloudflared: add topology spread (ScheduleAnyway) + PDB (maxUnavailable:1)
- grafana: add topology spread + PDB via Helm values
- crowdsec LAPI: add topology spread + PDB via Helm values
- authentik server: add topology spread via Helm values
- authentik worker: add topology spread + PDB via Helm values
2026-03-08 18:17:46 +00:00
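The Phase 5 pattern for cloudflared, sketched (ScheduleAnyway and maxUnavailable: 1 come from the commit; labels and object names are assumptions):

```yaml
# Spread replicas across nodes without blocking scheduling, and cap
# voluntary disruptions to one pod at a time.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: cloudflared
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: cloudflared
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: cloudflared
```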
Viktor Barzin
ead33b23dd enable MySQL InnoDB Cluster auto-recovery after crashes
Previously manualStartOnBoot=true and exitStateAction=ABORT_SERVER meant
any ungraceful shutdown required manual rebootClusterFromCompleteOutage().

New settings:
- group_replication_start_on_boot=ON: auto-start GR after crash
- autorejoin_tries=2016: retry rejoining for ~28 minutes
- exit_state_action=OFFLINE_MODE: stay alive on expulsion (don't abort)
- member_expel_timeout=30s: tolerate brief unresponsiveness
- unreachable_majority_timeout=60s: leave group cleanly if majority lost
2026-03-08 17:13:03 +00:00
Viktor Barzin
fffc2ed0ab fix node OOM: reduce memory overcommit ratio and add kubelet eviction thresholds
LimitRange defaults had a 4-8x limit/request ratio causing the scheduler
to overpack nodes. When pods burst, nodes OOM-thrashed and became
unresponsive (k8s-node3 and k8s-node4 both went down today).

Changes:
- Increase default memory requests across all tiers (ratio now 2x):
  - core/cluster: 64Mi → 256Mi request (512Mi limit)
  - gpu: 256Mi → 1Gi request (2Gi limit)
  - edge/aux/fallback: 64Mi → 128Mi request (256Mi limit)
- Add kubelet memory reservation and eviction thresholds:
  - systemReserved: 512Mi, kubeReserved: 512Mi
  - evictionHard: 500Mi (was 100Mi), evictionSoft: 1Gi (was unset)
  - Applied to all nodes and future node template
2026-03-08 10:33:38 +00:00
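A core/cluster-tier LimitRange after this change, sketched (the 256Mi/512Mi values come from the commit; the object name is an assumption):

```yaml
# 2x limit/request ratio instead of the old 4-8x, so the scheduler's
# request-based packing tracks real burst usage.
apiVersion: v1
kind: LimitRange
metadata:
  name: tier-defaults
spec:
  limits:
    - type: Container
      defaultRequest:
        memory: 256Mi   # was 64Mi
      default:
        memory: 512Mi
```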
Viktor Barzin
193f2e2dc5 add iSCSI persistent volume for plotting-book SQLite database
Create a 1Gi PVC using iscsi-truenas StorageClass, mount at /data,
and set DB_PATH=/data/database.sqlite for persistent storage.
2026-03-07 21:57:22 +00:00
Viktor Barzin
4374e78869 [ci skip] fix Wealthfolio Homepage icon: wealthfolio.png → mdi-finance 2026-03-07 21:32:58 +00:00
Viktor Barzin
9d031290cc [ci skip] fix Homepage icons for Tandoor, Listenarr, Networking Toolbox, Goldilocks
tandoor.png → tandoor-recipes.png (dashboard-icons), podcast.png →
mdi-podcast, networking.png → mdi-lan, goldilocks.png → mdi-scale-balance
2026-03-07 21:29:51 +00:00
Viktor Barzin
32bd30f56e [ci skip] fix invalid Homepage dashboard icons for 9 services
Use correct dashboard-icons names where available (changedetection,
gramps-web), Material Design Icons for custom apps (city-guesser,
plotting-book, resume, tuya-bridge, trading-bot, poison-fountain),
and Simple Icons for F1 Stream.
2026-03-07 21:14:17 +00:00
Viktor Barzin
76a4987eef [ci skip] add Forgejo task pipeline for OpenClaw AI agent
Forgejo issues as a task queue for OpenClaw:
- Forgejo OAuth2 with Authentik SSO, self-registration disabled
- Webhook-triggered task processing (instant) + CronJob backup (5min poll)
- Tasks processed via Mistral Large 3 (NVIDIA NIM API)
- Results posted as issue comments, auto-labeled and closed
- Comment follow-ups and reopened issues supported
- n8n RBAC for OpenClaw pod exec (future workflow integration)
2026-03-07 21:11:07 +00:00
Viktor Barzin
c2765e890b add nginx caching proxy for Homepage widget API requests
Stale-while-revalidate cache in front of Homepage reduces first-paint
latency by serving cached /api/ responses instantly while refreshing
upstream in background. Non-API paths pass through uncached.
2026-03-07 21:11:07 +00:00
root
b4f9777ecd Woodpecker CI deploy commit [CI SKIP] 2026-03-07 20:47:22 +00:00
Viktor Barzin
33b20ce111 add Google OAuth env vars to plotting-book deployment
Deploy GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, and GOOGLE_CALLBACK_URL
to the plotting-book container. Update CSP to allow accounts.google.com
for connect-src and form-action directives.
2026-03-07 20:41:08 +00:00
Viktor Barzin
650c2a6ed4 [ci skip] add liveness probe to Send deployment
Prevents stale Redis connections from silently breaking file uploads.
The old Node.js Redis client doesn't auto-reconnect after Redis restarts,
causing all files to appear expired.
2026-03-07 20:39:57 +00:00
Viktor Barzin
7af0024473 [ci skip] fix pfSense widget: wan interface is vtnet0 not vmx0 2026-03-07 20:39:56 +00:00
Viktor Barzin
3ec643a897 [ci skip] fix pfSense widget: remove wanStatus (API v2 missing gateway endpoint)
Replace wanStatus with temp field. Remove wan interface param since
the pfSense REST API v2 package doesn't expose /status/gateway.
2026-03-07 20:39:56 +00:00
Viktor Barzin
f3042f318e [ci skip] fix widget issues: ports, Immich v2 API, Nextcloud trusted domains
- qBittorrent: use service port 80 (not container port 8080)
- Immich: add version=2 for new API endpoints (/api/server/*)
- Nextcloud: use external URL (internal rejects untrusted Host header)
- HA London: remove widget (token expired, needs manual regeneration)
- Headscale: remove widget (requires nodeId param, not overview)
2026-03-07 20:39:56 +00:00
Viktor Barzin
17256c8f76 [ci skip] fix widget URLs: use correct k8s service ports
Services expose port 80 via ClusterIP but widgets were using container
target ports (5000, 3001, 4533, 3000). Calibre was using external URL
through Authentik. All now use correct internal service URLs.
2026-03-07 20:39:56 +00:00
Viktor Barzin
c9bb470259 [ci skip] upgrade Homepage from v1.8.0 to v1.10.1 2026-03-07 20:39:56 +00:00
Viktor Barzin
57eed07370 [ci skip] add widgets for qbittorrent, navidrome, nextcloud, freshrss, linkwarden, uptime-kuma
Add API credentials to SOPS and wire homepage_credentials through
stacks. Re-add Uptime Kuma widget with new "infra" status page slug.
2026-03-07 20:39:55 +00:00
Viktor Barzin
10acdcd5a2 [ci skip] add widgets for audiobookshelf, changedetection, prowlarr, headscale
Wire homepage_credentials through servarr parent stack for prowlarr.
Fix paperless-ngx widget to use internal service URL.
2026-03-07 20:39:55 +00:00
Viktor Barzin
1f1700c4ff [ci skip] fix broken Homepage widgets + add service API tokens to SOPS
- Grafana: fix service URL (grafana not monitoring-grafana)
- Uptime Kuma: remove widget (no status page configured)
- Speedtest/Frigate/Immich: use internal k8s service URLs (external
  goes through Authentik forward auth, blocking API calls)
- pfSense: clean up annotations
- SOPS: add headscale, prowlarr, changedetection, audiobookshelf tokens
2026-03-07 20:39:55 +00:00
Viktor Barzin
a9daf50142 [ci skip] add Homepage widget credentials for Authentik, Shlink, Home Assistant
Wire homepage_credentials tokens through platform stack to enable
live widgets for Authentik, Shlink (URL shortener), and Home Assistant
London. Update SOPS with new credential entries.
2026-03-07 20:39:54 +00:00
Viktor Barzin
6bd3970579 [ci skip] add Homepage gethomepage.dev annotations to all services
Add Kubernetes ingress annotations for Homepage auto-discovery across
~88 services organized into 11 groups. Enable serviceAccount for RBAC,
configure group layouts, and add Grafana/Frigate/Speedtest widgets.
2026-03-07 20:39:54 +00:00
Viktor Barzin
2dc5ab8995 [ci skip] fix false-positive sensitive=true on kube_config_path 2026-03-07 15:48:19 +00:00
Viktor Barzin
b6aacf7b02 [ci skip] fix Svelte 5 table structure (thead/tbody required) + use versioned image tag
Fixed architecture and services pages to wrap table rows in <thead>/<tbody>
as required by Svelte 5's strict HTML validation.

E2E test passed: clean Alpine container → setup script → kubectl installed →
CA cert verified against API server → TLS SUCCESS
2026-03-07 15:34:32 +00:00
Viktor Barzin
6f8b48a73c [ci skip] k8s portal: fix setup script + add onboarding hub (5 new pages)
Bug fixes:
- CA cert now populated in ConfigMap (was empty → TLS failures)
- Remove useless heredoc quote escaping in setup script
- Fix homepage: VPN callout, correct verification command (get namespaces)
- Fix false-positive sensitive=true on ingress_path, tls_secret_name,
  truenas_host, ollama_host, client_certificate_secret_name

New pages (direct Svelte, no mdsvex dependency):
- /onboarding: step-by-step guide (VPN, kubectl, git, first PR)
- /architecture: cluster topology, storage, networking, tiers
- /services: catalog of 70+ services with URLs
- /contributing: PR workflow, what you can/can't change, NEVER list
- /troubleshooting: common issues and fixes

Navigation bar added to layout. All pages use consistent docs styling.

Requires Docker image rebuild: cd stacks/platform/modules/k8s-portal/files
&& docker build -t viktorbarzin/k8s-portal:latest . && docker push
2026-03-07 15:06:26 +00:00
Viktor Barzin
1f2c1ca361 [ci skip] phase 5+6: update CI pipelines for SOPS, add sensitive=true to secret vars
Phase 5 — CI pipelines:
- default.yml: add SOPS decrypt in prepare step, change git add . to
  specific paths (stacks/ state/ .woodpecker/), cleanup on success+failure
- renew-tls.yml: change git add . to git add secrets/ state/

Phase 6 — sensitive=true:
- Add sensitive = true to 256 variable declarations across 149 stack files
- Prevents secret values from appearing in terraform plan output
- Does NOT modify shared modules (ingress_factory, nfs_volume) to avoid
  breaking module interface contracts

Note: CI pipeline SOPS decryption requires sops_age_key Woodpecker secret
to be created before the pipeline will work with SOPS. Until then, the old
terraform.tfvars path continues to function.
2026-03-07 14:30:36 +00:00
Viktor Barzin
5b28319bc3 fix(actualbudget): raise http-api resources to prevent OOM [ci skip] 2026-03-07 00:28:02 +00:00
Viktor Barzin
197cef7f3f [ci skip] add auto-generated tiers.tf, planning docs, and helm chart cache
- tiers.tf: Terragrunt-generated tier locals for all standalone stacks
- .planning/: resource audit research and plans
- docs/plans/: cluster hardening design doc
- redis-25.3.2.tgz: Bitnami Redis Helm chart cache
2026-03-06 23:55:57 +00:00