MySQL operator ignores podSpec.containers sidecar resource overrides,
always injecting 6Gi limit defaults. Added the sidecar override to the CR
spec for documentation purposes, but raised the quota from 32Gi to 40Gi as
the practical fix.
Quota usage drops from 99% to 79%.
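For reference, the override the operator ignores looks roughly like this in the InnoDBCluster CR (field names follow the MySQL Operator's podSpec passthrough; the 350Mi value and container name are illustrative):

```yaml
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
spec:
  podSpec:
    containers:
      - name: sidecar
        resources:
          limits:
            memory: 350Mi   # ignored by the operator, which injects its 6Gi default
```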
Changed the alert from simple time-based (24h on inverter) to
condition-based: it fires only when on inverter AND battery charge has been
<80% for 1h. Normal daytime inverter usage therefore won't trigger it; the
alert fires only when the grid is unavailable and the battery is draining.
- HighPowerUsage: raise from 200W to 300W (R730 idles at ~230W)
- HighServiceLatency: exclude headscale (WebSocket) and authentik (SSO)
from latency checks — both have inherently high avg response times
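As a sketch, the condition-based UPS rule and the latency exclusion might look like this in Prometheus rule syntax (metric names are assumptions based on the APC PowerNet MIB as exposed by snmp_exporter, where upsBasicOutputStatus == 3 means "on battery"; the latency metric and threshold are illustrative):

```yaml
- alert: UPSOnBatteryDraining
  expr: upsBasicOutputStatus == 3 and upsAdvBatteryCapacity < 80
  for: 1h
- alert: HighServiceLatency
  # job label matcher excludes the two inherently slow services
  expr: service:request_duration_seconds:avg5m{job!~"headscale|authentik"} > 1
  for: 10m
```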
snmp-exporter-external.viktorbarzin.me exposed UPS metrics to the
public internet with no authentication. Removed the external ingress
and Cloudflare DNS record. ha-sofia now accesses the SNMP exporter
via the existing .lan ingress (allow_local_access_only=true) using
the direct IP 10.0.20.200 with a Host header.
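The ha-sofia side might be configured along these lines (Home Assistant REST integration; the .lan hostname, target/module query parameters, and interval are assumptions, not copied from the repo):

```yaml
rest:
  - resource: "http://10.0.20.200/snmp?target=ups.lan&module=apcups"
    headers:
      Host: snmp-exporter.viktorbarzin.lan   # routes via the .lan ingress rule
    scan_interval: 60
```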
- Add snmp-exporter-ingress-external module for external HTTPS access to snmp-exporter
- Register snmp-exporter-external.viktorbarzin.me in Cloudflare DNS (proxied via tunnel)
- Update ha-sofia REST integration to use external HTTPS endpoint
- Fix ingress backend service routing to use existing snmp-exporter service
- All UPS sensors on ha-sofia now report values (voltage, battery %, load, etc.)
- changedetection: increase memory from 64Mi to 256Mi/512Mi (was being
  OOMKilled), set replicas back to 1
- flaresolverr: re-enable with replicas=1, increase memory limit to 1Gi
(needed by book-search for Cloudflare bypass)
Previously only searched for the current run's specific marker subject.
If IMAP deletion failed, old emails accumulated. Now searches for all
emails with "e2e-probe" in subject and deletes them, cleaning up any
leftovers from prior failed runs.
Root cause: Traefik v3 auto-detects HTTPS for backend port 443,
ignoring the port name "http" and serversscheme annotations.
MeshCentral serves HTTP on 443 (TLSOffload mode), but Traefik
connected via HTTPS, causing a TLS handshake failure → 500.
Fix: Change K8s service port from 443 to 80 with target_port 443.
Traefik sees port 80 → uses HTTP → reaches MeshCentral correctly.
Also disables anti-AI scraping (internal tool behind Authentik).
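The port change can be sketched as a Service fragment (metadata omitted):

```yaml
apiVersion: v1
kind: Service
spec:
  ports:
    - name: http      # the name alone is not enough for Traefik v3
      port: 80        # Traefik sees 80 and speaks plain HTTP
      targetPort: 443 # MeshCentral still listens on 443 in TLSOffload mode
```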
The rewrite-body plugin (anti-AI trap links) was crashing when
processing MeshCentral's HTML responses, returning 500. Disabled
anti_ai_scraping since it's a protected internal tool behind Authentik.
Re-enabled Authentik protection.
The previous init container incorrectly disabled TLSOffload, causing
MeshCentral to serve HTTPS on port 443 while Traefik connected via HTTP,
resulting in a protocol mismatch and 500 errors. The fix ensures TLSOffload
is always enabled so MeshCentral serves plain HTTP behind Traefik.
MeshCentral was failing to start with "Zipencryptionmodule failed" error
because the service tried to fetch TLS certificates from an HTTPS endpoint
during bootstrap. When using TLSOffload (reverse proxy terminating TLS),
MeshCentral should not attempt to load certificates.
Root cause: The existing config.json had "certUrl" set to HTTPS, causing
MeshCentral to try fetching the certificate during startup. Since the pod
was bootstrapping, this failed and cascaded into the Zipencryptionmodule
failure.
Fix: Add init container that runs before the main container to disable
the certUrl by prefixing it with underscore (MeshCentral's convention for
disabled settings). The sed command ensures the fix applies to both new
and existing config.json files.
This ensures MeshCentral behaves correctly with TLSOffload enabled:
- Runs in plain HTTP mode on port 443
- Traefik/Ingress handles HTTPS termination
- No certificate bootstrap failures
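The init container's certUrl fix can be sketched as follows (the path and JSON shape are illustrative; MeshCentral treats underscore-prefixed keys as disabled):

```shell
# Simulate an existing config.json with certUrl set to an HTTPS endpoint.
cat > /tmp/config.json <<'EOF'
{
  "settings": {
    "tlsOffload": true,
    "certUrl": "https://meshcentral.example.com:443"
  }
}
EOF
# Disable certUrl by prefixing it with "_". Idempotent: the pattern no
# longer matches once the key has been rewritten to "_certUrl".
sed -i 's/"certUrl"/"_certUrl"/g' /tmp/config.json
grep '_certUrl' /tmp/config.json
```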
MeshCentral was migrated from NFS to proxmox-lvm storage (Wave 2). The old NFS
modules for data and files are no longer used by the deployment, leaving behind
orphaned PVCs (meshcentral-data, meshcentral-files). The backups volume remains
on NFS per the backup strategy pattern.
Changes:
- Removed module.nfs_data and module.nfs_files from Terraform config
- Active volumes now: meshcentral-data-proxmox, meshcentral-files-proxmox (proxmox-lvm)
- Backups volume: meshcentral-backups (NFS) - unchanged
Pod status: healthy, running on proxmox-lvm volumes.
Query logs stopped syncing on 2026-03-16 due to password mismatch after
MySQL cluster rebuild and Technitium app config reset.
- Add Vault static role mysql-technitium (7-day rotation)
- Add ExternalSecret for technitium-db-creds in technitium namespace
- Add password-sync CronJob (6h) to push rotated password to Technitium API
- Update Grafana datasource to use ESO-managed password
- Remove stale technitium_db_password variable (replaced by ESO)
- Update databases.md and restore-mysql.md runbook
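The ESO wiring can be sketched as follows (names follow the bullets above; the Vault remoteRef path and store name are assumptions):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: technitium-db-creds
  namespace: technitium
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault
    kind: ClusterSecretStore
  target:
    name: technitium-db-creds
  data:
    - secretKey: password
      remoteRef:
        key: database/static-creds/mysql-technitium  # 7-day rotated static role
        property: password
```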
The http-api sidecar was connecting to the public URL
(https://budget-*.viktorbarzin.me) which goes through Traefik/Authentik.
When pods got rescheduled to different nodes, this caused ETIMEDOUT errors.
Changed to internal service URL (http://budget-*.actualbudget.svc.cluster.local)
which is fast and reliable regardless of pod placement.
- meshcentral: fix homepage annotations formatting (no functional change,
serversscheme was tested but not needed since MeshCentral serves HTTP)
- meshcentral: restored user DB from Dec 2024 backup (1428B → 45KB)
- technitium: remove unused technitium-config-proxmox PVC (WaitForFirstConsumer,
never mounted — primary uses NFS, replicas have their own proxmox PVCs)
- Add tertiary DNS deployment with zone-transfer replication for
externalTrafficPolicy=Local coverage across more nodes
- Reorder CoreDNS default forwarders: pfSense (10.0.20.1) first,
then public DNS fallbacks (8.8.8.8, 1.1.1.1)
- NFS CSI: fix liveness-probe port conflict (29652 → 29653)
- Immich ML: add gpu-workload priority class to enable preemption on node1
- dbaas: right-size MySQL memory limits (sidecar 6Gi→350Mi, main 4Gi→3Gi)
- Redis: add redis-master service via HAProxy for master-only routing,
update config.tfvars redis_host to use it
- CoreDNS: forward .viktorbarzin.lan to Technitium ClusterIP (10.96.0.53)
instead of stale LoadBalancer IP (10.0.20.200)
- Trading bot: comment out all resources (no longer needed)
- Vault: remove trading-bot PostgreSQL database role
- MySQL InnoDB: keep required anti-affinity but document why (2/3 members OK during node loss)
- Descheduler: increase frequency from hourly to every 5 min for faster rebalancing
- Prometheus: set terminationGracePeriodSeconds=60 to prevent drain timeout [ci skip]
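The CoreDNS changes above can be sketched as a Corefile fragment (IPs from the bullets; the block structure is illustrative):

```
viktorbarzin.lan:53 {
    forward . 10.96.0.53      # Technitium ClusterIP, not the stale LB IP
}
.:53 {
    forward . 10.0.20.1 8.8.8.8 1.1.1.1 {
        policy sequential     # try pfSense first, public resolvers as fallback
    }
}
```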
iSCSI CSI (democratic-csi) was replaced by proxmox-csi in April 2026.
Controller is intentionally scaled to 0. Remove the stale alert and
update CSIDriverCrashLoop to monitor proxmox-csi instead of iscsi-csi.
Kyverno's tier-1-cluster LimitRange had max=4Gi, which blocked
mysql-cluster-2 from starting after we bumped the MySQL limit to 6Gi.
Also added custom LimitRange in dbaas stack (for when Terraform
manages it directly).
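The dbaas-stack LimitRange might look like this (the 8Gi ceiling is an assumption; anything above the 6Gi MySQL limit works):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dbaas-limits
  namespace: dbaas
spec:
  limits:
    - type: Container
      max:
        memory: 8Gi   # must exceed the 6Gi MySQL container limit
```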
- Create dedicated 'matrix' PostgreSQL user (was using 'postgres' superuser)
- Add Vault DB static role pg-matrix with 24h rotation
- Add ExternalSecret matrix-db-creds syncing password from Vault
- Add inject-db-password init container that patches homeserver.yaml
with current Vault password on every pod start
- Update dependency annotation to pg-cluster-rw.dbaas
- Also updated Vault DB config to use pg-cluster-rw (was legacy postgresql.dbaas)
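A minimal sketch of the inject-db-password step (placeholder-based for illustration; the real init container may instead rewrite the existing password line in homeserver.yaml):

```shell
# Password as mounted from the matrix-db-creds secret (illustrative value).
DB_PASSWORD='s3cr3t-from-vault'
# Simulate the homeserver.yaml template the init container patches.
cat > /tmp/homeserver.yaml <<'EOF'
database:
  name: psycopg2
  args:
    user: matrix
    password: __DB_PASSWORD__
    host: pg-cluster-rw.dbaas
EOF
# Substitute the current Vault-issued password before Synapse starts.
sed -i "s/__DB_PASSWORD__/${DB_PASSWORD}/" /tmp/homeserver.yaml
grep 'password:' /tmp/homeserver.yaml
```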
- Tandoor: pin image to vabene1111/recipes:1.5.27 (pulls of the latest
  tag were failing with EOF due to pull-through cache corruption)
- Matrix: update homeserver.yaml to use pg-cluster-rw.dbaas instead
of legacy postgresql.dbaas service, update CNPG postgres password
- Navidrome: deleted corrupted SQLite DB (malformed disk image from
proxmox-lvm migration), navidrome recreates fresh DB on startup
ENABLE_RSPAMD_REDIS=0 prevents the docker-mailserver from attempting to start
an embedded Redis server. The rspamd-redis subprocess was failing repeatedly
due to a corrupted/empty RDB file after the recent NFS-to-proxmox-lvm storage
migration. Since the DKIM signing config uses use_redis=false, Redis is not
needed.
Also correct the PVC storage request to match the actual provisioned size (2Gi).
The mismatch was causing unnecessary PVC replacement during terraform apply.
The global rate limit (10 req/s, 50 burst) was too aggressive for HA
dashboards that load 30+ JS files on page load, causing 429s. VPN tunnel
blips between London K8s and Sofia caused 502s with no retry fallback.
- Add traefik-retry middleware to reverse-proxy factory (all services)
- Add skip_global_rate_limit variable to both reverse-proxy factories
- Create ha-sofia-rate-limit middleware (100 avg, 200 burst)
- Apply to ha-sofia and music-assistant (both route to Sofia)
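The dedicated middleware can be sketched as a Traefik CRD (values from the bullets above; metadata namespace omitted):

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: ha-sofia-rate-limit
spec:
  rateLimit:
    average: 100   # req/s sustained, vs the global 10
    burst: 200     # absorbs dashboard page loads of 30+ assets
```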
The Terraform Helm provider's YAML diff comparison silently ignores rules
containing {{ $labels.job }} in annotations, preventing the alerts from being
applied. Also sync the alerts to the platform stack tpl.