Secondary/tertiary DNS instances had no custom zones — only the
primary had viktorbarzin.lan and viktorbarzin.me. The old setup Job
ran once at deployment and never synced new zones.
New CronJob runs every 30 minutes:
- Gets all zones from primary
- Enables zone transfer on primary
- Creates missing zones as Secondary type on replicas
- Resyncs existing zones via AXFR
Fixes .lan resolution failures (2/3 queries returned NXDOMAIN).
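A minimal sketch of the sync loop, assuming Technitium's HTTP API (/api/zones/list, /api/zones/create, /api/zones/options/set, /api/zones/resync); the endpoint paths, parameter names, and instance addresses below are assumptions rather than the CronJob's actual script:

    import requests

    PRIMARY = "http://technitium-dns-0:5380"      # assumed pod/service addresses
    REPLICAS = ["http://technitium-dns-1:5380", "http://technitium-dns-2:5380"]
    PRIMARY_IP = "10.0.20.201"                    # address replicas pull AXFR from (assumed)
    TOKEN = "..."                                 # API token from the Kubernetes Secret

    def api(base, path, **params):
        params["token"] = TOKEN
        r = requests.get(f"{base}/api/{path}", params=params, timeout=30)
        r.raise_for_status()
        return r.json()["response"]

    # 1. Collect zone names from the primary (skip built-in zones).
    zones = [z["name"] for z in api(PRIMARY, "zones/list")["zones"]
             if not z.get("internal")]

    # 2. Make sure the primary allows zone transfers for each zone (option value assumed).
    for zone in zones:
        api(PRIMARY, "zones/options/set", zone=zone, zoneTransfer="Allow")

    # 3. On each replica: create missing zones as Secondary, resync the rest via AXFR.
    for replica in REPLICAS:
        existing = {z["name"] for z in api(replica, "zones/list")["zones"]}
        for zone in zones:
            if zone not in existing:
                api(replica, "zones/create", zone=zone, type="Secondary",
                    primaryNameServerAddresses=PRIMARY_IP)
            else:
                api(replica, "zones/resync", zone=zone)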
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Applied all 20 NFS stacks to converge PV mount_options (nfsvers=4).
State files encrypted and committed.
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Inbound:
- Direct MX to mail.viktorbarzin.me (ForwardEmail relay attempted and abandoned)
- Dedicated MetalLB IP 10.0.20.202 with externalTrafficPolicy: Local for CrowdSec real-IP detection
- Removed Cloudflare Email Routing (can't store-and-forward)
- Fixed dual SPF violation, hardened to -all
- Added MTA-STS, TLSRPT, imported Rspamd DKIM into Terraform
- Removed dead BIND zones from config.tfvars (199 lines)
Outbound:
- Migrated from Mailgun (100/day) to Brevo (300/day free)
- Added Brevo DKIM CNAMEs and verification TXT
Monitoring:
- Probe frequency: 30m → 20m, alert thresholds adjusted to 60m
- Enabled Dovecot exporter scraping (port 9166)
- Added external SMTP monitor on public IP
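The external monitor is essentially a banner/STARTTLS reachability check against the public MX; a minimal stdlib sketch of such a probe (host, port, and timeout are illustrative, not the deployed probe's config):

    import smtplib
    import ssl
    import sys

    HOST = "mail.viktorbarzin.me"   # or the raw public IP to bypass DNS entirely

    try:
        with smtplib.SMTP(HOST, 25, timeout=15) as smtp:
            smtp.ehlo()
            # Verify STARTTLS is offered and the TLS handshake completes.
            smtp.starttls(context=ssl.create_default_context())
            smtp.ehlo()
            code, _ = smtp.noop()
            sys.exit(0 if code == 250 else 1)
    except Exception as exc:
        print(f"SMTP probe failed: {exc}", file=sys.stderr)
        sys.exit(1)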
Documentation:
- New docs/architecture/mailserver.md with full architecture
- New docs/architecture/mailserver-visual.html visualization
- Updated monitoring.md, CLAUDE.md, historical plan docs
Switch from restart-count-based detection (increase(kube_pod_container_status_restarts_total[1h]) > 5)
to waiting-reason-based detection (kube_pod_container_status_waiting_reason{reason="CrashLoopBackOff"}).
The alert now auto-resolves when the pod recovers, making it clear whether the issue is still active.
Root cause: PMTU black hole on WireGuard tunnel. The tunnel runs over
the HE IPv6 6in4 tunnel (gif0 MTU 1280). With WG overhead (~80 bytes),
effective inner MTU is 1200 — but both sides were configured at 1420.
SSH kex packets >1200 bytes were silently dropped.
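The arithmetic behind the 1200-byte figure, with the per-field split being an assumption based on WireGuard's wire format (the commit only states ~80 bytes total):

    GIF0_MTU = 1280          # HE 6in4 interface MTU (outer path)
    OUTER_IPV6_HDR = 40      # outer IPv6 header carrying the WG UDP packet
    UDP_HDR = 8
    WG_DATA_HDR = 16         # message type + receiver index + nonce counter
    POLY1305_TAG = 16        # per-packet authentication tag
    OVERHEAD = OUTER_IPV6_HDR + UDP_HDR + WG_DATA_HDR + POLY1305_TAG   # = 80
    INNER_MTU = GIF0_MTU - OVERHEAD                                    # = 1200, vs the configured 1420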
Fix: Set tun_wg0 MTU to 1200 on pfSense + peer_855 MTU to 1200 on
London GL-iNet. Re-enabled London DHCP/ARP import in remote CronJob.
All 3 sites now fully automated:
- Sofia: Kea leases + ARP every 5min
- London: DHCP + ARP via pfSense→London SSH hop, hourly
- Valchedrym: DHCP + ARP via pfSense→OpenWRT SSH hop, hourly
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ProxmoxMetricsMissing alert was firing because pve_* metrics were missing
from the kubernetes-service-endpoints metric_relabel_configs whitelist. The
exporter was being scraped successfully, but its metrics were dropped before
ingestion.
- Sofia import (every 5min): Kea leases + pfSense ARP via SSH
- Remote import (hourly): Valchedrym DHCP/ARP via pfSense SSH hop
- London SSH (dropbear) hangs during kex on low-power router — disabled
for now, data imported manually. TODO: lightweight push agent
- Fixed SSH key filename (id_rsa, not id_ed25519) for RSA keys
- No more ping sweeping anywhere — all passive DHCP/ARP data
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Nextcloud persists dbpassword in config.php on its PVC and ignores the
MYSQL_PASSWORD env var after the initial install. When Vault rotates the
MySQL password, config.php goes stale, causing HTTP 500 crash loops.
Adds a before-starting hook that patches config.php with the current
MYSQL_PASSWORD on every pod start. Combined with Stakater Reloader
annotation, the full rotation chain is now automated:
Vault rotates → ESO syncs Secret → Reloader restarts pod → hook
patches config.php → Nextcloud connects with new password.
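The substitution the hook performs looks roughly like this (a sketch only: it assumes the default config path and the standard 'dbpassword' => '...' entry in config.php; the real hook is a small snippet in the chart values rather than this Python):

    import os
    import re

    CONFIG = "/var/www/html/config/config.php"   # default Nextcloud config location
    new_pw = os.environ["MYSQL_PASSWORD"]        # current value from the ESO-synced Secret

    with open(CONFIG) as f:
        cfg = f.read()

    # Overwrite the persisted dbpassword with whatever the Secret currently holds.
    # (Assumes the password contains no single quotes.)
    cfg = re.sub(r"'dbpassword'\s*=>\s*'[^']*'",
                 f"'dbpassword' => '{new_pw}'", cfg)

    with open(CONFIG, "w") as f:
        f.write(cfg)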
Also fixes stale existingClaim (nextcloud-data-iscsi → nextcloud-data-proxmox).
- CronJob now SSHs to Valchedrym OpenWRT (192.168.0.1) to pull DHCP leases + ARP table
- Parses /tmp/dhcp.leases for hostname + MAC, /proc/net/arp for additional devices
- London still uses ping sweep via pfSense WG tunnel (no SSH access to GL-iNet)
- 6 Valchedrym devices tracked: router, alarm, video, termoregulator, 2 clients
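Both source files have simple whitespace-delimited line formats (standard dnsmasq /tmp/dhcp.leases and /proc/net/arp layouts); a sketch of the parsing side, with the SSH transport and phpIPAM write omitted:

    def parse_dhcp_leases(text):
        # dnsmasq lease line: "<expiry> <mac> <ip> <hostname> <client-id>"
        hosts = {}
        for line in text.splitlines():
            parts = line.split()
            if len(parts) >= 4:
                _expiry, mac, ip, hostname = parts[:4]
                hosts[ip] = {"mac": mac,
                             "hostname": None if hostname == "*" else hostname}
        return hosts

    def parse_proc_arp(text):
        # /proc/net/arp: "IP address  HW type  Flags  HW address  Mask  Device"
        entries = {}
        for line in text.splitlines()[1:]:          # skip the header row
            parts = line.split()
            if len(parts) >= 4 and parts[3] != "00:00:00:00:00:00":
                entries[parts[0]] = parts[3]        # ip -> mac
        return entries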
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
All device discovery now handled by phpipam-pfsense-import CronJob
which queries Kea DHCP leases + pfSense ARP table every 5min.
No active scanning needed — pfSense sees all devices passively.
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- New CronJob `phpipam-pfsense-import` runs every 5min
- Queries Kea DHCP lease API (IP + MAC + hostname for all DHCP clients)
- Queries pfSense ARP table (IP + MAC for static IP devices)
- Imports into phpIPAM MySQL: new hosts get inserted, existing get MAC/hostname updates
- fping scans dialed back from every 15min to every 24h (kept only as an audit pass)
- Faster, quieter, gets MACs (fping didn't), gets Kea hostnames
- SSH key (RSA PEM) stored in Vault, synced via ExternalSecret
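Roughly what one import run does, as a sketch: lease4-get-all is a standard Kea control-agent command (it needs the lease_cmds hook), but the agent URL and the phpIPAM table/column names used below (ipaddresses, ip_addr, hostname, mac) are assumptions to verify against the installed schema:

    import pymysql
    import requests

    # Ask the Kea control agent for every active DHCPv4 lease.
    leases = requests.post(
        "http://pfsense:8000/",                      # control-agent URL is an assumption
        json={"command": "lease4-get-all", "service": ["dhcp4"]},
        timeout=30,
    ).json()[0]["arguments"]["leases"]

    db = pymysql.connect(host="mysql", user="phpipam", password="...", database="phpipam")
    with db.cursor() as cur:
        for lease in leases:
            ip = lease["ip-address"]
            mac = lease.get("hw-address", "")
            hostname = lease.get("hostname", "")
            cur.execute("SELECT id FROM ipaddresses WHERE ip_addr = INET_ATON(%s)", (ip,))
            row = cur.fetchone()
            if row:   # existing host: refresh MAC/hostname
                cur.execute("UPDATE ipaddresses SET mac=%s, hostname=%s WHERE id=%s",
                            (mac, hostname, row[0]))
            # insert path for new hosts and the pfSense ARP merge are omitted here
    db.commit()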
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- CronJob now pulls hostnames FROM Technitium INTO phpIPAM for unnamed entries
(reverse sync: Kea DDNS registers → Technitium PTR → phpIPAM hostname)
- Kea DHCP4 now serves 192.168.1.0/24 via pfSense WAN (vtnet0)
- 42 MAC→IP reservations for all known LAN devices
- Kea DDNS registers 192.168.1.x hosts in Technitium (forward + reverse)
- DHCP pool .150-.199 for unknown devices
- Technitium update ACL extended to include 192.168.1.2 (pfSense WAN)
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- CronJob syncs phpIPAM hosts → Technitium DNS (A + PTR records) every 15min
- Queries phpIPAM MySQL directly for named hosts, pushes to Technitium API
- Covers 192.168.1.0/24 LAN (TP-Link DHCP, not Kea-managed)
- Kea DDNS configured on pfSense for 10.0.10.0/24 + 10.0.20.0/24 subnets
- Technitium zones accept dynamic updates from pfSense IPs (10.0.20.1, 10.0.10.1)
- 5 reverse DNS zones created (10.0.10, 20.0.10, 1.168.192, 2.3.10, 0.168.192)
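The zone names above are the reverse-octet prefixes of those subnets under in-addr.arpa; the PTR owner names pushed to Technitium follow directly from each address (stdlib example):

    import ipaddress

    print(ipaddress.ip_address("10.0.10.5").reverse_pointer)
    # -> 5.10.0.10.in-addr.arpa    (a record in zone 10.0.10.in-addr.arpa)

    print(ipaddress.ip_address("192.168.1.42").reverse_pointer)
    # -> 42.1.168.192.in-addr.arpa (zone 1.168.192.in-addr.arpa from the list above)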
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Lightweight IPAM with auto-discovery scanning every 15min via fping.
Replaces disabled NetBox (OOM'd). Uses existing MySQL InnoDB cluster
with Vault-rotated credentials. Cloudflare DNS + Authentik auth.
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
MySQL 8.4 remapped `utf8` to mean `utf8mb4`, but without this config Nextcloud
sends `COLLATE UTF8_general_ci` (a utf8mb3-only collation) in its queries,
causing SQLSTATE[42000] errors that broke occ commands and sync.
Also removed stale `Work 🎯.csv` whose emoji filename was stripped in the
DB filecache (stored as `Work .csv`), causing permanent sync errors.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Bump immich server + ML from v2.6.3 to v2.7.3
- Increase PG shared_buffers to 2GB (memory 3Gi) to prevent
clip_index eviction by background jobs
- Switch DB_STORAGE_TYPE to SSD (effective_io_concurrency=200,
random_page_cost=1.2)
- Add pg_prewarm autoprewarm for warm restarts
- Add postgresql.override.conf via init container for tuning
- Add postStart hook to prewarm vector tables on startup
Search latency: ~1.3s → ~130ms (external), ~60ms (internal)
192.168.1.x LAN clients couldn't reach non-proxied *.viktorbarzin.me
domains because the TP-Link router doesn't support hairpin NAT.
Adds a CronJob that configures Technitium's Split Horizon
AddressTranslation post-processor on all 3 instances to translate
176.12.22.76 (public IP) → 10.0.20.200 (Traefik LB) in DNS responses
for 192.168.1.0/24 clients. Also adds viktorbarzin.me to the DNS
Rebinding Protection privateDomains allowlist so the translated private
IP isn't stripped.
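The translation rule the post-processor applies, expressed as plain logic (an illustration of the behavior, not the Split Horizon app's actual code):

    import ipaddress

    LAN_CLIENTS = ipaddress.ip_network("192.168.1.0/24")
    TRANSLATE = {"176.12.22.76": "10.0.20.200"}   # public IP -> Traefik LB

    def rewrite_answer(client_ip: str, answer: str) -> str:
        # Only clients behind the TP-Link (no hairpin NAT) get the translated answer;
        # everyone else keeps resolving to the public IP.
        if ipaddress.ip_address(client_ip) in LAN_CLIENTS:
            return TRANSLATE.get(answer, answer)
        return answer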
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Technitium DNS was moved to its own dedicated MetalLB LoadBalancer IP
(10.0.20.201) but several references still pointed to the old shared IP
(10.0.20.200, now used by traefik/coturn/etc). This caused DNS resolution
failures for *.viktorbarzin.lan from pfSense and LAN clients.
- Update CoreDNS Corefile forward in both technitium and platform modules
- Update MetalLB annotation and remove stale allow-shared-ip
- Update zone NS records and apex A record in config.tfvars
- Update legacy BIND forwarder reference
Also fixed on pfSense (not in repo):
- Removed NAT rule redirecting UDP 53 to wrong IP (10.0.20.200)
- Added dnsmasq listen on WAN (192.168.1.2) for LAN clients
- Added domain-specific forwarding (viktorbarzin.lan -> 10.0.20.201)
- Created aliases (technitium_dns, k8s_shared_lb) for all NAT rules
[ci skip]
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The backup CronJob was stuck in ContainerCreating because it couldn't
mount the proxmox-lvm RWO PVC from a different node. Fixed by:
- Adding pod_affinity to co-locate with the headscale pod (same node)
- Mounting both data PVC (read-only) and NFS backup PVC (write)
- Adding integrity check pattern from vaultwarden backup
- Setting concurrency_policy=Replace and ttl_seconds_after_finished=10
iOS Safari doesn't support reading images via navigator.clipboard.read().
Added a camera button that opens the native file/photo picker, which works
reliably on all platforms including iOS.
- Custom index.html with xterm.js for reliable Ctrl+V text paste
- Go clipboard-upload service saves pasted images to /tmp/clipboard-images/
- Traefik IngressRoute routes /clipboard/* to upload service (same-origin)
- Authentik-protected upload path with strip-prefix middleware
The MySQL operator ignores podSpec.containers sidecar resource overrides,
always injecting its 6Gi limit defaults. Added the sidecar override to the CR
spec for documentation, but raised the quota from 32Gi to 40Gi as the practical
fix. Quota usage drops from 99% to 79%.
Changed from simple time-based (24h on inverter) to condition-based: the alert
only fires when on inverter AND battery charge has been <80% for 1h. Normal
daytime inverter usage no longer triggers it; it fires only when the grid is
unavailable and the battery is draining.
- HighPowerUsage: raise from 200W to 300W (R730 idles at ~230W)
- HighServiceLatency: exclude headscale (WebSocket) and authentik (SSO)
from latency checks — both have inherently high avg response times
snmp-exporter-external.viktorbarzin.me exposed UPS metrics to the
public internet with no authentication. Removed the external ingress
and Cloudflare DNS record. ha-sofia now accesses the SNMP exporter
via the existing .lan ingress (allow_local_access_only=true) using
direct IP 10.0.20.200 with Host header.
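What the ha-sofia REST sensor now effectively does is hit the shared LB IP and present the .lan host so the local-only ingress routes the request; the scheme, hostname, and snmp_exporter module/target below are assumptions, the real values live in the Home Assistant REST config:

    import requests

    resp = requests.get(
        "http://10.0.20.200/snmp",                           # shared Traefik LB, assumed plain HTTP
        headers={"Host": "snmp-exporter.viktorbarzin.lan"},  # .lan ingress host (assumed name)
        params={"target": "ups", "module": "apcups"},        # snmp_exporter scrape params (assumed)
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.text.splitlines()[:5])   # first few UPS metric lines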
- Add snmp-exporter-ingress-external module for external HTTPS access to snmp-exporter
- Register snmp-exporter-external.viktorbarzin.me in Cloudflare DNS (proxied via tunnel)
- Update ha-sofia REST integration to use external HTTPS endpoint
- Fix ingress backend service routing to use existing snmp-exporter service
- All UPS sensors on ha-sofia now report values (voltage, battery %, load, etc.)
- changedetection: increase memory from 64Mi to 256Mi/512Mi (was getting OOMKilled),
  set replicas back to 1
- flaresolverr: re-enable with replicas=1, increase memory limit to 1Gi
(needed by book-search for Cloudflare bypass)
Previously only searched for the current run's specific marker subject.
If IMAP deletion failed, old emails accumulated. Now searches for all
emails with "e2e-probe" in subject and deletes them, cleaning up any
leftovers from prior failed runs.
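A stdlib sketch of the broadened cleanup; server, credentials, and mailbox are placeholders, and only the "e2e-probe" subject substring comes from the probe itself:

    import imaplib

    imap = imaplib.IMAP4_SSL("mail.viktorbarzin.me")
    imap.login("probe@example.com", "***")          # placeholder credentials
    imap.select("INBOX")

    # Match every probe email, not just the current run's marker subject.
    _typ, data = imap.search(None, "SUBJECT", '"e2e-probe"')
    for num in data[0].split():
        imap.store(num, "+FLAGS", r"\Deleted")
    imap.expunge()
    imap.logout()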
Root cause: Traefik v3 auto-detects HTTPS for backend port 443, ignoring the
port name "http" and the serversscheme annotation. MeshCentral serves HTTP on
443 (TLSOffload mode), but Traefik connected via HTTPS, causing a TLS handshake
failure → 500.
Fix: Change K8s service port from 443 to 80 with target_port 443.
Traefik sees port 80 → uses HTTP → reaches MeshCentral correctly.
Also disables anti-AI scraping (internal tool behind Authentik).
The rewrite-body plugin (anti-AI trap links) was crashing while processing
MeshCentral's HTML responses, returning 500. Disabled anti_ai_scraping since
MeshCentral is an internal tool already protected behind Authentik.
Re-enabled Authentik protection.
The previous init container incorrectly disabled TLSOffload, causing
MeshCentral to serve HTTPS on port 443 while Traefik connects via HTTP,
resulting in a protocol mismatch and 500 errors. The fix ensures TLSOffload
is always enabled so MeshCentral serves plain HTTP behind Traefik.