[docs] TrueNAS decommission cleanup — remove references from active docs

TrueNAS VM 9000 was operationally decommissioned 2026-04-13; NFS has been
served by the Proxmox host (192.168.1.127) since then. This commit scrubs the
remaining references from active docs. VM 9000 itself remains on PVE in a
stopped state pending user decision on deletion.

In-session cleanup already landed:

- reverse-proxy ingress + Cloudflare record removed
- Technitium DNS records deleted
- Vault truenas_{api_key,ssh_private_key} purged
- homepage_credentials.reverse_proxy.truenas_token removed
- truenas_homepage_token variable + module deleted
- Loki + Dashy cleaned
- config.tfvars deprecated DNS lines removed
- historical-name comment added to the nfs-truenas StorageClass (48 bound PVs, immutable name — kept)

Historical records (docs/plans/, docs/post-mortems/, .planning/) intentionally
untouched — they describe state at a point in time.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Viktor Barzin 2026-04-19 16:55:43 +00:00
parent 5f832e37d0
commit 5a0b24f54e
19 changed files with 57 additions and 61 deletions

View file

@ -155,9 +155,9 @@ Choose storage class based on workload type:
**Default for sensitive data is proxmox-lvm-encrypted.** Use plain `proxmox-lvm` only for non-sensitive workloads. Use NFS when you need RWX or backup pipeline integration, or for large shared media libraries.
**NFS servers:**
- **Proxmox host** (192.168.1.127): Primary NFS for all workloads. HDD at `/srv/nfs` (ext4 thin LV `pve/nfs-data`, 1TB). SSD at `/srv/nfs-ssd` (ext4 LV `ssd/nfs-ssd-data`, 100GB). Exports use `async,insecure` options (`async` — safe with UPS + Vault Raft replication + databases on block storage; `insecure` — pfSense NATs source ports >1024 between VLANs).
- **TrueNAS** (10.0.10.15): **Immich only** (8 PVCs). `nfs-truenas` StorageClass retained exclusively for Immich.
**NFS server:**
- **Proxmox host** (192.168.1.127): Sole NFS for all workloads. HDD at `/srv/nfs` (ext4 thin LV `pve/nfs-data`, 1TB). SSD at `/srv/nfs-ssd` (ext4 LV `ssd/nfs-ssd-data`, 100GB). Exports use `async,insecure` options (`async` — safe with UPS + Vault Raft replication + databases on block storage; `insecure` — pfSense NATs source ports >1024 between VLANs).
- **`nfs-truenas` StorageClass**: Historical name retained only because SC names are immutable on PVs (48 bound PVs reference it — renaming would require mass PV churn, not worth it). Now points to the Proxmox host, identical to `nfs-proxmox`. TrueNAS (VM 9000, 10.0.10.15) operationally decommissioned 2026-04-13; VM still exists in stopped state on PVE pending user decision on deletion.
**Migration note**: CSI PV `volumeAttributes` are immutable — cannot update NFS server in place. New PV/PVC pairs required (convention: append `-host` to PV name).
@ -237,7 +237,7 @@ resource "kubernetes_persistent_volume_claim" "data_encrypted" {
**Synology layout** (`192.168.1.13:/volume1/Backup/Viki/`):
- `pve-backup/` — PVC file backups (`pvc-data/`), SQLite backups (`sqlite-backup/`), pfSense, PVE config (synced from sda)
- `nfs/` — mirrors `/srv/nfs` on Proxmox (inotify change-tracked rsync, renamed from `truenas/`)
- `nfs/` — mirrors `/srv/nfs` on Proxmox (inotify change-tracked rsync)
- `nfs-ssd/` — mirrors `/srv/nfs-ssd` on Proxmox (inotify change-tracked rsync)
**App-level CronJobs** (write to Proxmox host NFS, synced to Synology via inotify):

View file

@ -105,7 +105,7 @@ Terragrunt-based homelab managing a Kubernetes cluster (5 nodes, v1.34.2) on Pro
- **NFS** (`nfs-proxmox` StorageClass): For app data. Use the `nfs_volume` module, never inline `nfs {}` blocks.
- **proxmox-lvm-encrypted** (`proxmox-lvm-encrypted` StorageClass): **Default for all sensitive data** — databases, auth, email, passwords, git repos, health data. LUKS2 encryption via Proxmox CSI. Passphrase in Vault, backup key on PVE host.
- **proxmox-lvm** (`proxmox-lvm` StorageClass): For non-sensitive stateful apps (configs, caches, tools). Proxmox CSI driver.
- **NFS server**: Proxmox host at 192.168.1.127. HDD NFS at `/srv/nfs` (2TB ext4 LV `pve/nfs-data`), SSD NFS at `/srv/nfs-ssd` (100GB ext4 LV `ssd/nfs-ssd-data`). Exports use `async` mode (safe with UPS + databases on block storage). TrueNAS (10.0.10.15) decommissioned.
- **NFS server**: Proxmox host at 192.168.1.127 (sole NFS). HDD NFS at `/srv/nfs` (2TB ext4 LV `pve/nfs-data`), SSD NFS at `/srv/nfs-ssd` (100GB ext4 LV `ssd/nfs-ssd-data`). Exports use `async` mode (safe with UPS + databases on block storage). TrueNAS (VM 9000, 10.0.10.15) decommissioned 2026-04-13. Legacy `nfs-truenas` StorageClass name retained (48 PVs bind it; SC names are immutable on PVs) but now points to the Proxmox host, identical to `nfs-proxmox`.
- **SQLite on NFS is unreliable** (fsync issues) — always use proxmox-lvm or local disk for databases.
- **NFS mount options**: Always `soft,timeo=30,retrans=3` to prevent uninterruptible sleep (D state).
- **NFS export directory must exist** on the Proxmox host before Terraform can create the PV.
@ -113,7 +113,7 @@ Terragrunt-based homelab managing a Kubernetes cluster (5 nodes, v1.34.2) on Pro
- **daily-backup** (Daily 05:00): Auto-discovered BACKUP_DIRS (glob), auto SQLite backup (magic number + `?mode=ro`), pfSense, PVE config. No NFS mirror step (NFS syncs directly to Synology via inotify).
- **offsite-sync-backup** (Daily 06:00): Step 1: sda→Synology `pve-backup/`. Step 2: NFS→Synology `nfs/`+`nfs-ssd/` via `rsync --files-from` (inotify change log). Monthly full `--delete`.
- **nfs-change-tracker.service**: inotifywait on `/srv/nfs` + `/srv/nfs-ssd`, logs to `/mnt/backup/.nfs-changes.log`. Incremental syncs complete in seconds.
- **Synology layout** (`/volume1/Backup/Viki/`): `pve-backup/` (from sda), `nfs/` (from `/srv/nfs`), `nfs-ssd/` (from `/srv/nfs-ssd`). `truenas/` renamed to `nfs/`, `pve-backup/nfs-mirror/` removed.
- **Synology layout** (`/volume1/Backup/Viki/`): `pve-backup/` (from sda), `nfs/` (from `/srv/nfs`), `nfs-ssd/` (from `/srv/nfs-ssd`).
## Shared Variables (never hardcode)
`var.nfs_server` (192.168.1.127), `var.redis_host`, `var.postgresql_host`, `var.mysql_host`, `var.ollama_host`, `var.mail_host`

View file

@ -80,8 +80,6 @@ def sofia():
pfsense >> k8s_switch
with Cluster('Management Network'):
mgt_switch = Switch()
# Truenas
truenas = Storage("Truenas")
# pxe server
pxe_server = Rack("PXE Server")
# HA
@ -91,7 +89,6 @@ def sofia():
devvm_vpn_client = VPN("Tailscale Client")
vpn_clients["devvm"] = devvm_vpn_client
mgt_switch >> truenas
mgt_switch >> pxe_server
mgt_switch >> home_assistant
mgt_switch >> devvm

View file

@ -20,7 +20,7 @@ This repository contains the configuration and documentation for a homelab Kuber
| [Overview](architecture/overview.md) | Infrastructure overview, hardware specs, VM inventory, and service catalog |
| [Networking](architecture/networking.md) | Network topology, VLANs, routing, and firewall rules |
| [VPN](architecture/vpn.md) | Headscale mesh VPN and Cloudflare Tunnel configuration |
| [Storage](architecture/storage.md) | TrueNAS NFS, democratic-csi, and persistent volume management |
| [Storage](architecture/storage.md) | Proxmox host NFS, Proxmox CSI (LVM-thin + LUKS2), and persistent volume management |
| [Authentication](architecture/authentication.md) | Authentik SSO, OIDC flows, and service integration |
| [Security](architecture/security.md) | CrowdSec IPS, Kyverno policies, and security controls |
| [Monitoring](architecture/monitoring.md) | Prometheus, Grafana, Loki, and observability stack |

View file

@ -209,7 +209,7 @@ graph LR
| Vault Backup | Weekly Sunday 02:00, 30d | CronJob in `vault` | raft snapshot |
| Redis Backup | Weekly Sunday 03:00, 30d | CronJob in `redis` | BGSAVE + copy |
| Vaultwarden Integrity Check | Hourly | CronJob in `vaultwarden` | PRAGMA integrity_check → metric |
| ~~TrueNAS Cloud Sync~~ | **DECOMMISSIONED** | Was TrueNAS Cloud Sync Task 1 | Replaced by offsite-sync-backup |
| ~~TrueNAS Cloud Sync~~ | **DECOMMISSIONED 2026-04-13** | Was TrueNAS Cloud Sync Task 1 | Replaced by offsite-sync-backup + inotify change tracking on Proxmox host NFS |
## How It Works
@ -334,14 +334,14 @@ Two-step offsite sync:
**Monthly full sync**: On 1st Sunday of month, runs `rsync --delete` for cleanup (removes orphaned files on Synology).
**Destination**:
- `Synology/Backup/Viki/nfs/` — mirrors `/srv/nfs` (renamed from `truenas/`)
- `Synology/Backup/Viki/nfs/` — mirrors `/srv/nfs`
- `Synology/Backup/Viki/nfs-ssd/` — mirrors `/srv/nfs-ssd`
**Monitoring**: Pushes `offsite_backup_sync_last_success_timestamp` to Pushgateway. Alerts: `OffsiteBackupSyncStale` (>8d), `OffsiteBackupSyncFailing`.
#### ~~TrueNAS Cloud Sync~~ — DECOMMISSIONED
#### ~~TrueNAS Cloud Sync~~ — DECOMMISSIONED 2026-04-13
> TrueNAS Cloud Sync was decommissioned along with TrueNAS (2026-04). The `Synology/Backup/Viki/truenas/` directory was renamed to `nfs/` to reflect the new consolidated layout.
> TrueNAS Cloud Sync was decommissioned along with TrueNAS (2026-04-13). The current offsite path is inotify-change-tracked rsync from the Proxmox host NFS (`/srv/nfs`, `/srv/nfs-ssd`) to Synology.
## Configuration

View file

@ -269,8 +269,8 @@ Push secrets (`BREVO_API_KEY`, `EMAIL_MONITOR_IMAP_PASSWORD`) come from External
| `mailserver-data-encrypted` | 2Gi (auto-resize 5Gi) | `proxmox-lvm-encrypted` (LUKS2) | Maildir + Postfix queue + state + logs |
| `roundcubemail-html-encrypted` | 1Gi | `proxmox-lvm-encrypted` | Roundcube PHP code + user session data |
| `roundcubemail-enigma-encrypted` | 1Gi | `proxmox-lvm-encrypted` | Roundcube Enigma (PGP) user keys |
| `mailserver-backup-host` (RWX) | 10Gi | `nfs-truenas` | `mailserver-backup` CronJob destination (`/srv/nfs/mailserver-backup/<YYYY-WW>/`) |
| `roundcube-backup-host` (RWX) | 10Gi | `nfs-truenas` | `roundcube-backup` CronJob destination |
| `mailserver-backup-host` (RWX) | 10Gi | `nfs-truenas` (historical SC name, Proxmox host NFS) | `mailserver-backup` CronJob destination (`/srv/nfs/mailserver-backup/<YYYY-WW>/`) |
| `roundcube-backup-host` (RWX) | 10Gi | `nfs-truenas` (historical SC name, Proxmox host NFS) | `roundcube-backup` CronJob destination |
**Backup**: daily `mailserver-backup` + `roundcube-backup` CronJobs rsync data PVCs to NFS. NFS directory is picked up by the PVE host's inotify-driven `/usr/local/bin/offsite-sync-backup` which pushes to Synology (weekly). See [Storage & Backup Architecture](storage.md) for the 3-2-1 flow.

View file

@ -28,7 +28,6 @@ graph TB
subgraph "VLAN 10 - Management<br/>10.0.10.0/24"
Proxmox[Proxmox Host<br/>10.0.10.1]
TrueNAS[TrueNAS<br/>10.0.10.15]
DevVM[DevVM<br/>10.0.10.10]
Registry[Registry VM<br/>10.0.20.10]
end
@ -64,7 +63,6 @@ graph TB
vmbr0 -.physical link.- eno1
vmbr0 --> vmbr1
vmbr1 -.VLAN 10.- Proxmox
vmbr1 -.VLAN 10.- TrueNAS
vmbr1 -.VLAN 10.- DevVM
vmbr1 -.VLAN 20.- pfSense
vmbr1 -.VLAN 20.- Tech
@ -146,7 +144,7 @@ flowchart LR
| Subnet | DHCP Server | DNS option 6 | Reservations | DDNS | Notes |
|--------|------------|--------------|--------------|------|-------|
| 10.0.10.0/24 (Mgmt) | Kea on pfSense | `10.0.10.1, 94.140.14.14` | 4 (devvm, truenas, pxe, ha) | Yes (TSIG) | VMs with static MACs |
| 10.0.10.0/24 (Mgmt) | Kea on pfSense | `10.0.10.1, 94.140.14.14` | 3 (devvm, pxe, ha) | Yes (TSIG) | VMs with static MACs |
| 10.0.20.0/24 (K8s) | Kea on pfSense | `10.0.20.1, 94.140.14.14` | 7 (master, nodes 1-5, registry) | Yes (TSIG) | K8s cluster nodes |
| 192.168.1.0/24 (LAN) | **TP-Link AP** | `192.168.1.2, 94.140.14.14` | 42 (all home devices) | Yes | pfSense Kea WAN is disabled |
| 10.3.2.0/24 (VPN) | Static | — | — | No | WireGuard peers |
@ -160,7 +158,7 @@ flowchart LR
The Proxmox host uses a dual-bridge architecture:
- **vmbr0**: Physical bridge on interface `eno1`, connected to upstream LAN (192.168.1.0/24). Proxmox management IP is 192.168.1.127.
- **vmbr1**: Internal VLAN-aware bridge, acts as a trunk carrying:
- **VLAN 10 (Management)**: 10.0.10.0/24 — Proxmox, TrueNAS, DevVM
- **VLAN 10 (Management)**: 10.0.10.0/24 — Proxmox, DevVM
- **VLAN 20 (Kubernetes)**: 10.0.20.0/24 — All K8s nodes, services, MetalLB IPs
VMs tag traffic on vmbr1 to isolate workloads. pfSense bridges VLAN 20 to the upstream LAN via NAT.
@ -369,7 +367,7 @@ Containerd on all K8s nodes uses `hosts.toml` to redirect pulls to the local cac
1. **Single flat network**: Simpler, but no isolation between management and workload traffic.
2. **Routed network with physical VLANs**: Requires switch with VLAN support.
**Decision**: vmbr0 (physical) + vmbr1 (VLAN trunk) gives isolation without requiring managed switches. Management traffic (Proxmox, TrueNAS) stays on VLAN 10, K8s workloads stay on VLAN 20. Failures in K8s don't affect access to Proxmox or storage.
**Decision**: vmbr0 (physical) + vmbr1 (VLAN trunk) gives isolation without requiring managed switches. Management traffic (Proxmox, DevVM) stays on VLAN 10, K8s workloads stay on VLAN 20. Failures in K8s don't affect access to Proxmox or storage.
### Why Cloudflared Tunnel Instead of Port Forwarding?

View file

@ -92,7 +92,7 @@ graph TB
| 203 | k8s-node3 | 8 | 32GB | vmbr1:vlan20 | - | Worker node |
| 204 | k8s-node4 | 8 | 32GB | vmbr1:vlan20 | - | Worker node |
| 220 | docker-registry | 4 | 4GB | vmbr1:vlan20 | 10.0.20.10 | Private Docker registry |
| ~~9000~~ | ~~truenas~~ | — | — | — | ~~10.0.10.15~~ | **DECOMMISSIONED** — NFS now served by Proxmox host (192.168.1.127) |
| ~~9000~~ | ~~truenas~~ | — | — | — | ~~10.0.10.15~~ | **DECOMMISSIONED 2026-04-13** — NFS now served by Proxmox host (192.168.1.127). VM still exists in stopped state on PVE pending user decision on deletion. |
### Kubernetes Cluster
@ -213,7 +213,7 @@ Secrets are stored in HashiCorp Vault under `secret/`:
**Rationale**:
- **Flexibility**: Easy to snapshot, clone, and roll back VMs during upgrades
- **Isolation**: Management network (TrueNAS, devvm) separated from Kubernetes
- **Isolation**: Management network (devvm) separated from Kubernetes
- **GPU passthrough**: Can dedicate GPU to a single node without tainting the entire host
- **Multi-purpose**: Same physical host can run non-K8s VMs (pfSense, Home Assistant)

View file

@ -16,13 +16,13 @@ All services storing sensitive data were migrated to `proxmox-lvm-encrypted` on
- **HDD NFS**: `/srv/nfs` on ext4 LV `pve/nfs-data` (2TB) — bulk media and backup targets
- **SSD NFS**: `/srv/nfs-ssd` on ext4 LV `ssd/nfs-ssd-data` (100GB) — high-performance data (Immich ML)
Both `StorageClass: nfs-truenas` (name kept for compatibility) and `StorageClass: nfs-proxmox` (identical) point to the Proxmox host. Migrated from TrueNAS (10.0.10.15) which has been fully decommissioned.
Both `StorageClass: nfs-truenas` and `StorageClass: nfs-proxmox` point to the Proxmox host and are functionally identical. The `nfs-truenas` name is historical — it was retained because StorageClass names are immutable on bound PVs (48 PVs reference it) and renaming would force mass PV churn across the cluster.
**Backup storage (sda)**: 1.1TB RAID1 SAS disk, VG `backup`, LV `data` (ext4), mounted at `/mnt/backup` on PVE host. Dedicated backup disk for weekly PVC file backups, auto SQLite backups, pfSense backups, and PVE config. NFS data syncs directly to Synology via inotify change tracking (not stored on sda). Independent of live storage (sdc).
**Migration (2026-04-02)**: All iSCSI block volumes were migrated from democratic-csi (TrueNAS iSCSI → ZFS → LVM-thin) to Proxmox CSI (direct LVM-thin hotplug). democratic-csi iSCSI driver has been removed.
**History (2026-04-02)**: iSCSI block volumes migrated from democratic-csi (TrueNAS iSCSI → ZFS → LVM-thin) to Proxmox CSI (direct LVM-thin hotplug). democratic-csi iSCSI driver removed.
**Migration (2026-04)**: TrueNAS (10.0.10.15) fully decommissioned. All NFS storage migrated to the Proxmox host (192.168.1.127). ZFS datasets under `/mnt/main/` and `/mnt/ssd/` moved to ext4 LVs at `/srv/nfs/` and `/srv/nfs-ssd/`. Legacy PVs referencing `/mnt/main/` paths still work (bind-mounted or symlinked on the Proxmox host); new PVs use `/srv/nfs/` and `/srv/nfs-ssd/`.
**History (2026-04-13)**: TrueNAS (VM 9000, 10.0.10.15) fully decommissioned. NFS storage migrated to the Proxmox host (192.168.1.127). ZFS datasets under `/mnt/main/` and `/mnt/ssd/` moved to ext4 LVs at `/srv/nfs/` and `/srv/nfs-ssd/`. Legacy PVs referencing `/mnt/main/` paths still work (bind-mounted or symlinked on the Proxmox host); new PVs use `/srv/nfs/` and `/srv/nfs-ssd/`. TrueNAS VM still exists in stopped state on PVE pending user decision on deletion.
## Architecture Diagram
@ -39,7 +39,7 @@ graph TB
end
subgraph K8s["Kubernetes Cluster"]
CSI_NFS["nfs-csi driver<br/>StorageClass: nfs-truenas / nfs-proxmox<br/>soft,timeo=30,retrans=3"]
CSI_NFS["nfs-csi driver<br/>StorageClass: nfs-proxmox (+ legacy nfs-truenas)<br/>soft,timeo=30,retrans=3"]
CSI_PVE["Proxmox CSI plugin<br/>StorageClass: proxmox-lvm<br/>StorageClass: proxmox-lvm-encrypted"]
NFS_PV["NFS PersistentVolumes<br/>RWX, ~100 volumes"]
@ -77,10 +77,10 @@ graph TB
| Proxmox NFS (HDD) | LV `pve/nfs-data`, 2TB ext4 | 192.168.1.127:/srv/nfs | Bulk NFS data for all services |
| Proxmox NFS (SSD) | LV `ssd/nfs-ssd-data`, 100GB ext4 | 192.168.1.127:/srv/nfs-ssd | High-performance data (Immich ML) |
| nfs-csi | Helm chart | Namespace: nfs-csi | NFS CSI driver |
| StorageClass `nfs-truenas` | RWX, soft mount | Cluster-wide | NFS storage (name kept for compatibility, points to Proxmox) |
| StorageClass `nfs-proxmox` | RWX, soft mount | Cluster-wide | NFS storage (identical to nfs-truenas) |
| StorageClass `nfs-proxmox` | RWX, soft mount | Cluster-wide | NFS storage, points to Proxmox host |
| StorageClass `nfs-truenas` | RWX, soft mount | Cluster-wide | **Historical name** — functionally identical to `nfs-proxmox`, points to the Proxmox host. Kept because SC names are immutable on 48 bound PVs. |
| TF module `nfs_volume` | `modules/kubernetes/nfs_volume/` | Infra repo | Static NFS PV/PVC factory |
| ~~TrueNAS VM~~ | **DECOMMISSIONED** | Was VMID 9000 at 10.0.10.15 | Replaced by Proxmox NFS (2026-04) |
| ~~TrueNAS VM~~ | **DECOMMISSIONED 2026-04-13** | Was VM 9000 at 10.0.10.15 | Replaced by Proxmox NFS. VM still in stopped state pending deletion. |
| ~~democratic-csi-iscsi~~ | **REMOVED** | Was namespace: iscsi-csi | Replaced by Proxmox CSI (2026-04-02) |
| ~~StorageClass `iscsi-truenas`~~ | **REMOVED** | Was cluster-wide | Replaced by `proxmox-lvm` |
@ -105,7 +105,7 @@ graph TB
**Note**: Some legacy PVs still reference `/mnt/main/<service>` paths. These work via compatibility symlinks/bind-mounts on the Proxmox host. New PVs should use `/srv/nfs/<service>` or `/srv/nfs-ssd/<service>`.
**CRITICAL**: Never use inline `nfs {}` blocks in pod specs — they default to `hard,timeo=600` which causes 10-minute hangs on network issues. Always use the `nfs-truenas` or `nfs-proxmox` StorageClass via PVCs.
**CRITICAL**: Never use inline `nfs {}` blocks in pod specs — they default to `hard,timeo=600` which causes 10-minute hangs on network issues. Always use the `nfs-proxmox` StorageClass (or the legacy `nfs-truenas` for existing PVs) via PVCs.
### Block Storage Flow (Proxmox CSI) — NEW
@ -164,7 +164,7 @@ SQLite uses `fsync()` to guarantee durability. NFS's soft mount + async semantic
|------|---------|
| `/etc/exports` (on Proxmox host) | NFS export configuration for all service shares |
| `stacks/proxmox-csi/` | Terraform stack for Proxmox CSI plugin + StorageClass |
| `stacks/nfs-csi/` | NFS CSI driver + StorageClasses (`nfs-truenas`, `nfs-proxmox`) |
| `stacks/nfs-csi/` | NFS CSI driver + StorageClasses (`nfs-proxmox` + legacy `nfs-truenas`) |
| `modules/kubernetes/nfs_volume/` | Reusable module for static NFS PV/PVC creation |
| `config.tfvars` | Variable `nfs_server = "192.168.1.127"` shared by all stacks |
@ -173,8 +173,10 @@ SQLite uses `fsync()` to guarantee durability. NFS's soft mount + async semantic
| Path | Contents |
|------|----------|
| `secret/viktor/proxmox_csi_encryption_passphrase` | LUKS2 encryption passphrase for `proxmox-lvm-encrypted` StorageClass |
| ~~`secret/viktor/truenas_ssh_key`~~ | **LEGACY** — was SSH key for democratic-csi SSH driver (TrueNAS decommissioned) |
| ~~`secret/viktor/truenas_root_password`~~ | **LEGACY** — was TrueNAS root password (TrueNAS decommissioned) |
| ~~`secret/viktor/truenas_ssh_key`~~ | **REMOVED** — was SSH key for democratic-csi SSH driver (TrueNAS decommissioned 2026-04-13) |
| ~~`secret/viktor/truenas_root_password`~~ | **REMOVED** — was TrueNAS root password (TrueNAS decommissioned 2026-04-13) |
| ~~`secret/viktor/truenas_api_key`~~ | **REMOVED** — was TrueNAS API key (TrueNAS decommissioned 2026-04-13) |
| ~~`secret/viktor/truenas_ssh_private_key`~~ | **REMOVED** — was TrueNAS SSH private key (TrueNAS decommissioned 2026-04-13) |
### Terraform Stacks

View file

@ -63,10 +63,10 @@ sequenceDiagram
Cloudflare-->>AdGuard: A record (Cloudflare IP)
AdGuard-->>Client: Response
Note over Client: Query: truenas.viktorbarzin.lan
Note over Client: Query: nextcloud.viktorbarzin.lan
Client->>AdGuard: DNS query
AdGuard->>Technitium: Forward (.lan domain)
Technitium-->>AdGuard: A record (10.0.10.15)
Technitium-->>AdGuard: A record (10.0.20.200)
AdGuard-->>Client: Response
Note over Client,Technitium: If Cloudflared tunnel is down:
@ -370,14 +370,14 @@ dns_config:
### Can't Resolve .lan Domains from VPN
**Symptoms**: `nslookup truenas.viktorbarzin.lan` returns `NXDOMAIN`.
**Symptoms**: `nslookup nextcloud.viktorbarzin.lan` returns `NXDOMAIN`.
**Diagnosis**: Check DNS chain: Client → AdGuard → Technitium.
**Steps**:
1. Verify AdGuard is running: `kubectl get pod -n adguard`
2. Check AdGuard conditional forwarding: Query AdGuard directly: `nslookup truenas.viktorbarzin.lan <adguard-ip>`
3. Check Technitium: `nslookup truenas.viktorbarzin.lan 10.0.20.101`
2. Check AdGuard conditional forwarding: Query AdGuard directly: `nslookup nextcloud.viktorbarzin.lan <adguard-ip>`
3. Check Technitium: `nslookup nextcloud.viktorbarzin.lan 10.0.20.101`
**Common causes**:
1. **AdGuard not forwarding .lan**: Conditional forwarding rule missing or misconfigured.

View file

@ -146,8 +146,8 @@ qm shutdown 220; sleep 10
for VMID in 102 300 103; do qm shutdown $VMID; done
sleep 20
# TrueNAS (wait for ZFS flush)
qm shutdown 9000; sleep 60
# TrueNAS (decommissioned 2026-04-13 — VM 9000 should already be stopped; skip if absent)
qm shutdown 9000 2>/dev/null || true
# pfSense (last — network gateway)
qm shutdown 101; sleep 15

View file

@ -7,7 +7,7 @@
## Backup Location
- NFS: `/mnt/main/etcd-backup/etcd-snapshot-YYYYMMDD-HHMMSS.db`
- Replicated to Synology NAS (192.168.1.13) via TrueNAS ZFS replication
- Replicated to Synology NAS (192.168.1.13) via Proxmox host offsite-sync-backup (inotify-driven rsync)
- Retention: 30 days
- Schedule: Daily at 00:00

View file

@ -8,8 +8,8 @@ Last updated: 2026-04-06
- Proxmox host failure requiring fresh VM provisioning
## Prerequisites
- Proxmox host (192.168.1.127) accessible
- TrueNAS NFS server (10.0.10.15) accessible — or Synology NAS (192.168.1.13) for backups
- Proxmox host (192.168.1.127) accessible, with NFS exports on `/srv/nfs` and `/srv/nfs-ssd`
- Synology NAS (192.168.1.13) accessible for offsite backup restore if the PVE host backup disk is also lost
- sda backup disk mounted at `/mnt/backup` on PVE host (or restore from Synology first)
- Git repo with infra code
- SOPS age keys for state decryption (`~/.config/sops/age/keys.txt`)

View file

@ -130,7 +130,7 @@ kubectl rollout restart deployment -n <namespace>
## Alternative: Restore from sda Backup
If TrueNAS NFS is unavailable but the PVE host is accessible:
If the Proxmox host NFS mount is unavailable but the PVE host itself is accessible:
```bash
# 1. SSH to PVE host
@ -148,17 +148,17 @@ kubectl run mysql-restore --rm -it --image=mysql \
## Alternative: Restore from Synology (if PVE host is down)
If both TrueNAS and PVE host are unavailable:
If the PVE host itself is unavailable:
```bash
# 1. SSH to Synology NAS
ssh Administrator@192.168.1.13
# 2. Navigate to backup directory
cd /volume1/Backup/Viki/pve-backup/nfs-mirror/mysql-backup/
cd /volume1/Backup/Viki/nfs/mysql-backup/
# 3. Copy dump to a temporary location accessible from cluster
# (e.g., via rsync to a surviving node, or restore TrueNAS first)
# (e.g., via rsync to a surviving node, or restore PVE host first)
```
## Estimated Time

View file

@ -123,7 +123,7 @@ kubectl rollout restart deployment -n <namespace>
## Alternative: Restore from sda Backup
If TrueNAS NFS is unavailable but the PVE host is accessible:
If the Proxmox host NFS mount is unavailable but the PVE host itself is accessible:
```bash
# 1. SSH to PVE host
@ -142,17 +142,17 @@ kubectl run pg-restore --rm -it --image=postgres:16.4-bullseye \
## Alternative: Restore from Synology (if PVE host is down)
If both TrueNAS and PVE host are unavailable:
If the PVE host itself is unavailable:
```bash
# 1. SSH to Synology NAS
ssh Administrator@192.168.1.13
# 2. Navigate to backup directory
cd /volume1/Backup/Viki/pve-backup/nfs-mirror/postgresql-backup/
cd /volume1/Backup/Viki/nfs/postgresql-backup/
# 3. Copy dump to a temporary location accessible from cluster
# (e.g., via rsync to a surviving node, or restore TrueNAS first)
# (e.g., via rsync to a surviving node, or restore PVE host first)
```
## Estimated Time

View file

@ -93,7 +93,7 @@ kubectl get externalsecrets -A | grep -v "SecretSynced"
## Alternative: Restore from sda Backup
If TrueNAS NFS is unavailable but the PVE host is accessible:
If the Proxmox host NFS mount is unavailable but the PVE host itself is accessible:
```bash
# 1. SSH to PVE host
@ -115,17 +115,17 @@ vault operator raft snapshot restore -force ./vault-raft-YYYYMMDD-HHMMSS.db
## Alternative: Restore from Synology (if PVE host is down)
If both TrueNAS and PVE host are unavailable:
If the PVE host itself is unavailable:
```bash
# 1. SSH to Synology NAS
ssh Administrator@192.168.1.13
# 2. Navigate to backup directory
cd /volume1/Backup/Viki/pve-backup/nfs-mirror/vault-backup/
cd /volume1/Backup/Viki/nfs/vault-backup/
# 3. Copy snapshot to local workstation
scp Administrator@192.168.1.13:/volume1/Backup/Viki/pve-backup/nfs-mirror/vault-backup/vault-raft-YYYYMMDD-HHMMSS.db ./
scp Administrator@192.168.1.13:/volume1/Backup/Viki/nfs/vault-backup/vault-raft-YYYYMMDD-HHMMSS.db ./
# 4. Restore via port-forward (same as above)
```

View file

@ -104,9 +104,9 @@ lvchange -an pve/$LV_NAME
kubectl scale deployment vaultwarden -n vaultwarden --replicas=1
```
## Alternative: Restore from sda NFS Mirror
## Alternative: Restore from sda Backup Mirror
If TrueNAS NFS is unavailable but PVE host is accessible:
If the Proxmox host NFS mount is unavailable but the PVE host itself is accessible:
```bash
# 1. SSH to PVE host

View file

@ -84,8 +84,8 @@ spec:
volumes:
- name: data
nfs:
server: 10.0.10.15
path: /mnt/main/excalidraw
server: 192.168.1.127
path: /srv/nfs/excalidraw
```
### With Authentik SSO

View file

@ -85,7 +85,6 @@ Widget-capable matches (candidate): **27**
| `reverse-proxy` | `proxmox` | `https://proxmox.viktorbarzin.me` | `link-only` |
| `reverse-proxy` | `r730` | `https://r730.viktorbarzin.me` | `link-only` |
| `reverse-proxy` | `registry` | `https://registry.viktorbarzin.me` | `link-only` |
| `reverse-proxy` | `truenas` | `https://truenas.viktorbarzin.me` | `truenas` |
| `reverse-proxy` | `valchedrym` | `https://valchedrym.viktorbarzin.me` | `link-only` |
| `rybbit` | `rybbit` | `https://rybbit.viktorbarzin.me` | `link-only` |
| `send` | `send` | `https://send.viktorbarzin.me` | `link-only` |