Viktor Barzin 6cfc4b7836 [mailserver] Add backup CronJob for Roundcube html + enigma PVCs
## Context
Roundcube webmail runs with two encrypted RWO PVCs (see roundcubemail.tf:
`roundcubemail-html-encrypted`, `roundcubemail-enigma-encrypted`). These
carry user-visible state that is NOT regenerable without user action:

- `html` PVC → Apache docroot, plugin installs, skin overrides, session
  artefacts (two_factor_webauthn keys, persistent_login tokens, rcguard
  throttle state)
- `enigma` PVC → user-uploaded PGP private keyrings

Per the subdir CLAUDE.md "Storage & Backup Architecture" rule, every
`proxmox-lvm*` PVC MUST have a backup CronJob writing to NFS
`/mnt/main/<app>-backup/`. Mailserver already complies via code-z26's
`mailserver-backup` CronJob; Roundcube does not. Losing either Roundcube
PVC means users must re-add 2FA devices, re-install plugins, and
re-import PGP keys — none of it recoverable from a database dump.

Target task: `code-1f6`.

## This change
- Adds `module.nfs_roundcube_backup_host` sourcing
  `modules/kubernetes/nfs_volume` pointed at
  `/srv/nfs/roundcube-backup` on the Proxmox host (NFSv4, inotify
  change-tracker picks it up for Synology offsite).
- Adds `kubernetes_cron_job_v1.roundcube-backup`:
  - Schedule `10 3 * * *` — 10 minutes after `mailserver-backup`
    (`0 3 * * *`) to avoid NFS write-window contention. Roundcube PVCs
    are tiny (<200 MiB combined on current cluster) so the window is
    well under 10 min.
  - `pod_affinity` on `app=roundcubemail` (Roundcube runs a single replica
    with the `Recreate` strategy and may land on a different node after each
    restart; the backup pod must co-locate with it because both PVCs are RWO).
  - `rsync -aH --delete --link-dest=/backup/<prev-week>` into
    `/backup/<YYYY-WW>/{html,enigma}/`: hardlinks unchanged files against the
    previous weekly snapshot, so storage cost stays roughly the size of the
    delta (see the script sketch after this list).
  - Weekly rotation retains 8 snapshots (~2 months), matching
    `mailserver-backup`.
  - Pushgateway metrics under `job=roundcube-backup` so existing
    `BackupDurationHigh` / `BackupStale` alert patterns detect
    regressions without extra wiring.
  - `KYVERNO_LIFECYCLE_V1` `ignore_changes` for mutated `dns_config`.
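
For reference, a minimal sketch of the backup flow the bullets above describe,
assuming the PVCs are mounted at `/html` and `/enigma`, the NFS volume at
`/backup`, and GNU coreutils in the image. The actual script is whatever the
CronJob spec in `roundcubemail.tf` defines; only the Pushgateway URL, metric
names, and retention count below are taken from this change.

```
#!/bin/sh
# Illustrative sketch only; mount paths and the rotation command are assumptions.
set -eu

WEEK=$(date +%Y-%W)                    # current weekly snapshot dir
PREV=$(date -d '7 days ago' +%Y-%W)    # previous snapshot, used for --link-dest
START=$(date +%s)

for vol in html enigma; do
  mkdir -p "/backup/${WEEK}/${vol}"
  rsync -aH --delete \
    --link-dest="/backup/${PREV}/${vol}" \
    "/${vol}/" "/backup/${WEEK}/${vol}/"
done

# Keep the 8 most recent weekly snapshots (~2 months), matching mailserver-backup.
ls -1d /backup/*/ | sort | head -n -8 | xargs -r rm -rf

# Push duration and last-success timestamp so the existing
# BackupDurationHigh / BackupStale alert patterns cover this job.
cat <<EOF | curl -s --data-binary @- \
  http://prometheus-prometheus-pushgateway.monitoring:9091/metrics/job/roundcube-backup
backup_duration_seconds $(( $(date +%s) - START ))
backup_last_success_timestamp $(date +%s)
EOF
```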

## Layout
```
 NFS server 192.168.1.127:/srv/nfs/
 ├── mailserver-backup/        (0 3 * * *  — code-z26)
 │   └── <YYYY-WW>/{data,state,log}/
 └── roundcube-backup/         (10 3 * * * — this change)
     └── <YYYY-WW>/{html,enigma}/
```

## What is NOT in this change
- Changing the mailserver-backup CronJob to also cover Roundcube. Two
  separate CronJobs keep the concerns (and pod anti-affinity/affinity)
  clean; the 10-min stagger eliminates the contention justification for
  merging them.
- Retention alerting tuning — existing Pushgateway/Prometheus rule
  ecosystem suffices for now.
- Restore tooling — follows the standard pattern in
  `docs/runbooks/` (rsync back, fix perms).

## Reproduce locally
1. Plan: `cd stacks/mailserver && scripts/tg plan -lock=false` →
   2 new resources (nfs_volume module + CronJob).
2. Apply, then trigger a one-shot run:
   `kubectl -n mailserver create job --from=cronjob/roundcube-backup roundcube-backup-manual-1`
3. Expected on success:
   - `kubectl -n mailserver logs job/roundcube-backup-manual-1` → "=== Backup IO Stats ===".
   - On Proxmox host:
     `ls /srv/nfs/roundcube-backup/$(date +%Y-%W)/` → `html`, `enigma`.
   - `/mnt/backup/.nfs-changes.log` (Proxmox) lists fresh paths under
     `roundcube-backup/` within ~1s of the rsync finishing.
   - Pushgateway: `curl -s prometheus-prometheus-pushgateway.monitoring:9091/metrics | grep roundcube`
     shows `backup_duration_seconds`, `backup_last_success_timestamp`.

## Automated
- `terraform fmt -check -recursive stacks/mailserver/modules/mailserver/` → clean.
- `scripts/tg plan -lock=false` in stacks/mailserver expected to show
  `+ module.nfs_roundcube_backup_host.*`, `+ kubernetes_cron_job_v1.roundcube-backup`.

Closes: code-1f6

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

This repo contains my infra-as-code sources.

My infrastructure is built with Terraform and Kubernetes, and CI/CD is handled by Woodpecker CI.

Read more by visiting my website: https://viktorbarzin.me

## Documentation

Full architecture documentation is available in `docs/` — covering networking, storage, security, monitoring, secrets, CI/CD, databases, and more.

## Adding a New User (Admin)

Adding a new namespace-owner to the cluster requires three steps — no code changes needed.

### 1. Authentik Group Assignment

In the Authentik admin UI, add the user to:

- `kubernetes-namespace-owners` group (grants OIDC group claim for K8s RBAC)
- Headscale Users group (if they need VPN access)

### 2. Vault KV Entry

Add a JSON entry to the `secret/platformk8s_users` key in Vault (a CLI sketch follows the field notes below):

"username": {
  "role": "namespace-owner",
  "email": "user@example.com",
  "namespaces": ["username"],
  "domains": ["myapp"],
  "quota": {
    "cpu_requests": "2",
    "memory_requests": "4Gi",
    "memory_limits": "8Gi",
    "pods": "20"
  }
}
- The `username` key must match the user's Forgejo username (for Woodpecker admin access)
- `namespaces` — K8s namespaces to create and grant admin access to
- `domains` — subdomains under viktorbarzin.me for Cloudflare DNS records
- `quota` — resource limits per namespace (defaults shown above)
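
A hedged CLI sketch for adding that entry without overwriting existing users,
assuming the KV v2 engine is mounted at `secret/`, that each user is a
top-level key of `platformk8s_users`, and that `jq` is available (adjust to
the secret's actual layout):

```
# Read the current users map, merge in the new entry with jq, write it back.
vault kv get -format=json secret/platformk8s_users \
  | jq '.data.data + {
      "username": {
        "role": "namespace-owner",
        "email": "user@example.com",
        "namespaces": ["username"],
        "domains": ["myapp"],
        "quota": {"cpu_requests": "2", "memory_requests": "4Gi",
                  "memory_limits": "8Gi", "pods": "20"}
      }
    }' \
  | vault kv put secret/platformk8s_users -
```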

### 3. Apply Stacks

```
vault login -method=oidc

cd stacks/vault && terragrunt apply --non-interactive
# Creates: namespace, Vault policy, identity entity, K8s deployer role

cd ../platform && terragrunt apply --non-interactive
# Creates: RBAC bindings, ResourceQuota, TLS secret, DNS records

cd ../woodpecker && terragrunt apply --non-interactive
# Adds user to Woodpecker admin list
```

### What Gets Auto-Generated

| Resource | Stack |
|----------|-------|
| Kubernetes namespace | vault |
| Vault policy (`namespace-owner-{user}`) | vault |
| Vault identity entity + OIDC alias | vault |
| K8s deployer Role + Vault K8s role | vault |
| RBAC RoleBinding (namespace admin) | platform |
| RBAC ClusterRoleBinding (cluster read-only) | platform |
| ResourceQuota | platform |
| TLS secret in namespace | platform |
| Cloudflare DNS records | platform |
| Woodpecker admin access | woodpecker |

## New User Onboarding

If you've been added as a namespace-owner, follow these steps to get started.

### 1. Join the VPN

```
# Install Tailscale: https://tailscale.com/download
tailscale login --login-server https://headscale.viktorbarzin.me
# Send the registration URL to Viktor, wait for approval
ping 10.0.20.100  # verify connectivity
```

### 2. Install Tools

Run the setup script to install kubectl, kubelogin, Vault CLI, Terraform, and Terragrunt:

```
# macOS
bash <(curl -fsSL https://k8s-portal.viktorbarzin.me/setup/script?os=mac)

# Linux
bash <(curl -fsSL https://k8s-portal.viktorbarzin.me/setup/script?os=linux)
```

### 3. Authenticate

```
# Log into Vault (opens browser for SSO)
vault login -method=oidc

# Test kubectl (opens browser for OIDC login)
kubectl get pods -n YOUR_NAMESPACE
```

### 4. Deploy Your First App

```
# Clone the infra repo
git clone https://github.com/ViktorBarzin/infra.git && cd infra

# Copy the stack template
cp -r stacks/_template stacks/myapp
mv stacks/myapp/main.tf.example stacks/myapp/main.tf

# Edit main.tf — replace all <placeholders>

# Store secrets in Vault
vault kv put secret/YOUR_USERNAME/myapp DB_PASSWORD=secret123

# Submit a PR
git checkout -b feat/myapp
git add stacks/myapp/
git commit -m "add myapp stack"
git push -u origin feat/myapp
```

After review and merge, an admin runs `cd stacks/myapp && terragrunt apply`.

### 5. Set Up CI/CD (Optional)

Create `.woodpecker.yml` in your app's Forgejo repo:

```
steps:
  - name: build
    image: woodpeckerci/plugin-docker-buildx
    settings:
      repo: YOUR_DOCKERHUB_USER/myapp
      tag: ["${CI_PIPELINE_NUMBER}", "latest"]
      username:
        from_secret: dockerhub-username
      password:
        from_secret: dockerhub-token
      platforms: linux/amd64

  - name: deploy
    image: hashicorp/vault:1.18.1
    commands:
      - export VAULT_ADDR=http://vault-active.vault.svc.cluster.local:8200
      - export VAULT_TOKEN=$(vault write -field=token auth/kubernetes/login
          role=ci jwt=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token))
      - KUBE_TOKEN=$(vault write -field=service_account_token
          kubernetes/creds/YOUR_NAMESPACE-deployer
          kubernetes_namespace=YOUR_NAMESPACE)
      - kubectl --server=https://kubernetes.default.svc
          --token=$KUBE_TOKEN
          --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          -n YOUR_NAMESPACE set image deployment/myapp
          myapp=YOUR_DOCKERHUB_USER/myapp:${CI_PIPELINE_NUMBER}
```

## Useful Commands

```
# Check your pods
kubectl get pods -n YOUR_NAMESPACE

# View quota usage
kubectl describe resourcequota -n YOUR_NAMESPACE

# Store/read secrets
vault kv put secret/YOUR_USERNAME/myapp KEY=value
vault kv get secret/YOUR_USERNAME/myapp

# Get a short-lived K8s deploy token
vault write kubernetes/creds/YOUR_NAMESPACE-deployer \
  kubernetes_namespace=YOUR_NAMESPACE
```
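
The deployer credential returns a `service_account_token` field (the same
field the CI snippet above reads), so for a one-off manual deploy you can
capture it and hand it to `kubectl`; the deployment name and image tag below
are placeholders:

```
# Capture only the token field from the Vault response.
KUBE_TOKEN=$(vault write -field=service_account_token \
  kubernetes/creds/YOUR_NAMESPACE-deployer \
  kubernetes_namespace=YOUR_NAMESPACE)

# Use it for a single imperative command against your namespace.
kubectl --token="$KUBE_TOKEN" -n YOUR_NAMESPACE \
  set image deployment/myapp myapp=YOUR_DOCKERHUB_USER/myapp:42
```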

## Important Rules

- All changes go through Terraform — never `kubectl apply/edit/patch` directly
- Never put secrets in code — use Vault: `vault kv put secret/YOUR_USERNAME/...`
- Always use a PR — never push directly to master
- Docker images: build for linux/amd64, use versioned tags (not `:latest`); see the buildx example below
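
For the last rule, a minimal example of building and pushing a correctly
targeted, versioned image (repository name and tag are placeholders):

```
# Build explicitly for linux/amd64 (e.g. when working on an arm64 laptop)
# and push a versioned tag rather than :latest.
docker buildx build --platform linux/amd64 \
  -t YOUR_DOCKERHUB_USER/myapp:42 \
  --push .
```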

## git-crypt setup

To decrypt the secrets, you need to set up git-crypt.

1. Install git-crypt.
2. Set up GPG keys on the machine.
3. Run `git-crypt unlock`.

This unlocks the secrets in the working tree; git-crypt transparently re-encrypts (locks) them on commit.
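
A short sketch of that flow, assuming your GPG key has already been added as a
git-crypt collaborator (the install command and key file name are placeholders
for your platform):

```
# Install git-crypt (Debian/Ubuntu shown; brew install git-crypt on macOS).
sudo apt install -y git-crypt

# Make sure the GPG private key that was added as a collaborator is present.
gpg --import my-private-key.asc

# Decrypt the protected files in the working tree.
git-crypt unlock
```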