[authentik] Phase 1 hardening — 3 replicas, PgBouncer PDB/probes, perf env

## Context

Following the 2026-04-18 /dev/shm ENOSPC P0 and a 5-subagent research pass,
this is Phase 1 of the authentik reliability + performance hardening epic
(beads code-cwj). Scope: everything that is safe, additive, and does not
require DB restart, architectural migration, or the 43-service auth path
to go through a risky validation window.

Five research findings drove the deltas:

1. **Server/worker at 2 replicas** conflicts with the documented convention
   "critical path services scaled to 3" in .claude/CLAUDE.md (Traefik,
   Authentik, CrowdSec LAPI, PgBouncer, Cloudflared). PDB minAvailable was
   still 1, so a voluntary drain could legally evict down to a single pod,
   and any failure of that last pod would take auth down.
2. **PgBouncer had no resource requests/limits** — it was silently capped
   by the Kyverno tier-defaults LimitRange (256Mi) and ran with no PDB and
   no probes, so pool failures went undetected until clients hit connection
   timeouts.
3. **Authentik 2026.2 has no Redis** (the cache moved to Postgres in
   2025.10). Persistent Django connections and longer flow/policy cache TTLs
   are the two knobs that move the needle most without DB tuning. Persistent
   connections in particular are safe because PgBouncer runs in session
   mode, where each client holds one server connection for its lifetime.
4. **Gunicorn defaults** (2 workers × 4 threads on server, 1 process × 2
   threads on worker) don't use the pod's 1.5 Gi headroom. Each worker
   preloads Django at ~500 MiB — bumping to 3 workers needs a memory bump
   to 2 Gi first.
5. **AUTHENTIK_WORKER__CONCURRENCY was renamed to AUTHENTIK_WORKER__THREADS**
   in 2025.8 — the old name is still aliased, but the canonical config key
   changed (see the spot-check after this list).
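
A quick cross-check for finding 5 (a sketch: it assumes the Helm chart
flattens `authentik.worker.threads` into the `AUTHENTIK_WORKER__THREADS`
environment variable, and that the worker deployment is named
`goauthentik-worker` as in the test plan below):

    $ kubectl -n authentik exec deploy/goauthentik-worker -- \
        sh -c 'env | grep ^AUTHENTIK_WORKER__'
    # expect: AUTHENTIK_WORKER__THREADS=4, with no CONCURRENCY alias needed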

## This change

### values.yaml
- server.replicas 2 → 3 (PDB minAvailable 1 → 2)
- worker.replicas 2 → 3
- server/worker limits.memory 1.5 Gi → 2 Gi (headroom for gunicorn workers)
- authentik.postgresql.conn_max_age = 60 (persistent connections; safe
  with pgbouncer session mode, conn_max_age < server_idle_timeout=600s)
- authentik.postgresql.conn_health_checks = true
- authentik.cache.timeout_flows = 1800 (30 min; was 300)
- authentik.cache.timeout_policies = 900 (15 min; was 300)
- authentik.web.workers = 3, threads = 4
- authentik.worker.threads = 4 (was 2)
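
To spot-check that the live release carries these values (a sketch: the
release name `authentik` matches the terraform address in the test plan,
and `yq` is assumed to be installed):

    $ helm -n authentik get values authentik -o yaml | \
        yq '.authentik.postgresql.conn_max_age, .authentik.cache.timeout_flows'
    60
    1800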

### pgbouncer.tf
- container resources: requests cpu=50m/mem=128Mi, limits mem=512Mi.
  Observed live usage is 1-3m CPU and 2-4 MiB RSS, so this is generous
  headroom, and the explicit limit lifts the pod out of the Kyverno 256Mi
  tier-default cap.
- readiness probe: TCP :6432, 10s period
- liveness probe: TCP :6432, 30s period, 30s delay
- kubernetes_pod_disruption_budget_v1.pgbouncer: minAvailable=2
  (3 replicas; single drain rolls cleanly, two-node simultaneous
  outage correctly blocked)
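
A minimal spot-check of the applied container spec (a sketch; recent
kubectl versions print the resources map as JSON):

    $ kubectl -n authentik get deploy pgbouncer \
        -o jsonpath='{.spec.template.spec.containers[0].resources}{"\n"}'
    {"limits":{"memory":"512Mi"},"requests":{"cpu":"50m","memory":"128Mi"}}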

## What is NOT in this change (deferred as Phase 2 follow-ups)

- Codify outpost /dev/shm patch in Terraform (currently applied via
  Authentik API, not in code). Needs authentik_outpost resource.
- Migrate embedded outpost → dedicated outpost Deployment with 2
  replicas + sticky sessions. This is the only HA path per GH issue
  #18098, and it requires flow design because outpost sessions live in
  in-process memory only.
- PG max_connections 100 → 200 + shared_buffers 512MB → 768MB + CNPG
  pod memory 2Gi → 3Gi. Needs coordinated DB restart.
- Enable pg_stat_statements on CNPG cluster for Authentik DB
  observability (currently shared_preload_libraries is empty).
- PgBouncer pool_mode session → transaction + django_channels layer
  split. Needs atomic change + psycopg3 prepared-statement support.
- authentik_tasks_tasklog 7-day retention (198k rows today, growing
  unbounded; a read-only size check is sketched after this list).
- Traefik forward-auth plugin caching via
  xabinapal/traefik-authentik-forward-plugin.
- Grafana dashboard 14837 import + recording rule for
  authentik_flow_execution_duration (reported broken: values in ns
  while default buckets are seconds — upstream discussion #7156).
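
For the tasklog item above, a read-only size check (a sketch: the CNPG
namespace and primary pod name are placeholders, adjust to the actual
cluster):

    $ kubectl -n <cnpg-namespace> exec <cnpg-primary-pod> -- \
        psql -d authentik -c 'SELECT count(*) FROM authentik_tasks_tasklog;'
    # reported at ~198k rows at the time of this change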

## Test plan

### Automated

    $ cd stacks/authentik && ../../scripts/tg plan
    Plan: 1 to add, 3 to change, 0 to destroy.

    $ ../../scripts/tg apply --non-interactive
    module.authentik.kubernetes_pod_disruption_budget_v1.pgbouncer: Creation complete after 0s
    module.authentik.kubernetes_deployment.pgbouncer: Modifications complete after 45s
    module.authentik.helm_release.authentik: Modifications complete after 2m47s
    Apply complete! Resources: 1 added, 3 changed, 0 destroyed.

### Manual Verification

1. **Pod topology and PDBs**:

        $ kubectl -n authentik get pods,pdb
        pod/goauthentik-server-5fc69b6cc6-ctvkp   1/1   Running   0   3m14s   k8s-node2
        pod/goauthentik-server-5fc69b6cc6-fkn8x   1/1   Running   0   3m45s   k8s-node3
        pod/goauthentik-server-5fc69b6cc6-jtjjd   1/1   Running   0   5m6s    k8s-node1
        pod/goauthentik-worker-5cfb7dc9bf-b2rlr   1/1   Running   0   3m44s   k8s-node2
        pod/goauthentik-worker-5cfb7dc9bf-fkfm4   1/1   Running   0   5m6s    k8s-node1
        pod/goauthentik-worker-5cfb7dc9bf-hxdg6   1/1   Running   0   3m3s    k8s-node4
        pod/pgbouncer-64746f955f-st567            1/1   Running   0   4m58s   k8s-node4
        pod/pgbouncer-64746f955f-xss9c            1/1   Running   0   5m11s   k8s-node2
        pod/pgbouncer-64746f955f-zvfkw            1/1   Running   0   4m45s   k8s-node3
        poddisruptionbudget/goauthentik-server    2     N/A   1
        poddisruptionbudget/goauthentik-worker    N/A   1     1
        poddisruptionbudget/pgbouncer             2     N/A   1

   All three workloads are spread across 3+ nodes; the PDB columns
   (MIN AVAILABLE / MAX UNAVAILABLE / ALLOWED DISRUPTIONS) show each
   budget currently allowing exactly 1 disruption.

2. **Authentik server health**:

        $ curl -sS -o /dev/null -w "%{http_code}\n" \
            https://authentik.viktorbarzin.me/-/health/ready/
        200

3. **Forward-auth redirect on protected service**:

        $ curl -sS -o /dev/null -w "%{http_code}\n" -L \
            https://wealthfolio.viktorbarzin.me/
        200

4. **Outpost /dev/shm still within sizeLimit** (patches from the
   2026-04-18 post-mortem were not regressed):

        $ kubectl -n authentik exec deploy/ak-outpost-authentik-embedded-outpost \
            -c proxy -- df -h /dev/shm
        tmpfs   2.0G  58M  2.0G  3%  /dev/shm

5. **PgBouncer accepting TCP on :6432** (an in-pod check; the new
   readiness probe covers ongoing reachability):

        $ kubectl -n authentik exec deploy/pgbouncer -- nc -zv 127.0.0.1 6432
        127.0.0.1 (127.0.0.1:6432) open
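
6. **PgBouncer probes wired in** (an extra sketch beyond the recorded run;
   `describe` output shape varies slightly by kubectl version):

        $ kubectl -n authentik describe deploy pgbouncer | \
            grep -E 'Liveness|Readiness'
        # expect tcp-socket :6432 with the delay/period values from pgbouncer.tf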

## Reproduce locally

1. `cd stacks/authentik && ../../scripts/tg plan` — expect 0/0/0 (No changes).
2. `kubectl -n authentik get pdb pgbouncer` — expect MIN AVAILABLE 2.
3. `kubectl -n authentik get deploy goauthentik-server -o jsonpath='{.spec.replicas}'` — expect 3.

Closes: code-cwj

## Diff

### pgbouncer.tf

    @@ -74,6 +74,36 @@ resource "kubernetes_deployment" "pgbouncer" {
               container_port = 6432
             }
    +        resources {
    +          requests = {
    +            cpu    = "50m"
    +            memory = "128Mi"
    +          }
    +          limits = {
    +            memory = "512Mi"
    +          }
    +        }
    +        readiness_probe {
    +          tcp_socket {
    +            port = 6432
    +          }
    +          initial_delay_seconds = 5
    +          period_seconds        = 10
    +          timeout_seconds       = 3
    +          failure_threshold     = 3
    +        }
    +        liveness_probe {
    +          tcp_socket {
    +            port = 6432
    +          }
    +          initial_delay_seconds = 30
    +          period_seconds        = 30
    +          timeout_seconds       = 5
    +          failure_threshold     = 3
    +        }
             volume_mount {
               name       = "config"
               mount_path = "/etc/pgbouncer/pgbouncer.ini"
    @@ -121,6 +151,25 @@ resource "kubernetes_deployment" "pgbouncer" {
         }
       }
    +
    +# --- 3b PodDisruptionBudget ---
    +# Protects auth against simultaneous node drains. With 3 replicas and
    +# minAvailable=2, a single drain rolls cleanly; a simultaneous two-node
    +# outage is correctly blocked.
    +resource "kubernetes_pod_disruption_budget_v1" "pgbouncer" {
    +  metadata {
    +    name      = "pgbouncer"
    +    namespace = "authentik"
    +  }
    +  spec {
    +    min_available = 2
    +    selector {
    +      match_labels = {
    +        app = "pgbouncer"
    +      }
    +    }
    +  }
    +}
     
     # --- 4 Service ---
     resource "kubernetes_service" "pgbouncer" {
       metadata {

### values.yaml

    @@ -14,9 +14,29 @@ authentik:
         port: 6432
         user: authentik
         password: ""
    +    # Persistent client-side connections (safe with PgBouncer session mode;
    +    # must be < pgbouncer server_idle_timeout=600s). Cuts Django connection
    +    # setup overhead off the ~70 sequential ORM ops per flow stage.
    +    conn_max_age: 60
    +    conn_health_checks: true
    +  cache:
    +    # Cache flow plans for 30m and policy evaluations for 15m. Authentik 2026.2
    +    # moved cache storage from Redis to Postgres, so a TTL hit is still a
    +    # SELECT — but a single indexed lookup beats re-evaluating PolicyBindings.
    +    timeout_flows: 1800
    +    timeout_policies: 900
    +  web:
    +    # Gunicorn: 3 workers × 4 threads per server pod (default 2×4).
    +    # Pairs with the server memory bump to 2Gi (each worker preloads Django ~500Mi).
    +    workers: 3
    +    threads: 4
    +  worker:
    +    # Celery-equivalent worker threads per pod (default 2, renamed from
    +    # AUTHENTIK_WORKER__CONCURRENCY in 2025.8).
    +    threads: 4
     
     server:
    -  replicas: 2
    +  replicas: 3
       strategy:
         type: RollingUpdate
         rollingUpdate:
    @@ -27,7 +47,7 @@ server:
           cpu: 100m
           memory: 1.5Gi
         limits:
    -      memory: 1.5Gi
    +      memory: 2Gi
       topologySpreadConstraints:
         - maxSkew: 1
           topologyKey: kubernetes.io/hostname
    @@ -44,12 +64,12 @@ server:
         diun.include_tags: "^202[0-9].[0-9]+.*$" # no need to annotate the worker as it uses the same image
       pdb:
         enabled: true
    -    minAvailable: 1
    +    minAvailable: 2
     global:
       addPrometheusAnnotations: true
     worker:
    -  replicas: 2
    +  replicas: 3
       strategy:
         type: RollingUpdate
         rollingUpdate:
    @@ -60,7 +80,7 @@ worker:
           cpu: 100m
           memory: 1.5Gi
         limits:
    -      memory: 1.5Gi
    +      memory: 2Gi
       topologySpreadConstraints:
         - maxSkew: 1
           topologyKey: kubernetes.io/hostname