The per-site `x402_instance` module created one Deployment + Service +
PDB per protected host (9 in total, 9×64Mi). Every pod ran the exact
same logic with the same config; the only variable was the upstream
URL, which isn't needed at all: in forwardAuth mode the gateway only
has to return 200 to "allow", and Traefik routes to the upstream itself.
Refactor to the same pattern as `ai-bot-block`:
* single Deployment + Service in the `traefik` namespace, 2 replicas for HA
* Traefik `Middleware` CRD `x402` (forwardAuth → x402-gateway:8080/auth)
* each consumer ingress just appends `traefik-x402@kubernetescrd` to
its middleware chain via `extra_middlewares`
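The Middleware wiring above can be sketched as a manifest. The `apiVersion` depends on the Traefik release in use (older releases use `traefik.containo.us/v1alpha1`); the name, namespace, and address come from the bullets:

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: x402
  namespace: traefik
spec:
  forwardAuth:
    # Traefik calls this endpoint per request; on a 2xx it forwards the
    # request to the real upstream, on any other status it returns the
    # auth server's response (here: the 402 + PaymentRequiredResponse)
    # directly to the client.
    address: http://x402-gateway:8080/auth
```

Consumer ingresses then reference it as `traefik-x402@kubernetescrd` (namespace-qualified CRD middleware name), which is what appending it via `extra_middlewares` is assumed to render into the router's middleware chain.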
x402-gateway gains a `MODE=forwardauth` env var: instead of
reverse-proxying, it returns 200 (allow) or 402 with an x402
PaymentRequiredResponse body. Image: ghcr ... f4804d62.
Pod count: 9 → 2 (78% memory saved). All 9 sites verified still
serving the Anubis challenge to plain curl with identical TTFB.
DRY_RUN until `var.x402_wallet_address` is set on the traefik stack.
Removes `modules/kubernetes/x402_instance/` (dead code now).