From dbf7732a66de56378580c668159df3e1e468abe5 Mon Sep 17 00:00:00 2001
From: Viktor Barzin
Date: Sat, 18 Apr 2026 11:11:39 +0000
Subject: [PATCH] [uptime-kuma] Bump CPU + memory requests to reduce TTFB jitter
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

## Context

Uptime Kuma TTFB was bimodal: fast ~150ms responses mixed with slow ~3s
responses (median 1.7s, p95 3.2s across 20 samples). The CPU request was
50m (5% of one core) for a Node.js process that handles ~190 monitors
plus SQLite DB maintenance. The memory request was 64Mi while actual RSS
sat around 221Mi, so the pod was also running above its guaranteed
memory floor and was subject to eviction pressure when nodes got tight.

CPU limits are intentionally absent cluster-wide (CFS throttling caused
more pain than it solved), so the only knob that gives the scheduler a
higher floor is the request itself. Raising the request makes the node
reserve more CPU for the pod and lets the kernel's CFS weight it more
generously when the node is busy, which should reduce the tail on the
slow path without introducing throttling.

## This change

- requests.cpu: 50m -> 100m
- requests.memory: 64Mi -> 128Mi
- limits.memory: unchanged at 512Mi
- limits.cpu: still unset (explicit: cluster-wide rule)

## What is NOT in this change

- No CPU limit added
- No readiness/liveness probe tuning
- No replica count change (still 1, Recreate strategy)
- No DB layer / SQLite tuning

## Measurements (20 curl samples of https://uptime.viktorbarzin.me/)

Before: min 0.143s  median 1.727s  p95 3.163s  max 3.204s  mean 1.768s
After:  min 0.149s  median 1.228s  p95 3.154s  max 3.283s  mean 1.590s

Median dropped ~29% (1.73s -> 1.23s). The tail (p95/max) is essentially
unchanged; the slow bucket appears to be driven by something other than
CPU scheduling (likely the socket.io / SSR render path inside the app,
or the TLS / cf-tunnel handshake), which is worth a separate
investigation.
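The "weight it more generously" claim can be made concrete: on cgroup v1,
Kubernetes derives cpu.shares from requests.cpu as millicores * 1024 / 1000
with a floor of 2, so this bump roughly doubles the pod's share of contended
CPU. A minimal sketch of that arithmetic (the function name is illustrative,
not a real kubelet helper):

```shell
# Kubernetes cpu.shares mapping on cgroup v1:
#   shares = millicores * 1024 / 1000, floor of 2.
# millicores_to_shares is an illustrative name, not a kubelet API.
millicores_to_shares() {
  s=$(( $1 * 1024 / 1000 ))
  if [ "$s" -lt 2 ]; then s=2; fi
  echo "$s"
}

millicores_to_shares 50    # old request -> 51 shares
millicores_to_shares 100   # new request -> 102 shares
```

On cgroup v2 the kubelet translates the same request into cpu.weight, but the
relative ordering between pods is preserved, so the reasoning holds either way.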
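For reproducibility, the summary statistics above can be recomputed with
sort + awk over the raw timings. The sketch below uses five hypothetical
samples (the real 20 are not included in this patch); collection itself
would be something like `curl -so /dev/null -w '%{time_starttransfer}\n'
https://uptime.viktorbarzin.me/` in a loop:

```shell
# Simplified nearest-rank summary over TTFB samples (seconds).
# The five values are hypothetical stand-ins; the real run used 20.
stats=$(printf '%s\n' 0.149 1.228 3.154 0.150 1.300 | sort -n | awk '
  { v[NR] = $1; sum += $1 }
  END {
    mid = int((NR + 1) / 2)              # median index (odd NR)
    p95 = int(0.95 * NR); if (p95 < 1) p95 = 1
    printf "min %s median %s p95 %s max %s mean %.3f",
           v[1], v[mid], v[p95], v[NR], sum / NR
  }')
echo "$stats"   # min 0.149 median 1.228 p95 1.300 max 3.154 mean 1.196
```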
Closes: code-79d
---
 stacks/uptime-kuma/modules/uptime-kuma/main.tf | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/stacks/uptime-kuma/modules/uptime-kuma/main.tf b/stacks/uptime-kuma/modules/uptime-kuma/main.tf
index e6009bc5..a3d2a55b 100644
--- a/stacks/uptime-kuma/modules/uptime-kuma/main.tf
+++ b/stacks/uptime-kuma/modules/uptime-kuma/main.tf
@@ -101,8 +101,8 @@ resource "kubernetes_deployment" "uptime-kuma" {
           resources {
             requests = {
-              cpu    = "50m"
-              memory = "64Mi"
+              cpu    = "100m"
+              memory = "128Mi"
             }
             limits = {
               memory = "512Mi"