infra/stacks/platform/modules/monitoring/caretta.tf
Viktor Barzin df44601a36 Monitoring overhaul: reduce noise, add coverage gaps, auto-load dashboards
Noise reduction (8 alerts tuned):
- PoisonFountainDown: 2m→5m, critical→warning (fail-open service)
- NodeExporterDown: 2m→5m (flaps during node restarts)
- PowerOutage: add for:1m (debounce transient voltage dips)
- New Tailscale client: add for:5m (debounce headscale reauths)
- NoNodeLoadData: use absent() instead of OR vector(0)==0
- NodeHighCPUUsage: 30%→60% (normal for 70+ services)
- HighMemoryUsage GPU: 12GB/5m→14GB/15m (T4=16GB, model loading)
- PrometheusStorageFull: 50GiB→150GiB (TSDB cap is 180GB)
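The NoNodeLoadData rewrite above can be sketched as a rule fragment in the same HCL-templated style as the rest of this stack (group and label names here are assumptions, not the repo's actual file):

```hcl
# Sketch only: rule/group names are hypothetical.
locals {
  node_load_rules = yamlencode({
    groups = [{
      name = "node-health"
      rules = [{
        alert = "NoNodeLoadData"
        # absent() fires only when the series is genuinely missing,
        # avoiding the flapping of the old `node_load1 or vector(0) == 0`
        # pattern, which also matched momentary scrape gaps.
        expr  = "absent(node_load1)"
        "for" = "5m"
        labels = { severity = "warning" }
      }]
    }]
  })
}
```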

Alert regrouping:
- Move MailServerDown, HackmdDown, PrivatebinDown → new "Application Health"
- Move New Tailscale client → "Infrastructure Health"

New alerts (14):
- Networking: Cloudflared (2), MetalLB (2), Technitium DNS
- Storage: NFS CSI, iSCSI CSI controllers
- Critical Services: PgBouncer, CNPG operator, MySQL operator
- Infra Health: CrowdSec, Kyverno, Sealed Secrets, Woodpecker
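Since all 14 new alerts follow the same "deployment has no available replicas" shape, they could be generated from a single map rather than written out by hand. A minimal sketch, assuming kube-state-metrics is scraped (the map contents and naming here are illustrative only):

```hcl
# Hypothetical sketch: one template for all deployment-down alerts.
locals {
  watched_deployments = {
    cloudflared      = "networking"
    pgbouncer        = "critical-services"
    "sealed-secrets" = "infra-health"
  }

  deployment_alerts = [
    for name, group in local.watched_deployments : {
      alert = "${title(name)}Down"
      expr  = "kube_deployment_status_replicas_available{deployment=\"${name}\"} == 0"
      "for" = "5m"
      labels = { severity = "critical", group = group }
    }
  ]
}
```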

Inhibit rules:
- Consolidate 3 NodeDown rules into 1 comprehensive rule
- Extend NFS rule to suppress NFS-dependent services
- Add PowerOutage → downstream suppression
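The inhibit-rule changes above could look roughly like this in Alertmanager's `inhibit_rules` schema (matcher fields and label names are assumptions; the consolidated NodeDown rule and the PowerOutage suppression are what the commit describes):

```hcl
# Sketch only: exact matchers in the repo may differ.
locals {
  alertmanager_inhibit_rules = yamlencode({
    inhibit_rules = [
      {
        # One comprehensive rule: a down node suppresses everything
        # else firing for that same node.
        source_matchers = ["alertname = NodeDown"]
        target_matchers = ["severity =~ warning|critical"]
        equal           = ["node"]
      },
      {
        # A power outage explains the downstream node alerts.
        source_matchers = ["alertname = PowerOutage"]
        target_matchers = ["alertname =~ NodeDown|NodeExporterDown"]
      },
    ]
  })
}
```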

Dashboard loading:
- Add for_each ConfigMap in grafana.tf to auto-load all 18 dashboards
- Remove duplicate caretta dashboard ConfigMap from caretta.tf
2026-03-18 08:03:59 +00:00

62 lines
1.3 KiB
HCL

resource "helm_release" "caretta" {
  # Caretta: eBPF-based service dependency map, from the groundcover Helm repo.
  namespace        = kubernetes_namespace.monitoring.metadata[0].name
  create_namespace = true
  name             = "caretta"
  repository       = "https://helm.groundcover.com/"
  chart            = "caretta"
  version          = "0.0.16"

  values = [yamlencode({
    # Reuse the existing monitoring stack instead of the chart's
    # bundled Grafana and VictoriaMetrics.
    grafana = {
      enabled = false
    }
    victoria-metrics-single = {
      enabled = false
    }
    # Tolerate control-plane and GPU taints so Caretta's agents
    # run on every node.
    tolerations = [
      {
        key      = "node-role.kubernetes.io/control-plane"
        operator = "Exists"
        effect   = "NoSchedule"
      },
      {
        key      = "nvidia.com/gpu"
        operator = "Exists"
        effect   = "NoSchedule"
      }
    ]
    resources = {
      requests = {
        cpu    = "10m"
        memory = "300Mi"
      }
      limits = {
        memory = "512Mi"
      }
    }
  })]
}
# ClusterIP service so Prometheus can scrape Caretta's metrics endpoint.
resource "kubernetes_service" "caretta_metrics" {
  metadata {
    name      = "caretta-metrics"
    namespace = kubernetes_namespace.monitoring.metadata[0].name
    labels = {
      app = "caretta"
    }
  }

  spec {
    selector = {
      app = "caretta"
    }

    # Caretta exposes Prometheus metrics on port 7117.
    port {
      name        = "metrics"
      port        = 7117
      target_port = 7117
      protocol    = "TCP"
    }
  }
}
# Caretta dashboard is now loaded via the grafana_dashboards for_each in grafana.tf
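The `for_each` referenced in the comment above could look roughly like this (the directory layout, resource name, and sidecar label are assumptions; the actual definition lives in grafana.tf):

```hcl
# Sketch only: one ConfigMap per dashboard JSON file.
resource "kubernetes_config_map" "grafana_dashboards" {
  for_each = fileset("${path.module}/dashboards", "*.json")

  metadata {
    name      = "dashboard-${replace(each.value, ".json", "")}"
    namespace = kubernetes_namespace.monitoring.metadata[0].name
    labels = {
      grafana_dashboard = "1" # picked up by the Grafana sidecar
    }
  }

  data = {
    (each.value) = file("${path.module}/dashboards/${each.value}")
  }
}
```

With this shape, dropping a new JSON file into `dashboards/` is enough for the next apply to load it, which is what makes the per-dashboard ConfigMap in this file redundant.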