[redis] Stabilise patch_redis_service trigger + document service naming

## Context

`null_resource.patch_redis_service` uses `triggers = { always = timestamp() }`,
so every `scripts/tg plan` on `stacks/redis` reports `1 to destroy, 1 to add`
even when nothing has changed. That noise drowns out real drift and trains
us to ignore redis-stack plans, which is exactly what you don't want on a
load-bearing patch.

The patch itself is still load-bearing. Three consumers hard-code the bare
`redis.redis.svc.cluster.local` name (`stacks/immich/chart_values.tpl:12`,
`stacks/ytdlp/yt-highlights/app/main.py:136`, `config.tfvars:214`), and
Bitnami's own sentinel scripts set `REDIS_SERVICE=redis.redis.svc.cluster.local`
and call it during pod startup. Removing the null_resource is a follow-up
(beads T0) once those consumers migrate to `redis-master.redis.svc`. For now
the goal is just to stop being noisy.

## This change

1. Replace the `always = timestamp()` trigger with two inputs that only change
   when re-patching is genuinely required:
   - `chart_version = helm_release.redis.version` — changes only on a Bitnami
     chart version bump, which is the one code path that rewrites the `redis`
     Service selector back to `component=node`.
   - `haproxy_config = sha256(kubernetes_config_map.haproxy.data["haproxy.cfg"])`
     — changes only when HAProxy config is edited; aligned with the existing
     `checksum/config` annotation that rolls the Deployment on config change.

   Both attributes are known at plan time (verified against the `hashicorp/helm`
   v3.1.1 provider binary); the resulting resource is sketched below. Rejected
   alternatives that don't meet that bar: `metadata[0].revision` (not exposed
   in the plugin-framework v3 rewrite), `sha256(jsonencode(values))` (plan-time
   readability unverified on v3), and `kubernetes_deployment.haproxy.id`
   (static `namespace/name`, never changes).
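
   A minimal sketch of the resulting resource. The trigger map is the real
   change; the `local-exec` body is a hypothetical reconstruction (the actual
   command already exists in the stack and is untouched here), written to
   match the selector invariant checked in the test plan:

   ```
   resource "null_resource" "patch_redis_service" {
     triggers = {
       # Re-patch only on a chart bump (can reset the selector) or an
       # HAProxy config edit (rolls the proxy pods).
       chart_version  = helm_release.redis.version
       haproxy_config = sha256(kubernetes_config_map.haproxy.data["haproxy.cfg"])
     }

     provisioner "local-exec" {
       # Hypothetical reconstruction: re-point the chart-managed `redis`
       # Service at the HAProxy pods.
       command = "kubectl -n redis patch svc redis -p '{\"spec\":{\"selector\":{\"app\":\"redis-haproxy\"}}}'"
     }
   }
   ```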

2. Add a **Redis Service Naming** section to `AGENTS.md` that explicitly
   states the write/sentinel/avoid endpoints, so new consumers start from
   `redis-master.redis.svc` (the documented `var.redis_host`) and long-lived
   connections (PUBSUB, BLPOP, Sidekiq) route around HAProxy's `timeout
   client 30s` via the sentinel headless path. Uptime Kuma's Redis monitor
   already learned that lesson the hard way (memory id=748).
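
   A hedged sketch of the shape that section could take; the wording and the
   sentinel headless hostname are illustrative (the hostname assumes the
   Bitnami default naming), not the committed text:

   ```
   ## Redis Service Naming

   - Write endpoint (default for new consumers): `redis-master.redis.svc`,
     exposed as `var.redis_host`.
   - Long-lived connections (PUBSUB, BLPOP, Sidekiq): use the sentinel
     headless path (assumed Bitnami default: `redis-headless.redis.svc`)
     so HAProxy's `timeout client 30s` can't kill idle connections.
   - Avoid: bare `redis.redis.svc.cluster.local`. Chart-managed, selector
     patched out-of-band, slated for removal (T0).
   ```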

## What is NOT in this change

- Deleting `null_resource.patch_redis_service` — still load-bearing (T0).
- Deleting `kubernetes_service.redis_master` — stays as the declared write API.
- Migrating any consumer off bare `redis.redis.svc` — T0 epic.
- Per-client sentinel migration — T1 epic.
- Retiring HAProxy — T2 epic (blocked on T1 + T3).

## Before / after

Before (steady state):
```
scripts/tg plan
Plan: 1 to add, 2 to change, 1 to destroy.
#   null_resource.patch_redis_service must be replaced
#     triggers = { "always" = "<timestamp>" } -> (known after apply)
```

After (steady state, post-apply):
```
scripts/tg plan
No changes. Your infrastructure matches the configuration.
```

After (chart version bump):
```
scripts/tg plan
#   null_resource.patch_redis_service must be replaced
#     triggers = { "chart_version" = "25.3.2" -> "25.4.0" }
```
— the trigger fires only when it actually needs to.

## Test Plan

### Automated

`scripts/tg plan` pre-change (confirms baseline noise):
```
# module.redis.null_resource.patch_redis_service must be replaced
-/+ resource "null_resource" "patch_redis_service" {
    ~ triggers = { # forces replacement
        ~ "always" = "2026-04-19T10:39:40Z" -> (known after apply)
      }
  }
Plan: 1 to add, 2 to change, 1 to destroy.
```

`scripts/tg plan` post-edit (confirms the one-time structural replacement):
```
# module.redis.null_resource.patch_redis_service must be replaced
-/+ resource "null_resource" "patch_redis_service" {
    ~ triggers = { # forces replacement
        - "always"         = "2026-04-19T10:39:40Z" -> null
        + "chart_version"  = "25.3.2"
        + "haproxy_config" = "989bca9483cb9f9942017320765ec0751ac8357ff447acc5ed11f0a14b609775"
      }
  }
```

Apply is deferred to the operator: the working tree on the same file also
contains an unrelated HAProxy DNS-resolvers fix (for today's immich outage)
that needs its own review before the two roll out together. No `scripts/tg
apply` was run from this session.

### Manual Verification

Reproduce locally:
1. `cd infra/stacks/redis && ../../scripts/tg plan`
2. Before apply: expect `null_resource.patch_redis_service` to be replaced
   exactly once, with the trigger map transitioning from `{always = <ts>}`
   to `{chart_version, haproxy_config}`.
3. After apply: `../../scripts/tg plan` twice in a row must both report
   `No changes.` (excluding unrelated drift from other work-in-progress).
4. Cluster-side invariant (must hold pre- and post-apply):
   `kubectl -n redis get svc redis -o jsonpath='{.spec.selector}'`
   → `{"app":"redis-haproxy"}`
   `kubectl -n redis get svc redis-master -o jsonpath='{.spec.selector}'`
   → `{"app":"redis-haproxy"}`
5. Regression test for the trigger doing its job: bump `helm_release.redis.version`
   in a branch, run `tg plan`, and expect the null_resource to be replaced;
   then revert (shell sketch below).
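
A hedged shell version of step 5. It assumes the chart version pin lives in
this stack's `main.tf` and reuses the versions from the example above; adjust
both to the real pin:

```
sed -i 's/25\.3\.2/25.4.0/' main.tf    # assumed pin location: bump the chart
../../scripts/tg plan | grep -B1 -A2 'chart_version'   # expect a forced replacement
git checkout -- main.tf                # revert the throwaway bump
```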
## Diff

The trigger change (excerpt; the second changed file is the `AGENTS.md`
section described above):

```
@@ -286,7 +286,11 @@ resource "kubernetes_service" "redis_master" {
 # This runs on every apply to ensure the Helm chart's service is always corrected.
 resource "null_resource" "patch_redis_service" {
   triggers = {
-    always = timestamp()
+    # Re-patch only when a Helm upgrade (chart version bump) or an HAProxy
+    # config change could have reset the selector / rotated HAProxy pods.
+    # timestamp() would force-replace on every apply, hiding real drift.
+    chart_version  = helm_release.redis.version
+    haproxy_config = sha256(kubernetes_config_map.haproxy.data["haproxy.cfg"])
   }
   provisioner "local-exec" {
```