## Context
Until now, handing work to the in-cluster `beads-task-runner` agent required
opening BeadBoard and clicking the manual Dispatch button on each bead. We
want users to be able to describe work as a bead, set `assignee=agent`, and
have the agent pick it up within a couple of minutes — no clicks.
The existing pieces already provide everything we need:
- `claude-agent-service` exposes `/execute` with a single-slot `asyncio.Lock`
- BeadBoard's `/api/agent-dispatch` builds the prompt and forwards the bearer
- BeadBoard's `/api/agent-status` reports `busy` via a cached `/health` poll
- Dolt stores beads and is already in-cluster at `dolt.beads-server:3306`
So the only missing component is a poller that ties them together. This
commit adds that poller as two Kubernetes CronJobs — matching the existing
infra pattern (OpenClaw task-processor, certbot-renewal, backups) rather than
introducing n8n or in-service polling.
## Flow
```
user: bd assign <id> agent
│
▼
Dolt @ dolt.beads-server.svc:3306 ◄──── every 2 min ────┐
│ │
▼ │
CronJob: beads-dispatcher │
1. GET beadboard/api/agent-status (busy? skip) │
2. bd query 'assignee=agent AND status=open' │
3. bd update -s in_progress (claim) │
4. POST beadboard/api/agent-dispatch │
5. bd note "dispatched: job=…" │
│ │
▼ │
claude-agent-service /execute │
beads-task-runner agent runs; notes/closes bead │
│ │
▼ │
done ──► next tick picks up the next bead ───────────────┘
CronJob: beads-reaper (every 10 min)
for bead (assignee=agent, status=in_progress, updated_at > 30 min):
bd note "reaper: no progress for Nm — blocking"
bd update -s blocked
```
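In shell, one dispatcher tick amounts to roughly the following. This is a hedged sketch, not the shipped script: `BEADBOARD_URL`/`AGENT_TOKEN`, both endpoints' JSON shapes, `--json` on `bd query`, and the `bd note <id> "…"` argument order are assumptions.
```
#!/bin/sh
# Sketch of one dispatcher tick; env names, payload shapes, and some
# bd flags are assumptions (see lead-in above).
set -eu

# 1. Skip the tick if the single agent slot is busy.
busy=$(curl -fsS "$BEADBOARD_URL/api/agent-status" | jq -r '.busy')
[ "$busy" = "true" ] && exit 0

# 2. Pick one open bead assigned to the agent.
bead=$(bd query 'assignee=agent AND status=open' --json | jq -r '.[0].id // empty')
[ -z "$bead" ] && exit 0

# 3. Claim it so the next tick cannot double-dispatch.
bd update "$bead" -s in_progress
bd note "$bead" "auto-dispatcher claimed at $(date -u +%Y-%m-%dT%H:%M:%SZ)"

# 4. Hand it to BeadBoard, which builds the prompt and forwards the bearer.
job=$(curl -fsS -X POST "$BEADBOARD_URL/api/agent-dispatch" \
  -H "Authorization: Bearer $AGENT_TOKEN" -H 'Content-Type: application/json' \
  -d "{\"bead_id\":\"$bead\"}" | jq -r '.job_id')

# 5. Leave the audit trail on the bead itself.
bd note "$bead" "dispatched: job=$job"
```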
## Decisions
- **Sentinel assignee `agent`** — free-form, no Beads schema change. Any bd
client can set it (`bd assign <id> agent`).
- **Sequential dispatch** — matches the service's `asyncio.Lock`. With a
  2-min poll cadence and a ~5-min average run, each bead costs its run plus up
  to one idle tick, so throughput is roughly 10 beads/hour. Parallelism is a
  separate plan.
- **Fixed agent `beads-task-runner`** — read-only rails, matches the manual
Dispatch button. Broader-privilege agents stay manual via BeadBoard UI.
- **Image reuse** — the claude-agent-service image already ships `bd`, `jq`,
`curl`; a new CronJob-specific image would duplicate 400MB of infra tooling.
Mirror `claude_agent_service_image_tag` locally; bump on rebuild.
- **ConfigMap-mounted `metadata.json`** — declarative TF rather than reusing
  the image-seeded file. The script copies it into `/tmp/.beads/` because bd
  may touch the parent dir and ConfigMap mounts are read-only (see the sketch
  after this list).
- **Kill switch (`beads_dispatcher_enabled`)** — single bool, default true.
When false, `suspend: true` on both CronJobs; manual Dispatch keeps working.
- **Reaper threshold 30 min** — `bd note` bumps `updated_at`, so a well-behaved
`beads-task-runner` never trips the reaper. Failures trip it; pod crashes
(in-memory job state lost) also trip it.
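A minimal sketch of that init step, assuming the ConfigMap is mounted at `/config` (the mount path is not stated above):
```
# Hypothetical init step; /config is an assumed ConfigMap mount path.
export HOME=/tmp            # safety net: bd may write state/lock files
mkdir -p /tmp/.beads
cp /config/metadata.json /tmp/.beads/metadata.json  # the mount itself is read-only
```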
## What is NOT in this change
- No Terraform apply — requires Vault OIDC + cluster access. Apply manually:
`cd infra/stacks/beads-server && scripts/tg apply`
- No change to `claude-agent-service/` (already ships bd/jq/curl)
- No change to `beadboard/` (`/api/agent-dispatch` + `/api/agent-status` reused)
- No change to the `beads-task-runner` agent definition (rails unchanged)
- Parallelism: single-slot is MVP; multi-slot dispatch is a separate plan.
## Deviations from plan
Minor, documented in code comments:
- Reaper uses `.updated_at` instead of the plan's `.notes[].created_at`. bd
serializes `notes` as a string (not an array), and every `bd note` bumps
`updated_at` — equivalent for the reaper's purpose.
- ISO-8601 parsed via `python3`, not `date -d` — Alpine's busybox lacks GNU
  `-d` and the image has python3 (see the sketch after this list).
- `HOME=/tmp` set as a safety net — bd may try to write state/lock files.
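A minimal sketch of the resulting age check; the loop plumbing and the exact one-liner are assumptions, and only stdlib `datetime` is used:
```
# Minutes since the bead's last update, parsed with python3 instead of GNU date -d.
updated_at=$(bd show "$bead" --json | jq -r '.updated_at')
age_min=$(python3 -c 'import sys, datetime as dt
ts = dt.datetime.fromisoformat(sys.argv[1].replace("Z", "+00:00"))
print(int((dt.datetime.now(dt.timezone.utc) - ts).total_seconds() // 60))' "$updated_at")
if [ "$age_min" -gt 30 ]; then
  bd note "$bead" "reaper: no progress for ${age_min}m — blocking"
  bd update "$bead" -s blocked
fi
```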
## Test plan
### Automated
```
$ cd infra/stacks/beads-server && terraform init -backend=false
Terraform has been successfully initialized!
$ terraform validate
Warning: Deprecated Resource (kubernetes_namespace → v1) # pre-existing, unrelated
Success! The configuration is valid, but there were some validation warnings as shown above.
$ terraform fmt main.tf
# (no output — already formatted)
```
### Manual verification
1. **Apply**
```
vault login -method=oidc
cd infra/stacks/beads-server
scripts/tg apply
```
Expect: `kubernetes_config_map.beads_metadata`,
`kubernetes_cron_job_v1.beads_dispatcher`, `kubernetes_cron_job_v1.beads_reaper`
created. No changes to existing resources.
2. **CronJobs exist with right schedule**
```
kubectl -n beads-server get cronjob
```
Expect `beads-dispatcher */2 * * * *` and `beads-reaper */10 * * * *`,
both with `SUSPEND=False`.
3. **End-to-end smoke**
```
bd create "auto-dispatch smoke test" \
-d "Read /etc/hostname inside the agent sandbox and close." \
--acceptance "bd note includes 'hostname=' line and bead is closed."
bd assign <new-id> agent
# within 2 min:
bd show <new-id> --json | jq '{status, notes}'
```
Expect notes to contain `auto-dispatcher claimed at …` and
`dispatched: job=<uuid>`, status `in_progress`.
4. **Reaper smoke**
   Assign + dispatch a long bead, then kill the agent pod:
   ```
   kubectl -n claude-agent delete pod -l app=claude-agent-service
   ```
   Within 30 min + one reaper tick, `bd show <id>` shows `blocked` with a
   `reaper: no progress for Nm — blocking` note.
5. **Kill switch**
```
cd infra/stacks/beads-server
scripts/tg apply -var=beads_dispatcher_enabled=false
kubectl -n beads-server get cronjob
```
Expect `SUSPEND=True` on both CronJobs. Assign a bead to `agent`; verify
nothing happens within 5 min. Re-apply with `=true` to re-enable.
A runbook covering all of the above, plus reaper semantics and design
choices, lives at `infra/docs/runbooks/beads-auto-dispatch.md`.
Closes: code-8sm
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# Beads Auto-Dispatch Runbook
Users can hand work to the headless `beads-task-runner` agent by assigning a
bead to the sentinel user `agent`. Two CronJobs in the `beads-server`
namespace drive the pipeline:

- **`beads-dispatcher`** — every 2 min: picks up the highest-priority
  `assignee=agent` / `status=open` bead with non-empty acceptance criteria,
  claims it by flipping to `in_progress`, and POSTs it to BeadBoard's
  `/api/agent-dispatch`. BeadBoard forwards to `claude-agent-service` with the
  existing bearer-token flow.
- **`beads-reaper`** — every 10 min: flips any `assignee=agent` +
  `status=in_progress` bead whose `updated_at` is older than 30 min to
  `status=blocked` with an explanatory note. Catches pod crashes mid-run.
The manual BeadBoard Dispatch button continues to work in parallel.
## Flow diagram

```
user: bd assign <id> agent
        │
        ▼
Dolt @ dolt.beads-server.svc:3306 ◄──── every 2 min ────┐
        │                                               │
        ▼                                               │
CronJob: beads-dispatcher                               │
  1. GET beadboard/api/agent-status (busy? skip)        │
  2. bd query 'assignee=agent AND status=open'          │
  3. bd update -s in_progress (claim)                   │
  4. POST beadboard/api/agent-dispatch                  │
  5. bd note "dispatched: job=…"                        │
        │                                               │
        ▼                                               │
claude-agent-service /execute                           │
  beads-task-runner agent runs; notes/closes bead       │
        │                                               │
        ▼                                               │
done ──► next tick picks up the next bead ──────────────┘

CronJob: beads-reaper (every 10 min)
  for bead (assignee=agent, status=in_progress, updated_at > 30 min):
    bd note "reaper: no progress for Nm — blocking"
    bd update -s blocked
```
## Usage

### Hand a bead to the agent

```
bd create "Title" \
  -d "Full context — files, services, error messages. Any agent with no prior context must be able to execute this." \
  --acceptance "Concrete, verifiable criteria" \
  -p 2
bd assign <new-id> agent
```

Acceptance criteria is required. Beads without it are skipped by the
dispatcher and stay in `open` forever. This is intentional — the
`beads-task-runner` agent expects clear done conditions.
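A quick pre-flight check before assigning, assuming the field surfaces as `.acceptance` in `bd show --json` output (the field name is not confirmed here):
```
bd show <id> --json | jq -r '.acceptance // "EMPTY: dispatcher will skip this bead"'
```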
### Take a bead back (unassign)

```
bd assign <id> ""
```

If the bead is already `in_progress`, also reset it:

```
bd update <id> -s open
```
### Pause auto-dispatch

```
cd infra/stacks/beads-server
scripts/tg apply -var=beads_dispatcher_enabled=false
```

This sets `spec.suspend: true` on both CronJobs. Existing running jobs
continue; no new ticks fire. Re-enable by re-applying with
`beads_dispatcher_enabled=true` (the default). Manual BeadBoard Dispatch
remains available while paused.
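To confirm the flag landed, read it straight off the CronJob spec:
```
kubectl -n beads-server get cronjob beads-dispatcher -o jsonpath='{.spec.suspend}'
# → true
```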
### Read the logs

```
# Recent dispatcher runs
kubectl -n beads-server get jobs --selector=job-name --sort-by=.metadata.creationTimestamp | grep beads-dispatcher | tail
kubectl -n beads-server logs job/<dispatcher-job-name>

# Tail the underlying agent once a bead dispatches
kubectl -n claude-agent logs -l app=claude-agent-service -f

# Inspect reaper decisions
kubectl -n beads-server get jobs | grep beads-reaper | tail
kubectl -n beads-server logs job/<reaper-job-name>
```

### Inspect a specific bead's dispatch history

```
bd show <id> --json | jq '{status, assignee, notes, updated_at}'
```

Both the dispatcher and reaper write dated notes (`auto-dispatcher claimed at …`,
`dispatched: job=…`, `reaper: no progress for …`) so the audit trail
lives on the bead itself.
## Reaper semantics — when a bead becomes `blocked`

The reaper flips a bead to `blocked` if:

- `assignee = agent`, AND
- `status = in_progress`, AND
- `updated_at` is more than 30 minutes in the past.
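To eyeball which beads the next tick would catch, something like the following works, assuming `bd query` accepts `--json` and jq's `fromdateiso8601` can parse the timestamp format `bd` emits (the shipped reaper itself parses timestamps with `python3`):
```
# List bead ids whose last update is older than the 30-min threshold.
cutoff=$(( $(date +%s) - 30*60 ))
bd query 'assignee=agent AND status=in_progress' --json \
  | jq --argjson cutoff "$cutoff" '[.[] | select((.updated_at | fromdateiso8601) < $cutoff) | .id]'
```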
Every `bd note` bumps `updated_at`, so a well-behaved `beads-task-runner`
agent never trips the reaper — it notes progress as it works. A `blocked`
bead is a signal that:

- the agent pod crashed mid-run (the `kubectl -n claude-agent delete pod` test),
- the job hit its 15-minute budget timeout inside `claude-agent-service`
  without notes (rare — the agent usually notes failure before exiting),
- `claude-agent-service` was restarted during the run (in-memory job state is
  lost; see known risks).

Recovery: read the reaper note, reopen manually if appropriate:

```
bd update <id> -s open
bd assign <id> agent   # re-arm for next dispatcher tick
```
## Design choices

- **Sentinel assignee `agent`** — free-form, no Beads schema change. Any bd
  client can set it (`bd assign <id> agent`).
- **Sequential dispatch** — matches `claude-agent-service`'s single-slot
  `asyncio.Lock`. With a 2-min poll cadence and a ~5-min average run,
  throughput is roughly 10 beads/hour. Parallelism is a separate plan.
- **Fixed agent (`beads-task-runner`)** — read-only rails, matches BeadBoard's
  manual Dispatch button. Broader-privilege agents stay manual.
- **CronJob (not in-service polling, not n8n)** — matches the existing infra
  pattern (OpenClaw task-processor, certbot-renewal, backups), TF-managed,
  easy to pause.
- **ConfigMap-mounted `metadata.json`** — declarative TF rather than reusing
  the image-seeded file. The CronJob's init step copies it into `/tmp/.beads/`
  because `bd` may touch the parent directory and ConfigMap mounts are
  read-only.
## Known risks

- **In-memory job state in `claude-agent-service`** — if the pod restarts
  mid-run, the job record is lost. The reaper catches this after 30 min.
  Persistent job store is deferred.
- **Prompt injection via bead fields** — a malicious bead description could
  try to steer the agent. The `beads-task-runner` rails + token budget +
  timeout are the defense. Identical exposure as the manual Dispatch button.
- **Image tag drift** — `claude_agent_service_image_tag` in
  `stacks/beads-server/main.tf` mirrors `local.image_tag` in
  `stacks/claude-agent-service/main.tf`. Bump both when the image rebuilds, or
  the dispatcher/reaper will run on an older layer. (They only need `bd`,
  `curl`, `jq` — stable across rebuilds — so the drift is low-risk.)
- **`bd` JSON schema changes** — the reaper's `jq` reads `.id` and
  `.updated_at`. If a future `bd` upgrade renames these, the reaper breaks
  silently (no reaping, no alert). `BD_VERSION` is pinned in the image
  Dockerfile.
## Verification after change

```
# Both CronJobs exist with the right schedule / SUSPEND state
kubectl -n beads-server get cronjob

# End-to-end smoke test
bd create "auto-dispatch smoke test" \
  -d "Read /etc/hostname inside the agent sandbox and close." \
  --acceptance "bd note includes 'hostname=' and bead is closed."
bd assign <new-id> agent
# within 2 min:
bd show <new-id> --json | jq '.notes'
# → contains 'auto-dispatcher claimed' + 'dispatched: job=<uuid>'
```