Add ~/.claude/rules/ workflow files to chezmoi
planning.md, execution.md, quality.md — the agent workflow rules loaded into every session via ~/.claude/rules/. Previously untracked locally. planning.md now includes a "For infra changes" subsection directing researchers to infra/docs/architecture/ and infra/docs/runbooks/ before dispatching researcher subagents. execution.md now includes §7 "Docs — keep infra/docs/ current" covering doc upkeep for architecture-visible changes and new operational procedures. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Parent: eb19c1c27d · Commit: 7bbe203578 · 3 changed files with 330 additions and 0 deletions
dot_claude/rules/execution.md (new file, 117 lines)
# Execution (applies to ALL agents)

Once a plan is approved (via ExitPlanMode or explicit user go-ahead), execute end-to-end. This file governs what happens AFTER planning — see `planning.md` for the interview/research/challenger flow that precedes it.

## 1. Default posture — run through, no mid-task gates

**Keep working until the user's request is satisfied.**

- Don't stop after research, after a single commit, after each "phase" or "batch" — keep going.
- No "Should I proceed with X?" / "Want me to continue?" prompts between steps of an approved plan.
- No mid-task checkpoints or batch approvals. The user interrupts if something drifts.
- Course-correct on user interruption, not by polling.

Stopping is the exception that needs a reason. The ONLY reasons to stop mid-execution are listed in §2.

## 2. When to stop and ask (the ONLY ASK FIRST list)

Pause and surface to the user (or, if you're a subagent, to the session that spawned you) for:

- **Destructive ops** — force-push, `git reset --hard`, `rm -rf`, dropping DB tables, killing processes the user didn't start
- **Schema / public API changes** — REST, GraphQL, gRPC, DB schema
- **Deleting tested code** — anything that has tests covering it
- **Interface / contract changes** that affect other teams
- **Live network devices** — routers, switches, firewalls (confirmed preference)
- **Scope changes** — work beyond the approved plan
- **New genuine ambiguity** discovered mid-execution that the research didn't resolve
- **Configuration changes with external impact** — credentials, DNS, firewall rules, production configs

Anything NOT on this list → just do it. Don't invent new reasons to pause.

## 3. Push / merge behaviour

On personal repositories (this monorepo and similar):

- **Push after every commit** — don't batch, don't wait for approval
- **Direct to master** — no feature branches, no PRs unless the user explicitly asks
- **Wait for CI / deployment** to complete before marking the task done — a green local build isn't the finish line if CI still runs
- **`git add` specific files by name** — never `git add -A` or `git add .`, which can sweep in secrets, worktree cruft, or untracked experiments
- **Never skip hooks** (`--no-verify`, `--no-gpg-sign`) unless the user explicitly requests it
- **Never force-push to master** — warn the user if they request it

Shared / corporate repos follow whatever PR flow the project uses; when in doubt, ask.
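The staging rule can be sketched as a small hypothetical helper (the guard against wildcard adds is the point; the function name and structure are illustrative, not a required tool):

```python
import subprocess

def commit_named(paths: list[str], message: str) -> None:
    """Stage ONLY explicitly named files, then commit and push."""
    banned = {".", "-A", "--all", "*"}
    if not paths or any(p in banned for p in paths):
        # Refuse tree-wide staging: it can sweep in secrets or cruft.
        raise ValueError("stage files by name, never the whole tree")
    subprocess.run(["git", "add", "--"] + paths, check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
    subprocess.run(["git", "push"], check=True)  # push after every commit
```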
## 4. When to dispatch subagents

**Default: do the work inline.** The main session reads files, writes code, runs tests, commits. Every handoff costs clarity and context.

Spawn a subagent only when one of these is true:

- **Parallel independent work** — two or more genuinely independent investigations can run at the same time
- **Context protection** — raw output would be huge (wide codebase sweeps, long log dumps, 50+ files) and would pollute the main context; let a subagent digest and summarise
- **User asked** — the user explicitly requested an agent
- **Long-running stand-alone task** — something that runs for a while and doesn't need your attention (use `run_in_background: true`)

### Dispatch table

| Task pattern | Agent type | Background? |
|---|---|---|
| Write code / fix code / refactor | `coder` | foreground |
| Debug error / test failure | `debugger` | foreground |
| Review code / plan scrutiny | `reviewer` | background |
| Explore codebase / research | `researcher` | foreground |
| System design / architecture | `architect` | foreground |
| Run verification commands | `verifier` | foreground |

### Rules when you DO dispatch

1. **Parallel when independent** — 2+ independent subtasks = spawn agents in parallel in one message
2. **Background for long-running** — agents marked "background" in the table use `run_in_background: true`
3. **Context protection** — keep raw output inside the subagent; ask for a concise summary
4. **Prompt quality** — every agent prompt MUST:
   - Be 50+ words
   - Start with a clear task description (action verbs: implement, fix, add, refactor)
   - Include acceptance criteria (checkboxes or "Done when" conditions)
   - Include specific file/path references
5. **Failed agent recovery** — one retry with a refined prompt; second failure → escalate to the user with a summary
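Rule 4's checks are mechanical enough to sketch as a pre-dispatch lint. This is a hypothetical heuristic, not part of any agent API; the detection rules are deliberately crude:

```python
def prompt_issues(prompt: str) -> list[str]:
    """Return rule-4 violations found in an agent prompt (heuristic sketch)."""
    issues = []
    words = prompt.split()
    if len(words) < 50:
        issues.append("under 50 words")
    first = words[0].lower() if words else ""
    if first not in {"implement", "fix", "add", "refactor"}:
        issues.append("does not open with an action verb")
    if "done when" not in prompt.lower() and "- [ ]" not in prompt:
        issues.append("no acceptance criteria")
    if "/" not in prompt:
        issues.append("no file/path references")
    return issues
```

An empty list means the prompt clears every check; anything returned is a reason to refine before dispatching.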
## 5. NEVER do these — no exceptions

- Create files without asking
- Use `any` / untyped types when a specific type exists
- Refactor adjacent code (only touch what's requested)
- Add lint-ignore or type-error suppression comments
- Edit generated / auto-generated files
- Write implementation before tests (see `quality.md`)
- Commit temp / one-off / test scripts to the repo — store them outside the repo instead

## 6. Before touching any directory

Check if it (or any parent up to repo root) contains project-specific rules (e.g., `.claude/CLAUDE.md`, `.cursor/rules`). If found, read those files first — they may override anything in this file.

## 7. Docs — keep `infra/docs/` current

For any infra change that alters state described in `infra/docs/architecture/` or `infra/docs/runbooks/`:

- **Update affected docs in the same commit.** Never land infra changes that leave docs stale.
- **New architecture-visible component** (new service, storage class, module, integration) → add or extend the relevant `architecture/*.md` doc.
- **New operational procedure** (restore flow, upgrade path, recovery step) → add a `runbooks/*.md` entry.
- **Medium/large designed changes** — write a `plans/YYYY-MM-DD-<topic>-{design,plan}.md` pair while designing, not after.
- **Incidents** → post-mortem in `post-mortems/YYYY-MM-DD-<topic>.md`.

If you can't tell which docs are affected, grep for the service or resource name across `infra/docs/` — treat every match as a candidate for update. This rule is `infra/`-scoped; other repos may or may not have `docs/` directories.

See also: `infra/.claude/CLAUDE.md` "Update docs with every change" for the full list of doc surfaces (includes `.claude/CLAUDE.md`, `AGENTS.md`, `.claude/reference/service-catalog.md`).
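The grep step amounts to a recursive, case-insensitive scan for the service name. A minimal sketch, assuming the `infra/docs/` layout above (the function name is invented for illustration):

```python
from pathlib import Path

def candidate_docs(name: str, docs_root: str = "infra/docs") -> list:
    """Return every markdown doc mentioning `name` — each is an update candidate."""
    needle = name.lower()
    return sorted(
        p for p in Path(docs_root).rglob("*.md")
        if needle in p.read_text(errors="ignore").lower()
    )
```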
dot_claude/rules/planning.md (new file, 141 lines)
# Planning (applies to ALL agents)

**No implementation without completed research.** This flow MUST run before implementation begins.

The approval gate at the end of planning is **ExitPlanMode** — there is no second user-gate step once the plan is accepted. See `execution.md` for what happens after approval.

## Phase 1: Interview the User

**Interview relentlessly. Make sure that all ambiguities are resolved before execution.**

Before dispatching any researcher, ask targeted questions:

- **What exactly should change?** — not "add feature X" but the specific behavior expected
- **What are the constraints?** — backward compatibility, performance requirements, scope limits
- **What should NOT change?** — explicit out-of-scope items
- **Who are the consumers?** — who calls this code, who depends on it, who will be affected
- **What edge cases worry you?** — empty input, timeouts, stale data, concurrent access

**Gates — do NOT proceed to Phase 2 until ALL of these are true:**

- [ ] Every question has been answered by the user (not assumed)
- [ ] If any answer raised new questions, those were asked and answered too
- [ ] Zero ambiguity remains — you can describe the exact change without "probably", "maybe", "might"
- [ ] The user has not said "I'm not sure" about any critical decision — if they did, resolve it

**Longer planning = fewer fixes later.**

## Phase 2: Code Exploration

Dispatch `researcher` subagents:

### Mandatory investigations

| Investigation | How | Blocker if skipped |
|---|---|---|
| **All callers + blast radius** | Claude Code `Grep` tool for references, IDE "find all references" | Cannot assess risk of change |
| **Existing patterns + reusable code** | Search for similar implementations before creating new | May duplicate existing utility |
| **Edge cases + failure modes** | Trace error handling paths, check null/empty/stale input | Bugs ship to production |
| **Current data state** | Query real data to understand current behavior with numbers | Assumptions may be wrong |

### For 2-3 parallel researchers

| Researcher | Focus |
|---|---|
| R1: Existing code + patterns | Trace the code path being modified. Find all callers. Identify existing abstractions to reuse. |
| R2: Blast radius + dependencies | Who depends on this? Cross-team callers? What breaks if we change the interface? |
| R3: Data + edge cases | Query real data to validate assumptions. Identify failure modes. |

### For infra changes

If the work touches state described in `infra/docs/architecture/` or `infra/docs/runbooks/`, read the relevant files BEFORE dispatching researchers. These docs are the authoritative starting point for current infra state — what's deployed, how it's wired, which conventions apply.

- `infra/docs/architecture/` — long-lived state (networking, storage, auth, databases, CI/CD, monitoring, backup-dr, etc.)
- `infra/docs/runbooks/` — operational procedures (restores, upgrades, recoveries)
- `infra/docs/plans/` — dated design/plan pairs for prior changes; grep here first when a similar change has been done before
- `infra/docs/post-mortems/` — incident analyses; relevant when work touches a past failure surface

If the docs contradict the live state, that mismatch is itself a finding — surface it and resolve before proceeding.

## Phase 2b: Challenge, Verify, and Counter-Propose

**Always required.** After researchers report back, spawn **2 independent challenger subagents** in parallel. Each works INDEPENDENTLY — they do NOT see each other's output.

### Challenger mandate

```
## Role: Independent Research Challenger

You are reviewing research findings. Your job has THREE parts:

### 1. Scrutinize
- Challenge the root cause analysis — what else could explain this?
- Question assumptions — what is assumed without evidence?
- Find missed edge cases — what happens with empty/null/stale/concurrent?
- Identify missing investigations — what callers/consumers were NOT checked?

### 2. Verify every reference
- For every FILE PATH cited: confirm it exists
- For every FUNCTION/CLASS cited: confirm it exists
- For every DATA CLAIM: confirm the data source exists and the query is valid
- Flag ANY reference that cannot be verified as UNVERIFIED

### 3. Counter-propose
- Propose at least ONE alternative approach to the problem
- Your alternative must ALSO be backed by verified code/data references
- Explain trade-offs: why your approach might be better or worse

Report:
- VERIFIED: claims you confirmed exist
- UNVERIFIED: claims you could not confirm
- ISSUES: genuine problems with the findings (not nitpicks)
- COUNTER-PROPOSAL: your alternative approach with references
- VERDICT: AGREE (approach is sound) or DISAGREE (significant issues remain)
```

### Convergence check

After challengers return, evaluate:

1. **Unverified claims** — if any reference was flagged UNVERIFIED, dispatch a researcher to verify or remove it
2. **Genuine issues** — if challengers found real problems, dispatch a new research round → back to Phase 2b
3. **Agreement** — if both challengers AGREE the approach is sound, proceed to Phase 3
4. **Counter-proposals** — if a counter-proposal is clearly better, adopt it and re-validate

**Never present unvetted findings to the user.** All claims must be verified.
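The convergence rules form a strict priority order, which a hypothetical decision function makes explicit (names and return strings are invented for illustration):

```python
def converge(unverified: int, issues: int, agree_count: int) -> str:
    """Map challenger output to the next planning action (sketch of steps 1-3)."""
    if unverified > 0:                 # step 1 outranks everything else
        return "dispatch researcher: verify or remove claims"
    if issues > 0:                     # step 2: real problems restart the round
        return "new research round -> back to Phase 2b"
    if agree_count == 2:               # step 3: both challengers must AGREE
        return "proceed to Phase 3"
    return "resolve disagreement before proceeding"
```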
## Phase 3: Resolve All Ambiguities

After researchers report back AND challengers have agreed on the approach:

1. **List every open question** — anything the researchers couldn't answer or disagreed on
2. **Ask the user** (via AskUserQuestion) to resolve each one — do NOT guess, do NOT assume
3. **Present the plan via ExitPlanMode** with:
   - **Goal**: what we're doing and why
   - **Research Decisions**: callers, blast radius, patterns to reuse, edge cases
   - **Plan**: ordered steps with checkboxes

ExitPlanMode is the approval gate. Once the user accepts, hand off to `execution.md` — no further "should I proceed?" check.

## Research completeness checklist

Research is NOT complete if any of these are true:

- [ ] Callers were not traced
- [ ] No search for existing patterns
- [ ] Edge cases not identified
- [ ] Assumptions not validated with data
- [ ] Open questions remain
- [ ] No blast radius assessment
- [ ] References not verified (file paths, functions cited without confirming they exist)
- [ ] No challenge round completed
- [ ] Challengers did not agree

If any box is checked, research is incomplete. Go back and fill the gaps before surfacing a plan.
dot_claude/rules/quality.md (new file, 72 lines)
# Quality (applies to ALL agents)

**If you are writing code, this applies to YOU. No exceptions.**

## 1. TDD cycle — test-first

1. **RED** — Write a failing test. Run it. Confirm it fails.
2. **GREEN** — Write minimal code to pass. Only enough to make the test green.
3. **REFACTOR** — Clean up. Tests stay green.
4. **VERIFY** — All must pass before committing:
   - Run tests: `pytest` (Python) / `npm test` (JS/TS)
   - Type check: `mypy .` (Python) / `tsc --noEmit` (TS)
   - Format: `yapf` (Python) / `prettier` (JS/TS)
   - Lint: `ruff check` (Python) / `eslint` (JS/TS) — **check the output**. Fix ALL errors before proceeding.
5. **COMMIT** — commit the verified change.
6. **REPEAT** — back to step 1.
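One RED→GREEN iteration in miniature, using an invented `slugify` helper purely for illustration:

```python
# RED: written FIRST. Running pytest at this point fails — slugify
# does not exist yet, and that failure is the point of the step.
def test_slugify_joins_words_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# GREEN: minimal implementation — only enough to make the test pass.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())
```

REFACTOR then cleans this up (edge cases, naming) while the test stays green.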
## 2. Blockers — non-negotiable

- Writing implementation before a failing test exists = **BLOCKED**
- Skipping tests (with any excuse) = **BLOCKED**
- Writing tests after implementation = **BLOCKED**
- Lint errors (unused variables, missing imports) = **BLOCKED** — run linter and fix before proceeding.

## 3. Test quality rules

- Each test verifies ONE behavior
- Test behavior, not implementation (don't mock internals you own)
- Use data providers / parameterized tests for multi-scenario coverage
- Prefer property-based tests over example-based where the language supports it
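A data-provider test in sketch form (the function under test is invented; `pytest.mark.parametrize` would give each row its own named test case, while a plain loop shows the same idea without the framework):

```python
def is_even(n: int) -> bool:  # invented function under test
    return n % 2 == 0

# Data provider: one table drives many scenarios.
CASES = [(0, True), (1, False), (2, True), (-3, False)]

def test_is_even_cases():
    for n, expected in CASES:
        assert is_even(n) is expected, f"is_even({n})"
```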
## 4. Code quality checklist

Use this checklist for all code reviews and for your own output before committing. Evaluate each section and assign a verdict.

### AI slop detection

- [ ] No comments restating code (e.g., `// increment counter`, `// set the value`)
- [ ] No unnecessary docstrings (e.g., `/** Gets the value */` on a getter)
- [ ] No filler phrases (e.g., `This function is responsible for...`)
- [ ] No wrapper functions adding no value
- [ ] No commented-out code
- [ ] No redundant type annotations in comments
- [ ] No over-explained variable names — context comes from scope

### Engineering principles

- [ ] SRP: each function has a single responsibility (can describe without "and")
- [ ] DRY: existing utilities reused (searched before creating new)
- [ ] No premature abstractions or "just in case" parameters
- [ ] Functions ≤20 lines (excluding type signatures)
- [ ] Functions ≤3 parameters (more = use a struct/shape/object)
- [ ] Functions ≤1 nesting level (early return over nested if/else)
- [ ] No boolean parameters — named constants or enums instead
- [ ] No magic values — named constants used
- [ ] Error handling: fail fast, fail loud — no silent `catch {}`
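The nesting rule in practice: the same (invented) validation written nested, then with early returns:

```python
def can_ship_nested(order) -> bool:
    # Violates the nesting rule: every check adds a level.
    if order.paid:
        if order.in_stock:
            if not order.flagged:
                return True
    return False

def can_ship(order) -> bool:
    # Early returns: each guard exits immediately, no nesting below it.
    if not order.paid:
        return False
    if not order.in_stock:
        return False
    return not order.flagged
```

Both behave identically; the second reads top-to-bottom as a list of guards.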
### Idiomatic patterns

Write idiomatic code for each language. Common AI-generated anti-patterns:

- [ ] Sequential awaits on independent calls → use concurrent/parallel execution
- [ ] Loop + await → use async map/batch operations
- [ ] Loading all data into memory then filtering → use server-side/query-level filtering
- [ ] Acronyms in names (`ra`, `exp`) → use full names (`resource_account`, `experiment`)
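The first two anti-patterns, shown with `asyncio` (the fetch function is a stand-in for a real I/O call, not any actual API):

```python
import asyncio

async def fetch(item: str) -> str:  # stand-in for real I/O
    await asyncio.sleep(0)
    return item.upper()

async def sequential(items):
    # Anti-pattern: independent calls awaited one after another.
    return [await fetch(i) for i in items]

async def concurrent(items):
    # Idiomatic: independent calls scheduled together via gather.
    return list(await asyncio.gather(*(fetch(i) for i in items)))
```

With real network latency the sequential version costs the sum of all call times; the gathered version costs roughly the slowest single call.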
### Verdicts

| Verdict | Criteria |
|---------|----------|
| **PASS** | All checks pass |
| **REVISE** | Non-critical issues (naming, minor DRY, style nits) |
| **BLOCK** | AI slop present, no tests for new behavior, or SOLID violation |