## Context
Wave 5c of the state-drift consolidation plan. `local-path-provisioner`
(Rancher's node-local dynamic PV provisioner) was deployed 55d ago via raw
`kubectl apply` against the upstream manifest. It serves as the cluster's
default StorageClass and is still actively in use — the 2026-04-18 live
survey showed helper-pod-delete cycles running against existing PVCs.
Unmanaged until now: namespace, ServiceAccount, ClusterRole (+ binding),
ConfigMap with provisioner config.json + helperPod.yaml + setup/teardown
scripts, StorageClass `local-path` (default), and the 1-replica
Deployment itself. Seven resources total.
## This change
New Tier 1 stack `stacks/local-path/` with all seven resources, adopted
via Wave 8's HCL `import {}` block convention (commit 8a99be11; a sketch
of the stanza pattern follows the resource list):
- `kubernetes_namespace.local_path_storage` → id `local-path-storage`
- `kubernetes_service_account.local_path_provisioner` →
id `local-path-storage/local-path-provisioner-service-account`
- `kubernetes_cluster_role.local_path_provisioner` → id `local-path-provisioner-role`
- `kubernetes_cluster_role_binding.local_path_provisioner` → id `local-path-provisioner-bind`
- `kubernetes_config_map.local_path_config` →
id `local-path-storage/local-path-config`
- `kubernetes_storage_class_v1.local_path` → id `local-path`
- `kubernetes_deployment.local_path_provisioner` →
id `local-path-storage/local-path-provisioner`
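For reference, a minimal sketch of the transient import stanzas (two of
the seven shown; addresses and ids exactly as listed above):

```hcl
# Terraform >= 1.5 import {} blocks, present only for the adoption apply.
# Namespaced objects take "namespace/name" ids; cluster-scoped objects
# take the bare name.
import {
  to = kubernetes_namespace.local_path_storage
  id = "local-path-storage"
}

import {
  to = kubernetes_deployment.local_path_provisioner
  id = "local-path-storage/local-path-provisioner"
}
```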
Conventions applied:
- Namespace carries the `# KYVERNO_LIFECYCLE_V1` marker suppressing the
  Goldilocks `vpa-update-mode` label drift (Wave 3B, commit 8b43692a).
- Deployment carries the same marker suppressing the ndots `dns_config`
  drift (Wave 3A, commits c9d221d5 + 327ce215).
- ServiceAccount and pod spec pin `automount_service_account_token = false`
  and `enable_service_links = false` to match the live spec exactly (see
  the sketch after this list).
- `import {}` stanzas removed after the apply converged to zero-diff
(per AGENTS.md → "Adopting Existing Resources").
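A condensed sketch of how those conventions land on the Deployment
(required `selector`/`container` blocks elided for brevity; the exact
placement of the marker comment inside the file is illustrative here):

```hcl
resource "kubernetes_deployment" "local_path_provisioner" {
  # KYVERNO_LIFECYCLE_V1   (Wave 3A: suppresses ndots dns_config drift)
  metadata {
    name      = "local-path-provisioner"
    namespace = kubernetes_namespace.local_path_storage.metadata[0].name
  }
  spec {
    replicas = 1
    template {
      spec {
        service_account_name = kubernetes_service_account.local_path_provisioner.metadata[0].name
        # Pinned to match the live pod spec exactly:
        automount_service_account_token = false
        enable_service_links            = false
      }
    }
  }
}
```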
## Apply outcome
`Apply complete! Resources: 7 imported, 0 added, 3 changed, 0 destroyed.`
The 3 in-place changes were:
- `kubernetes_config_map.local_path_config.data`: whitespace/format
  reshuffle. The live ConfigMap carried the upstream manifest's
  hand-indented JSON and YAML; the new HCL uses canonical `jsonencode`
  and heredocs (both this and the StorageClass are sketched after this
  list). Semantic content is identical, so the provisioner kept running
  with no pod restart.
- `kubernetes_deployment.local_path_provisioner.wait_for_rollout = true`:
  a TF-only attribute, no cluster impact.
- `kubernetes_storage_class_v1.local_path.allow_volume_expansion = false`
  plus the re-asserted `is-default-class` annotation: TF-schema
  reconciliation only; the StorageClass remained default throughout.
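For illustration, a sketch of the two resources behind those diffs
(`config.json` abbreviated to the upstream default-path entry noted under
"What is NOT in this change"; helper-pod YAML and scripts elided):

```hcl
resource "kubernetes_config_map" "local_path_config" {
  metadata {
    name      = "local-path-config"
    namespace = kubernetes_namespace.local_path_storage.metadata[0].name
  }
  data = {
    # jsonencode emits compact JSON with a stable key order, hence the
    # whitespace-only diff against the hand-indented upstream manifest.
    "config.json" = jsonencode({
      nodePathMap = [{
        node  = "DEFAULT_PATH_FOR_NON_LISTED_NODES"
        paths = ["/opt/local-path-provisioner"]
      }]
    })
    # "helperPod.yaml", "setup", "teardown" carried as heredocs (elided)
  }
}

resource "kubernetes_storage_class_v1" "local_path" {
  metadata {
    name = "local-path"
    annotations = {
      "storageclass.kubernetes.io/is-default-class" = "true"
    }
  }
  storage_provisioner    = "rancher.io/local-path"
  reclaim_policy         = "Delete"
  volume_binding_mode    = "WaitForFirstConsumer"
  allow_volume_expansion = false
}
```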
Post-apply `scripts/tg plan` returns `No changes`.
## Verification
```
$ cd stacks/local-path && ../../scripts/tg plan
No changes. Your infrastructure matches the configuration.
$ kubectl -n local-path-storage get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
local-path-provisioner 1/1 1 1 55d
$ kubectl get sc local-path
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer
```
## What is NOT in this change
- Helm-release adoption: local-path-provisioner was never installed via
  Helm in this cluster, only raw manifests. This change keeps native
  typed resources rather than retrofitting a chart.
- PV-path customisation: the stack keeps the upstream default
  `/opt/local-path-provisioner` on all nodes (via
  `DEFAULT_PATH_FOR_NON_LISTED_NODES`).
Closes: code-3gp
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>