infra/stacks/local-path/main.tf
Viktor Barzin 124a756351 [infra] Adopt local-path-provisioner into Terraform (Wave 5c)
## Context

Wave 5c of the state-drift consolidation plan. `local-path-provisioner`
(Rancher's node-local dynamic PV provisioner) was deployed 55d ago via raw
`kubectl apply` against the upstream manifest. It serves as the cluster's
default StorageClass and is still actively in use — the 2026-04-18 live
survey showed helper-pod-delete cycles running against existing PVCs.

Unmanaged until now: namespace, ServiceAccount, ClusterRole (+ binding),
ConfigMap with provisioner config.json + helperPod.yaml + setup/teardown
scripts, StorageClass `local-path` (default), and the 1-replica
Deployment itself. Seven resources total.

## This change

New Tier 1 stack `stacks/local-path/` with all seven resources, adopted
via Wave 8's HCL `import {}` block convention (commit 8a99be11):

- `kubernetes_namespace.local_path_storage` → id `local-path-storage`
- `kubernetes_service_account.local_path_provisioner` →
  id `local-path-storage/local-path-provisioner-service-account`
- `kubernetes_cluster_role.local_path_provisioner` → id `local-path-provisioner-role`
- `kubernetes_cluster_role_binding.local_path_provisioner` → id `local-path-provisioner-bind`
- `kubernetes_config_map.local_path_config` →
  id `local-path-storage/local-path-config`
- `kubernetes_storage_class_v1.local_path` → id `local-path`
- `kubernetes_deployment.local_path_provisioner` →
  id `local-path-storage/local-path-provisioner`
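
For reference, the Wave 8 pairing looks like one `import {}` stanza per resource alongside its `resource` block — an illustrative sketch using two of the ids above (the real stanzas were deleted once the apply converged):

```hcl
# Illustrative only — removed after adoption converged to zero-diff.
import {
  to = kubernetes_namespace.local_path_storage
  id = "local-path-storage"
}

import {
  to = kubernetes_config_map.local_path_config
  id = "local-path-storage/local-path-config" # namespaced ids are "namespace/name"
}
```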

Conventions applied:
- Namespace gets `# KYVERNO_LIFECYCLE_V1` marker suppressing the
  Goldilocks `vpa-update-mode` label drift (Wave 3B, commit 8b43692a).
- Deployment gets `# KYVERNO_LIFECYCLE_V1` marker suppressing the
  ndots dns_config drift (Wave 3A, commit c9d221d5 + 327ce215).
- ServiceAccount + pod spec pin `automount_service_account_token = false`
  and `enable_service_links = false` to match the live spec exactly.
- `import {}` stanzas removed after the apply converged to zero-diff
  (per AGENTS.md → "Adopting Existing Resources").

## Apply outcome

`Apply complete! Resources: 7 imported, 0 added, 3 changed, 0 destroyed.`

The 3 in-place changes were:
- `kubernetes_config_map.local_path_config.data` — whitespace/format
  reshuffle. The live ConfigMap contained the upstream manifest's
  hand-indented JSON + YAML; my HCL uses canonical `jsonencode` /
  heredoc. Semantic content identical, so the provisioner continued
  running (no pod restart).
- `kubernetes_deployment.local_path_provisioner.wait_for_rollout = true`
  — TF-only attribute, no cluster impact.
- `kubernetes_storage_class_v1.local_path.allow_volume_expansion = false`
  + `is-default-class` annotation re-asserted — TF-schema reconciliation
  only; the StorageClass remained default throughout.

Post-apply `scripts/tg plan` returns `No changes`.

## Verification

```
$ cd stacks/local-path && ../../scripts/tg plan
No changes. Your infrastructure matches the configuration.

$ kubectl -n local-path-storage get deploy
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
local-path-provisioner   1/1     1            1           55d

$ kubectl get sc local-path
NAME                    PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE
local-path (default)    rancher.io/local-path    Delete          WaitForFirstConsumer
```

## What is NOT in this change

- Helm-release adoption — local-path-provisioner was never installed via
  Helm in this cluster; raw manifests only. Keeping native typed
  resources rather than retrofitting a chart.
- PV-path customisation — sticks with upstream default
  `/opt/local-path-provisioner` on all nodes (via
  `DEFAULT_PATH_FOR_NON_LISTED_NODES`).
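
If per-node paths are ever wanted, it would be a `nodePathMap` entry per node in `config.json` — a hedged sketch, NOT part of this change, with a made-up node name:

```hcl
# Hypothetical future customisation (node name "worker-nvme-01" is invented):
"config.json" = jsonencode({
  nodePathMap = [
    {
      node  = "DEFAULT_PATH_FOR_NON_LISTED_NODES"
      paths = ["/opt/local-path-provisioner"]
    },
    {
      node  = "worker-nvme-01"          # hypothetical node with faster local disk
      paths = ["/mnt/nvme/local-path"]  # PVs for this node land here instead
    },
  ]
})
```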

Closes: code-3gp

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 22:39:55 +00:00


# local-path-provisioner
#
# Rancher's local-path provisioner — backs PVCs with node-local
# /opt/local-path-provisioner directories. Currently serves as the default
# StorageClass. Deployed via raw kubectl apply 55d ago; adopted into TF
# (Wave 5c) on 2026-04-18.
#
# Upstream: https://github.com/rancher/local-path-provisioner
# Version pinned to rancher/local-path-provisioner:v0.0.31
resource "kubernetes_namespace" "local_path_storage" {
  metadata {
    name = "local-path-storage"
  }

  lifecycle {
    # KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
    ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
  }
}

resource "kubernetes_service_account" "local_path_provisioner" {
  metadata {
    name      = "local-path-provisioner-service-account"
    namespace = kubernetes_namespace.local_path_storage.metadata[0].name
  }

  automount_service_account_token = false
}

resource "kubernetes_cluster_role" "local_path_provisioner" {
  metadata {
    name = "local-path-provisioner-role"
  }

  rule {
    api_groups = [""]
    resources  = ["nodes", "persistentvolumeclaims", "configmaps", "pods", "pods/log"]
    verbs      = ["get", "list", "watch"]
  }

  rule {
    api_groups = [""]
    resources  = ["persistentvolumes"]
    verbs      = ["get", "list", "watch", "create", "patch", "update", "delete"]
  }

  rule {
    api_groups = [""]
    resources  = ["events"]
    verbs      = ["create", "patch"]
  }

  rule {
    api_groups = ["storage.k8s.io"]
    resources  = ["storageclasses"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_cluster_role_binding" "local_path_provisioner" {
  metadata {
    name = "local-path-provisioner-bind"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.local_path_provisioner.metadata[0].name
  }

  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.local_path_provisioner.metadata[0].name
    namespace = kubernetes_namespace.local_path_storage.metadata[0].name
  }
}

resource "kubernetes_config_map" "local_path_config" {
  metadata {
    name      = "local-path-config"
    namespace = kubernetes_namespace.local_path_storage.metadata[0].name
  }

  data = {
    "config.json" = jsonencode({
      nodePathMap = [{
        node  = "DEFAULT_PATH_FOR_NON_LISTED_NODES"
        paths = ["/opt/local-path-provisioner"]
      }]
    })

    "helperPod.yaml" = <<-EOT
      apiVersion: v1
      kind: Pod
      metadata:
        name: helper-pod
      spec:
        priorityClassName: system-node-critical
        tolerations:
          - key: node.kubernetes.io/disk-pressure
            operator: Exists
            effect: NoSchedule
        containers:
          - name: helper-pod
            image: busybox
            imagePullPolicy: IfNotPresent
    EOT

    "setup" = <<-EOT
      #!/bin/sh
      set -eu
      mkdir -m 0777 -p "$VOL_DIR"
    EOT

    "teardown" = <<-EOT
      #!/bin/sh
      set -eu
      rm -rf "$VOL_DIR"
    EOT
  }
}

resource "kubernetes_storage_class_v1" "local_path" {
  metadata {
    name = "local-path"
    annotations = {
      "storageclass.kubernetes.io/is-default-class" = "true"
    }
  }

  storage_provisioner    = "rancher.io/local-path"
  reclaim_policy         = "Delete"
  volume_binding_mode    = "WaitForFirstConsumer"
  allow_volume_expansion = false
}

resource "kubernetes_deployment" "local_path_provisioner" {
  metadata {
    name      = "local-path-provisioner"
    namespace = kubernetes_namespace.local_path_storage.metadata[0].name
    labels = {
      tier = "default"
    }
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "local-path-provisioner"
      }
    }

    template {
      metadata {
        labels = {
          app = "local-path-provisioner"
        }
      }

      spec {
        service_account_name            = kubernetes_service_account.local_path_provisioner.metadata[0].name
        automount_service_account_token = false
        enable_service_links            = false

        container {
          name              = "local-path-provisioner"
          image             = "rancher/local-path-provisioner:v0.0.31"
          image_pull_policy = "IfNotPresent"

          command = [
            "local-path-provisioner",
            "--debug",
            "start",
            "--config",
            "/etc/config/config.json",
          ]

          env {
            name = "POD_NAMESPACE"
            value_from {
              field_ref {
                field_path = "metadata.namespace"
              }
            }
          }

          env {
            name  = "CONFIG_MOUNT_PATH"
            value = "/etc/config/"
          }

          volume_mount {
            name       = "config-volume"
            mount_path = "/etc/config/"
          }
        }

        volume {
          name = "config-volume"
          config_map {
            name = kubernetes_config_map.local_path_config.metadata[0].name
          }
        }
      }
    }
  }

  lifecycle {
    # KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
    ignore_changes = [spec[0].template[0].spec[0].dns_config]
  }
}