The previously baked kubeconfig at /home/node/.openclaw/kubeconfig retained a service-account token bound to the original (long-dead) pod, so kubectl calls from inside the openclaw container failed with "the server has asked for the client to provide credentials", even though the openclaw ServiceAccount has cluster-admin and the kubelet projects a fresh token at /var/run/secrets/kubernetes.io/serviceaccount/token.

Fix: add an init container, "setup-kubeconfig", that writes a kubeconfig whose `tokenFile` and `certificate-authority` paths point at the projected service-account volume. Because the kubelet auto-rotates the projected token and kubectl reads the token file on every invocation, credentials are always fresh and no Vault Kubernetes-secrets-engine refresh is needed.

Verified end-to-end: the agent ran `kubectl get nodes -o wide` inside the pod and delivered a correct one-line summary to Telegram via openai-codex/gpt-5.4-mini.
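For illustration, a minimal sketch of what the "setup-kubeconfig" init container and the generated kubeconfig could look like. The image, user/cluster/context names, and the `openclaw-home` volume name are assumptions, not taken from the actual manifests; the two service-account paths and the kubeconfig destination are from the description above.

```yaml
# Sketch only: image and volume names are hypothetical.
initContainers:
- name: setup-kubeconfig
  image: busybox:1.36          # any image with /bin/sh works
  command:
  - /bin/sh
  - -c
  - |
    mkdir -p /home/node/.openclaw
    cat > /home/node/.openclaw/kubeconfig <<'EOF'
    apiVersion: v1
    kind: Config
    clusters:
    - name: in-cluster
      cluster:
        server: https://kubernetes.default.svc
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    users:
    - name: openclaw
      user:
        # tokenFile (not a baked-in token string) is what makes rotation work:
        # kubectl re-reads this file on each call, picking up kubelet refreshes.
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    contexts:
    - name: in-cluster
      context:
        cluster: in-cluster
        user: openclaw
    current-context: in-cluster
    EOF
  volumeMounts:
  - name: openclaw-home        # shared emptyDir also mounted by the main container
    mountPath: /home/node/.openclaw
```

The key design point is referencing the projected paths rather than copying their contents: the kubeconfig itself never goes stale, only the files it points at change.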