2026-03-07 14:30:36 +00:00
variable "tls_secret_name" {
2026-03-14 08:51:45 +00:00
  type      = string
2026-03-07 14:30:36 +00:00
  sensitive = true
}

[ci skip] Infrastructure hardening: security, monitoring, reliability, maintainability
Phase 1 - Critical Security:
- Netbox: move hardcoded DB/superuser passwords to variables
- MeshCentral: disable public registration, add Authentik auth
- Traefik: disable insecure API dashboard (api.insecure=false)
- Traefik: configure forwarded headers with Cloudflare trusted IPs
Phase 2 - Security Hardening:
- Add security headers middleware (HSTS, X-Frame-Options, nosniff, etc.)
- Add Kyverno pod security policies in audit mode (privileged, host
namespaces, SYS_ADMIN, trusted registries)
- Tighten rate limiting (avg=10, burst=50)
- Add Authentik protection to grampsweb
Phase 3 - Monitoring & Alerting:
- Add critical service alerts (PostgreSQL, MySQL, Redis, Headscale,
Authentik, Loki)
- Increase Loki retention from 7 to 30 days (720h)
- Add predictive PV filling alert (predict_linear)
- Re-enable Hackmd and Privatebin down alerts
Phase 4 - Reliability:
- Add resource requests/limits to Redis, DBaaS, Technitium, Headscale,
Vaultwarden, Uptime Kuma
- Increase Alloy DaemonSet memory to 512Mi/1Gi
Phase 6 - Maintainability:
- Extract duplicated tiers locals to terragrunt.hcl generate block
(removed from 67 stacks)
- Replace hardcoded NFS IP 10.0.10.15 with var.nfs_server (114
instances across 63 files)
- Replace hardcoded Redis/PostgreSQL/MySQL/Ollama/mail host references
with variables across ~35 stacks
- Migrate xray raw ingress resources to ingress_factory modules
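The Phase 2 middleware changes can be sketched in the same kubernetes_manifest style this repo already uses for its basic-auth middleware; the resource names, namespace, and exact header set here are illustrative assumptions, not the actual traefik stack contents:
```hcl
# Illustrative sketch only — names, namespace, and values are assumptions.
resource "kubernetes_manifest" "security_headers" {
  manifest = {
    apiVersion = "traefik.io/v1alpha1"
    kind       = "Middleware"
    metadata = {
      name      = "csp-headers"
      namespace = "traefik"
    }
    spec = {
      headers = {
        stsSeconds           = 31536000 # HSTS
        stsIncludeSubdomains = true
        frameDeny            = true # X-Frame-Options: DENY
        contentTypeNosniff   = true # X-Content-Type-Options: nosniff
      }
    }
  }
}

resource "kubernetes_manifest" "rate_limit" {
  manifest = {
    apiVersion = "traefik.io/v1alpha1"
    kind       = "Middleware"
    metadata = {
      name      = "rate-limit"
      namespace = "traefik"
    }
    spec = {
      rateLimit = {
        average = 10 # avg=10, burst=50 per the tightened limits above
        burst   = 50
      }
    }
  }
}
```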
2026-02-23 22:05:28 +00:00
variable "nfs_server" { type = string }
2026-02-22 13:56:34 +00:00

migrate 16 plan-time stacks: vault data source → ESO + kubernetes_secret
Replaced data "vault_kv_secret_v2" with:
1. ExternalSecret (ESO syncs Vault KV → K8s Secret)
2. data "kubernetes_secret" (reads ESO-created secret at plan time)
This removes the Vault provider dependency at plan time for these
stacks — they now only need K8s API access, not a Vault token.
Stacks: actualbudget, affine, audiobookshelf, calibre, changedetection,
coturn, freedify, freshrss, grampsweb, navidrome, novelapp, ollama,
owntracks, real-estate-crawler, servarr, ytdlp
2026-03-15 22:06:39 +00:00
resource "kubernetes_manifest" "external_secret" {
  manifest = {
    apiVersion = "external-secrets.io/v1beta1"
    kind       = "ExternalSecret"
    metadata = {
      name      = "owntracks-secrets"
      namespace = "owntracks"
    }
    spec = {
      refreshInterval = "15m"
      secretStoreRef = {
        name = "vault-kv"
        kind = "ClusterSecretStore"
      }
      target = {
        name = "owntracks-secrets"
      }
      dataFrom = [{
        extract = {
          key = "owntracks"
        }
      }]
    }
  }

  depends_on = [kubernetes_namespace.owntracks]
}

data "kubernetes_secret" "eso_secrets" {
  metadata {
    name      = "owntracks-secrets"
    namespace = kubernetes_namespace.owntracks.metadata[0].name
  }

  depends_on = [kubernetes_manifest.external_secret]
2026-03-14 17:15:48 +00:00
}

locals {
2026-03-15 22:06:39 +00:00
  credentials = jsondecode(data.kubernetes_secret.eso_secrets.data["credentials"])
2026-03-14 17:15:48 +00:00
}

2026-02-22 13:56:34 +00:00
2026-02-22 15:13:55 +00:00
resource "kubernetes_namespace" "owntracks" {
  metadata {
    name = "owntracks"

    labels = {
      "istio-injection" = "disabled"
      tier              = local.tiers.aux
    }
  }

[infra] Suppress Goldilocks vpa-update-mode label drift on all namespaces [ci skip]
## Context
Wave 3B-continued: the Goldilocks VPA dashboard (stacks/vpa) runs a Kyverno
ClusterPolicy `goldilocks-vpa-auto-mode` that mutates every namespace with
`metadata.labels["goldilocks.fairwinds.com/vpa-update-mode"] = "off"`. This
is intentional — Terraform owns container resource limits, and Goldilocks
should only provide recommendations, never auto-update. The label is how
Goldilocks decides per-namespace whether to run its VPA in `off` mode.
Effect on Terraform: every `kubernetes_namespace` resource shows the label
as pending-removal (`-> null`) on every `scripts/tg plan`. Dawarich survey
2026-04-18 confirmed the drift. Cluster-side count: 88 namespaces carry the
label (`kubectl get ns -o json | jq ... | wc -l`). Every TF-managed namespace
is affected.
This commit brings the intentional admission drift under the same
`# KYVERNO_LIFECYCLE_V1` discoverability marker introduced in c9d221d5 for
the ndots dns_config pattern. The marker now stands generically for any
Kyverno admission-webhook drift suppression; the inline comment records
which specific policy stamps which specific field so future grep audits
show why each suppression exists.
## This change
107 `.tf` files touched — every stack's `resource "kubernetes_namespace"`
resource gets:
```hcl
lifecycle {
# KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
}
```
Injection was done with a brace-depth-tracking Python pass (`/tmp/add_goldilocks_ignore.py`):
match `^resource "kubernetes_namespace" ` → track `{` / `}` until the
outermost closing brace → insert the lifecycle block before the closing
brace. The script is idempotent (skips any file that already mentions
`goldilocks.fairwinds.com/vpa-update-mode`) so re-running is safe.
Vault stack picked up 2 namespaces in the same file (k8s-users produces
one, plus a second explicit ns) — confirmed via file diff (+8 lines).
## What is NOT in this change
- `stacks/trading-bot/main.tf` — entire file is `/* … */` commented out
(paused 2026-04-06 per user decision). Reverted after the script ran.
- `stacks/_template/main.tf.example` — per-stack skeleton, intentionally
minimal. User keeps it that way. Not touched by the script (file
has no real `resource "kubernetes_namespace"` — only a placeholder
comment).
- `.terraform/` copies (e.g. `stacks/metallb/.terraform/modules/...`) —
gitignored, won't commit; the live path was edited.
- `terraform fmt` cleanup of adjacent pre-existing alignment issues in
authentik, freedify, hermes-agent, nvidia, vault, meshcentral. Reverted
to keep the commit scoped to the Goldilocks sweep. Those files will
need a separate fmt-only commit or will be cleaned up on next real
apply to that stack.
## Verification
Dawarich (one of the hundred-plus touched stacks) showed the pattern
before and after:
```
$ cd stacks/dawarich && ../../scripts/tg plan
Before:
Plan: 0 to add, 2 to change, 0 to destroy.
# kubernetes_namespace.dawarich will be updated in-place
(goldilocks.fairwinds.com/vpa-update-mode -> null)
# module.tls_secret.kubernetes_secret.tls_secret will be updated in-place
(Kyverno generate.* labels — fixed in 8d94688d)
After:
No changes. Your infrastructure matches the configuration.
```
Injection count check:
```
$ rg -c 'KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode' stacks/ | awk -F: '{s+=$2} END {print s}'
108
```
## Reproduce locally
1. `git pull`
2. Pick any stack: `cd stacks/<name> && ../../scripts/tg plan`
3. Expect: no drift on the namespace's goldilocks.fairwinds.com/vpa-update-mode label.
Closes: code-dwx
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 21:15:27 +00:00
  lifecycle {
    # KYVERNO_LIFECYCLE_V1: goldilocks-vpa-auto-mode ClusterPolicy stamps this label on every namespace
    ignore_changes = [metadata[0].labels["goldilocks.fairwinds.com/vpa-update-mode"]]
  }
2026-02-22 15:13:55 +00:00
}

module "tls_secret" {
  source          = "../../modules/kubernetes/setup_tls_secret"
  namespace       = kubernetes_namespace.owntracks.metadata[0].name
  tls_secret_name = var.tls_secret_name
}

locals {
  username = "owntracks"
2026-03-14 17:15:48 +00:00
  htpasswd = join("\n", [for name, pass in local.credentials : "${name}:${bcrypt(pass, 10)}"])
2026-02-22 15:13:55 +00:00
}

resource "kubernetes_secret" "basic_auth" {
  metadata {
    name      = "basic-auth-secret"
    namespace = kubernetes_namespace.owntracks.metadata[0].name
  }

  data = {
    auth = local.htpasswd
  }

  type = "Opaque"

  lifecycle {
2026-04-18 14:08:10 +00:00
    # DRIFT_WORKAROUND: htpasswd bcrypt hashes are non-deterministic per apply; would cause perpetual diff. Reviewed 2026-04-18.
2026-02-22 15:13:55 +00:00
    ignore_changes = [data]
  }
}

feat(storage): migrate 38 NFS PVCs to proxmox-lvm (Wave 2)
Add proxmox-lvm PVCs with pvc-autoresizer annotations for all
remaining single-pod app data services. Deployments updated to
use new block storage PVCs. Old NFS modules retained for rollback.
Services: affine, changedetection, diun, excalidraw, f1-stream,
hackmd, isponsorblocktv, matrix, n8n, send, grampsweb, health,
onlyoffice, owntracks, paperless-ngx, privatebin, resume,
speedtest, stirling-pdf, tandoor, rybbit (clickhouse), tor-proxy
(torrserver), whisper+piper, frigate (config), ollama (ui),
servarr (prowlarr/listenarr/qbittorrent), aiostreams, freshrss
(extensions), meshcentral (data+files), openclaw (data+home+
openlobster), technitium, mailserver (data+roundcube html+enigma),
dbaas (pgadmin).
Strategy set to Recreate where needed for RWO volumes.
2026-04-04 19:25:12 +03:00
resource "kubernetes_persistent_volume_claim" "data_proxmox" {
  wait_until_bound = false

  metadata {
    name      = "owntracks-data-proxmox"
    namespace = kubernetes_namespace.owntracks.metadata[0].name

    annotations = {
      "resize.topolvm.io/threshold"     = "80%"
      "resize.topolvm.io/increase"      = "100%"
      "resize.topolvm.io/storage_limit" = "5Gi"
    }
  }

  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "proxmox-lvm"

    resources {
      requests = {
        storage = "1Gi"
      }
    }
  }
}

2026-02-22 15:13:55 +00:00
resource "kubernetes_deployment" "owntracks" {
  metadata {
    name      = "owntracks"
    namespace = kubernetes_namespace.owntracks.metadata[0].name

    labels = {
      app  = "owntracks"
      tier = local.tiers.aux
    }

    annotations = {
      "reloader.stakater.com/search" = "true"
    }
  }

  spec {
    replicas = 1

    strategy {
      type = "Recreate"
    }

    selector {
      match_labels = {
        app = "owntracks"
      }
    }

    template {
      metadata {
        labels = {
          app = "owntracks"
        }

        annotations = {
          "diun.enable"       = "true"
          "diun.include_tags" = "^\\d+(?:\\.\\d+)?(?:\\.\\d+)?$"
        }
      }

      spec {
        container {
2026-04-16 16:34:29 +00:00
          image = "owntracks/recorder:1.0.1"
2026-02-22 15:13:55 +00:00
          name  = "owntracks"

          port {
            name           = "https"
            container_port = 8083
          }

          env {
            name  = "OTR_PORT"
            value = "0"
          }

          volume_mount {
            name       = "data"
            mount_path = "/store"
          }

          volume_mount {
            name       = "data"
            mount_path = "/config"
          }

[ci skip] right-size all pod resources based on VPA + live metrics audit
Full cluster resource audit: cross-referenced Goldilocks VPA recommendations,
live kubectl top metrics, and Terraform definitions for 100+ containers.
Critical fixes:
- dashy: CPU throttled at 98% (490m/500m) → 2 CPU limit
- stirling-pdf: CPU throttled at 99.7% (299m/300m) → 2 CPU limit
- traefik auth-proxy/bot-block-proxy: mem limit 32Mi → 128Mi
Added explicit resources to ~40 containers that had none:
- audiobookshelf, changedetection, cyberchef, dawarich, diun, echo,
excalidraw, freshrss, hackmd, isponsorblocktv, linkwarden, n8n,
navidrome, ntfy, owntracks, privatebin, send, shadowsocks, tandoor,
tor-proxy, wealthfolio, networking-toolbox, rybbit, mailserver,
cloudflared, pgadmin, phpmyadmin, crowdsec-web, xray, wireguard,
k8s-portal, tuya-bridge, ollama-ui, whisper, piper, immich-server,
immich-postgresql, osrm-foot
GPU containers: added CPU/mem alongside GPU limits:
- ollama: removed CPU/mem limits (models vary in size), keep GPU only
- frigate: req 500m/2Gi, lim 4/8Gi + GPU
- immich-ml: req 100m/1Gi, lim 2/4Gi + GPU
Right-sized ~25 over-provisioned containers:
- kms-web-page: 500m/512Mi → 50m/64Mi (was using 0m/10Mi)
- onlyoffice: CPU 8 → 2 (VPA upper 45m)
- realestate-crawler-api: CPU 2000m → 250m
- blog/travel-blog/webhook-handler: 500m → 100m
- coturn/health/plotting-book: reduced to match actual usage
Conservative methodology: limits = max(VPA upper * 2, live usage * 2)
2026-03-01 19:18:50 +00:00
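The sizing rule on the last line can be illustrated with Terraform's own max(); the figures here are hypothetical, not taken from the audit:
```hcl
# Hypothetical figures, only to show the arithmetic of the rule above.
locals {
  vpa_upper_mi  = 24 # Goldilocks/VPA upper-bound recommendation, in Mi
  live_usage_mi = 30 # live kubectl top reading, in Mi
  # limits = max(VPA upper * 2, live usage * 2) → max(48, 60) = 60, rounded up to 64Mi
  mem_limit_mi  = max(local.vpa_upper_mi * 2, local.live_usage_mi * 2)
}
```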
          resources {
            requests = {
              cpu    = "10m"
right-size memory: set requests=limits based on actual usage
- Set memory requests = limits across 56 stacks to prevent overcommit
- Right-sized limits based on actual pod usage (2x actual, rounded up)
- Scaled down trading-bot (replicas=0) to free memory
- Fixed OOMKilled services: forgejo, dawarich, health, meshcentral,
paperless-ngx, vault auto-unseal, rybbit, whisper, openclaw, clickhouse
- Added startup+liveness probes to calibre-web
- Bumped inotify limits on nodes 2,3 (max_user_instances 128->8192)
Post node2 OOM incident (2026-03-14). Previous kubelet config had no
kubeReserved/systemReserved set, allowing pods to starve the kernel.
2026-03-14 21:01:24 +00:00
              memory = "64Mi"
2026-03-01 19:18:50 +00:00
            }

            limits = {
              memory = "64Mi"
            }
          }
2026-02-22 15:13:55 +00:00
        }

        volume {
          name = "data"
[ci skip] migrate 29 services from inline NFS to CSI-backed PV/PVC
Batch migration of all single-volume and simple multi-volume stacks.
All services verified healthy after migration. Uses nfs-truenas
StorageClass with soft,timeo=30,retrans=3 mount options to eliminate
stale NFS mount hangs.
Services: atuin, audiobookshelf, calibre, changedetection, diun,
excalidraw, forgejo, freshrss, grampsweb, hackmd, health,
isponsorblocktv, matrix, meshcentral, n8n, navidrome, ntfy, ollama,
onlyoffice, owntracks, paperless-ngx, poison-fountain, send,
stirling-pdf, tandoor, wealthfolio, whisper, woodpecker, ytdlp
2026-03-02 00:15:39 +00:00
          persistent_volume_claim {
2026-04-17 20:29:57 +00:00
            claim_name = "owntracks-data-encrypted"
2026-02-22 15:13:55 +00:00
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "owntracks" {
  metadata {
    name      = "owntracks"
    namespace = kubernetes_namespace.owntracks.metadata[0].name

    labels = {
      app = "owntracks"
    }
  }

  spec {
    selector = {
      app = "owntracks"
    }

    port {
      name        = "https"
      port        = 443
      target_port = 8083
      protocol    = "TCP"
    }
  }
}

module "ingress" {
  source = "../../modules/kubernetes/ingress_factory"
2026-04-16 13:45:04 +00:00
  dns_type = "proxied"
2026-02-22 15:13:55 +00:00
  namespace       = kubernetes_namespace.owntracks.metadata[0].name
  name            = "owntracks"
  tls_secret_name = var.tls_secret_name
  port            = 443

  extra_annotations = {
    "traefik.ingress.kubernetes.io/router.middlewares" = "owntracks-basic-auth@kubernetescrd,traefik-rate-limit@kubernetescrd,traefik-csp-headers@kubernetescrd,traefik-crowdsec@kubernetescrd"
2026-03-07 16:41:36 +00:00
    "gethomepage.dev/enabled"      = "true"
    "gethomepage.dev/name"         = "OwnTracks"
    "gethomepage.dev/description"  = "Location tracking"
    "gethomepage.dev/icon"         = "owntracks.png"
    "gethomepage.dev/group"        = "Smart Home"
    "gethomepage.dev/pod-selector" = ""
2026-02-22 15:13:55 +00:00
  }
}

resource "kubernetes_manifest" "basic_auth_middleware" {
  manifest = {
    apiVersion = "traefik.io/v1alpha1"
    kind       = "Middleware"
    metadata = {
      name      = "basic-auth"
      namespace = kubernetes_namespace.owntracks.metadata[0].name
    }
    spec = {
      basicAuth = {
        secret = kubernetes_secret.basic_auth.metadata[0].name
      }
    }
  }
2026-02-22 13:56:34 +00:00
}