[infra] Auto-create Cloudflare DNS records from ingress_factory
## Context
Deploying a new service required manually adding its hostname to
`cloudflare_proxied_names`/`cloudflare_non_proxied_names` in `config.tfvars` —
a separate file from the service stack. This step was frequently forgotten,
leaving new services unreachable externally.
## This change:
- Add `dns_type` parameter to `ingress_factory` and `reverse_proxy/factory`
modules. Setting `dns_type = "proxied"` or `"non-proxied"` auto-creates
the Cloudflare DNS record (CNAME to tunnel or A/AAAA to public IP).
- Simplify the cloudflared tunnel config from ~100 per-hostname rules to a single
  wildcard rule (`*.viktorbarzin.me` → Traefik). Traefik still handles host-based routing.
- Add global Cloudflare provider via terragrunt.hcl (separate
cloudflare_provider.tf with Vault-sourced API key).
- Migrate 118 hostnames from centralized config.tfvars to per-service
dns_type. 17 hostnames remain centrally managed (Helm ingresses,
special cases).
- Update docs, AGENTS.md, CLAUDE.md, dns.md runbook.
```
BEFORE                               AFTER
config.tfvars (manual list)          stacks/<svc>/main.tf
        |                              module "ingress" {
        v                                dns_type = "proxied"
stacks/cloudflared/                    }
  for_each = list                             |
  cloudflare_record                           v
  tunnel per-hostname                  auto-creates
                                       cloudflare_record + annotation
```
## What is NOT in this change:
- Uptime Kuma monitor migration (still reads from config.tfvars)
- 17 remaining centrally-managed hostnames (Helm, special cases)
- Removal of allow_overwrite (keep until migration confirmed stable)
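For a concrete picture of the per-service change, a migrated stack now looks roughly like this (sketch — `myservice` and its arguments are placeholders, not a real stack in this commit):

```hcl
# stacks/myservice/main.tf — hypothetical service adopting auto-created DNS.
# Previously "myservice" also had to be listed in cloudflare_proxied_names
# in config.tfvars; that edit is no longer needed.
module "ingress" {
  source          = "../../modules/kubernetes/ingress_factory"
  namespace       = "myservice"
  name            = "myservice"
  tls_secret_name = var.tls_secret_name
  dns_type        = "proxied" # CNAME to the Cloudflare tunnel, created on apply
}
```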
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Parent 95d2a6abf8 · Commit b1d152be1f · 94 changed files with 471 additions and 34 deletions
```diff
@@ -28,7 +28,7 @@ Violations cause state drift, which causes future applies to break or silently r
 - **Apply**: Authenticate via `vault login -method=oidc`, then use `scripts/tg` (preferred — handles state decrypt/encrypt) or `terragrunt` directly. `scripts/tg` adds `-auto-approve` for `--non-interactive` applies.
 - **New services need CI/CD** and **monitoring** (Prometheus/Uptime Kuma)
 - **New service**: Use `setup-project` skill for full workflow
-- **Ingress**: `ingress_factory` module. Auth: `protected = true`. Anti-AI: on by default.
+- **Ingress**: `ingress_factory` module. Auth: `protected = true`. Anti-AI: on by default. **DNS**: `dns_type = "proxied"` (Cloudflare CDN) or `"non-proxied"` (direct A/AAAA). DNS records are auto-created — no need to edit `config.tfvars`.
 - **Docker images**: Always build for `linux/amd64`. Use 8-char git SHA tags — `:latest` causes stale pull-through cache.
 - **Private registry**: `registry.viktorbarzin.me` (htpasswd auth, credentials in Vault `secret/viktor`). Use `image: registry.viktorbarzin.me/<name>:<tag>` + `imagePullSecrets: [{name: registry-credentials}]`. Kyverno auto-syncs the secret to all namespaces. Build & push from registry VM (`10.0.20.10`). Containerd `hosts.toml` redirects pulls to LAN IP directly. Web UI at `docker.viktorbarzin.me` (Authentik-protected).
 - **LinuxServer.io containers**: `DOCKER_MODS` runs apt-get on every start — bake slow mods into a custom image (`RUN /docker-mods || true` then `ENV DOCKER_MODS=`). Set `NO_CHOWN=true` to skip recursive chown that hangs on NFS mounts.
```
```diff
@@ -133,7 +133,7 @@ Repo IDs: infra=1, Website=2, finance=3, health=4, travel_blog=5, webhook-handle
 - Alert cascade inhibitions: if node is down, suppress pod alerts on that node.
 - Exclude completed CronJob pods from "pod not ready" alerts.
 - Every new service gets Prometheus scrape config + Uptime Kuma monitor. External monitors auto-created for Cloudflare-proxied services by `external-monitor-sync` CronJob (10min, uptime-kuma ns).
-- **External monitoring**: `[External] <service>` monitors in Uptime Kuma test full external path (DNS → Cloudflare → Tunnel → Traefik). Divergence metric `external_internal_divergence_count` → alert `ExternalAccessDivergence` (15min). Config: `stacks/uptime-kuma/`, targets from `cloudflare_proxied_names` in `config.tfvars`.
+- **External monitoring**: `[External] <service>` monitors in Uptime Kuma test full external path (DNS → Cloudflare → Tunnel → Traefik). Divergence metric `external_internal_divergence_count` → alert `ExternalAccessDivergence` (15min). Config: `stacks/uptime-kuma/`, targets from `cloudflare_proxied_names` in `config.tfvars` (17 remaining centrally-managed hostnames; most DNS records now auto-created by `ingress_factory` `dns_type` param).
 - Key alerts: OOMKill, pod replica mismatch, 4xx/5xx error rates, UPS battery, CPU temp, SSD writes, NFS responsiveness, ClusterMemoryRequestsHigh (>85%), ContainerNearOOM (>85% limit), PodUnschedulable, ExternalAccessDivergence.
 - **E2E email monitoring**: CronJob `email-roundtrip-monitor` (every 20 min) sends test email via Mailgun API to `smoke-test@viktorbarzin.me` (catch-all → `spam@`), verifies IMAP delivery, deletes test email, pushes metrics to Pushgateway + Uptime Kuma. Alerts: `EmailRoundtripFailing` (60m), `EmailRoundtripStale` (60m), `EmailRoundtripNeverRun` (60m). Outbound relay: Brevo EU (`smtp-relay.brevo.com:587`, 300/day free — migrated from Mailgun). Mailserver on dedicated MetalLB IP `10.0.20.202` with `externalTrafficPolicy: Local` for CrowdSec real-IP detection. Vault: `mailgun_api_key` in `secret/viktor` (probe), `brevo_api_key` in `secret/viktor` (relay).
```
```diff
@@ -51,7 +51,7 @@ Terragrunt-based homelab managing a Kubernetes cluster (5 nodes, v1.34.2) on Pro
 ## Key Paths
 - `stacks/<service>/main.tf` — service definition
 - `stacks/platform/modules/<service>/` — core infra modules
-- `modules/kubernetes/ingress_factory/` — standardized ingress with auth, rate limiting, anti-AI
+- `modules/kubernetes/ingress_factory/` — standardized ingress with auth, rate limiting, anti-AI, and auto Cloudflare DNS (`dns_type = "proxied"` or `"non-proxied"`)
 - `modules/kubernetes/nfs_volume/` — NFS volume module (CSI-backed, soft mount)
 - `config.tfvars` — non-secret configuration (plaintext)
 - `secrets.sops.json` — all secrets (SOPS-encrypted JSON)
```
config.tfvars — changed (binary file not shown)
````diff
@@ -277,15 +277,31 @@ viktorbarzin.lan:53 {
 
 ## Cloudflare DNS — External Domains
 
-All public domains are under the `viktorbarzin.me` zone, managed via Terraform in `stacks/cloudflared/modules/cloudflared/cloudflare.tf`.
+All public domains are under the `viktorbarzin.me` zone. DNS records are **auto-created per service** via the `ingress_factory` module's `dns_type` parameter. A small number of records (Helm-managed ingresses, special cases) remain centrally managed in `config.tfvars`.
+
+### How DNS Records Are Created
+
+```
+stacks/<service>/main.tf
+  module "ingress" {
+    source   = ingress_factory
+    dns_type = "proxied"   # ← auto-creates Cloudflare DNS record
+  }
+```
+
+- **`dns_type = "proxied"`**: Creates CNAME → `{tunnel_id}.cfargotunnel.com` (Cloudflare CDN)
+- **`dns_type = "non-proxied"`**: Creates A → public IP + AAAA → IPv6
+- **`dns_type = "none"`** (default): No DNS record
+
+The Cloudflare tunnel uses a **wildcard rule** (`*.viktorbarzin.me → Traefik`) — no per-hostname tunnel config needed. Traefik handles host-based routing via K8s Ingress resources.
 
 ### Record Types
 
 | Type | Records | Target | Example |
 |------|---------|--------|---------|
-| Proxied CNAME | ~30 domains | `{tunnel_id}.cfargotunnel.com` | blog, hackmd, homepage, ntfy |
-| Non-proxied A | ~20 domains | `176.12.22.76` (public IP) | mail, headscale, immich, vaultwarden |
-| Non-proxied AAAA | ~20 domains | IPv6 (HE tunnel) | Same as non-proxied A |
+| Proxied CNAME | ~100 domains | `{tunnel_id}.cfargotunnel.com` | blog, hackmd, homepage, ntfy |
+| Non-proxied A | ~35 domains | `176.12.22.76` (public IP) | mail, headscale, immich |
+| Non-proxied AAAA | ~35 domains | IPv6 (HE tunnel) | Same as non-proxied A |
 | MX | 1 | `mail.viktorbarzin.me` | Inbound email |
 | TXT (SPF) | 1 | `v=spf1 include:mailgun.org -all` | Email authentication |
 | TXT (DKIM) | 4 | RSA keys (s1, mail, brevo1, brevo2) | Email signing |
````
```diff
@@ -393,9 +409,9 @@ For internal `.viktorbarzin.lan` records:
 3. Or add directly in Technitium web UI (`technitium.viktorbarzin.me`)
 
 For external `.viktorbarzin.me` records:
-1. Add to `cloudflare_proxied_names` or `cloudflare_non_proxied_names` in `config.tfvars`
-2. Run `scripts/tg apply -target=module.kubernetes_cluster.module.cloudflared`
-3. For non-standard records (MX, TXT), add a `cloudflare_record` resource in `cloudflare.tf`
+1. Add `dns_type = "proxied"` (or `"non-proxied"`) to the `ingress_factory` module call in the service stack
+2. Run `scripts/tg apply` on the service stack — DNS record is auto-created
+3. For non-standard records (MX, TXT), add a `cloudflare_record` resource in `stacks/cloudflared/modules/cloudflared/cloudflare.tf`
 
 ## Incident History
```
```diff
@@ -1,3 +1,14 @@
+terraform {
+  required_providers {
+    cloudflare = {
+      source  = "cloudflare/cloudflare"
+      version = "~> 4"
+    }
+    kubernetes = {
+      source = "hashicorp/kubernetes"
+    }
+  }
+}
+
 variable "name" { type = string }
 variable "service_name" {
```
```diff
@@ -76,6 +87,38 @@ variable "anti_ai_scraping" {
   default = null # null = auto (enabled when not protected, disabled when protected)
 }
 
+variable "dns_type" {
+  type        = string
+  default     = "none"
+  description = "Cloudflare DNS: 'proxied' (CNAME to tunnel), 'non-proxied' (A/AAAA to public IP), or 'none'"
+  validation {
+    condition     = contains(["proxied", "non-proxied", "none"], var.dns_type)
+    error_message = "dns_type must be 'proxied', 'non-proxied', or 'none'."
+  }
+}
+
+# Cloudflare config defaults — override via variables if these change.
+# Source of truth: config.tfvars (cloudflare_zone_id, cloudflare_tunnel_id, public_ip, public_ipv6)
+variable "cloudflare_zone_id" {
+  type    = string
+  default = "fd2c5dd4efe8fe38958944e74d0ced6d"
+}
+
+variable "cloudflare_tunnel_id" {
+  type    = string
+  default = "75182cd7-bb91-4310-b961-5d8967da8b41"
+}
+
+variable "public_ip" {
+  type    = string
+  default = "176.12.22.76"
+}
+
+variable "public_ipv6" {
+  type    = string
+  default = "2001:470:6e:43d::2"
+}
+
 variable "homepage_group" {
   type    = string
   default = null # auto-detect from namespace
```
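With the validation block above, a typo in `dns_type` fails at plan time instead of silently creating no record. A hypothetical misuse (caller arguments are illustrative):

```hcl
module "ingress" {
  source   = "../../modules/kubernetes/ingress_factory"
  # ... other required arguments ...
  dns_type = "cdn" # rejected at plan time by the validation:
                   # "dns_type must be 'proxied', 'non-proxied', or 'none'."
}
```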
```diff
@@ -122,6 +165,8 @@ locals {
     lookup(local.ns_to_group, var.namespace, "Other")
   )
 
+  dns_name = local.effective_host == var.root_domain ? "@" : replace(local.effective_host, ".${var.root_domain}", "")
+
   homepage_defaults = var.homepage_enabled ? {
     "gethomepage.dev/enabled" = "true"
     "gethomepage.dev/name"    = replace(replace(var.name, "-", " "), "_", " ")
```
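The `dns_name` local strips the zone suffix so the record name matches what Cloudflare expects, with the zone apex mapping to `@`. Worked examples (assuming `root_domain = "viktorbarzin.me"`):

```hcl
# effective_host = "blog.viktorbarzin.me" → dns_name = "blog"
# effective_host = "viktorbarzin.me"      → dns_name = "@"   (zone apex)
dns_name = local.effective_host == var.root_domain ? "@" : replace(local.effective_host, ".${var.root_domain}", "")
```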
```diff
@@ -177,7 +222,9 @@ resource "kubernetes_ingress_v1" "proxied-ingress" {
       var.custom_content_security_policy != null ? "${var.namespace}-custom-csp-${var.name}@kubernetescrd" : null,
     ], var.extra_middlewares)))
     "traefik.ingress.kubernetes.io/router.entrypoints" = "websecure"
-    }, local.homepage_defaults, var.extra_annotations)
+    }, local.homepage_defaults, var.extra_annotations,
+      var.dns_type != "none" ? { "cloudflare.viktorbarzin.me/dns-type" = var.dns_type } : {}
+    )
   }
 
   spec {
```
```diff
@@ -255,3 +302,38 @@ resource "kubernetes_manifest" "custom_csp" {
     }
   }
 }
+
+# Cloudflare DNS records — created automatically when dns_type is set.
+# Proxied: CNAME to Cloudflare tunnel. Non-proxied: A + AAAA to public IP.
+resource "cloudflare_record" "proxied" {
+  count           = var.dns_type == "proxied" ? 1 : 0
+  name            = local.dns_name
+  content         = "${var.cloudflare_tunnel_id}.cfargotunnel.com"
+  proxied         = true
+  ttl             = 1
+  type            = "CNAME"
+  zone_id         = var.cloudflare_zone_id
+  allow_overwrite = true
+}
+
+resource "cloudflare_record" "non_proxied_a" {
+  count           = var.dns_type == "non-proxied" ? 1 : 0
+  name            = local.dns_name
+  content         = var.public_ip
+  proxied         = false
+  ttl             = 1
+  type            = "A"
+  zone_id         = var.cloudflare_zone_id
+  allow_overwrite = true
+}
+
+resource "cloudflare_record" "non_proxied_aaaa" {
+  count           = var.dns_type == "non-proxied" ? 1 : 0
+  name            = local.dns_name
+  content         = var.public_ipv6
+  proxied         = false
+  ttl             = 1
+  type            = "AAAA"
+  zone_id         = var.cloudflare_zone_id
+  allow_overwrite = true
+}
```
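`allow_overwrite = true` is what lets these resources take over records that already exist in the zone from the manual era. Once the migration is confirmed stable, the same adoption could be made explicit with an import block instead (sketch — `<record_id>` is a placeholder you would fetch from the Cloudflare API, and the module address is illustrative):

```hcl
import {
  to = module.ingress.cloudflare_record.proxied[0]
  # cloudflare_record import ID format: <zone_id>/<record_id>
  id = "fd2c5dd4efe8fe38958944e74d0ced6d/<record_id>"
}
```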
```diff
@@ -86,5 +86,6 @@ module "ingress" {
   namespace       = "<your-namespace>"
   name            = "<app-name>"
   tls_secret_name = var.tls_secret_name
-  protected       = false # Set true to require Authentik login
+  dns_type        = "proxied" # "proxied" (Cloudflare CDN), "non-proxied" (direct A/AAAA), or "none"
+  protected       = false # Set true to require Authentik login
 }
```
```diff
@@ -137,6 +137,7 @@ module "ingress" {
   namespace         = "actualbudget"
   name              = "budget-${var.name}"
   tls_secret_name   = var.tls_secret_name
+  dns_type          = "proxied"
   rybbit_site_id    = "3e6b6b68088a"
   extra_annotations = var.homepage_annotations
 }
```
```diff
@@ -344,6 +344,7 @@ resource "kubernetes_service" "affine" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "non-proxied"
   namespace       = kubernetes_namespace.affine.metadata[0].name
   name            = "affine"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -67,6 +67,7 @@ resource "helm_release" "authentik" {
 
 module "ingress" {
   source       = "../../../../modules/kubernetes/ingress_factory"
+  dns_type     = "proxied"
   namespace    = kubernetes_namespace.authentik.metadata[0].name
   name         = "authentik"
   service_name = "goauthentik-server"
```
```diff
@@ -228,7 +228,7 @@ resource "kubernetes_deployment" "workbench" {
     for f in /static/chunks/pages/_app-*.js; do
       sed -i 's|http://localhost:9002/graphql|/graphql|g' "$f"
     done
-    echo "Patched GraphQL URL to /graphql"
+    echo "Patched GraphQL URL and store path"
     EOT
   ]
   volume_mount {
```
```diff
@@ -249,6 +249,13 @@ resource "kubernetes_deployment" "workbench" {
 container {
   name  = "workbench"
   image = "dolthub/dolt-workbench:latest"
+  command = ["sh", "-c", <<-EOT
+    # Patch GraphQL server to listen on 0.0.0.0 (IPv4) — Node 18+ defaults to IPv6
+    sed -i 's|app.listen(9002)|app.listen(9002,"0.0.0.0")|g' /app/graphql-server/dist/main.js
+    # Start PM2 (the default entrypoint)
+    exec pm2-runtime /app/process.yml
+    EOT
+  ]
 
   port {
     name = "http"
```
```diff
@@ -259,9 +266,14 @@ resource "kubernetes_deployment" "workbench" {
   container_port = 9002
 }
 
+env {
+  name  = "NODE_OPTIONS"
+  value = "--dns-result-order=ipv4first"
+}
+
 volume_mount {
   name       = "store"
-  mount_path = "/app/store"
+  mount_path = "/app/graphql-server/store"
 }
 volume_mount {
   name = "static-patched"
```
```diff
@@ -361,6 +373,7 @@ module "tls_secret" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.beads.metadata[0].name
   name            = "dolt-workbench"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -110,6 +110,7 @@ module "ingress" {
   name            = "blog"
   service_name    = "blog"
   full_host       = "viktorbarzin.me"
+  dns_type        = "proxied"
   tls_secret_name = var.tls_secret_name
   rybbit_site_id  = "da853a2438d0"
   extra_annotations = {
```
```diff
@@ -206,6 +206,7 @@ resource "kubernetes_service" "changedetection" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.changedetection.metadata[0].name
   name            = "changedetection"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -87,6 +87,7 @@ resource "kubernetes_service" "city-guesser" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = "city-guesser"
   name            = "city-guesser"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -267,6 +267,7 @@ resource "kubernetes_service" "claude-memory" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.claude-memory.metadata[0].name
   name            = "claude-memory"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -58,15 +58,20 @@ resource "cloudflare_zero_trust_tunnel_cloudflared_config" "sof" {
   warp_routing {
     enabled = true
   }
-  dynamic "ingress_rule" {
-    for_each = toset(var.cloudflare_proxied_names)
-    content {
-      hostname = ingress_rule.value == "viktorbarzin.me" ? ingress_rule.value : "${ingress_rule.value}.viktorbarzin.me"
-      path     = "/"
-      service  = "https://10.0.20.200:443"
-      origin_request {
-        no_tls_verify = true
-      }
+  # Wildcard rule routes all subdomains through tunnel to Traefik.
+  # Traefik handles host-based routing via K8s Ingress resources.
+  ingress_rule {
+    hostname = "*.viktorbarzin.me"
+    service  = "https://10.0.20.200:443"
+    origin_request {
+      no_tls_verify = true
+    }
+  }
+  ingress_rule {
+    hostname = "viktorbarzin.me"
+    service  = "https://10.0.20.200:443"
+    origin_request {
+      no_tls_verify = true
     }
   }
   ingress_rule {
```
```diff
@@ -256,6 +256,7 @@ resource "kubernetes_service" "crowdsec-web" {
 }
 module "ingress" {
   source    = "../../../../modules/kubernetes/ingress_factory"
+  dns_type  = "proxied"
   namespace = kubernetes_namespace.crowdsec.metadata[0].name
   name      = "crowdsec-web"
   protected = true
```
```diff
@@ -98,6 +98,7 @@ resource "kubernetes_service" "cyberchef" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.cyberchef.metadata[0].name
   name            = "cc"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -120,6 +120,7 @@ resource "kubernetes_service" "dashy" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.dashy.metadata[0].name
   name            = "dashy"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -364,6 +364,7 @@ resource "kubernetes_service" "dawarich" {
 # }
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.dawarich.metadata[0].name
   name            = "dawarich"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -366,9 +366,99 @@ resource "helm_release" "mysql_cluster" {
   depends_on = [helm_release.mysql_operator]
 }
 
-# Compatibility service: mysql.dbaas points at InnoDB Cluster mysqld pods
-# When router is available it handles failover, but we fall back to direct
-# mysqld access to avoid total outage during partial cluster failures
+#### MYSQL — Standalone Bitnami (migration target)
+#
+# Standalone MySQL without Group Replication. Eliminates ~95 GB/day of GR
+# write overhead (binlog, relay log, XCom cache) for databases totaling ~35 MB.
+# Binary logging disabled entirely (skip-log-bin) since no replication needed.
+
+resource "helm_release" "mysql_standalone" {
+  namespace        = kubernetes_namespace.dbaas.metadata[0].name
+  create_namespace = false
+  name             = "mysql-standalone"
+  timeout          = 600
+
+  repository = "oci://registry-1.docker.io/bitnamicharts"
+  chart      = "mysql"
+
+  values = [yamlencode({
+    architecture = "standalone"
+    image = {
+      tag = "8.4"
+    }
+
+    auth = {
+      rootPassword = var.dbaas_root_password
+    }
+
+    primary = {
+      configuration = <<-EOT
+        [mysqld]
+        skip-name-resolve
+        mysql-native-password=ON
+        skip-log-bin
+        max_connections=80
+        innodb_log_buffer_size=16777216
+        innodb_flush_log_at_trx_commit=2
+        innodb_io_capacity=100
+        innodb_io_capacity_max=200
+        innodb_redo_log_capacity=1073741824
+        innodb_buffer_pool_size=1073741824
+        innodb_flush_neighbors=1
+        innodb_lru_scan_depth=256
+        innodb_page_cleaners=1
+        innodb_adaptive_flushing_lwm=10
+        innodb_max_dirty_pages_pct=90
+        innodb_max_dirty_pages_pct_lwm=10
+      EOT
+
+      persistence = {
+        enabled      = true
+        storageClass = "proxmox-lvm-encrypted"
+        size         = "5Gi"
+        annotations = {
+          "resize.topolvm.io/threshold"     = "80%"
+          "resize.topolvm.io/increase"      = "100%"
+          "resize.topolvm.io/storage_limit" = "30Gi"
+        }
+      }
+
+      resources = {
+        requests = {
+          cpu    = "250m"
+          memory = "1536Mi"
+        }
+        limits = {
+          memory = "2Gi"
+        }
+      }
+
+      affinity = {
+        nodeAffinity = {
+          requiredDuringSchedulingIgnoredDuringExecution = {
+            nodeSelectorTerms = [{
+              matchExpressions = [{
+                key      = "kubernetes.io/hostname"
+                operator = "NotIn"
+                values   = ["k8s-node1"]
+              }]
+            }]
+          }
+        }
+      }
+    }
+
+    metrics = {
+      enabled = false
+    }
+  })]
+}
+
+# Compatibility service: mysql.dbaas points at InnoDB Cluster mysqld pods.
+# Phase 3 cutover: switch selector to Bitnami standalone after dump/restore:
+#   "app.kubernetes.io/instance"  = "mysql-standalone"
+#   "app.kubernetes.io/component" = "primary"
+# and remove publish_not_ready_addresses + update depends_on.
 resource "kubernetes_service" "mysql" {
   metadata {
     name = var.cluster_master_service
```
```diff
@@ -833,6 +923,7 @@ resource "kubernetes_service" "phpmyadmin" {
 }
 module "ingress" {
   source          = "../../../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.dbaas.metadata[0].name
   name            = "pma"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -1287,6 +1378,7 @@ resource "kubernetes_service" "pgadmin" {
 }
 module "ingress-pgadmin" {
   source          = "../../../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.dbaas.metadata[0].name
   name            = "pgadmin"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -242,6 +242,7 @@ resource "kubernetes_service" "ebook2audiobook" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "non-proxied"
   namespace       = kubernetes_namespace.ebook2audiobook.metadata[0].name
   name            = "ebook2audiobook"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -426,6 +427,7 @@ module "audiblez-web-ingress" {
   namespace       = kubernetes_namespace.ebook2audiobook.metadata[0].name
   name            = "audiblez-web"
   host            = "audiblez"
+  dns_type        = "non-proxied"
   tls_secret_name = var.tls_secret_name
   protected       = true
   max_body_size   = "500m" # Allow large EPUB uploads
```
```diff
@@ -374,6 +374,7 @@ resource "kubernetes_service" "calibre" {
 
 module "calibre_ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.ebooks.metadata[0].name
   name            = "calibre"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -494,6 +495,7 @@ resource "kubernetes_service" "annas-archive-stacks" {
 
 module "stacks_ingress" {
   source       = "../../modules/kubernetes/ingress_factory"
+  dns_type     = "proxied"
   namespace    = kubernetes_namespace.ebooks.metadata[0].name
   name         = "stacks"
   service_name = "annas-archive-stacks"
```
```diff
@@ -644,6 +646,7 @@ resource "kubernetes_service" "audiobookshelf" {
 
 module "audiobookshelf_ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "non-proxied"
   namespace       = kubernetes_namespace.ebooks.metadata[0].name
   name            = "audiobookshelf"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -904,6 +907,7 @@ resource "kubernetes_service" "book_search" {
 
 module "book_search_ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.ebooks.metadata[0].name
   name            = "book-search"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -94,6 +94,7 @@ resource "kubernetes_service" "echo" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.echo.metadata[0].name
   name            = "echo"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -137,6 +137,7 @@ resource "kubernetes_service" "draw" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.excalidraw.metadata[0].name
   name            = "draw"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -169,6 +169,7 @@ module "tls_secret" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "non-proxied"
   namespace       = kubernetes_namespace.f1-stream.metadata[0].name
   name            = "f1"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -57,6 +57,7 @@ resource "kubernetes_endpoints" "foolery" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.foolery.metadata[0].name
   name            = "foolery"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -153,6 +153,7 @@ resource "kubernetes_service" "forgejo" {
 }
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "non-proxied"
   namespace       = kubernetes_namespace.forgejo.metadata[0].name
   name            = "forgejo"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -232,6 +232,7 @@ module "ingress" {
   namespace         = "freedify"
   name              = "music-${var.name}"
   tls_secret_name   = var.tls_secret_name
+  dns_type          = "non-proxied"
   protected         = var.protected
   extra_annotations = var.extra_annotations
 }
```
```diff
@@ -207,6 +207,7 @@ resource "kubernetes_service" "freshrss" {
 }
 module "ingress" {
   source       = "../../modules/kubernetes/ingress_factory"
+  dns_type     = "proxied"
   namespace    = "freshrss"
   name         = "rss"
   service_name = "freshrss"
```
```diff
@@ -276,6 +276,7 @@ resource "kubernetes_service" "frigate-rtsp" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.frigate.metadata[0].name
   name            = "frigate"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -183,6 +183,7 @@ resource "kubernetes_service" "hackmd" {
 }
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.hackmd.metadata[0].name
   name            = "hackmd"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -291,6 +291,7 @@ resource "kubernetes_service" "headscale" {
 
 module "ingress" {
   source    = "../../../../modules/kubernetes/ingress_factory"
+  dns_type  = "non-proxied"
   namespace = kubernetes_namespace.headscale.metadata[0].name
   name      = "headscale"
   port      = 8080
```
```diff
@@ -166,6 +166,7 @@ resource "kubernetes_service" "health" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "non-proxied"
   namespace       = kubernetes_namespace.health.metadata[0].name
   name            = "health"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -134,6 +134,7 @@ module "ingress" {
   namespace       = kubernetes_namespace.homepage.metadata[0].name
   name            = "homepage"
   host            = "home"
+  dns_type        = "proxied"
   service_name    = kubernetes_service.cache_proxy.metadata[0].name
   tls_secret_name = var.tls_secret_name
   extra_annotations = {
```
```diff
@@ -120,6 +120,7 @@ resource "kubernetes_service" "immich-frame" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = "immich"
   name            = "highlights-immich"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -674,6 +674,7 @@ resource "kubernetes_service" "immich-machine-learning" {
 
 module "ingress-immich" {
   source       = "../../modules/kubernetes/ingress_factory"
+  dns_type     = "non-proxied"
   namespace    = kubernetes_namespace.immich.metadata[0].name
   name         = "immich"
   service_name = "immich-server"
```
```diff
@@ -228,6 +228,7 @@ resource "kubernetes_service" "insta2spotify" {
 # Main ingress — protected by Authentik (frontend)
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.insta2spotify.metadata[0].name
   name            = "insta2spotify"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -78,6 +78,7 @@ resource "kubernetes_service" "jsoncrack" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.jsoncrack.metadata[0].name
   name            = "json"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -90,6 +90,7 @@ module "ingress" {
   name             = "kubernetes-dashboard"
   service_name     = "kubernetes-dashboard-kong-proxy"
   host             = "k8s"
+  dns_type         = "proxied"
   tls_secret_name  = var.tls_secret_name
   protected        = true
   backend_protocol = "HTTPS"
```
```diff
@@ -139,6 +139,7 @@ resource "kubernetes_service" "k8s_portal" {
 
 module "ingress" {
   source          = "../../../../modules/kubernetes/ingress_factory"
+  dns_type        = "non-proxied"
   namespace       = kubernetes_namespace.k8s_portal.metadata[0].name
   name            = "k8s-portal"
   tls_secret_name = var.tls_secret_name
```
```diff
@@ -116,6 +116,7 @@ resource "kubernetes_service" "kms-web-page" {
 
 module "ingress" {
   source          = "../../modules/kubernetes/ingress_factory"
+  dns_type        = "proxied"
   namespace       = kubernetes_namespace.kms.metadata[0].name
   name            = "kms"
   tls_secret_name = var.tls_secret_name
```
@ -221,6 +221,7 @@ resource "kubernetes_service" "linkwarden" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.linkwarden.metadata[0].name
|
||||
name = "linkwarden"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -258,6 +258,7 @@ resource "kubernetes_service" "roundcubemail" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "non-proxied"
|
||||
namespace = "mailserver"
|
||||
name = "mail"
|
||||
service_name = "roundcubemail"
|
||||
|
|
|
|||
|
|
@ -217,6 +217,7 @@ resource "kubernetes_service" "matrix" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.matrix.metadata[0].name
|
||||
name = "matrix"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -258,6 +258,7 @@ resource "kubernetes_service" "meshcentral" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.meshcentral.metadata[0].name
|
||||
name = "meshcentral"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -258,6 +258,7 @@ resource "kubernetes_service" "n8n" {
|
|||
}
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.n8n.metadata[0].name
|
||||
name = "n8n"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -221,6 +221,7 @@ resource "kubernetes_service" "navidrome" {
|
|||
}
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.navidrome.metadata[0].name
|
||||
name = "navidrome"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -220,6 +220,7 @@ resource "kubernetes_service" "netbox" {
|
|||
}
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.netbox.metadata[0].name
|
||||
name = "netbox"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -91,6 +91,7 @@ resource "kubernetes_service" "networking-toolbox" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.networking-toolbox.metadata[0].name
|
||||
name = "networking-toolbox"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -216,6 +216,7 @@ module "nfs_nextcloud_backup_host" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.nextcloud.metadata[0].name
|
||||
name = "nextcloud"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -211,6 +211,7 @@ resource "kubernetes_service" "novelapp" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "non-proxied"
|
||||
namespace = kubernetes_namespace.novelapp.metadata[0].name
|
||||
name = "novelapp"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -181,6 +181,7 @@ resource "kubernetes_service" "ntfy" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.ntfy.metadata[0].name
|
||||
name = "ntfy"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -258,6 +258,7 @@ resource "kubernetes_manifest" "ollama_api_basic_auth_middleware" {
|
|||
|
||||
module "ollama-api-ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "non-proxied"
|
||||
namespace = kubernetes_namespace.ollama.metadata[0].name
|
||||
name = "ollama-api"
|
||||
service_name = "ollama"
|
||||
|
|
@ -362,6 +363,7 @@ resource "kubernetes_service" "ollama-ui" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.ollama.metadata[0].name
|
||||
name = "ollama"
|
||||
service_name = "ollama-ui"
|
||||
|
|
|
|||
|
|
@ -242,6 +242,7 @@ resource "kubernetes_service" "onlyoffice" {
|
|||
}
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.onlyoffice.metadata[0].name
|
||||
name = "onlyoffice"
|
||||
service_name = "onlyoffice-document-server"
|
||||
|
|
|
|||
|
|
@ -625,6 +625,7 @@ resource "kubernetes_service" "openclaw" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "non-proxied"
|
||||
namespace = kubernetes_namespace.openclaw.metadata[0].name
|
||||
name = "openclaw"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
@ -1199,6 +1200,7 @@ resource "kubernetes_service" "openlobster" {
|
|||
|
||||
module "openlobster_ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.openclaw.metadata[0].name
|
||||
name = "openlobster"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -203,6 +203,7 @@ resource "kubernetes_service" "owntracks" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.owntracks.metadata[0].name
|
||||
name = "owntracks"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -228,6 +228,7 @@ module "ingress" {
|
|||
name = "paperless-ngx"
|
||||
service_name = "paperless-ngx"
|
||||
host = "pdf"
|
||||
dns_type = "proxied"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
port = 80
|
||||
extra_annotations = {
|
||||
|
|
|
|||
|
|
@ -226,6 +226,7 @@ resource "kubernetes_service" "phpipam" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.phpipam.metadata[0].name
|
||||
name = "phpipam"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -192,6 +192,7 @@ resource "kubernetes_service" "plotting-book" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "non-proxied"
|
||||
namespace = kubernetes_namespace.plotting-book.metadata[0].name
|
||||
name = "plotting-book"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -205,6 +205,7 @@ module "ingress" {
|
|||
namespace = kubernetes_namespace.poison_fountain.metadata[0].name
|
||||
name = "poison-fountain"
|
||||
host = "poison"
|
||||
dns_type = "non-proxied"
|
||||
port = 8080
|
||||
tls_secret_name = var.tls_secret_name
|
||||
skip_default_rate_limit = true
|
||||
|
|
|
|||
|
|
@ -115,6 +115,7 @@ resource "kubernetes_service" "priority-pass" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = "priority-pass"
|
||||
name = "priority-pass"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -128,6 +128,7 @@ module "ingress" {
|
|||
namespace = kubernetes_namespace.privatebin.metadata[0].name
|
||||
name = "privatebin"
|
||||
host = "pb"
|
||||
dns_type = "proxied"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
rybbit_site_id = "3ae810b0476d"
|
||||
custom_content_security_policy = "script-src 'self' 'unsafe-inline' 'unsafe-eval' 'wasm-unsafe-eval' https://rybbit.viktorbarzin.me"
|
||||
|
|
|
|||
|
|
@ -326,6 +326,7 @@ resource "kubernetes_service" "realestate-crawler-api" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.realestate-crawler.metadata[0].name
|
||||
name = "wrongmove"
|
||||
service_name = "realestate-crawler-ui"
|
||||
|
|
@ -343,6 +344,7 @@ module "ingress" {
|
|||
|
||||
module "ingress-api" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.realestate-crawler.metadata[0].name
|
||||
name = "wrongmove-api"
|
||||
host = "wrongmove"
|
||||
|
|
|
|||
|
|
@ -342,6 +342,7 @@ resource "kubernetes_service" "resume" {
|
|||
|
||||
module "ingress" {
|
||||
source = "../../modules/kubernetes/ingress_factory"
|
||||
dns_type = "proxied"
|
||||
namespace = kubernetes_namespace.resume.metadata[0].name
|
||||
name = "resume"
|
||||
tls_secret_name = var.tls_secret_name
|
||||
|
|
|
|||
|
|
@ -1,3 +1,15 @@
|
|||
terraform {
|
||||
required_providers {
|
||||
cloudflare = {
|
||||
source = "cloudflare/cloudflare"
|
||||
version = "~> 4"
|
||||
}
|
||||
kubernetes = {
|
||||
source = "hashicorp/kubernetes"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
variable "name" {}
|
||||
variable "namespace" {
|
||||
default = "reverse-proxy"
|
||||
|
|
@ -45,6 +57,31 @@ variable "skip_global_rate_limit" {
|
|||
type = bool
|
||||
default = false
|
||||
}
|
||||
variable "dns_type" {
|
||||
type = string
|
||||
default = "none"
|
||||
description = "Cloudflare DNS: 'proxied' (CNAME to tunnel), 'non-proxied' (A/AAAA to public IP), or 'none'"
|
||||
validation {
|
||||
condition = contains(["proxied", "non-proxied", "none"], var.dns_type)
|
||||
error_message = "dns_type must be 'proxied', 'non-proxied', or 'none'."
|
||||
}
|
||||
}
|
||||
variable "cloudflare_zone_id" {
|
||||
type = string
|
||||
default = "fd2c5dd4efe8fe38958944e74d0ced6d"
|
||||
}
|
||||
variable "cloudflare_tunnel_id" {
|
||||
type = string
|
||||
default = "75182cd7-bb91-4310-b961-5d8967da8b41"
|
||||
}
|
||||
variable "public_ip" {
|
||||
type = string
|
||||
default = "176.12.22.76"
|
||||
}
|
||||
variable "public_ipv6" {
|
||||
type = string
|
||||
default = "2001:470:6e:43d::2"
|
||||
}
|
||||
|
||||
|
||||
resource "kubernetes_service" "proxied-service" {
|
||||
|
|
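For reference, a caller of the updated factory looks like the sketch below (module and service names here are illustrative, not taken from the repo); the `validation` block above rejects anything other than the three allowed values at plan time:

```hcl
# Hypothetical service stack opting into automatic DNS.
module "example" {
  source   = "./factory"
  name     = "example"            # record name, i.e. example.viktorbarzin.me
  dns_type = "proxied"            # CNAME to <tunnel-id>.cfargotunnel.com
  # dns_type = "non-proxied"      # A/AAAA to the public IPv4/IPv6 instead
  # dns_type = "none"             # default: no Cloudflare record managed here
  external_name   = "example.viktorbarzin.lan"
  tls_secret_name = var.tls_secret_name
}
```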
```diff
@@ -88,7 +125,9 @@ resource "kubernetes_ingress_v1" "proxied-ingress" {
       "traefik.ingress.kubernetes.io/router.entrypoints" = "websecure"
       "traefik.ingress.kubernetes.io/service.serversscheme" = var.backend_protocol == "HTTPS" ? "https" : null
       "traefik.ingress.kubernetes.io/service.serverstransport" = var.backend_protocol == "HTTPS" ? "traefik-insecure-skip-verify@kubernetescrd" : null
-    }, var.extra_annotations)
+    }, var.extra_annotations,
+      var.dns_type != "none" ? { "cloudflare.viktorbarzin.me/dns-type" = var.dns_type } : {}
+    )
   }

   spec {
```
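Terraform's `merge()` folds maps left to right with later keys winning, so the conditional third argument only adds the `cloudflare.viktorbarzin.me/dns-type` annotation when a DNS record is actually requested. A quick Python analogue of that behaviour (annotation values illustrative):

```python
def merged_annotations(base: dict, extra: dict, dns_type: str) -> dict:
    """Mimic merge(base, extra, dns_type != "none" ? {dns annotation} : {})."""
    dns = {"cloudflare.viktorbarzin.me/dns-type": dns_type} if dns_type != "none" else {}
    # Later dicts win on key clashes, matching Terraform's merge() semantics.
    return {**base, **extra, **dns}

base = {"traefik.ingress.kubernetes.io/router.entrypoints": "websecure"}
print(merged_annotations(base, {}, "proxied"))
print(merged_annotations(base, {}, "none") == base)  # True: no annotation added
```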
```diff
@@ -166,3 +205,37 @@ resource "kubernetes_manifest" "custom_csp" {
     }
   }
 }
+
+# Cloudflare DNS records — created automatically when dns_type is set.
+resource "cloudflare_record" "proxied" {
+  count           = var.dns_type == "proxied" ? 1 : 0
+  name            = var.name
+  content         = "${var.cloudflare_tunnel_id}.cfargotunnel.com"
+  proxied         = true
+  ttl             = 1
+  type            = "CNAME"
+  zone_id         = var.cloudflare_zone_id
+  allow_overwrite = true
+}
+
+resource "cloudflare_record" "non_proxied_a" {
+  count           = var.dns_type == "non-proxied" ? 1 : 0
+  name            = var.name
+  content         = var.public_ip
+  proxied         = false
+  ttl             = 1
+  type            = "A"
+  zone_id         = var.cloudflare_zone_id
+  allow_overwrite = true
+}
+
+resource "cloudflare_record" "non_proxied_aaaa" {
+  count           = var.dns_type == "non-proxied" ? 1 : 0
+  name            = var.name
+  content         = var.public_ipv6
+  proxied         = false
+  ttl             = 1
+  type            = "AAAA"
+  zone_id         = var.cloudflare_zone_id
+  allow_overwrite = true
+}
```
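With records declared next to each service, a drift check reduces to comparing the hostnames that request DNS against what actually exists in the zone. A hypothetical audit helper sketching that comparison (hostnames and data invented for illustration, not read from the real zone):

```python
def missing_records(declared: dict[str, str], existing: set[str]) -> dict[str, str]:
    """Hosts that request DNS (dns_type != "none") but have no record yet."""
    return {host: kind for host, kind in declared.items()
            if kind != "none" and host not in existing}

declared = {"immich": "non-proxied", "ntfy": "proxied", "internal": "none"}
print(missing_records(declared, {"ntfy"}))  # {'immich': 'non-proxied'}
```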
```diff
@@ -26,6 +26,7 @@ module "tls_secret" {
 # https://pfsense.viktorbarzin.me/
 module "pfsense" {
   source = "./factory"
+  dns_type = "proxied"
   name = "pfsense"
   external_name = "pfsense.viktorbarzin.lan"
   tls_secret_name = var.tls_secret_name
@@ -53,6 +54,7 @@ module "pfsense" {
 # https://nas.viktorbarzin.me/
 module "nas" {
   source = "./factory"
+  dns_type = "proxied"
   name = "nas"
   external_name = "nas.viktorbarzin.lan"
   port = 5001
@@ -74,6 +76,7 @@ module "nas" {
 # https://files.viktorbarzin.me/
 module "nas-files" {
   source = "./factory"
+  dns_type = "non-proxied"
   name = "files"
   external_name = "nas.viktorbarzin.lan"
   port = 5001
@@ -89,6 +92,7 @@ module "nas-files" {
 # https://idrac.viktorbarzin.me/
 module "idrac" {
   source = "./factory"
+  dns_type = "proxied"
   name = "idrac"
   external_name = "idrac.viktorbarzin.lan"
   port = 443
@@ -110,6 +114,7 @@ module "idrac" {
 # TODO: Not working yet
 module "tp-link-gateway" {
   source = "./factory"
+  dns_type = "proxied"
   name = "gw"
   external_name = "gw.viktorbarzin.lan"
   port = 443
@@ -124,6 +129,7 @@ module "tp-link-gateway" {
 # https://truenas.viktorbarzin.me/
 module "truenas" {
   source = "./factory"
+  dns_type = "proxied"
   name = "truenas"
   external_name = "truenas.viktorbarzin.lan"
   port = 80
@@ -168,6 +174,7 @@ module "r730" {
 # https://proxmox.viktorbarzin.me/
 module "proxmox" {
   source = "./factory"
+  dns_type = "proxied"
   name = "proxmox"
   external_name = "proxmox.viktorbarzin.lan"
   port = 8006
@@ -189,6 +196,7 @@ module "proxmox" {
 # https://docker.viktorbarzin.me/ (registry web UI)
 module "docker-registry-ui" {
   source = "./factory"
+  dns_type = "proxied"
   name = "docker"
   external_name = "docker-registry.viktorbarzin.lan"
   port = 8080
@@ -209,6 +217,7 @@ module "docker-registry-ui" {
 # https://registry.viktorbarzin.me/ (Docker CLI push/pull endpoint)
 module "docker-registry-cli" {
   source = "./factory"
+  dns_type = "non-proxied"
   name = "registry"
   external_name = "docker-registry.viktorbarzin.lan"
   port = 5050
@@ -228,6 +237,7 @@ module "docker-registry-cli" {
 # https://valchedrym.viktorbarzin.me/
 module "valchedrym" {
   source = "./factory"
+  dns_type = "proxied"
   name = "valchedrym"
   external_name = "valchedrym.viktorbarzin.lan"
   tls_secret_name = var.tls_secret_name
@@ -293,6 +303,7 @@ resource "kubernetes_manifest" "ha_sofia_rate_limit" {

 module "ha-sofia" {
   source = "./factory"
+  dns_type = "non-proxied"
   name = "ha-sofia"
   external_name = "ha-sofia.viktorbarzin.lan"
   port = 8123
@@ -317,6 +328,7 @@ module "ha-sofia" {
 # https://music-assistant.viktorbarzin.me/
 module "music-assistant" {
   source = "./factory"
+  dns_type = "non-proxied"
   name = "music-assistant"
   external_name = "ha-sofia.viktorbarzin.lan"
   port = 8095
@@ -332,6 +344,7 @@ module "music-assistant" {
 # https://ha-london.viktorbarzin.me/
 module "ha-london" {
   source = "./factory"
+  dns_type = "non-proxied"
   name = "ha-london"
   external_name = "ha-london.viktorbarzin.lan"
   port = 8123
@@ -351,6 +364,7 @@ module "ha-london" {
 # https://london.viktorbarzin.me/
 module "london" {
   source = "./factory"
+  dns_type = "proxied"
   name = "london"
   external_name = "openwrt-london.viktorbarzin.lan"
   port = 443
@@ -374,6 +388,7 @@ module "london" {
 }
 module "pi-lights" {
   source = "./factory"
+  dns_type = "proxied"
   name = "pi"
   external_name = "ha-london.viktorbarzin.lan"
   port = 5000
@@ -401,6 +416,7 @@ module "pi-lights" {

 module "mbp14" {
   source = "./factory"
+  dns_type = "proxied"
   name = "mbp14"
   external_name = "mbp14.viktorbarzin.lan"
   port = 4020

@@ -543,6 +543,7 @@ resource "kubernetes_service" "rybbit-client" {

 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.rybbit.metadata[0].name
   name = "rybbit"
   service_name = "rybbit-client"
@@ -560,6 +561,7 @@ module "ingress" {

 module "ingress-api" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.rybbit.metadata[0].name
   name = "rybbit-api"
   host = "rybbit"

@@ -158,6 +158,7 @@ resource "kubernetes_service" "send" {
 }
 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "non-proxied"
   namespace = kubernetes_namespace.send.metadata[0].name
   name = "send"
   tls_secret_name = var.tls_secret_name

@@ -130,6 +130,7 @@ resource "kubernetes_service" "aiostreams" {

 module "ingress" {
   source = "../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.aiostreams.metadata[0].name
   name = "aiostreams"
   tls_secret_name = var.tls_secret_name

@@ -72,6 +72,7 @@ resource "kubernetes_service" "flaresolverr" {

 module "ingress" {
   source = "../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = "servarr"
   name = "flaresolverr"
   tls_secret_name = var.tls_secret_name

@@ -162,6 +162,7 @@ resource "kubernetes_service" "deemix" {

 module "ingress" {
   source = "../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = "servarr"
   name = "lidarr"
   tls_secret_name = var.tls_secret_name
@@ -174,6 +175,7 @@ module "ingress" {

 module "ingress-deemix" {
   source = "../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = "servarr"
   name = "deemix"
   tls_secret_name = var.tls_secret_name

@@ -124,6 +124,7 @@ resource "kubernetes_service" "listenarr" {

 module "ingress" {
   source = "../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = "servarr"
   name = "listenarr"
   tls_secret_name = var.tls_secret_name

@@ -152,6 +152,7 @@ resource "kubernetes_service" "prowlarr" {

 module "ingress" {
   source = "../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = "servarr"
   name = "prowlarr"
   tls_secret_name = var.tls_secret_name

@@ -381,6 +381,7 @@ print(f"Global: connected={connected} dht={dht} dl_speed={dl_speed} ul_speed={ul

 module "ingress" {
   source = "../../../modules/kubernetes/ingress_factory"
+  dns_type = "non-proxied"
   namespace = "servarr"
   name = "qbittorrent"
   tls_secret_name = var.tls_secret_name

@@ -105,6 +105,7 @@ resource "kubernetes_service" "soulseek" {

 module "ingress" {
   source = "../../../modules/kubernetes/ingress_factory"
+  dns_type = "non-proxied"
   namespace = "servarr"
   name = "soulseek"
   tls_secret_name = var.tls_secret_name

@@ -224,6 +224,7 @@ resource "kubernetes_service" "speedtest" {

 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.speedtest.metadata[0].name
   name = "speedtest"
   tls_secret_name = var.tls_secret_name

@@ -124,6 +124,7 @@ resource "kubernetes_service" "stirling-pdf" {

 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.stirling-pdf.metadata[0].name
   name = "stirling-pdf"
   tls_secret_name = var.tls_secret_name

@@ -244,6 +244,7 @@ resource "kubernetes_service" "tandoor" {

 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.tandoor.metadata[0].name
   name = "tandoor"
   tls_secret_name = var.tls_secret_name

@@ -312,6 +312,7 @@ resource "kubernetes_service" "technitium_dns_internal" {

 module "ingress" {
   source = "../../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.technitium.metadata[0].name
   name = "technitium"
   tls_secret_name = var.tls_secret_name
@@ -380,7 +381,7 @@ data "kubernetes_secret" "technitium_db_creds" {
   depends_on = [kubernetes_manifest.external_secret]
 }

-# Grafana datasource for Technitium DNS query logs in MySQL
+# Grafana datasource for Technitium DNS query logs in PostgreSQL
 resource "kubernetes_config_map" "grafana_technitium_datasource" {
   metadata {
     name = "grafana-technitium-datasource"
@@ -393,13 +394,18 @@ resource "kubernetes_config_map" "grafana_technitium_datasource" {
     "technitium-datasource.yaml" = yamlencode({
       apiVersion = 1
       datasources = [{
-        name = "Technitium MySQL"
-        type = "mysql"
+        name = "Technitium PostgreSQL"
+        type = "postgres"
         access = "proxy"
-        url = "${var.mysql_host}:3306"
+        url = "${var.postgresql_host}:5432"
         database = "technitium"
         user = "technitium"
-        uid = "technitium-mysql"
+        uid = "technitium-pg"
+        jsonData = {
+          sslmode = "disable"
+          postgresVersion = 1600
+          timescaledb = false
+        }
         secureJsonData = {
           password = data.kubernetes_secret.technitium_db_creds.data["db_password"]
         }
@@ -475,6 +481,11 @@ resource "kubernetes_cron_job_v1" "technitium_password_sync" {
             TOKEN=$$(curl -sf "http://technitium-web:5380/api/user/login?user=$$TECH_USER&pass=$$TECH_PASS" | grep -o '"token":"[^"]*"' | cut -d'"' -f4)
             if [ -z "$$TOKEN" ]; then echo "Login failed"; exit 1; fi

+            # Disable SQLite query logging (eliminates ~18 GB/day write amplification on encrypted PVC)
+            SQLITE_CONFIG="{\"enableLogging\":false,\"maxLogDays\":0,\"maxLogRecords\":0}"
+            curl -sf -X POST "http://technitium-web:5380/api/apps/config/set?token=$$TOKEN" --data-urlencode "name=Query Logs (Sqlite)" --data-urlencode "config=$$SQLITE_CONFIG"
+            echo "SQLite logging disabled on primary"
+
             # Disable MySQL query logging
             MYSQL_CONFIG="{\"enableLogging\":false,\"maxQueueSize\":1000000,\"maxLogDays\":30,\"maxLogRecords\":0,\"databaseName\":\"technitium\",\"connectionString\":\"Server=mysql.dbaas.svc.cluster.local; Port=3306; Uid=technitium; Pwd=$$DB_PASSWORD;\"}"
             curl -sf -X POST "http://technitium-web:5380/api/apps/config/set?token=$$TOKEN" --data-urlencode "name=Query Logs (MySQL)" --data-urlencode "config=$$MYSQL_CONFIG"
@@ -489,7 +500,17 @@ resource "kubernetes_cron_job_v1" "technitium_password_sync" {
             # Configure PG query logging
             PG_CONFIG="{\"enableLogging\":true,\"maxQueueSize\":1000000,\"maxLogDays\":90,\"maxLogRecords\":0,\"databaseName\":\"technitium\",\"connectionString\":\"Host=${var.postgresql_host}; Port=5432; Username=technitium; Password=$$DB_PASSWORD;\"}"
             curl -sf -X POST "http://technitium-web:5380/api/apps/config/set?token=$$TOKEN" --data-urlencode "name=Query Logs (Postgres)" --data-urlencode "config=$$PG_CONFIG"
-            echo "PG logging configured"
+            echo "PG logging configured on primary"
+
+            # Disable SQLite on secondary and tertiary instances
+            for INST in http://technitium-secondary-web:5380 http://technitium-tertiary-web:5380; do
+              echo "Configuring $$INST"
+              R_TOKEN=$$(curl -sf "$$INST/api/user/login?user=$$TECH_USER&pass=$$TECH_PASS" | grep -o '"token":"[^"]*"' | cut -d'"' -f4)
+              if [ -z "$$R_TOKEN" ]; then echo "Login failed for $$INST, skipping"; continue; fi
+              curl -sf -X POST "$$INST/api/apps/config/set?token=$$R_TOKEN" --data-urlencode "name=Query Logs (Sqlite)" --data-urlencode "config=$$SQLITE_CONFIG" || echo "WARN: SQLite plugin not present on $$INST"
+              echo "SQLite logging disabled on $$INST"
+            done
+            echo "Password sync complete"
             EOT
           ]
         }
```
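The sync script above extracts the session token from the login response with `grep`/`cut` rather than `jq`; the pipeline can be sanity-checked against a sample response (the JSON body here is fabricated for illustration):

```shell
# Fabricated Technitium-style login response for testing the extraction.
RESP='{"token":"abc123","status":"ok"}'
TOKEN=$(printf '%s' "$RESP" | grep -o '"token":"[^"]*"' | cut -d'"' -f4)
echo "$TOKEN"   # prints: abc123
```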
```diff
@@ -57,6 +57,7 @@ resource "kubernetes_endpoints" "terminal" {

 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.terminal.metadata[0].name
   name = "terminal"
   tls_secret_name = var.tls_secret_name
@@ -197,6 +198,7 @@ resource "kubernetes_manifest" "clipboard_strip_prefix" {

 module "ingress_ro" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.terminal.metadata[0].name
   name = "terminal-ro"
   tls_secret_name = var.tls_secret_name

@@ -611,6 +611,7 @@ resource "kubernetes_service" "trading-bot-frontend" {

 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.trading-bot.metadata[0].name
   name = "trading"
   service_name = "trading-bot-frontend"

@@ -279,6 +279,7 @@ resource "kubernetes_service" "traefik_dashboard" {

 module "ingress" {
   source = "../../../../modules/kubernetes/ingress_factory"
+  dns_type = "non-proxied"
   namespace = kubernetes_namespace.traefik.metadata[0].name
   name = "traefik"
   service_name = "traefik-dashboard"

@@ -152,6 +152,7 @@ resource "kubernetes_service" "tuya-bridge" {

 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.tuya-bridge.metadata[0].name
   name = "tuya-bridge"
   tls_secret_name = var.tls_secret_name

@@ -174,6 +174,7 @@ resource "kubernetes_service" "uptime-kuma" {
 }
 module "ingress" {
   source = "../../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.uptime-kuma.metadata[0].name
   name = "uptime"
   tls_secret_name = var.tls_secret_name

@@ -280,6 +280,7 @@ resource "kubernetes_service" "shlink" {

 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.shlink.metadata[0].name
   name = "url"
   service_name = "shlink"
@@ -420,6 +421,7 @@ resource "kubernetes_service" "shlink-web" {

 module "ingress-web" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.shlink.metadata[0].name
   name = "shlink"
   service_name = "shlink-web"

@@ -223,6 +223,7 @@ resource "vault_identity_group_alias" "admins" {

 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.vault.metadata[0].name
   name = "vault"
   service_name = "vault-active"

@@ -189,6 +189,7 @@ resource "kubernetes_service" "vaultwarden" {

 module "ingress" {
   source = "../../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.vaultwarden.metadata[0].name
   name = "vaultwarden"
   tls_secret_name = var.tls_secret_name

@@ -105,6 +105,7 @@ resource "helm_release" "goldilocks" {
 # -----------------------------------------------------------------------------
 module "ingress" {
   source = "../../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.vpa.metadata[0].name
   name = "goldilocks"
   service_name = "goldilocks-dashboard"

@@ -193,6 +193,7 @@ resource "kubernetes_service" "wealthfolio" {

 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.wealthfolio.metadata[0].name
   name = "wealthfolio"
   tls_secret_name = var.tls_secret_name

@@ -259,6 +259,7 @@ module "ingress" {
   namespace = kubernetes_namespace.webhook-handler.metadata[0].name
   name = "webhook-handler"
   host = "webhook"
+  dns_type = "non-proxied"
   tls_secret_name = var.tls_secret_name
   extra_annotations = {
     "gethomepage.dev/enabled" = "true"

@@ -317,6 +317,7 @@ resource "kubernetes_cron_job_v1" "vault_secret_sync" {

 module "ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "non-proxied"
   namespace = kubernetes_namespace.woodpecker.metadata[0].name
   name = "ci"
   service_name = "woodpecker-server"

@@ -210,6 +210,7 @@ resource "kubernetes_service" "xray-reality" {

 module "ingress_ws" {
   source = "../../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.xray.metadata[0].name
   name = "xray-ws"
   service_name = "xray"
@@ -220,6 +221,7 @@ module "ingress_ws" {

 module "ingress_grpc" {
   source = "../../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.xray.metadata[0].name
   name = "xray-grpc"
   service_name = "xray"
@@ -234,6 +236,7 @@ module "ingress_grpc" {

 module "ingress_vless" {
   source = "../../../../modules/kubernetes/ingress_factory"
+  dns_type = "proxied"
   namespace = kubernetes_namespace.xray.metadata[0].name
   name = "xray-vless"
   service_name = "xray"

@@ -173,6 +173,7 @@ module "ingress" {
   name = "ytdlp"
   tls_secret_name = var.tls_secret_name
   host = "yt"
+  dns_type = "non-proxied"
   extra_annotations = {
     "gethomepage.dev/enabled" = "true"
     "gethomepage.dev/name" = "yt-dlp"
@@ -347,6 +348,7 @@ resource "kubernetes_service" "yt_highlights" {

 module "highlights_ingress" {
   source = "../../modules/kubernetes/ingress_factory"
+  dns_type = "non-proxied"
   namespace = kubernetes_namespace.ytdlp.metadata[0].name
   name = "yt-highlights"
   tls_secret_name = var.tls_secret_name

@@ -35,7 +35,7 @@ terraform {
   }
 }

-# Generate kubernetes + helm providers for K8s stacks.
+# Generate kubernetes + helm + cloudflare providers for all stacks.
 # The infra stack overrides this to add the proxmox provider.
 generate "k8s_providers" {
   path = "providers.tf"
@@ -47,6 +47,10 @@ terraform {
       source  = "hashicorp/vault"
       version = "~> 4.0"
     }
+    cloudflare = {
+      source  = "cloudflare/cloudflare"
+      version = "~> 4"
+    }
   }
 }

@@ -72,6 +76,25 @@ provider "vault" {
 EOF
 }

+# Generate Cloudflare provider config (separate file to avoid conflicts
+# with stacks that override providers.tf, e.g. infra stack).
+# DNS records are created per-service via ingress_factory's dns_type param.
+generate "cloudflare_provider" {
+  path      = "cloudflare_provider.tf"
+  if_exists = "overwrite_terragrunt"
+  contents  = <<EOF
+data "vault_kv_secret_v2" "cf_platform" {
+  mount = "secret"
+  name  = "platform"
+}
+
+provider "cloudflare" {
+  api_key = data.vault_kv_secret_v2.cf_platform.data["cloudflare_api_key"]
+  email   = "vbarzin@gmail.com"
+}
+EOF
+}
+
 # Generate shared tiers locals for all stacks.
 # Previously duplicated in 67+ stacks; now defined once here.
 generate "tiers" {
```