## Context

The e2e email-roundtrip probe (CronJob `email-roundtrip-monitor`) currently wraps `requests.put(PUSHGATEWAY, ...)` and `requests.get(UPTIME_KUMA, ...)` in bare `try/except` that only prints "Failed to push ..." on error. If Pushgateway is transiently unreachable (e.g., during a Prometheus Helm upgrade / HPA scale-down / brief network blip), metrics silently drop and downstream detection relies entirely on `EmailRoundtripStale` firing after 60 min of staleness. Single transient failures masquerade as data-plane breakage for up to an hour.

Target task: `code-n5l` — Add retry to probe Pushgateway + Uptime Kuma pushes.

## This change

- Extracts a `push_with_retry(label, func, url)` helper (sketched below) that performs 3 attempts with exponential backoff (1s, 2s, 4s). Treats HTTP 2xx as success, everything else as failure. On final failure, logs an explicit `ERROR:` line to stderr with the URL and either the last HTTP status or the exception repr — matches the existing `print(...)` logging style used throughout the heredoc (no stdlib `logging` dependency added).
- Replaces the two inline `try/requests.put/except print` blocks with calls to the helper. Pushgateway runs unconditionally; Uptime Kuma still only runs on round-trip success (same as before).
- Makes the exit code responsive to push outcome: the probe exits non-zero when the round-trip itself failed (unchanged), OR when BOTH pushes failed all retries on the success path. Single-endpoint push failure with the other succeeding keeps exit 0 — partial observability is preferred over noisy pod restarts from Kubernetes' Job controller.

## Behavior matrix

```
roundtrip | pushgw | kuma | exit | rationale
----------+--------+------+------+-------------------------------
success   | ok     | ok   | 0    | happy path (unchanged)
success   | fail   | ok   | 0    | one endpoint still has telemetry
success   | ok     | fail | 0    | one endpoint still has telemetry
success   | fail   | fail | 1    | NEW — total observability loss
fail      | ok     | -    | 1    | roundtrip failed (unchanged, Kuma skipped)
fail      | fail   | -    | 1    | roundtrip failed (unchanged, Kuma skipped)
```

## What is NOT in this change

- Alert thresholds (`EmailRoundtripStale` still 60m) — explicitly out of scope per the task description.
- `logging` stdlib adoption — the rest of the heredoc uses `print`, staying consistent.
- Moving the heredoc out of `main.tf` into a sidecar Python file — separate refactor.

## Reproduce locally

1. Point PUSHGATEWAY at a black hole:
   `kubectl -n mailserver set env cronjob/email-roundtrip-monitor \`
   `PUSHGATEWAY=http://nope.invalid:9091/metrics/job/test`
2. Trigger a one-shot job:
   `kubectl -n mailserver create job --from=cronjob/email-roundtrip-monitor probe-test`
3. Expected in logs:
   - 3 attempts, each ~1s/2s/4s apart
   - `ERROR: Failed to push to Pushgateway after 3 attempts: url=... exception=...`
   - Uptime Kuma push still succeeds (round-trip ok) → exit 0
4. Flip UPTIME_KUMA_URL to also fail (edit heredoc or DNS-poison): expect exit 1 + two ERROR lines.

## Automated

- `python3 -c "import ast; ast.parse(open('/tmp/probe.py').read())"` → OK (heredoc extracts cleanly).
- `terraform fmt -check -recursive modules/mailserver/` → no diff.

Closes: code-n5l

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
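For illustration, a minimal sketch of what the helper could look like (not the exact code in the `main.tf` heredoc; `PUSHGATEWAY`, `UPTIME_KUMA_URL`, and `payload` below are placeholders for the probe's real variables):

```python
import sys
import time

import requests


def push_with_retry(label, func, url):
    """Call func(url) up to 3 times with exponential backoff (1s, 2s, 4s).

    Any HTTP 2xx counts as success. On final failure, print an ERROR: line
    to stderr with the URL and the last HTTP status or exception repr.
    """
    last_error = None
    for attempt in range(3):
        try:
            resp = func(url)
            if 200 <= resp.status_code < 300:
                return True
            last_error = f"status={resp.status_code}"
        except Exception as exc:  # connection errors, DNS failures, timeouts
            last_error = f"exception={exc!r}"
        time.sleep(2 ** attempt)  # 1s, 2s, 4s between attempts
    print(
        f"ERROR: Failed to push to {label} after 3 attempts: url={url} {last_error}",
        file=sys.stderr,
    )
    return False


# Hypothetical wiring on the round-trip success path:
# pushgw_ok = push_with_retry("Pushgateway",
#                             lambda u: requests.put(u, data=payload, timeout=10),
#                             PUSHGATEWAY)
# kuma_ok = push_with_retry("Uptime Kuma",
#                           lambda u: requests.get(u, timeout=10),
#                           UPTIME_KUMA_URL)
# sys.exit(0 if (pushgw_ok or kuma_ok) else 1)
```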
This repo contains my infra-as-code sources.
My infrastructure is built with Terraform and Kubernetes, and CI/CD is handled by Woodpecker CI.
Read more by visiting my website: https://viktorbarzin.me
## Documentation

Full architecture documentation is available in `docs/` — covering networking, storage, security, monitoring, secrets, CI/CD, databases, and more.
## Adding a New User (Admin)
Adding a new namespace-owner to the cluster requires three steps — no code changes needed.
### 1. Authentik Group Assignment
In the Authentik admin UI, add the user to:
- `kubernetes-namespace-owners` group (grants OIDC group claim for K8s RBAC)
- `Headscale Users` group (if they need VPN access)
### 2. Vault KV Entry

Add a JSON entry to the `k8s_users` key under `secret/platform` in Vault:
"username": {
"role": "namespace-owner",
"email": "user@example.com",
"namespaces": ["username"],
"domains": ["myapp"],
"quota": {
"cpu_requests": "2",
"memory_requests": "4Gi",
"memory_limits": "8Gi",
"pods": "20"
}
}
- The `username` key must match the user's Forgejo username (for Woodpecker admin access)
- `namespaces` — K8s namespaces to create and grant admin access to
- `domains` — subdomains under `viktorbarzin.me` for Cloudflare DNS records
- `quota` — resource limits per namespace (defaults shown above)
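One way to apply this, assuming the KV v2 engine is mounted at `secret/` and `k8s_users` holds the JSON map of all users (the temp file path here is illustrative):

```shell
# Fetch the current user map, add the new "username" block locally, then write it back
vault kv get -field=k8s_users secret/platform > /tmp/k8s_users.json
# ... edit /tmp/k8s_users.json to add the new entry ...
vault kv patch secret/platform k8s_users=@/tmp/k8s_users.json
```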
### 3. Apply Stacks

```shell
vault login -method=oidc

cd stacks/vault && terragrunt apply --non-interactive
# Creates: namespace, Vault policy, identity entity, K8s deployer role

cd ../platform && terragrunt apply --non-interactive
# Creates: RBAC bindings, ResourceQuota, TLS secret, DNS records

cd ../woodpecker && terragrunt apply --non-interactive
# Adds user to Woodpecker admin list
```
### What Gets Auto-Generated
| Resource | Stack |
|---|---|
| Kubernetes namespace | vault |
| Vault policy (`namespace-owner-{user}`) | vault |
| Vault identity entity + OIDC alias | vault |
| K8s deployer Role + Vault K8s role | vault |
| RBAC RoleBinding (namespace admin) | platform |
| RBAC ClusterRoleBinding (cluster read-only) | platform |
| ResourceQuota | platform |
| TLS secret in namespace | platform |
| Cloudflare DNS records | platform |
| Woodpecker admin access | woodpecker |
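A few read-only checks can confirm the stacks created everything for a new user (replace `username` accordingly; these commands are illustrative, not part of the stacks):

```shell
# Namespace, quota, and RBAC created by the vault/platform stacks
kubectl get namespace username
kubectl -n username get resourcequota,rolebinding

# Vault policy generated for the user
vault policy read namespace-owner-username
```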
## New User Onboarding
If you've been added as a namespace-owner, follow these steps to get started.
### 1. Join the VPN

```shell
# Install Tailscale: https://tailscale.com/download
tailscale login --login-server https://headscale.viktorbarzin.me
# Send the registration URL to Viktor, wait for approval
ping 10.0.20.100  # verify connectivity
```
### 2. Install Tools

Run the setup script to install kubectl, kubelogin, Vault CLI, Terraform, and Terragrunt:

```shell
# macOS
bash <(curl -fsSL https://k8s-portal.viktorbarzin.me/setup/script?os=mac)

# Linux
bash <(curl -fsSL https://k8s-portal.viktorbarzin.me/setup/script?os=linux)
```
### 3. Authenticate

```shell
# Log into Vault (opens browser for SSO)
vault login -method=oidc

# Test kubectl (opens browser for OIDC login)
kubectl get pods -n YOUR_NAMESPACE
```
### 4. Deploy Your First App

```shell
# Clone the infra repo
git clone https://github.com/ViktorBarzin/infra.git && cd infra

# Copy the stack template
cp -r stacks/_template stacks/myapp
mv stacks/myapp/main.tf.example stacks/myapp/main.tf
# Edit main.tf — replace all <placeholders>

# Store secrets in Vault
vault kv put secret/YOUR_USERNAME/myapp DB_PASSWORD=secret123

# Submit a PR
git checkout -b feat/myapp
git add stacks/myapp/
git commit -m "add myapp stack"
git push -u origin feat/myapp
```

After review and merge, an admin runs `cd stacks/myapp && terragrunt apply`.
### 5. Set Up CI/CD (Optional)

Create `.woodpecker.yml` in your app's Forgejo repo:

```yaml
steps:
  - name: build
    image: woodpeckerci/plugin-docker-buildx
    settings:
      repo: YOUR_DOCKERHUB_USER/myapp
      tag: ["${CI_PIPELINE_NUMBER}", "latest"]
      username:
        from_secret: dockerhub-username
      password:
        from_secret: dockerhub-token
      platforms: linux/amd64

  - name: deploy
    image: hashicorp/vault:1.18.1
    commands:
      - export VAULT_ADDR=http://vault-active.vault.svc.cluster.local:8200
      - export VAULT_TOKEN=$(vault write -field=token auth/kubernetes/login
        role=ci jwt=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token))
      - KUBE_TOKEN=$(vault write -field=service_account_token
        kubernetes/creds/YOUR_NAMESPACE-deployer
        kubernetes_namespace=YOUR_NAMESPACE)
      - kubectl --server=https://kubernetes.default.svc
        --token=$KUBE_TOKEN
        --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        -n YOUR_NAMESPACE set image deployment/myapp
        myapp=YOUR_DOCKERHUB_USER/myapp:${CI_PIPELINE_NUMBER}
```
## Useful Commands

```shell
# Check your pods
kubectl get pods -n YOUR_NAMESPACE

# View quota usage
kubectl describe resourcequota -n YOUR_NAMESPACE

# Store/read secrets
vault kv put secret/YOUR_USERNAME/myapp KEY=value
vault kv get secret/YOUR_USERNAME/myapp

# Get a short-lived K8s deploy token
vault write kubernetes/creds/YOUR_NAMESPACE-deployer \
  kubernetes_namespace=YOUR_NAMESPACE
```
## Important Rules

- All changes go through Terraform — never `kubectl apply/edit/patch` directly
- Never put secrets in code — use Vault: `vault kv put secret/YOUR_USERNAME/...`
- Always use a PR — never push directly to master
- Docker images: build for `linux/amd64`, use versioned tags (not `:latest`)
## git-crypt setup

To decrypt the secrets, you need to set up git-crypt:

- Install git-crypt
- Set up GPG keys on the machine
- Run `git-crypt unlock`

This will unlock the secrets; they are locked again on commit.
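As a rough sketch (macOS shown; this assumes your GPG key was already granted access with `git-crypt add-gpg-user`):

```shell
# Install git-crypt and confirm the matching GPG private key is present
brew install git-crypt
gpg --list-secret-keys

# Decrypt the secrets in the working tree; they are re-encrypted on commit
git-crypt unlock
```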