Trading Bot Deployment — Implementation Plan
For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
Goal: Deploy the trading bot to Kubernetes, accessible at trading.viktorbarzin.me behind Authentik, with CI/CD via Woodpecker+Forgejo.
Architecture: 2 Kubernetes Deployments (frontend pod with dashboard+api-gateway containers; workers pod with 6 background service containers). Reuses cluster PostgreSQL, Redis (DB 4), and Ollama. CI builds Docker images on push to Forgejo, deploys via K8s API patch.
Tech Stack: Terraform/Terragrunt, Woodpecker CI, Forgejo, Docker Hub, Kubernetes, Authentik forward-auth.
Task 1: Create Forgejo Repository
Step 1: Create the repo on Forgejo
Open https://forgejo.viktorbarzin.me and create a new repository named trading-bot under your personal account. Leave it empty (no README, no .gitignore).
Step 2: Add Forgejo as a remote and push
cd /Users/viktorbarzin/code/trading-bot
git remote add forgejo https://forgejo.viktorbarzin.me/ViktorBarzin/trading-bot.git
git push forgejo master
Verify: git remote -v shows both origin and forgejo.
Step 3: Commit
No file changes — just remote configuration.
Task 2: Add Forgejo integration to Woodpecker CI
The Woodpecker deployment is currently configured only for GitHub. To add Forgejo as a second forge, add the WOODPECKER_FORGEJO env vars to the Helm values.
Files:
- Modify: /Users/viktorbarzin/code/infra/stacks/woodpecker/values.yaml
- Modify: /Users/viktorbarzin/code/infra/stacks/woodpecker/main.tf (add Forgejo variables)
- Modify: /Users/viktorbarzin/code/infra/terraform.tfvars (add Forgejo OAuth credentials)
Step 1: Create an OAuth2 application in Forgejo
Go to https://forgejo.viktorbarzin.me/user/settings/applications (or site admin settings). Create an OAuth2 application:
- Application Name: Woodpecker CI
- Redirect URI: https://ci.viktorbarzin.me/authorize
Note the Client ID and Client Secret.
Step 2: Add variables to terraform.tfvars
Add these lines to /Users/viktorbarzin/code/infra/terraform.tfvars:
# Woodpecker + Forgejo
woodpecker_forgejo_client_id = "<client-id-from-step-1>"
woodpecker_forgejo_client_secret = "<client-secret-from-step-1>"
woodpecker_forgejo_url = "https://forgejo.viktorbarzin.me"
Step 3: Add variables to Woodpecker main.tf
Add to the variables section of /Users/viktorbarzin/code/infra/stacks/woodpecker/main.tf:
variable "woodpecker_forgejo_client_id" { type = string }
variable "woodpecker_forgejo_client_secret" { type = string }
variable "woodpecker_forgejo_url" { type = string }
Update the templatefile call for values.yaml to pass these:
values = [
  templatefile("${path.module}/values.yaml", {
    github_client_id      = var.woodpecker_github_client_id
    github_client_secret  = var.woodpecker_github_client_secret
    agent_secret          = var.woodpecker_agent_secret
    db_password           = var.woodpecker_db_password
    postgresql_host       = var.postgresql_host
    forgejo_client_id     = var.woodpecker_forgejo_client_id
    forgejo_client_secret = var.woodpecker_forgejo_client_secret
    forgejo_url           = var.woodpecker_forgejo_url
  })
]
Step 4: Add Forgejo env vars to values.yaml
Add to the server.env section of /Users/viktorbarzin/code/infra/stacks/woodpecker/values.yaml:
WOODPECKER_FORGEJO: "true"
WOODPECKER_FORGEJO_CLIENT: "${forgejo_client_id}"
WOODPECKER_FORGEJO_SECRET: "${forgejo_client_secret}"
WOODPECKER_FORGEJO_URL: "${forgejo_url}"
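Terraform's templatefile() fills the ${...} placeholders above from the variables passed in main.tf. As a quick illustration (not part of the plan; Python's string.Template happens to use the same ${var} placeholder syntax, and the credential values here are stand-ins):

```python
from string import Template

# Model of the templatefile() substitution for the Forgejo env block.
values_fragment = Template(
    'WOODPECKER_FORGEJO: "true"\n'
    'WOODPECKER_FORGEJO_CLIENT: "${forgejo_client_id}"\n'
    'WOODPECKER_FORGEJO_SECRET: "${forgejo_client_secret}"\n'
    'WOODPECKER_FORGEJO_URL: "${forgejo_url}"\n'
)

rendered = values_fragment.substitute(
    forgejo_client_id="abc123",       # stand-in client ID
    forgejo_client_secret="s3cret",   # stand-in client secret
    forgejo_url="https://forgejo.viktorbarzin.me",
)
print(rendered)
```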
Step 5: Apply Woodpecker stack
cd /Users/viktorbarzin/code/infra/stacks/woodpecker && terragrunt apply --non-interactive
Step 6: Activate the repo in Woodpecker
Go to https://ci.viktorbarzin.me, log in, find the trading-bot repo and activate it.
Step 7: Commit infra changes
cd /Users/viktorbarzin/code/infra
git add stacks/woodpecker/values.yaml stacks/woodpecker/main.tf
git commit -m "[ci skip] add Forgejo integration to Woodpecker CI"
Task 3: Create Woodpecker CI pipeline
Files:
- Create: /Users/viktorbarzin/code/trading-bot/.woodpecker.yml
Step 1: Write the pipeline
Create /Users/viktorbarzin/code/trading-bot/.woodpecker.yml:
when:
  - event: push
    branch: master

clone:
  git:
    image: woodpeckerci/plugin-git
    settings:
      attempts: 5
      backoff: 10s

steps:
  - name: test
    image: python:3.12-slim
    commands:
      - python -m venv .venv
      - .venv/bin/pip install --quiet --upgrade pip
      - .venv/bin/pip install --quiet -e ".[api,news,sentiment,trading,backtester,dev]"
      - .venv/bin/pytest tests/ -v -m "not integration" --tb=short

  - name: build-service-image
    image: plugins/docker
    depends_on:
      - test
    environment:
      DOCKER_BUILDKIT: 1
    settings:
      username: viktorbarzin
      password:
        from_secret: dockerhub-token
      repo: viktorbarzin/trading-bot-service
      dockerfile: docker/Dockerfile.service
      context: .
      build_args:
        - EXTRAS=api,news,sentiment,trading,backtester
        - SERVICE_MODULE=api_gateway
      cache_from:
        - viktorbarzin/trading-bot-service:latest
      tags:
        - "build-${CI_PIPELINE_NUMBER}"

  - name: build-dashboard-image
    image: plugins/docker
    depends_on:
      - test
    environment:
      DOCKER_BUILDKIT: 1
    settings:
      username: viktorbarzin
      password:
        from_secret: dockerhub-token
      repo: viktorbarzin/trading-bot-dashboard
      dockerfile: docker/Dockerfile.dashboard
      context: .
      build_args:
        # K8s nginx config from Task 4 (proxy to localhost:8000)
        - NGINX_CONF=docker/nginx-k8s.conf
      cache_from:
        - viktorbarzin/trading-bot-dashboard:latest
      tags:
        - "build-${CI_PIPELINE_NUMBER}"

  - name: publish-images
    image: alpine
    depends_on:
      - build-service-image
      - build-dashboard-image
    environment:
      DOCKERHUB_TOKEN:
        from_secret: dockerhub-token
    commands:
      - apk add --no-cache skopeo
      # Tag service image
      - 'skopeo copy --src-creds "viktorbarzin:$DOCKERHUB_TOKEN" --dest-creds "viktorbarzin:$DOCKERHUB_TOKEN" "docker://docker.io/viktorbarzin/trading-bot-service:build-${CI_PIPELINE_NUMBER}" "docker://docker.io/viktorbarzin/trading-bot-service:${CI_PIPELINE_NUMBER}"'
      - 'skopeo copy --src-creds "viktorbarzin:$DOCKERHUB_TOKEN" --dest-creds "viktorbarzin:$DOCKERHUB_TOKEN" "docker://docker.io/viktorbarzin/trading-bot-service:build-${CI_PIPELINE_NUMBER}" "docker://docker.io/viktorbarzin/trading-bot-service:latest"'
      # Tag dashboard image
      - 'skopeo copy --src-creds "viktorbarzin:$DOCKERHUB_TOKEN" --dest-creds "viktorbarzin:$DOCKERHUB_TOKEN" "docker://docker.io/viktorbarzin/trading-bot-dashboard:build-${CI_PIPELINE_NUMBER}" "docker://docker.io/viktorbarzin/trading-bot-dashboard:${CI_PIPELINE_NUMBER}"'
      - 'skopeo copy --src-creds "viktorbarzin:$DOCKERHUB_TOKEN" --dest-creds "viktorbarzin:$DOCKERHUB_TOKEN" "docker://docker.io/viktorbarzin/trading-bot-dashboard:build-${CI_PIPELINE_NUMBER}" "docker://docker.io/viktorbarzin/trading-bot-dashboard:latest"'

  - name: update-deployment
    image: alpine
    depends_on:
      - publish-images
    commands:
      - apk add --no-cache curl jq
      - |
        TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
        RESTART_AT=$(date -u +%Y-%m-%dT%H:%M:%SZ)
        API="https://kubernetes:6443/apis/apps/v1/namespaces/trading-bot/deployments"
        for DEPLOY in trading-bot-frontend trading-bot-workers; do
          STATUS=$(curl -sfk "$API/$DEPLOY" \
            -H "Authorization: Bearer $TOKEN" \
            -H "Accept: application/json")
          # Build the containers patch — update all container images
          if [ "$DEPLOY" = "trading-bot-frontend" ]; then
            IMAGE_DASHBOARD="viktorbarzin/trading-bot-dashboard:${CI_PIPELINE_NUMBER}"
            IMAGE_SERVICE="viktorbarzin/trading-bot-service:${CI_PIPELINE_NUMBER}"
            PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$RESTART_AT\"}},\"spec\":{\"containers\":[{\"name\":\"dashboard\",\"image\":\"$IMAGE_DASHBOARD\"},{\"name\":\"api-gateway\",\"image\":\"$IMAGE_SERVICE\"}]}}}}"
          else
            IMAGE_SERVICE="viktorbarzin/trading-bot-service:${CI_PIPELINE_NUMBER}"
            PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$RESTART_AT\"}},\"spec\":{\"containers\":[{\"name\":\"news-fetcher\",\"image\":\"$IMAGE_SERVICE\"},{\"name\":\"sentiment-analyzer\",\"image\":\"$IMAGE_SERVICE\"},{\"name\":\"signal-generator\",\"image\":\"$IMAGE_SERVICE\"},{\"name\":\"trade-executor\",\"image\":\"$IMAGE_SERVICE\"},{\"name\":\"learning-engine\",\"image\":\"$IMAGE_SERVICE\"},{\"name\":\"market-data\",\"image\":\"$IMAGE_SERVICE\"}]}}}}"
          fi
          echo "Patching $DEPLOY..."
          curl -sf -X PATCH "$API/$DEPLOY" \
            -H "Authorization: Bearer $TOKEN" \
            -H "Content-Type: application/strategic-merge-patch+json" \
            -k -d "$PATCH" \
            | jq '{name: .metadata.name, generation: .metadata.generation}'
        done

  - name: verify-deploy
    image: alpine
    depends_on:
      - update-deployment
    commands:
      - apk add --no-cache curl jq
      - |
        TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
        BASE_API="https://kubernetes:6443/api/v1/namespaces/trading-bot/pods"
        for DEPLOY in trading-bot-frontend trading-bot-workers; do
          echo "Verifying $DEPLOY..."
          PODS_API="$BASE_API?labelSelector=app%3D$DEPLOY"
          FOUND=0
          for i in $(seq 1 60); do
            RAW=$(curl -sfk "$PODS_API" \
              -H "Authorization: Bearer $TOKEN" \
              -H "Accept: application/json")
            READY_COUNT=$(echo "$RAW" | jq '[.items[] | select(
              .status.phase == "Running" and
              ([.status.containerStatuses[]? | .ready] | all)
            )] | length' 2>/dev/null || echo 0)
            echo "  Attempt $i/60: $READY_COUNT pod(s) fully ready for $DEPLOY"
            if [ "$READY_COUNT" -gt 0 ] 2>/dev/null; then
              echo "$DEPLOY is live!"
              FOUND=1
              break
            fi
            sleep 5
          done
          if [ "$FOUND" -ne 1 ]; then
            echo "ERROR: $DEPLOY not ready within 5 minutes"
            exit 1
          fi
        done

  - name: slack
    image: woodpeckerci/plugin-slack
    settings:
      webhook:
        from_secret: slack-webhook-url
      channel: general
    when:
      - status: [success, failure]
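The update-deployment step assembles its strategic-merge-patch body inline with escaped quotes, which is hard to read. The same payload can be sketched in Python to show its shape (pipeline number 42 is a stand-in; this mirrors the workers branch of the script above, it is not the script itself):

```python
import json
from datetime import datetime, timezone

# Stand-in for ${CI_PIPELINE_NUMBER}
pipeline_number = "42"
image = f"viktorbarzin/trading-bot-service:{pipeline_number}"
workers = ["news-fetcher", "sentiment-analyzer", "signal-generator",
           "trade-executor", "learning-engine", "market-data"]

patch = {
    "spec": {
        "template": {
            "metadata": {
                # Bumping this annotation forces a rollout even if the
                # image tag happens to repeat.
                "annotations": {
                    "kubectl.kubernetes.io/restartedAt":
                        datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
                },
            },
            # Strategic merge patch matches containers by name, so only
            # the image field of each listed container is replaced.
            "spec": {
                "containers": [{"name": n, "image": image} for n in workers],
            },
        },
    },
}
body = json.dumps(patch)
print(body)
```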
Step 2: Commit
cd /Users/viktorbarzin/code/trading-bot
git add .woodpecker.yml
git commit -m "add Woodpecker CI pipeline"
Task 4: Create K8s nginx.conf for production
The existing docker/nginx.conf proxies to api-gateway:8000 (docker-compose hostname). In K8s, both containers share a pod, so nginx needs to proxy to localhost:8000.
Files:
- Create: /Users/viktorbarzin/code/trading-bot/docker/nginx-k8s.conf
Step 1: Create the production nginx config
Create /Users/viktorbarzin/code/trading-bot/docker/nginx-k8s.conf — same as nginx.conf but replacing all api-gateway:8000 with localhost:8000:
# nginx configuration for K8s deployment.
# Dashboard + api-gateway share a pod — proxy to localhost:8000.
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /api/auth/ {
        proxy_pass http://localhost:8000/auth/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /api/ {
        proxy_pass http://localhost:8000/api/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /auth/ {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /health {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /ws {
        proxy_pass http://localhost:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 86400;
    }
}
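Note the proxy_pass URI semantics at work in this config: when the target carries a URI part (as in /auth/), nginx replaces the matched location prefix with it; when it has none (plain http://localhost:8000), the request URI is forwarded unchanged. A small Python model of that mapping (an illustrative helper, not nginx itself):

```python
def proxied_uri(request_uri: str, location: str, proxy_pass: str) -> str:
    """Model nginx proxy_pass URI mapping for prefix locations.

    If proxy_pass carries a URI part, the matched location prefix is
    replaced by it; otherwise the request URI is forwarded as-is.
    """
    # Split proxy_pass into scheme://host[:port] and an optional URI part.
    scheme_end = proxy_pass.index("://") + 3
    slash = proxy_pass.find("/", scheme_end)
    if slash == -1:  # no URI part, e.g. http://localhost:8000
        return request_uri
    upstream_uri = proxy_pass[slash:]
    return upstream_uri + request_uri[len(location):]

# /api/auth/login is rewritten: prefix /api/auth/ becomes /auth/
print(proxied_uri("/api/auth/login", "/api/auth/", "http://localhost:8000/auth/"))  # -> /auth/login
# /ws passes through untouched (proxy_pass has no URI part)
print(proxied_uri("/ws", "/ws", "http://localhost:8000"))  # -> /ws
```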
Step 2: Update Dockerfile.dashboard to accept a build arg for which nginx config to use
Modify /Users/viktorbarzin/code/trading-bot/docker/Dockerfile.dashboard to add a build arg:
# ...existing content...
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/default.conf
ARG NGINX_CONF=docker/nginx.conf
COPY ${NGINX_CONF} /etc/nginx/conf.d/default.conf
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Then, in the Woodpecker build-dashboard-image step, pass build_args: - NGINX_CONF=docker/nginx-k8s.conf so K8s images get the localhost proxy config while local builds keep the default docker/nginx.conf.
Step 3: Commit
cd /Users/viktorbarzin/code/trading-bot
git add docker/nginx-k8s.conf docker/Dockerfile.dashboard
git commit -m "add K8s nginx config with localhost proxy"
Task 5: Add secrets to terraform.tfvars
Files:
- Modify: /Users/viktorbarzin/code/infra/terraform.tfvars
Step 1: Add trading bot secrets
Add to /Users/viktorbarzin/code/infra/terraform.tfvars:
# Trading Bot
trading_bot_db_password = "<generate-a-password>"
trading_bot_alpaca_api_key = "PKA3BZ2YE6GCBXG7QO36YMR5JM"
trading_bot_alpaca_secret_key = "8h7rPPtdFTEnFskEH7ue87JvaxAnq1UQTw886hCm3MmZ"
trading_bot_jwt_secret = "76774bf1e07a173335313940d8201b6a5a7d43844973ebcd088cf3a0540db557"
trading_bot_reddit_client_id = "local_dev"
trading_bot_reddit_client_secret = "local_dev"
trading_bot_alpha_vantage_api_key = "M0I3TWB6VKU0UF51"
trading_bot_fmp_api_key = "34zqbQFeRxYvPtzp3Y5QLKPVPztkZyfK"
Generate the DB password: python -c "import secrets; print(secrets.token_hex(16))"
Step 2: Commit (file is git-crypt encrypted)
cd /Users/viktorbarzin/code/infra
git add terraform.tfvars
git commit -m "[ci skip] add trading bot secrets"
Task 6: Create Terraform stack for trading-bot
Files:
- Create: /Users/viktorbarzin/code/infra/stacks/trading-bot/terragrunt.hcl
- Create: /Users/viktorbarzin/code/infra/stacks/trading-bot/main.tf
- Create: /Users/viktorbarzin/code/infra/stacks/trading-bot/secrets (symlink)
Step 1: Create directory and terragrunt.hcl
mkdir -p /Users/viktorbarzin/code/infra/stacks/trading-bot
ln -s ../../secrets /Users/viktorbarzin/code/infra/stacks/trading-bot/secrets
Create /Users/viktorbarzin/code/infra/stacks/trading-bot/terragrunt.hcl:
include "root" {
  path = find_in_parent_folders()
}

dependency "platform" {
  config_path  = "../platform"
  skip_outputs = true
}
Step 2: Create main.tf
Create /Users/viktorbarzin/code/infra/stacks/trading-bot/main.tf:
# ─────────────────────────────────────────────────────────────────────────────
# Variables
# ─────────────────────────────────────────────────────────────────────────────
variable "tls_secret_name" { type = string }
variable "nfs_server" { type = string }
variable "postgresql_host" { type = string }
variable "redis_host" { type = string }
variable "ollama_host" { type = string }
variable "dbaas_postgresql_root_password" { type = string }
variable "trading_bot_db_password" { type = string }
variable "trading_bot_alpaca_api_key" { type = string }
variable "trading_bot_alpaca_secret_key" { type = string }
variable "trading_bot_jwt_secret" { type = string }
variable "trading_bot_reddit_client_id" { type = string }
variable "trading_bot_reddit_client_secret" { type = string }
variable "trading_bot_alpha_vantage_api_key" { type = string }
variable "trading_bot_fmp_api_key" { type = string }
# ─────────────────────────────────────────────────────────────────────────────
# Namespace
# ─────────────────────────────────────────────────────────────────────────────
resource "kubernetes_namespace" "trading_bot" {
metadata {
name = "trading-bot"
labels = {
tier = local.tiers.edge
"resource-governance/custom-quota" = "true"
}
}
}
module "tls_secret" {
source = "../../modules/kubernetes/setup_tls_secret"
namespace = kubernetes_namespace.trading_bot.metadata[0].name
tls_secret_name = var.tls_secret_name
}
# ─────────────────────────────────────────────────────────────────────────────
# Database init job — create user, database, and attempt TimescaleDB extension
# ─────────────────────────────────────────────────────────────────────────────
resource "kubernetes_job" "db_init" {
metadata {
name = "trading-bot-db-init"
namespace = kubernetes_namespace.trading_bot.metadata[0].name
}
spec {
template {
metadata {}
spec {
restart_policy = "Never"
container {
name = "db-init"
image = "postgres:16-alpine"
command = ["sh", "-c", <<-EOT
set -e
# Create role if not exists
PGPASSWORD='${var.dbaas_postgresql_root_password}' psql -h ${var.postgresql_host} -U root -tc \
"SELECT 1 FROM pg_roles WHERE rolname='trading'" | grep -q 1 || \
PGPASSWORD='${var.dbaas_postgresql_root_password}' psql -h ${var.postgresql_host} -U root -c \
"CREATE ROLE trading WITH LOGIN PASSWORD '${var.trading_bot_db_password}'"
# Create database if not exists
PGPASSWORD='${var.dbaas_postgresql_root_password}' psql -h ${var.postgresql_host} -U root -tc \
"SELECT 1 FROM pg_database WHERE datname='trading'" | grep -q 1 || \
PGPASSWORD='${var.dbaas_postgresql_root_password}' psql -h ${var.postgresql_host} -U root -c \
"CREATE DATABASE trading OWNER trading"
# Grant privileges
PGPASSWORD='${var.dbaas_postgresql_root_password}' psql -h ${var.postgresql_host} -U root -c \
"GRANT ALL PRIVILEGES ON DATABASE trading TO trading"
# Try to enable TimescaleDB (may fail if not installed — that's OK)
PGPASSWORD='${var.dbaas_postgresql_root_password}' psql -h ${var.postgresql_host} -U root -d trading -c \
"CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE" 2>/dev/null || \
echo "WARNING: TimescaleDB extension not available — hypertables will not be created"
EOT
]
}
}
}
backoff_limit = 3
}
wait_for_completion = true
timeouts {
create = "2m"
}
}
# ─────────────────────────────────────────────────────────────────────────────
# Migrations job — run alembic upgrade head
# ─────────────────────────────────────────────────────────────────────────────
resource "kubernetes_job" "migrations" {
depends_on = [kubernetes_job.db_init]
metadata {
name = "trading-bot-migrations"
namespace = kubernetes_namespace.trading_bot.metadata[0].name
}
spec {
template {
metadata {}
spec {
restart_policy = "Never"
container {
name = "migrations"
image = "viktorbarzin/trading-bot-service:latest"
command = ["python", "-m", "alembic", "upgrade", "head"]
env {
name = "TRADING_DATABASE_URL"
value = "postgresql+asyncpg://trading:${var.trading_bot_db_password}@${var.postgresql_host}:5432/trading"
}
env {
name = "TRADING_REDIS_URL"
value = "redis://${var.redis_host}:6379/4"
}
}
}
}
backoff_limit = 3
}
wait_for_completion = true
timeouts {
create = "5m"
}
}
# ─────────────────────────────────────────────────────────────────────────────
# Shared environment variables (local for DRY)
# ─────────────────────────────────────────────────────────────────────────────
locals {
common_env = {
TRADING_DATABASE_URL = "postgresql+asyncpg://trading:${var.trading_bot_db_password}@${var.postgresql_host}:5432/trading"
TRADING_REDIS_URL = "redis://${var.redis_host}:6379/4"
TRADING_LOG_LEVEL = "INFO"
TRADING_ALPACA_API_KEY = var.trading_bot_alpaca_api_key
TRADING_ALPACA_SECRET_KEY = var.trading_bot_alpaca_secret_key
TRADING_ALPACA_BASE_URL = "https://paper-api.alpaca.markets"
TRADING_PAPER_TRADING = "true"
TRADING_JWT_SECRET_KEY = var.trading_bot_jwt_secret
TRADING_REDDIT_CLIENT_ID = var.trading_bot_reddit_client_id
TRADING_REDDIT_CLIENT_SECRET = var.trading_bot_reddit_client_secret
TRADING_REDDIT_USER_AGENT = "trading-bot/0.1"
TRADING_OLLAMA_HOST = "http://${var.ollama_host}:11434"
TRADING_OLLAMA_MODEL = "gemma3"
TRADING_WATCHLIST = "[\"AAPL\",\"TSLA\",\"NVDA\",\"MSFT\",\"GOOGL\"]"
TRADING_BAR_TIMEFRAME = "5Min"
TRADING_POLL_INTERVAL_SECONDS = "60"
TRADING_HISTORICAL_BARS = "100"
TRADING_SNAPSHOT_INTERVAL_SECONDS = "60"
TRADING_ALPHA_VANTAGE_API_KEY = var.trading_bot_alpha_vantage_api_key
TRADING_FMP_API_KEY = var.trading_bot_fmp_api_key
TRADING_FUNDAMENTALS_CACHE_TTL_HOURS = "24"
TRADING_RP_ID = "trading.viktorbarzin.me"
TRADING_RP_NAME = "Trading Bot"
TRADING_RP_ORIGIN = "https://trading.viktorbarzin.me"
TRADING_CORS_ORIGINS = "[\"https://trading.viktorbarzin.me\"]"
}
}
# ─────────────────────────────────────────────────────────────────────────────
# Deployment: frontend (dashboard + api-gateway)
# ─────────────────────────────────────────────────────────────────────────────
resource "kubernetes_deployment" "frontend" {
depends_on = [kubernetes_job.migrations]
metadata {
name = "trading-bot-frontend"
namespace = kubernetes_namespace.trading_bot.metadata[0].name
labels = {
app = "trading-bot-frontend"
tier = local.tiers.edge
}
}
spec {
replicas = 1
strategy {
type = "RollingUpdate"
rolling_update {
max_unavailable = 0
max_surge = 1
}
}
selector {
match_labels = {
app = "trading-bot-frontend"
}
}
template {
metadata {
labels = {
app = "trading-bot-frontend"
}
}
spec {
# Container 1: Dashboard (nginx)
container {
name = "dashboard"
image = "viktorbarzin/trading-bot-dashboard:latest"
image_pull_policy = "Always"
port {
name = "http"
container_port = 80
protocol = "TCP"
}
resources {
requests = {
cpu = "10m"
memory = "32Mi"
}
limits = {
cpu = "200m"
memory = "128Mi"
}
}
}
# Container 2: API Gateway
container {
name = "api-gateway"
image = "viktorbarzin/trading-bot-service:latest"
image_pull_policy = "Always"
command = ["python", "-m", "services.api_gateway.main"]
port {
name = "api"
container_port = 8000
protocol = "TCP"
}
dynamic "env" {
for_each = local.common_env
content {
name = env.key
value = env.value
}
}
resources {
requests = {
cpu = "50m"
memory = "128Mi"
}
limits = {
cpu = "1000m"
memory = "512Mi"
}
}
}
}
}
}
lifecycle {
ignore_changes = [
spec[0].template[0].spec[0].container[0].image,
spec[0].template[0].spec[0].container[1].image,
]
}
}
# ─────────────────────────────────────────────────────────────────────────────
# Deployment: workers (6 background services)
# ─────────────────────────────────────────────────────────────────────────────
resource "kubernetes_deployment" "workers" {
depends_on = [kubernetes_job.migrations]
metadata {
name = "trading-bot-workers"
namespace = kubernetes_namespace.trading_bot.metadata[0].name
labels = {
app = "trading-bot-workers"
tier = local.tiers.edge
}
}
spec {
replicas = 1
strategy {
type = "Recreate"
}
selector {
match_labels = {
app = "trading-bot-workers"
}
}
template {
metadata {
labels = {
app = "trading-bot-workers"
}
}
spec {
container {
name = "news-fetcher"
image = "viktorbarzin/trading-bot-service:latest"
image_pull_policy = "Always"
command = ["python", "-m", "services.news_fetcher.main"]
dynamic "env" {
for_each = local.common_env
content {
name = env.key
value = env.value
}
}
resources {
requests = {
cpu = "10m"
memory = "64Mi"
}
limits = {
cpu = "500m"
memory = "256Mi"
}
}
}
container {
name = "sentiment-analyzer"
image = "viktorbarzin/trading-bot-service:latest"
image_pull_policy = "Always"
command = ["python", "-m", "services.sentiment_analyzer.main"]
dynamic "env" {
for_each = local.common_env
content {
name = env.key
value = env.value
}
}
resources {
requests = {
cpu = "100m"
memory = "512Mi"
}
limits = {
cpu = "2000m"
memory = "2Gi"
}
}
}
container {
name = "signal-generator"
image = "viktorbarzin/trading-bot-service:latest"
image_pull_policy = "Always"
command = ["python", "-m", "services.signal_generator.main"]
dynamic "env" {
for_each = local.common_env
content {
name = env.key
value = env.value
}
}
resources {
requests = {
cpu = "10m"
memory = "64Mi"
}
limits = {
cpu = "500m"
memory = "256Mi"
}
}
}
container {
name = "trade-executor"
image = "viktorbarzin/trading-bot-service:latest"
image_pull_policy = "Always"
command = ["python", "-m", "services.trade_executor.main"]
dynamic "env" {
for_each = local.common_env
content {
name = env.key
value = env.value
}
}
resources {
requests = {
cpu = "10m"
memory = "64Mi"
}
limits = {
cpu = "500m"
memory = "256Mi"
}
}
}
container {
name = "learning-engine"
image = "viktorbarzin/trading-bot-service:latest"
image_pull_policy = "Always"
command = ["python", "-m", "services.learning_engine.main"]
dynamic "env" {
for_each = local.common_env
content {
name = env.key
value = env.value
}
}
resources {
requests = {
cpu = "10m"
memory = "64Mi"
}
limits = {
cpu = "500m"
memory = "256Mi"
}
}
}
container {
name = "market-data"
image = "viktorbarzin/trading-bot-service:latest"
image_pull_policy = "Always"
command = ["python", "-m", "services.market_data.main"]
dynamic "env" {
for_each = local.common_env
content {
name = env.key
value = env.value
}
}
resources {
requests = {
cpu = "10m"
memory = "64Mi"
}
limits = {
cpu = "500m"
memory = "256Mi"
}
}
}
}
}
}
lifecycle {
ignore_changes = [
spec[0].template[0].spec[0].container[0].image,
spec[0].template[0].spec[0].container[1].image,
spec[0].template[0].spec[0].container[2].image,
spec[0].template[0].spec[0].container[3].image,
spec[0].template[0].spec[0].container[4].image,
spec[0].template[0].spec[0].container[5].image,
]
}
}
# ─────────────────────────────────────────────────────────────────────────────
# Service
# ─────────────────────────────────────────────────────────────────────────────
resource "kubernetes_service" "frontend" {
metadata {
name = "trading-bot-frontend"
namespace = kubernetes_namespace.trading_bot.metadata[0].name
labels = { app = "trading-bot-frontend" }
}
spec {
selector = { app = "trading-bot-frontend" }
port {
port = 80
target_port = 80
}
}
}
# ─────────────────────────────────────────────────────────────────────────────
# Ingress — protected by Authentik
# ─────────────────────────────────────────────────────────────────────────────
module "ingress" {
source = "../../modules/kubernetes/ingress_factory"
namespace = kubernetes_namespace.trading_bot.metadata[0].name
name = "trading"
service_name = "trading-bot-frontend"
tls_secret_name = var.tls_secret_name
protected = true
}
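TRADING_WATCHLIST and TRADING_CORS_ORIGINS above are JSON documents embedded as HCL strings, hence the escaped quotes. Assuming the services decode these settings with a JSON parser (an assumption of this sketch, not stated by the plan), a quick sanity check of the rendered values:

```python
import json

# The escaped HCL strings in common_env render to these literal env values.
watchlist = '["AAPL","TSLA","NVDA","MSFT","GOOGL"]'
cors_origins = '["https://trading.viktorbarzin.me"]'

# Both must parse to lists of strings; a stray escape or missing quote
# would raise json.JSONDecodeError at service startup instead.
print(json.loads(watchlist))
print(json.loads(cors_origins))
```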
Step 3: Commit
cd /Users/viktorbarzin/code/infra
git add stacks/trading-bot/
git commit -m "[ci skip] add trading-bot Terraform stack"
Task 7: Add Cloudflare DNS record
Files:
- Modify: /Users/viktorbarzin/code/infra/terraform.tfvars (add trading to Cloudflare DNS entries)
Step 1: Find the Cloudflare DNS section in terraform.tfvars
Look for the cloudflare_dns_records or similar variable and add "trading" to the list. This depends on how DNS records are structured in the tfvars — check the existing entries for the pattern.
Step 2: Apply platform stack (DNS is managed there)
cd /Users/viktorbarzin/code/infra/stacks/platform && terragrunt apply --non-interactive
Step 3: Commit
cd /Users/viktorbarzin/code/infra
git add terraform.tfvars
git commit -m "[ci skip] add trading.viktorbarzin.me DNS record"
Task 8: Add NFS export for trading-bot
Step 1: Add the NFS path
Add /mnt/main/trading-bot to /Users/viktorbarzin/code/infra/secrets/nfs_directories.txt (keep sorted).
Step 2: Run the NFS exports script
cd /Users/viktorbarzin/code/infra/secrets && bash nfs_exports.sh
Step 3: Commit
cd /Users/viktorbarzin/code/infra
git add secrets/nfs_directories.txt
git commit -m "[ci skip] add trading-bot NFS export"
Task 9: Build and push initial Docker images
Before Terraform can create the deployments, the images must exist on Docker Hub.
Step 1: Build and push the service image
cd /Users/viktorbarzin/code/trading-bot
docker build -f docker/Dockerfile.service \
--build-arg EXTRAS="api,news,sentiment,trading,backtester" \
--build-arg SERVICE_MODULE="api_gateway" \
-t viktorbarzin/trading-bot-service:latest .
docker push viktorbarzin/trading-bot-service:latest
Step 2: Build and push the dashboard image
cd /Users/viktorbarzin/code/trading-bot
docker build -f docker/Dockerfile.dashboard \
--build-arg NGINX_CONF=docker/nginx-k8s.conf \
-t viktorbarzin/trading-bot-dashboard:latest .
docker push viktorbarzin/trading-bot-dashboard:latest
Task 10: Apply the Terraform stack
Step 1: Apply
cd /Users/viktorbarzin/code/infra/stacks/trading-bot && terragrunt apply --non-interactive
Step 2: Verify pods are running
kubectl --kubeconfig /Users/viktorbarzin/code/infra/config get pods -n trading-bot
Expected: trading-bot-frontend-* and trading-bot-workers-* pods in Running state.
Step 3: Verify ingress
kubectl --kubeconfig /Users/viktorbarzin/code/infra/config get ingress -n trading-bot
Expected: Ingress for trading.viktorbarzin.me.
Step 4: Test access
Open https://trading.viktorbarzin.me — should redirect to Authentik login. After authenticating, the trading bot dashboard should load.
Task 11: Configure Woodpecker secrets
The CI pipeline needs the dockerhub-token and slack-webhook-url secrets. These may already exist as organization-level secrets in Woodpecker.
Step 1: Check existing secrets
Go to https://ci.viktorbarzin.me → trading-bot repo → Settings → Secrets.
Step 2: Add secrets if missing
- dockerhub-token: Your Docker Hub access token
- slack-webhook-url: Slack webhook URL for notifications
Task 12: Push to Forgejo and verify CI
Step 1: Push all changes to Forgejo
cd /Users/viktorbarzin/code/trading-bot
git push forgejo master
Step 2: Monitor the pipeline
Go to https://ci.viktorbarzin.me and watch the trading-bot pipeline. It should:
- Run tests
- Build both Docker images
- Publish to Docker Hub
- Patch K8s deployments
- Verify pod readiness
- Send Slack notification
Step 3: Verify the app is accessible
Open https://trading.viktorbarzin.me and confirm the dashboard loads after Authentik authentication.