Covers: foundation, docker infra, models, schemas, broker abstraction, news pipeline, sentiment analysis, strategies, signal generation, trade execution, learning engine, backtesting, API gateway with passkey auth, React dashboard, containerization, and integration tests.
Trading Bot Implementation Plan
For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
Goal: Build an automated stock trading bot that combines news sentiment analysis with technical strategies, learns from outcomes, and provides a real-time dashboard.
Architecture: Event-driven Python microservices communicating via Redis Streams, with PostgreSQL+TimescaleDB for persistence, Alpaca for brokerage, and a React/TypeScript dashboard. See docs/plans/2026-02-22-trading-bot-design.md for full design.
Tech Stack: Python 3.12, FastAPI, SQLAlchemy (async), Redis Streams, PostgreSQL+TimescaleDB, React 18, TypeScript, Tailwind CSS, alpaca-py, transformers (FinBERT), Ollama, OpenTelemetry, py-webauthn
Phase 1: Foundation
Task 1: Python Monorepo Setup
Files:
- Create: pyproject.toml
- Create: shared/__init__.py
- Create: shared/config.py
- Create: shared/redis_streams.py
- Create: shared/telemetry.py
- Create: tests/__init__.py
Step 1: Create pyproject.toml
Single pyproject.toml at the repo root. Use [project.optional-dependencies] groups per service so each service only installs what it needs. Core deps shared by all: sqlalchemy[asyncio], asyncpg, redis, pydantic, pydantic-settings, opentelemetry-sdk, opentelemetry-exporter-prometheus, opentelemetry-api.
[project]
name = "trading-bot"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "sqlalchemy[asyncio]>=2.0",
    "asyncpg>=0.29",
    "redis>=5.0",
    "pydantic>=2.0",
    "pydantic-settings>=2.0",
    "opentelemetry-sdk>=1.20",
    "opentelemetry-exporter-prometheus>=0.45b0",
    "opentelemetry-api>=1.20",
    "alembic>=1.13",
]
[project.optional-dependencies]
api = ["fastapi>=0.110", "uvicorn[standard]>=0.27", "websockets>=12.0", "py-webauthn>=2.0", "pyjwt[crypto]>=2.8"]
news = ["feedparser>=6.0", "praw>=7.7", "httpx>=0.27"]
sentiment = ["transformers>=4.38", "torch>=2.2", "ollama>=0.1"]
trading = ["alpaca-py>=0.21"]
backtester = ["numpy>=1.26", "pandas>=2.2"]
dev = ["pytest>=8.0", "pytest-asyncio>=0.23", "pytest-cov>=4.1", "ruff>=0.3", "mypy>=1.8"]
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
[tool.ruff]
line-length = 120
target-version = "py312"
Step 2: Create shared config module
shared/config.py — Pydantic Settings class for all shared config (database URL, Redis URL, etc). Each service extends this with its own settings.
from pydantic_settings import BaseSettings

class BaseConfig(BaseSettings):
    database_url: str = "postgresql+asyncpg://trading:trading@localhost:5432/trading"
    redis_url: str = "redis://localhost:6379/0"
    log_level: str = "INFO"
    otel_service_name: str = "trading-bot"
    otel_metrics_port: int = 9090

    model_config = {"env_prefix": "TRADING_"}
Step 3: Create Redis Streams helper
shared/redis_streams.py — Thin wrapper around redis-py Streams for publishing and consuming. Consumer groups, auto-ack, deserialization.
import json
import logging
from typing import AsyncIterator

from redis.asyncio import Redis
from redis.exceptions import ResponseError

logger = logging.getLogger(__name__)

class StreamPublisher:
    def __init__(self, redis: Redis, stream: str):
        self.redis = redis
        self.stream = stream

    async def publish(self, data: dict) -> str:
        msg_id = await self.redis.xadd(self.stream, {"data": json.dumps(data)})
        return msg_id

class StreamConsumer:
    def __init__(self, redis: Redis, stream: str, group: str, consumer: str):
        self.redis = redis
        self.stream = stream
        self.group = group
        self.consumer = consumer

    async def ensure_group(self) -> None:
        try:
            await self.redis.xgroup_create(self.stream, self.group, id="0", mkstream=True)
        except ResponseError as exc:
            if "BUSYGROUP" not in str(exc):
                raise  # only swallow "consumer group already exists"

    async def consume(self, batch_size: int = 10, block_ms: int = 5000) -> AsyncIterator[tuple[str, dict]]:
        await self.ensure_group()
        while True:
            messages = await self.redis.xreadgroup(
                self.group, self.consumer, {self.stream: ">"}, count=batch_size, block=block_ms
            )
            if not messages:
                continue  # blocking read timed out with no new entries
            for _stream, entries in messages:
                for msg_id, fields in entries:
                    data = json.loads(fields[b"data"])
                    yield msg_id, data
                    await self.redis.xack(self.stream, self.group, msg_id)
Step 4: Create OpenTelemetry helper
shared/telemetry.py — Sets up meter provider with Prometheus exporter, returns a meter. Each service calls setup_telemetry("service-name") at startup and starts a /metrics HTTP endpoint.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.exporter.prometheus import PrometheusMetricReader
from prometheus_client import start_http_server

def setup_telemetry(service_name: str, metrics_port: int = 9090) -> metrics.Meter:
    reader = PrometheusMetricReader()
    provider = MeterProvider(metric_readers=[reader])
    metrics.set_meter_provider(provider)
    start_http_server(metrics_port)
    return metrics.get_meter(service_name)
Step 5: Write tests for Redis Streams helper
# tests/test_redis_streams.py
import pytest
from unittest.mock import AsyncMock

from shared.redis_streams import StreamPublisher

@pytest.mark.asyncio
async def test_publisher_publishes_json():
    redis = AsyncMock()
    redis.xadd = AsyncMock(return_value=b"1-0")
    pub = StreamPublisher(redis, "test:stream")
    msg_id = await pub.publish({"ticker": "AAPL", "score": 0.8})
    redis.xadd.assert_called_once()
    assert msg_id == b"1-0"
Run: python -m pytest tests/test_redis_streams.py -v
Step 6: Commit
git add pyproject.toml shared/ tests/
git commit -m "feat: project foundation — monorepo setup, shared config, redis streams, telemetry"
Task 2: Docker Compose Infrastructure
Files:
- Create: docker-compose.yml
- Create: .env.example
- Create: .gitignore
Step 1: Create docker-compose.yml with infrastructure services
Start with just the infrastructure containers (postgres+timescaledb, redis, ollama). Application services added later.
services:
  postgres:
    image: timescale/timescaledb:latest-pg16
    environment:
      POSTGRES_USER: trading
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-trading}
      POSTGRES_DB: trading
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U trading"]
      interval: 5s
      timeout: 5s
      retries: 5
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_models:/root/.ollama
volumes:
  pgdata:
  redisdata:
  ollama_models:
Step 2: Create .env.example and .gitignore
.env.example: document all env vars with safe defaults.
.gitignore: Python defaults + .env, __pycache__, .venv, node_modules, etc.
Step 3: Boot infrastructure and verify
docker compose up -d postgres redis
docker compose ps # verify healthy
Step 4: Commit
git add docker-compose.yml .env.example .gitignore
git commit -m "feat: docker compose infrastructure — postgres+timescaledb, redis, ollama"
Task 3: Database Models & Alembic Migrations
Files:
- Create: shared/models/__init__.py
- Create: shared/models/base.py
- Create: shared/models/trading.py
- Create: shared/models/news.py
- Create: shared/models/learning.py
- Create: shared/models/auth.py
- Create: shared/models/timeseries.py
- Create: shared/db.py
- Create: alembic.ini
- Create: alembic/env.py
- Create: tests/test_models.py
Step 1: Create base model and DB session factory
shared/models/base.py — Declarative base with common columns (id, created_at, updated_at).
shared/db.py — Async engine + sessionmaker factory.
# shared/models/base.py
import uuid
from datetime import datetime

from sqlalchemy import DateTime, func
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
from sqlalchemy.dialects.postgresql import UUID

class Base(DeclarativeBase):
    # Common primary key for all tables: client-generated UUID.
    id: Mapped[uuid.UUID] = mapped_column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)

class TimestampMixin:
    created_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=True), server_default=func.now()
    )
    updated_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=True), server_default=func.now(), onupdate=func.now()
    )

# shared/db.py
from sqlalchemy.ext.asyncio import AsyncEngine, AsyncSession, async_sessionmaker, create_async_engine

from shared.config import BaseConfig

def create_db(config: BaseConfig) -> tuple[AsyncEngine, async_sessionmaker[AsyncSession]]:
    engine = create_async_engine(config.database_url, echo=config.log_level == "DEBUG")
    session_factory = async_sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)
    return engine, session_factory
Step 2: Create trading models
shared/models/trading.py — Trade, Position, Signal, Strategy, StrategyWeightHistory per design doc.
Key columns per design doc:
- trades: ticker, side (enum: BUY/SELL), qty, price, timestamp, strategy_id (FK), signal_id (FK), status (enum: PENDING/FILLED/CANCELLED/REJECTED), pnl
- positions: ticker (unique), qty, avg_entry, unrealized_pnl, stop_loss, take_profit
- signals: ticker, direction (enum: LONG/SHORT/NEUTRAL), strength (float 0-1), strategy_sources (JSON), sentiment_score, acted_on (bool)
- strategies: name (unique), description, current_weight, active (bool)
- strategy_weights_history: strategy_id (FK), old_weight, new_weight, timestamp, reason
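The enums above can be sketched as str-valued Python enums so their values serialize cleanly to JSON messages and Postgres enum types (a minimal sketch; class and module placement are up to the implementer):

```python
import enum

# Values taken from the trades/signals column definitions in the design doc.
class TradeSide(str, enum.Enum):
    BUY = "BUY"
    SELL = "SELL"

class TradeStatus(str, enum.Enum):
    PENDING = "PENDING"
    FILLED = "FILLED"
    CANCELLED = "CANCELLED"
    REJECTED = "REJECTED"

class SignalDirection(str, enum.Enum):
    LONG = "LONG"
    SHORT = "SHORT"
    NEUTRAL = "NEUTRAL"
```

The str mixin means `TradeSide.BUY == "BUY"`, which keeps Pydantic schemas and raw JSON payloads interchangeable.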
Step 3: Create news models
shared/models/news.py — Article, ArticleSentiment per design doc.
Step 4: Create learning models
shared/models/learning.py — TradeOutcome, LearningAdjustment per design doc.
Step 5: Create auth models
shared/models/auth.py — User, UserCredential per design doc.
Step 6: Create timeseries models
shared/models/timeseries.py — MarketData, PortfolioSnapshot, StrategyMetric. These will be TimescaleDB hypertables (created via migration with SELECT create_hypertable(...)).
Step 7: Set up Alembic
python -m alembic init alembic
Edit alembic/env.py to import all models and use async engine. Edit alembic.ini to read sqlalchemy.url from env.
Step 8: Generate and apply initial migration
python -m alembic revision --autogenerate -m "initial schema"
Manually edit migration to add TimescaleDB hypertable creation after table creates:
op.execute("SELECT create_hypertable('market_data', 'timestamp')")
op.execute("SELECT create_hypertable('portfolio_snapshots', 'timestamp')")
op.execute("SELECT create_hypertable('strategy_metrics', 'timestamp')")
Apply:
docker compose up -d postgres
python -m alembic upgrade head
Step 9: Write model tests
Test that models can be instantiated, relationships work, enums are correct.
Step 10: Commit
git add shared/models/ shared/db.py alembic/ alembic.ini tests/test_models.py
git commit -m "feat: database models and alembic migrations — all tables per design"
Task 4: Pydantic Schemas
Files:
- Create: shared/schemas/__init__.py
- Create: shared/schemas/trading.py
- Create: shared/schemas/news.py
- Create: shared/schemas/learning.py
- Create: shared/schemas/auth.py
- Create: tests/test_schemas.py
Step 1: Create schemas matching Redis Stream message formats
These are the Pydantic v2 models used for (de)serialization across services. Each Redis Stream message is one of these schemas serialized to JSON.
Key schemas:
- RawArticle (published to news:raw): source, url, title, content, published_at, fetched_at
- ScoredArticle (published to news:scored): article fields + ticker, sentiment_score, confidence, model_used, entities
- TradeSignal (published to signals:generated): ticker, direction, strength, strategy_sources, sentiment_context, timestamp
- TradeExecution (published to trades:executed): trade_id, ticker, side, qty, price, status, signal_id, timestamp
Also create request/response schemas for the API Gateway endpoints.
Step 2: Write schema validation tests
Test serialization round-trips, validation constraints (score range, required fields).
Step 3: Commit
git add shared/schemas/ tests/test_schemas.py
git commit -m "feat: pydantic schemas for all service message types"
Phase 2: Brokerage & Market Data
Task 5: Brokerage Abstraction Layer
Files:
- Create: shared/broker/__init__.py
- Create: shared/broker/base.py
- Create: shared/broker/alpaca_broker.py
- Create: tests/test_broker.py
Step 1: Define abstract broker interface
shared/broker/base.py — ABC with methods: submit_order, cancel_order, get_positions, get_account, get_order_status, stream_market_data. This is the seam that lets us swap Alpaca for another brokerage later.
from abc import ABC, abstractmethod
from collections.abc import AsyncIterator

from shared.schemas.trading import AccountInfo, MarketSnapshot, OrderRequest, OrderResult, PositionInfo

class BaseBroker(ABC):
    @abstractmethod
    async def submit_order(self, order: OrderRequest) -> OrderResult: ...

    @abstractmethod
    async def cancel_order(self, order_id: str) -> bool: ...

    @abstractmethod
    async def get_positions(self) -> list[PositionInfo]: ...

    @abstractmethod
    async def get_account(self) -> AccountInfo: ...

    @abstractmethod
    async def get_order_status(self, order_id: str) -> OrderResult: ...

    @abstractmethod
    def stream_market_data(self, tickers: list[str]) -> AsyncIterator[MarketSnapshot]: ...
Step 2: Implement Alpaca broker
shared/broker/alpaca_broker.py — Wraps alpaca-py SDK. Uses TradingClient for orders/positions/account, StockDataStream for real-time bars. Paper vs live controlled by config (API base URL).
Step 3: Write tests with mocked Alpaca client
Test order submission, position retrieval, error handling (rejected orders, network failures).
Step 4: Commit
git add shared/broker/ tests/test_broker.py
git commit -m "feat: brokerage abstraction layer with Alpaca implementation"
Phase 3: News Pipeline
Task 6: News Fetcher Service
Files:
- Create: services/news-fetcher/__init__.py
- Create: services/news-fetcher/main.py
- Create: services/news-fetcher/sources/__init__.py
- Create: services/news-fetcher/sources/rss.py
- Create: services/news-fetcher/sources/reddit.py
- Create: services/news-fetcher/config.py
- Create: tests/services/test_news_fetcher.py
Step 1: Create RSS source
Uses feedparser to poll configurable list of RSS feed URLs. Deduplicates by content_hash (SHA256 of URL+title). Converts feed entries to RawArticle schema. Configurable poll interval.
Default feeds: Yahoo Finance, Reuters business, MarketWatch, SEC EDGAR RSS.
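The dedup rule can be sketched as a pure function plus a seen-set (a minimal sketch; the "|" separator and the in-memory set are assumptions, and a real service would back the set with Redis so restarts don't re-publish old articles):

```python
import hashlib

def content_hash(url: str, title: str) -> str:
    """Dedup key: SHA256 over URL + title, per the fetcher's dedup rule."""
    return hashlib.sha256(f"{url}|{title}".encode()).hexdigest()

seen: set[str] = set()

def is_duplicate(url: str, title: str) -> bool:
    """First sighting records the hash and returns False; repeats return True."""
    h = content_hash(url, title)
    if h in seen:
        return True
    seen.add(h)
    return False
```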
Step 2: Create Reddit source
Uses praw (async via asyncpraw) to poll r/wallstreetbets, r/stocks, r/investing. Fetches hot/new posts above score threshold. Converts to RawArticle schema.
Step 3: Create main service loop
main.py — Async service that:
- Loads config (feed URLs, poll intervals, Reddit credentials)
- Connects to Redis
- Sets up telemetry (articles_fetched counter, fetch_errors counter, fetch_latency histogram)
- Runs source pollers on configurable schedules via asyncio.TaskGroup
- Publishes each RawArticle to the news:raw stream
Step 4: Write tests
Test RSS parsing with fixture XML, Reddit post conversion, deduplication logic, stream publishing.
Step 5: Commit
git add services/news-fetcher/ tests/services/test_news_fetcher.py
git commit -m "feat: news fetcher service — RSS and Reddit sources"
Task 7: Sentiment Analyzer Service
Files:
- Create: services/sentiment-analyzer/__init__.py
- Create: services/sentiment-analyzer/main.py
- Create: services/sentiment-analyzer/analyzers/__init__.py
- Create: services/sentiment-analyzer/analyzers/finbert.py
- Create: services/sentiment-analyzer/analyzers/ollama_analyzer.py
- Create: services/sentiment-analyzer/ticker_extractor.py
- Create: services/sentiment-analyzer/config.py
- Create: tests/services/test_sentiment_analyzer.py
Step 1: Create FinBERT analyzer
Loads ProsusAI/finbert via HuggingFace transformers. Input: article title + first 512 tokens of content. Output: sentiment score (-1 to +1), confidence (0 to 1). Runs on CPU by default (GPU if available via torch.cuda).
Step 2: Create Ollama analyzer
Fallback for articles where FinBERT confidence < configurable threshold (default 0.6). Uses ollama Python client to query local Mistral/Llama 3. Structured prompt asking for JSON output with sentiment score, reasoning, and entity extraction.
Step 3: Create ticker extractor
Regex + lookup against a known ticker list (can be loaded from Alpaca's asset list). Extracts mentioned stock tickers from article text. Handles common patterns: $AAPL, "Apple Inc", NASDAQ:AAPL.
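A sketch of the regex + lookup approach for the $AAPL and NASDAQ:AAPL patterns (company-name matching like "Apple Inc" is omitted here; the ticker set is a hypothetical stand-in for Alpaca's asset list):

```python
import re

# Hypothetical known-ticker set; the service would load this from Alpaca's asset list.
KNOWN_TICKERS = {"AAPL", "MSFT", "TSLA"}

# Three patterns: $AAPL, NASDAQ:AAPL / NYSE:AAPL, and bare upper-case words.
# Bare words are only accepted if they appear in the known-ticker list, which
# filters out ordinary all-caps text.
TICKER_RE = re.compile(r"\$([A-Z]{1,5})\b|\b(?:NASDAQ|NYSE):([A-Z]{1,5})\b|\b([A-Z]{2,5})\b")

def extract_tickers(text: str) -> set[str]:
    found: set[str] = set()
    for m in TICKER_RE.finditer(text):
        symbol = m.group(1) or m.group(2) or m.group(3)
        if symbol in KNOWN_TICKERS:
            found.add(symbol)
    return found
```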
Step 4: Create main service loop
Consumes news:raw, routes through FinBERT → (if low confidence) → Ollama, extracts tickers, publishes ScoredArticle to news:scored.
Telemetry: articles_scored counter, finbert_vs_ollama_ratio, inference_latency histogram.
Step 5: Write tests
Test FinBERT scoring (mock transformers pipeline), Ollama fallback routing, ticker extraction regex, end-to-end message flow.
Step 6: Commit
git add services/sentiment-analyzer/ tests/services/test_sentiment_analyzer.py
git commit -m "feat: sentiment analyzer — FinBERT + Ollama tiered analysis"
Phase 4: Trading Core
Task 8: Strategy Implementations
Files:
- Create: shared/strategies/__init__.py
- Create: shared/strategies/base.py
- Create: shared/strategies/momentum.py
- Create: shared/strategies/mean_reversion.py
- Create: shared/strategies/news_driven.py
- Create: tests/test_strategies.py
Step 1: Define strategy interface
# shared/strategies/base.py
from abc import ABC, abstractmethod

from shared.schemas.trading import MarketSnapshot, SentimentContext, TradeSignal

class BaseStrategy(ABC):
    name: str

    @abstractmethod
    async def evaluate(
        self, ticker: str, market: MarketSnapshot, sentiment: SentimentContext | None = None
    ) -> TradeSignal | None:
        """Return a signal if this strategy has an opinion, None otherwise."""
        ...
MarketSnapshot: current price, OHLCV bars (recent history), volume profile, moving averages.
SentimentContext: recent sentiment scores for this ticker, article count, average confidence.
Step 2: Implement momentum strategy
Buy when price crosses above N-period SMA with increasing volume. Sell when crosses below. Signal strength proportional to distance from SMA.
Step 3: Implement mean reversion strategy
Buy when RSI < 30 (oversold), sell when RSI > 70 (overbought). Signal strength proportional to RSI extremity.
Step 4: Implement news-driven strategy
Buy on strong positive sentiment (score > 0.7, confidence > 0.6), sell on strong negative. Signal strength = sentiment_score * confidence. Decay factor for stale news (> 4 hours old).
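The strength calculation with staleness decay can be sketched as follows. The 4-hour constant mirrors the plan's cutoff; using it as an exponential half-life (rather than a hard drop to zero) is an assumption:

```python
import math

def news_strength(sentiment_score: float, confidence: float, age_hours: float,
                  half_life_hours: float = 4.0) -> float:
    """Signal strength = |score| * confidence, decayed exponentially for stale news.

    At age == half_life_hours the strength is exactly halved.
    """
    base = abs(sentiment_score) * confidence
    decay = math.exp(-math.log(2) * age_hours / half_life_hours)
    return base * decay
```

The signal direction would come from the sign of the score; only the magnitude is computed here.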
Step 5: Write tests for each strategy
Test with known market data fixtures. Verify signal direction, strength ranges, edge cases (no data, flat market).
Step 6: Commit
git add shared/strategies/ tests/test_strategies.py
git commit -m "feat: trading strategies — momentum, mean reversion, news-driven"
Task 9: Signal Generator Service
Files:
- Create: services/signal-generator/__init__.py
- Create: services/signal-generator/main.py
- Create: services/signal-generator/ensemble.py
- Create: services/signal-generator/market_data.py
- Create: services/signal-generator/config.py
- Create: tests/services/test_signal_generator.py
Step 1: Create market data consumer
Connects to Alpaca WebSocket (StockDataStream) for real-time bars/quotes. Maintains in-memory rolling window of recent OHLCV bars per subscribed ticker. Builds MarketSnapshot objects.
Step 2: Create weighted ensemble
Loads strategy weights from DB (or Redis cache). Runs all active strategies, combines signals:
- For each ticker with signals: weighted average of direction * strength * weight
- Apply threshold: only emit signal if combined strength > configurable min (default 0.3)
- Tag output signal with contributing strategy sources and their individual strengths
async def combine_signals(
    signals: list[tuple[BaseStrategy, TradeSignal]],
    weights: dict[str, float],
) -> TradeSignal | None:
    if not signals:
        return None
    weighted_sum = sum(
        s.strength * (1 if s.direction == "LONG" else -1) * weights.get(strategy.name, 0.1)
        for strategy, s in signals
    )
    total_weight = sum(weights.get(strategy.name, 0.1) for strategy, _ in signals)
    combined_strength = abs(weighted_sum / total_weight) if total_weight > 0 else 0
    # ... threshold check, build TradeSignal ...
Step 3: Create main service loop
Consumes news:scored (builds sentiment context per ticker). Subscribes to Alpaca WebSocket for tickers mentioned in recent news + a watchlist. On each new bar or sentiment update, runs ensemble for affected tickers. Publishes qualifying signals to signals:generated.
Step 4: Write tests
Test ensemble weighting math, threshold filtering, sentiment context aggregation, market snapshot building.
Step 5: Commit
git add services/signal-generator/ tests/services/test_signal_generator.py
git commit -m "feat: signal generator — weighted ensemble with market data"
Task 10: Trade Executor Service
Files:
- Create: services/trade-executor/__init__.py
- Create: services/trade-executor/main.py
- Create: services/trade-executor/risk_manager.py
- Create: services/trade-executor/config.py
- Create: tests/services/test_trade_executor.py
Step 1: Create risk manager
Pre-trade risk checks:
- Position sizing: Kelly criterion or fixed fractional (configurable), max % of portfolio per position (default 5%)
- Max total exposure: sum of all position values < configurable % of portfolio (default 80%)
- Max positions: configurable cap (default 20)
- Stop-loss: auto-set at configurable % below entry (default 3%)
- Cooldown: no re-entry into same ticker within N minutes of exit (default 30)
- Market hours check: only trade during market hours (9:30 AM - 4:00 PM ET)
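The fixed-fractional sizing and exposure cap from the list above can be sketched as pure functions (a minimal sketch using the plan's 5% / 80% defaults; function names are illustrative):

```python
def position_size(portfolio_value: float, price: float,
                  max_position_pct: float = 0.05) -> int:
    """Fixed-fractional sizing: spend at most max_position_pct of the portfolio."""
    budget = portfolio_value * max_position_pct
    return int(budget // price)  # whole shares only

def passes_exposure_check(current_exposure: float, new_position_value: float,
                          portfolio_value: float, max_exposure_pct: float = 0.80) -> bool:
    """Reject the trade if total exposure would exceed the configured cap."""
    return (current_exposure + new_position_value) <= portfolio_value * max_exposure_pct
```

Keeping these as pure functions makes the "test each check independently" requirement in Step 3 straightforward.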
Step 2: Create main service loop
Consumes signals:generated. For each signal:
- Run risk checks → reject if any fail
- Calculate position size
- Submit order via broker abstraction
- Record trade in PostgreSQL (status: PENDING)
- Poll/await fill confirmation
- Update trade record (status: FILLED, actual price)
- Update positions table
- Publish TradeExecution to trades:executed
Telemetry: trades_executed counter, order_fill_latency histogram, rejection_rate counter (by reason).
Step 3: Write tests
Test risk checks (position sizing math, exposure limits, cooldown), order flow with mocked broker, DB recording.
Step 4: Commit
git add services/trade-executor/ tests/services/test_trade_executor.py
git commit -m "feat: trade executor — risk management and order execution"
Phase 5: Learning & Backtesting
Task 11: Learning Engine Service
Files:
- Create: services/learning-engine/__init__.py
- Create: services/learning-engine/main.py
- Create: services/learning-engine/evaluator.py
- Create: services/learning-engine/weight_adjuster.py
- Create: services/learning-engine/config.py
- Create: tests/services/test_learning_engine.py
Step 1: Create trade evaluator
When a position is closed (detected via trades:executed with side opposite to open position):
- Compute realized P&L, ROI %, hold duration
- Compute risk-adjusted return (per-trade Sharpe approximation)
- Store TradeOutcome in DB
- Attribute credit/blame to contributing strategies proportionally to signal strength
Step 2: Create weight adjuster
Multi-armed bandit style (per design doc):
new_weight = (1 - lr) * old_weight + lr * reward_signal
Guardrails (all configurable):
- Minimum 20 trades per strategy before any adjustment
- Max 10% weight shift per cycle
- Weight floor of 0.05
- Normalize all weights to sum to 1.0
- Exponential recency decay (recent trades weighted more)
Store every adjustment in learning_adjustments table. Update strategy weights in strategies table and Redis cache.
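The update rule plus guardrails can be sketched in one function (a minimal sketch; the recency-decay term is left out, and passing trade_count as a parameter is a simplification of the per-strategy bookkeeping):

```python
def adjust_weights(weights: dict[str, float], strategy: str, reward: float,
                   lr: float = 0.1, max_shift: float = 0.10, floor: float = 0.05,
                   min_trades: int = 20, trade_count: int = 0) -> dict[str, float]:
    """Apply new_weight = (1 - lr) * old + lr * reward with guardrails, then renormalize."""
    new = dict(weights)
    if trade_count < min_trades:
        return new  # guardrail: not enough evidence yet, no adjustment
    old = new[strategy]
    target = (1 - lr) * old + lr * reward
    # Guardrails: clamp the per-cycle shift, then enforce the weight floor.
    shift = max(-max_shift, min(max_shift, target - old))
    new[strategy] = max(floor, old + shift)
    # Normalize so all weights sum to 1.0.
    total = sum(new.values())
    return {name: w / total for name, w in new.items()}
```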
Step 3: Create main service loop
Consumes trades:executed. On position close events, runs evaluator → weight adjuster. Periodically (configurable, default 1 hour) computes portfolio-level metrics and stores PortfolioSnapshot and StrategyMetric in TimescaleDB.
Step 4: Write tests
Test P&L calculation, credit attribution math, weight adjustment with guardrails (test each guardrail independently), normalization.
Step 5: Commit
git add services/learning-engine/ tests/services/test_learning_engine.py
git commit -m "feat: learning engine — multi-armed bandit strategy weight adjustment"
Task 12: Backtesting Engine
Files:
- Create: backtester/__init__.py
- Create: backtester/engine.py
- Create: backtester/simulated_broker.py
- Create: backtester/data_loader.py
- Create: backtester/metrics.py
- Create: backtester/config.py
- Create: tests/test_backtester.py
Step 1: Create simulated broker
Implements BaseBroker interface. Fills orders instantly at current bar's close price (or configurable slippage model). Tracks simulated positions and cash balance. Supports configurable commission model.
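The slippage model can be as simple as a basis-point nudge against the trader (a sketch; the 5 bps default and symmetric treatment of buys and sells are assumptions to be made configurable):

```python
def fill_price(close: float, side: str, slippage_bps: float = 5.0) -> float:
    """Fill at the bar close, adjusted against the trader by slippage_bps.

    Buys fill slightly above close, sells slightly below, so backtest results
    are conservative rather than flattering.
    """
    adj = close * slippage_bps / 10_000
    return close + adj if side == "BUY" else close - adj
```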
Step 2: Create data loader
Loads historical OHLCV bars from TimescaleDB market_data table. Loads historical sentiment from article_sentiments table. Aligns by timestamp. Returns an async iterator of (timestamp, MarketSnapshot, SentimentContext | None) tuples.
Step 3: Create backtest engine
Replays data loader output through the same strategy → ensemble → risk manager → simulated broker pipeline. Records all simulated trades. Uses the same code as live system (shared strategies, shared ensemble, shared risk manager).
Step 4: Create metrics calculator
From the simulated trade log, compute:
- Equity curve (portfolio value over time)
- Total return, annualized return
- Sharpe ratio, Sortino ratio
- Max drawdown (% and duration)
- Win rate, average win/loss ratio
- Per-strategy attribution (which strategies contributed most)
- Trade count, average hold duration
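Two of the metrics above, sketched from the simulated trade log (minimal versions; production code would also report drawdown duration and handle zero-valued equity points):

```python
def max_drawdown(equity_curve: list[float]) -> float:
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak = float("-inf")
    worst = 0.0
    for value in equity_curve:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

def win_rate(pnls: list[float]) -> float:
    """Fraction of closed trades with positive realized P&L."""
    if not pnls:
        return 0.0
    return sum(1 for p in pnls if p > 0) / len(pnls)
```

Known-input tests like the ones below are exactly what Step 5's "known inputs → known outputs" asks for.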
Step 5: Write tests
Test simulated broker fills, equity curve calculation, metrics math (known inputs → known outputs), data loader query.
Step 6: Commit
git add backtester/ tests/test_backtester.py
git commit -m "feat: backtesting engine — historical replay with shared strategies"
Phase 6: API & Dashboard
Task 13: API Gateway — Auth
Files:
- Create: services/api-gateway/__init__.py
- Create: services/api-gateway/main.py
- Create: services/api-gateway/auth/__init__.py
- Create: services/api-gateway/auth/routes.py
- Create: services/api-gateway/auth/jwt.py
- Create: services/api-gateway/auth/middleware.py
- Create: services/api-gateway/config.py
- Create: tests/services/test_api_auth.py
Step 1: Create FastAPI application shell
main.py — FastAPI app with CORS middleware (restricted origin), lifespan handler for DB/Redis connections, OpenTelemetry instrumentation (opentelemetry-instrumentation-fastapi).
Step 2: Implement passkey registration (sign up)
POST /auth/register/begin — Generate WebAuthn registration options (challenge, relying party info). Uses py-webauthn's generate_registration_options.
POST /auth/register/complete — Verify registration response, store credential public key in user_credentials table. Uses verify_registration_response.
Step 3: Implement passkey authentication (sign in)
POST /auth/login/begin — Generate authentication options with stored credential IDs.
POST /auth/login/complete — Verify authentication response, update sign count, issue JWT (access token + refresh token).
Step 4: Create JWT helper
Issue short-lived access tokens (15 min) and longer refresh tokens (7 days). POST /auth/refresh to get new access token.
Step 5: Create auth middleware
FastAPI dependency that extracts and validates JWT from Authorization: Bearer <token> header. Skip for /auth/*, /metrics, /health routes.
Step 6: Write tests
Test registration flow, authentication flow (mock WebAuthn verification), JWT issuance/validation, middleware rejection of invalid tokens.
Step 7: Commit
git add services/api-gateway/ tests/services/test_api_auth.py
git commit -m "feat: API gateway with passkey (WebAuthn) authentication"
Task 14: API Gateway — Trading Endpoints
Files:
- Modify: services/api-gateway/main.py
- Create: services/api-gateway/routes/__init__.py
- Create: services/api-gateway/routes/portfolio.py
- Create: services/api-gateway/routes/trades.py
- Create: services/api-gateway/routes/signals.py
- Create: services/api-gateway/routes/strategies.py
- Create: services/api-gateway/routes/news.py
- Create: services/api-gateway/routes/controls.py
- Create: services/api-gateway/routes/backtest.py
- Create: services/api-gateway/ws.py
- Create: tests/services/test_api_routes.py
Step 1: Portfolio endpoints
- GET /api/portfolio — Current portfolio value, cash, buying power, daily P&L
- GET /api/portfolio/positions — All open positions with unrealized P&L
- GET /api/portfolio/history — Equity curve from portfolio_snapshots (query params: period)
Step 2: Trade endpoints
- GET /api/trades — Paginated trade history with filters (ticker, date range, strategy, profitable)
- GET /api/trades/:id — Single trade detail with linked signal, news context, outcome
Step 3: Signal & strategy endpoints
- GET /api/signals — Recent signals with filters
- GET /api/strategies — All strategies with current weights
- GET /api/strategies/:id/history — Weight history and adjustments log
- GET /api/strategies/:id/metrics — Performance metrics over time
Step 4: News endpoints
- GET /api/news — Recent scored articles with filters (ticker, source, score range)
Step 5: Control endpoints
- POST /api/controls/pause — Pause trading (sets flag in Redis checked by trade executor)
- POST /api/controls/resume — Resume trading
- POST /api/controls/close-position — Force close a position by ticker
- GET /api/controls/status — Current trading status (active/paused)
Step 6: Backtest endpoints
- POST /api/backtest/run — Start a backtest with config (date range, capital, strategies, weights). Runs in a background task. Returns run_id.
- GET /api/backtest/:run_id — Get backtest results (status, metrics, trade log, equity curve)
Step 7: WebSocket endpoint
/ws — Authenticated WebSocket. Pushes real-time events: trade executions, new signals, portfolio value updates, news sentiment scores. Uses Redis pub/sub or polling to pick up events from other services.
Step 8: Write tests
Test each endpoint group with test client, mock DB queries, verify response schemas.
Step 9: Commit
git add services/api-gateway/ tests/services/test_api_routes.py
git commit -m "feat: API gateway trading endpoints, controls, backtest, WebSocket"
Task 15: Dashboard — Project Setup & Auth
Files:
- Create: dashboard/package.json
- Create: dashboard/tsconfig.json
- Create: dashboard/vite.config.ts
- Create: dashboard/tailwind.config.ts
- Create: dashboard/postcss.config.js
- Create: dashboard/index.html
- Create: dashboard/src/main.tsx
- Create: dashboard/src/App.tsx
- Create: dashboard/src/api/client.ts
- Create: dashboard/src/api/auth.ts
- Create: dashboard/src/pages/Login.tsx
- Create: dashboard/src/pages/Register.tsx
- Create: dashboard/src/hooks/useAuth.ts
- Create: dashboard/src/components/ProtectedRoute.tsx
Step 1: Scaffold React project
Vite + React 18 + TypeScript. Install: @tanstack/react-query, react-router-dom, tailwindcss, @simplewebauthn/browser, lightweight-charts (TradingView), recharts.
Step 2: Create API client
Axios or fetch wrapper with JWT interceptor (auto-attach token, auto-refresh on 401).
Step 3: Create auth pages
Registration page: username input → calls /auth/register/begin → triggers browser passkey creation → sends attestation to /auth/register/complete.
Login page: username input → calls /auth/login/begin → triggers browser passkey assertion → sends to /auth/login/complete → stores JWT.
Step 4: Create protected route wrapper
Redirects to login if no valid token. Wraps all dashboard routes.
Step 5: Commit
git add dashboard/
git commit -m "feat: dashboard setup with passkey authentication"
Task 16: Dashboard — Trading Views
Files:
- Create: dashboard/src/pages/Portfolio.tsx
- Create: dashboard/src/pages/TradeLog.tsx
- Create: dashboard/src/pages/Strategies.tsx
- Create: dashboard/src/pages/NewsFeed.tsx
- Create: dashboard/src/pages/Backtest.tsx
- Create: dashboard/src/components/EquityCurve.tsx
- Create: dashboard/src/components/PositionsTable.tsx
- Create: dashboard/src/components/TradeDetail.tsx
- Create: dashboard/src/components/StrategyWeights.tsx
- Create: dashboard/src/components/SentimentCard.tsx
- Create: dashboard/src/components/Layout.tsx
- Create: dashboard/src/hooks/useWebSocket.ts
- Create: dashboard/src/hooks/usePortfolio.ts
Step 1: Create layout and navigation
Sidebar nav with links to all 5 views. Top bar with portfolio value summary and trading status indicator (active/paused).
Step 2: Portfolio Overview page
- Portfolio value card with daily P&L (green/red)
- Equity curve chart (TradingView lightweight-charts) from /api/portfolio/history
- Open positions table with unrealized P&L, close button per position
- Key metrics row: ROI, Sharpe, win rate, max drawdown
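The key-metrics row can be computed client-side from the equity/trade history. A minimal sketch of the formulas, assuming daily returns and a 252-trading-day annualization factor (both assumptions, not settled API details):

```typescript
// Largest fractional drop from a running peak over the equity curve.
export function maxDrawdown(equity: number[]): number {
  let peak = -Infinity;
  let worst = 0;
  for (const v of equity) {
    peak = Math.max(peak, v);
    worst = Math.max(worst, (peak - v) / peak);
  }
  return worst;
}

// Fraction of trades with positive realized P&L.
export function winRate(pnls: number[]): number {
  if (pnls.length === 0) return 0;
  return pnls.filter((p) => p > 0).length / pnls.length;
}

// Annualized Sharpe ratio from daily returns (sample standard deviation).
export function sharpe(dailyReturns: number[], riskFreeDaily = 0): number {
  const n = dailyReturns.length;
  if (n < 2) return 0;
  const mean = dailyReturns.reduce((a, b) => a + b, 0) / n;
  const variance = dailyReturns.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
  const sd = Math.sqrt(variance);
  return sd === 0 ? 0 : ((mean - riskFreeDaily) / sd) * Math.sqrt(252);
}
```

Alternatively these could come precomputed from the API; computing them in the dashboard keeps the metrics row consistent with whatever slice of history the user is viewing.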
Step 3: Trade Log page
- Paginated table from /api/trades
- Filters: ticker search, date range, strategy dropdown, profitable toggle
- Expandable rows showing linked news, signal details, strategy attribution
Step 4: Strategy Performance page
- Strategy cards showing name, current weight, win rate, total P&L
- Weight allocation donut chart (Recharts)
- Weight history line chart per strategy
- Adjustments log table
Step 5: News & Sentiment Feed page
- Live feed of scored articles from /api/news
- Ticker filter
- Color-coded sentiment badges (green/yellow/red)
- Sparkline of recent sentiment per ticker
Step 6: Backtesting page
- Config form: date range pickers, initial capital, strategy checkboxes with weight sliders, slippage/commission inputs
- Submit → shows progress → displays results dashboard
- Results: equity curve, metrics summary, trade log, per-strategy breakdown
Step 7: WebSocket hook for real-time updates
The useWebSocket hook connects to /ws and maps each incoming event to the TanStack Query caches it should invalidate. Toast notifications for trade executions.
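The event-to-cache mapping at the heart of that hook could look like this; the event names and query keys are assumptions to be aligned with the gateway's /ws payloads and the dashboard's query-key conventions:

```typescript
// Which query caches to invalidate for each WebSocket event type.
const EVENT_QUERY_KEYS: Record<string, string[][]> = {
  trade_executed: [["trades"], ["portfolio"], ["positions"]],
  position_closed: [["portfolio"], ["positions"]],
  weights_updated: [["strategies"]],
  news_scored: [["news"]],
};

export function queryKeysForEvent(eventType: string): string[][] {
  return EVENT_QUERY_KEYS[eventType] ?? []; // unknown events invalidate nothing
}
```

Inside the hook, each message would then drive something like `queryKeysForEvent(msg.type).forEach((key) => queryClient.invalidateQueries({ queryKey: key }))`, keeping the WebSocket layer decoupled from individual pages.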
Step 8: Commit
git add dashboard/src/
git commit -m "feat: dashboard trading views — portfolio, trades, strategies, news, backtest"
Phase 7: Containerization & Integration
Task 17: Dockerfiles & Full Docker Compose
Files:
- Create: docker/Dockerfile.service (multi-stage, shared for all Python services)
- Create: docker/Dockerfile.dashboard
- Modify: docker-compose.yml
Step 1: Create Python service Dockerfile
Multi-stage build. Stage 1: install deps from pyproject.toml with the appropriate extras. Stage 2: slim runtime image. Build args select which extras to install and which service directory to copy in.
# Stage 1: resolve and install dependencies for the chosen extras group
FROM python:3.12-slim AS builder
WORKDIR /app
COPY pyproject.toml .
ARG EXTRAS="api"
RUN pip install --no-cache-dir ".[${EXTRAS}]"
# Stage 2: slim runtime with the installed packages, shared code, and one service
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
COPY shared/ shared/
ARG SERVICE_DIR
COPY services/${SERVICE_DIR}/ service/
CMD ["python", "-m", "service.main"]
Step 2: Create dashboard Dockerfile
Multi-stage: Node build stage → nginx serving static files.
Step 3: Update docker-compose.yml with all services
Add all 6 Python services + dashboard container. Each service gets:
- Correct build args (EXTRAS, SERVICE_DIR)
- Environment variables from .env
- Health check
- Depends-on for postgres, redis
- Metrics port exposed for Prometheus scraping
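One service entry satisfying that checklist might look like the fragment below; the service name, extras group, health-check command, and metrics port are illustrative assumptions:

```yaml
  trade-executor:
    build:
      context: .
      dockerfile: docker/Dockerfile.service
      args:
        EXTRAS: broker            # hypothetical extras group for this service
        SERVICE_DIR: trade_executor
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    healthcheck:
      # crude TCP probe against the metrics port; a real check might hit /health
      test: ["CMD", "python", "-c", "import socket; socket.create_connection(('localhost', 9464), 2)"]
      interval: 30s
      timeout: 5s
      retries: 3
    ports:
      - "9464:9464"               # assumed Prometheus metrics port
```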
Step 4: Write a smoke test script
scripts/smoke-test.sh — Boots full stack, waits for health checks, hits key endpoints, verifies non-error responses.
Step 5: Commit
git add docker/ docker-compose.yml scripts/
git commit -m "feat: dockerfiles and full docker-compose orchestration"
Task 18: Integration Testing & Seed Data
Files:
- Create: tests/integration/test_news_pipeline.py
- Create: tests/integration/test_trading_flow.py
- Create: scripts/seed_strategies.py
Step 1: Seed default strategies
Script to insert the 3 strategies (momentum, mean_reversion, news_driven) with equal initial weights (0.333 each) into the strategies table.
Step 2: Write news pipeline integration test
Test the full flow: publish a mock RSS article → news fetcher picks it up → sentiment analyzer scores it → signal generator evaluates it. Uses real Redis + PostgreSQL (from docker-compose), mocked Alpaca and FinBERT.
Step 3: Write trading flow integration test
Test: inject a signal → trade executor processes it (mocked Alpaca broker) → trade recorded in DB → learning engine evaluates outcome.
Step 4: Commit
git add tests/integration/ scripts/seed_strategies.py
git commit -m "feat: integration tests and strategy seed data"
Task Dependencies
Task 1 (foundation) ──┬── Task 2 (docker infra)
├── Task 3 (models) ── Task 4 (schemas)
│ │
│ ┌───────────┤
│ │ │
│ Task 5 (broker) │
│ │ │
│ ┌────────┤ Task 6 (news fetcher)
│ │ │ │
│ │ Task 8 (strategies) Task 7 (sentiment)
│ │ │ │
│ │ Task 9 (signal gen) ─┘
│ │ │
│ │ Task 10 (executor)
│ │ │
│ │ Task 11 (learning)
│ │ │
│ │ Task 12 (backtester)
│ │
│ └── Task 13 (API auth) ── Task 14 (API routes)
│ │
│ Task 15 (dashboard setup)
│ │
│ Task 16 (dashboard views)
│
└── Task 17 (docker) ── Task 18 (integration)
Parallelizable pairs:
- Task 2 + Task 3 (infra + models)
- Task 5 + Task 6 (broker + news fetcher) after Task 4
- Task 7 + Task 8 (sentiment + strategies) after Task 4
- Task 13 + Task 9 (API auth + signal gen) after their deps
- Task 15 + Task 12 (dashboard setup + backtester) after their deps