[servarr] Rewrite MAM ratio farming — break Mouse death spiral, adopt in TF

## Context

A MAM (MyAnonamouse) freeleech farming workflow was deployed on 2026-04-14
via kubectl apply (outside Terraform). Five days later the account was
still stuck in Mouse class: 715 MiB downloaded, 0 uploaded, ratio 0.
Tracker responses on 7 of 9 active torrents returned
`status=4 | msg="User currently mouse rank, you need to get your ratio up!"`
— MAM was actively refusing to serve peer lists because the account was
in Mouse class, and refusing to serve peer lists made the ratio impossible
to recover. Meanwhile the grabber kept digging: 501 torrents sat in
qBittorrent, 0 completed, 0 bytes uploaded.

Root causes (ranked):
1. Death spiral — Mouse class blocks announces, nothing uploads.
2. The BP-spender's 30 000 BP threshold blocked the only exit even though
   the account already had 24 500 BP.
3. Grabber selection (`score = 1.0 / (seeders+1)`) preferred low-demand
   torrents filtered to <100 MiB — ratio-hostile by design.
4. Grabber/cleanup deadlock: cleanup only fired on seed_time > 3d, so
   torrents that never started never qualified. Combined with the
   500-torrent cap, this stalled the grabber indefinitely.
5. qBittorrent queueing amplified (4) — 495/501 stuck in queuedDL.
6. Ratio-monitor labelled queued torrents `unknown` (empty tracker
   field), hiding the problem on the MAM Grafana panel.
7. qBittorrent memory limit (256 Mi LimitRange default) too low.
8. All of the above was Terraform drift with no reviewability.
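Root cause 3 is easiest to see with a toy comparison (candidate values are hypothetical): the old supply-only score ranks a dead torrent above one with real demand, while the demand-first score from this change inverts that ordering.

```python
def old_score(seeders, leechers):
    # Old formula: prefers the torrents with the fewest seeders,
    # regardless of whether anyone wants to download them.
    return 1.0 / (seeders + 1)

def new_score(seeders, leechers, wedge=False):
    # New formula: leechers drive upload, seeders dilute it,
    # freeleech wedges get a flat bonus.
    return leechers * 3 - seeders * 0.5 + (200 if wedge else 0)

dead = (0, 0)      # 0 seeders, 0 leechers: nobody to upload to
popular = (10, 8)  # 10 seeders, 8 leechers: real upload demand

print(old_score(*dead), old_score(*popular))  # old formula ranks the dead torrent first
print(new_score(*dead), new_score(*popular))  # new formula ranks the popular one first
```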

## This change

Introduces `stacks/servarr/mam-farming/` — a new TF module that adopts
the three kubectl-applied resources and replaces their scripts with
demand-first, H&R-aware logic. Also bumps qBittorrent resources, fixes
ratio-monitor labelling, and adds five Prometheus alerts plus a Grafana
panel row.

### Architecture

    MAM API ───┬─── jsonLoad.php (profile: ratio, class, BP)
               ├─── loadSearchJSONbasic.php (freeleech search)
               ├─── bonusBuy.php (50 GiB min tier for API)
               └─── download.php (torrent file)
                               │
    Pushgateway <──┬────────────┤
                   │  mam_ratio            ┌────────────────────┐
                   │  mam_class_code       │ freeleech-grabber  │ */30
                   │  mam_bp_balance   ◄───│  (ratio-guarded)   │
                   │  mam_farming_*        └──────────┬─────────┘
                   │  mam_janitor_*                   │ adds to
                   │                                  ▼
                   │  Grafana panels      qBittorrent (mam-farming)
                   │  + 5 alerts                      ▲
                   │                                  │ deletes by rule
                   │                       ┌──────────┴─────────┐
                   │                   ◄───│ farming-janitor    │ */15
                   │                       │  (H&R-aware)       │
                   │                       └──────────┬─────────┘
                   │                                  │ buys credit
                   │                       ┌──────────┴─────────┐
                   └───────────────────────│ bp-spender         │ 0 */6
                                           │  (tier-aware)      │
                                           └────────────────────┘

### Key decisions

- **Ratio guard on grabber** — refuse to grab if ratio < 1.2 OR class ==
  Mouse. Prevents the death spiral from deepening. Emits
  `mam_grabber_skipped_reason{reason=...}` and exits clean.
- **Demand-first selection** — new score formula
  `leechers*3 - seeders*0.5 + 200 if freeleech_wedge else 0`; size band
  50 MiB – 1 GiB; leecher floor 1; seeder ceiling 50. Picks titles that
  will actually upload.
- **Janitor decoupled from grabber** — runs every 15 min regardless of
  the ratio-guard state. Without this, stuck torrents accumulate
  fastest exactly when the grabber is skipping (Mouse class). H&R-aware:
  never deletes `progress==1.0 AND seeding_time < 72h`. Six delete
  reasons observable via `mam_janitor_deleted_per_run{reason=...}`.
- **BP-spender tier-aware** — MAM imposes a hard 50 GiB minimum on API
  buyers ("Automated spenders are limited to buying at least 50 GB...
  due to log spam"). Valid API tiers: 50/100/200/500 GiB at 500 BP/GiB.
  The spender picks the smallest tier that satisfies the ratio deficit
  AND fits the budget, preserving a 500 BP reserve. If even the 50 GiB
  tier is too expensive, it skips and retries on the next 6-hour cron.
- **Authoritative metrics use MAM profile fields** —
  `downloaded_bytes` / `uploaded_bytes` (integers) rather than the
  pretty-printed `downloaded` / `uploaded` strings like "715.55 MiB"
  that MAM also returns.
- **Ratio-monitor category-first labelling** — `tracker` is empty for
  queued torrents that never announced. Now maps `category==mam-farming`
  to label `mam` first, only falls back to tracker-URL parsing when
  category is absent. Stops hundreds of MAM torrents collecting under
  `unknown`.
- **qBittorrent resources bumped** to `requests=512Mi / limits=1Gi` so
  hundreds of active torrents don't OOM.
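The tier decision can be checked against this session's account state (numbers taken from the bp-spender log in the test plan). A minimal sketch of the same arithmetic; the shipped script additionally falls back to the largest affordable tier when the deficit exceeds everything in budget:

```python
import math

# Constants as described above; account numbers from this session's logs.
API_TIERS_GIB = (50, 100, 200, 500)       # MAM's hard API minimum is 50 GiB
BP_PER_GB, RESERVE_BP, TARGET_RATIO = 500, 500, 2.0

downloaded = int(715.55 * 1024**2)        # profile downloaded_bytes
uploaded, bp = 0, 24551

deficit = max(0, int(downloaded * TARGET_RATIO) - uploaded)
needed_gib = math.ceil(deficit / 1024**3) + 1 if deficit else 0
affordable_gib = max(0, (bp - RESERVE_BP) // BP_PER_GB)
buy_gib = next((t for t in API_TIERS_GIB
                if needed_gib <= t <= affordable_gib), 0)

print(needed_gib, affordable_gib, buy_gib)  # 3 48 0 -> skip, retry in 6 h
```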

### Emergency recovery performed this session

1. Adopted 5 in-cluster resources via root-module `import {}` blocks
   (Terraform 1.5+ rejects imports inside child modules).
2. Ran the janitor in DRY_RUN=1 to verify rules against live state —
   466 `never_started` candidates, 0 false positives in any other
   reason bucket. Flipped to enforce mode.
3. Janitor deleted 466 stuck torrents (matches plan's ~495 target; 35
   preserved as active/in-progress).
4. Truncated `/data/grabbed_ids.txt` so newly-popular titles become
   eligible again.

The ratio is still 0 because the API cannot buy below 50 GiB and the
account sits at 24 551 BP (needs 25 000). A manual 1 GiB purchase via the
MAM web UI (500 BP) would immediately lift the account to ratio ≈ 1.4
and unblock announces. Future automation cannot do this for us because of
MAM's anti-spam rule.
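The ≈ 1.4 figure is just the purchased upload credit over the downloaded total:

```python
downloaded_bytes = 715.55 * 1024**2   # from the MAM profile
purchased_upload = 1 * 1024**3        # manual 1 GiB web-UI buy (500 BP)

print(round(purchased_upload / downloaded_bytes, 2))  # 1.43
```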

### What is NOT in this change

- qBittorrent prefs reconciliation (max_active_downloads=20,
  max_active_uploads=150, max_active_torrents=150). The plan wanted
  this; deferred to a follow-up because the janitor + ratio recovery
  handles the 500-torrent backlog first. A small reconciler CronJob
  posting to /api/v2/app/setPreferences is the intended follow-up.
- VIP purchase (~100 k BP) — deferred until BP accumulates.
- Cross-seed / autobrr — separate initiative.
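For reference, the deferred prefs reconciler could be as small as this sketch. The endpoint is qBittorrent's standard WebUI API call; the service URL matches the scripts in this change; the sketch is untested here and assumes no WebUI auth inside the cluster:

```python
import json
import requests

QB_URL = "http://qbittorrent.servarr.svc.cluster.local"

# Queueing prefs from the recovery plan (deferred to a follow-up CronJob).
PREFS = {
    "max_active_downloads": 20,
    "max_active_uploads": 150,
    "max_active_torrents": 150,
}

def build_payload(prefs):
    # setPreferences expects a form field named `json` holding the prefs blob.
    return {"json": json.dumps(prefs)}

def reconcile():
    r = requests.post(
        f"{QB_URL}/api/v2/app/setPreferences",
        data=build_payload(PREFS),
        timeout=10,
    )
    r.raise_for_status()
```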

## Alerts added

- P1 MAMMouseClass — `mam_class_code == 0` for 1h
- P1 MAMCookieExpired — `mam_farming_cookie_expired > 0`
- P2 MAMRatioBelowOne — `mam_ratio < 1.0` for 24h (replaces old
  QBittorrentMAMRatioLow, now driven by authoritative profile metric)
- P2 MAMFarmingStuck — no grabs in 4h while ratio is healthy
- P2 MAMJanitorStuckBacklog — `skipped_active > 400` for 6h

## Test plan

### Automated

    $ cd infra/stacks/servarr && ../../scripts/tg plan 2>&1 | grep Plan
    Plan: 5 to import, 2 to add, 6 to change, 0 to destroy.

    $ ../../scripts/tg apply --non-interactive
    Apply complete! Resources: 5 imported, 2 added, 6 changed, 0 destroyed.

    # Re-plan after import block removal (idempotent)
    $ ../../scripts/tg plan 2>&1 | grep Plan
    Plan: 0 to add, 1 to change, 0 to destroy.
    # The 1 change is a pre-existing MetalLB annotation drift on the
    # qbittorrent-torrenting Service — unrelated to this change.

    $ cd ../monitoring && ../../scripts/tg apply --non-interactive
    Apply complete! Resources: 0 added, 2 changed, 0 destroyed.

    # Python + JSON syntax
    $ python3 -c 'import ast; [ast.parse(open(p).read()) for p in [
        "infra/stacks/servarr/mam-farming/files/freeleech-grabber.py",
        "infra/stacks/servarr/mam-farming/files/bp-spender.py",
        "infra/stacks/servarr/mam-farming/files/mam-farming-janitor.py"]]'
    $ python3 -c 'import json; json.load(open(
        "infra/stacks/monitoring/modules/monitoring/dashboards/qbittorrent.json"))'

### Manual verification

1. Grabber ratio-guard path:

       $ kubectl -n servarr create job --from=cronjob/mam-freeleech-grabber g1
       $ kubectl -n servarr logs job/g1
       Skip grab: ratio=0.0 class=Mouse (floor=1.2) reason=mouse_class

2. BP-spender tier path:

       $ kubectl -n servarr create job --from=cronjob/mam-bp-spender s1
       $ kubectl -n servarr logs job/s1
       Profile: ratio=0.0 class=Mouse DL=0.70 GiB UL=0.00 GiB BP=24551
         | deficit=1.40 GiB needed=3 affordable=48 buy=0
       Done: BP=24551, spent=0 GiB (needed=3, affordable=48)

   Correctly skips because affordable (48) < smallest API tier (50).

3. Janitor in enforce mode:

       $ kubectl -n servarr create job --from=cronjob/mam-farming-janitor j1
       $ kubectl -n servarr logs job/j1 | tail -3
       Done: deleted=466 preserved_hnr=0 skipped_active=35 dry_run=False
         per reason: {'never_started': 466, ...}

   Second run immediately after: `deleted=0 skipped_active=35` —
   steady state with only active/seeding torrents left.

4. Alerts loaded:

       $ kubectl -n monitoring get cm prometheus-server \
           -o jsonpath='{.data.alerting_rules\.yml}' \
           | grep -E "alert: MAM|alert: QBittorrent"
         - alert: MAMMouseClass
         - alert: MAMCookieExpired
         - alert: MAMRatioBelowOne
         - alert: MAMFarmingStuck
         - alert: MAMJanitorStuckBacklog
         - alert: QBittorrentDisconnected
         - alert: QBittorrentMAMUnsatisfied

5. Dashboard: browse to Grafana "qBittorrent - Seeding & Ratio" → new
   "MAM Profile (from jsonLoad.php)" row at the bottom shows class, BP
   balance, profile ratio, transfer, BP-vs-reserve timeseries, janitor
   deletion stacked chart, janitor state stat, grabber state stat.

## Reproduce locally

1. `cd infra/stacks/servarr && ../../scripts/tg plan` — expect
   0 add / 1 change (unrelated MetalLB annotation drift).
2. `kubectl -n servarr get cronjobs` — expect three:
   mam-freeleech-grabber, mam-bp-spender, mam-farming-janitor.
3. Trigger each via `kubectl create job --from=cronjob/<name> <job>`
   and read logs; outputs match the manual-verification snippets above.

Closes: code-qfs
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Viktor Barzin 2026-04-19 11:45:38 +00:00
parent 5ea0aa70e3
commit 789cb61310
8 changed files with 1199 additions and 12 deletions


@@ -434,6 +434,223 @@
      ],
      "title": "Transfer Speed (Global)",
      "type": "timeseries"
    },
    {
      "collapsed": false,
      "gridPos": { "h": 1, "w": 24, "x": 0, "y": 39 },
      "id": 103,
      "title": "MAM Profile (from jsonLoad.php)",
      "type": "row"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${datasource}" },
      "fieldConfig": {
        "defaults": {
          "mappings": [
            { "type": "value", "options": {
              "0": { "color": "red", "text": "Mouse" },
              "1": { "color": "orange", "text": "Vole" },
              "2": { "color": "yellow", "text": "User" },
              "3": { "color": "green", "text": "Power User" },
              "4": { "color": "green", "text": "Elite" },
              "5": { "color": "blue", "text": "Torrent Master" },
              "6": { "color": "blue", "text": "Power TM" },
              "7": { "color": "purple", "text": "Elite TM" },
              "8": { "color": "purple", "text": "VIP" }
            } }
          ],
          "thresholds": { "mode": "absolute", "steps": [
            { "color": "red", "value": null },
            { "color": "green", "value": 2 }
          ] }
        }
      },
      "gridPos": { "h": 6, "w": 4, "x": 0, "y": 40 },
      "id": 20,
      "options": {
        "colorMode": "background",
        "graphMode": "none",
        "justifyMode": "center",
        "textMode": "value",
        "reduceOptions": { "calcs": ["lastNotNull"] }
      },
      "targets": [{ "expr": "mam_class_code", "legendFormat": "Class" }],
      "title": "MAM Class",
      "type": "stat"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${datasource}" },
      "fieldConfig": {
        "defaults": {
          "thresholds": { "mode": "absolute", "steps": [
            { "color": "red", "value": null },
            { "color": "orange", "value": 0.8 },
            { "color": "green", "value": 1.2 }
          ] },
          "decimals": 3
        }
      },
      "gridPos": { "h": 6, "w": 4, "x": 4, "y": 40 },
      "id": 21,
      "options": {
        "colorMode": "background",
        "graphMode": "area",
        "justifyMode": "center",
        "textMode": "value",
        "reduceOptions": { "calcs": ["lastNotNull"] }
      },
      "targets": [{ "expr": "mam_ratio", "legendFormat": "Ratio" }],
      "title": "MAM Ratio (profile)",
      "type": "stat"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${datasource}" },
      "fieldConfig": {
        "defaults": {
          "unit": "short",
          "thresholds": { "mode": "absolute", "steps": [
            { "color": "red", "value": null },
            { "color": "green", "value": 5000 }
          ] }
        }
      },
      "gridPos": { "h": 6, "w": 4, "x": 8, "y": 40 },
      "id": 22,
      "options": {
        "colorMode": "background",
        "graphMode": "area",
        "justifyMode": "center",
        "textMode": "value",
        "reduceOptions": { "calcs": ["lastNotNull"] }
      },
      "targets": [{ "expr": "mam_bp_balance", "legendFormat": "BP" }],
      "title": "MAM Bonus Points",
      "type": "stat"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${datasource}" },
      "fieldConfig": { "defaults": { "unit": "decbytes" } },
      "gridPos": { "h": 6, "w": 12, "x": 12, "y": 40 },
      "id": 23,
      "options": {
        "colorMode": "value",
        "graphMode": "area",
        "justifyMode": "center",
        "textMode": "value_and_name",
        "reduceOptions": { "calcs": ["lastNotNull"] }
      },
      "targets": [
        { "expr": "mam_downloaded_bytes", "legendFormat": "Downloaded" },
        { "expr": "mam_uploaded_bytes", "legendFormat": "Uploaded" }
      ],
      "title": "MAM Transfer (profile)",
      "type": "stat"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${datasource}" },
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "palette-classic" },
          "custom": {
            "drawStyle": "line",
            "fillOpacity": 10,
            "lineWidth": 2,
            "showPoints": "never",
            "spanNulls": true,
            "thresholdsStyle": { "mode": "line" }
          },
          "thresholds": { "mode": "absolute", "steps": [
            { "color": "transparent", "value": null },
            { "color": "orange", "value": 500 }
          ] },
          "unit": "short"
        }
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 46 },
      "id": 24,
      "options": {
        "legend": { "calcs": ["lastNotNull", "min"], "displayMode": "table", "placement": "bottom" },
        "tooltip": { "mode": "multi" }
      },
      "targets": [
        { "expr": "mam_bp_balance", "legendFormat": "BP Balance" },
        { "expr": "mam_bp_needed_gib * 500", "legendFormat": "Next-run cost (BP)" }
      ],
      "title": "BP Balance vs Reserve",
      "type": "timeseries"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${datasource}" },
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "palette-classic" },
          "custom": {
            "drawStyle": "bars",
            "fillOpacity": 80,
            "lineWidth": 1,
            "stacking": { "mode": "normal" }
          },
          "unit": "short"
        }
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 46 },
      "id": 25,
      "options": {
        "legend": { "calcs": ["lastNotNull", "sum"], "displayMode": "table", "placement": "bottom" },
        "tooltip": { "mode": "multi" }
      },
      "targets": [
        {
          "expr": "mam_janitor_deleted_per_run",
          "legendFormat": "{{reason}}"
        }
      ],
      "title": "Janitor Deletions per Run (by reason)",
      "type": "timeseries"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${datasource}" },
      "fieldConfig": {
        "defaults": { "unit": "short" }
      },
      "gridPos": { "h": 6, "w": 12, "x": 0, "y": 54 },
      "id": 26,
      "options": {
        "colorMode": "value",
        "graphMode": "area",
        "justifyMode": "center",
        "textMode": "value_and_name",
        "reduceOptions": { "calcs": ["lastNotNull"] }
      },
      "targets": [
        { "expr": "mam_janitor_preserved_hnr", "legendFormat": "Preserved (H&R <72h)" },
        { "expr": "mam_janitor_skipped_active", "legendFormat": "Skipped (in-progress)" },
        { "expr": "mam_janitor_dry_run", "legendFormat": "Dry-run mode" }
      ],
      "title": "Janitor State",
      "type": "stat"
    },
    {
      "datasource": { "type": "prometheus", "uid": "${datasource}" },
      "fieldConfig": {
        "defaults": { "unit": "short" }
      },
      "gridPos": { "h": 6, "w": 12, "x": 12, "y": 54 },
      "id": 27,
      "options": {
        "colorMode": "value",
        "graphMode": "area",
        "justifyMode": "center",
        "textMode": "value_and_name",
        "reduceOptions": { "calcs": ["lastNotNull"] }
      },
      "targets": [
        { "expr": "mam_farming_grabbed", "legendFormat": "Last run grabbed" },
        { "expr": "mam_farming_total_seeding", "legendFormat": "Total in farming" },
        { "expr": "sum by (reason) (mam_grabber_skipped_reason)", "legendFormat": "Grabber skipped: {{reason}}" }
      ],
      "title": "Grabber State",
      "type": "stat"
    }
  ],
  "refresh": "1m",


@@ -1884,16 +1884,47 @@ serverFiles:
               summary: "High DNS SERVFAIL rate: {{ $value | printf \"%.0f\" }} failures detected"
       - name: qbittorrent
         rules:
-          - alert: QBittorrentMAMRatioLow
-            expr: qbt_tracker_ratio{tracker="mam"} < 1.0
+          - alert: MAMMouseClass
+            expr: mam_class_code == 0
+            for: 1h
+            labels:
+              severity: critical
+            annotations:
+              summary: "MAM account is in Mouse class — tracker is refusing announces, ratio cannot recover"
+          - alert: MAMCookieExpired
+            expr: mam_farming_cookie_expired > 0
+            for: 0m
+            labels:
+              severity: critical
+            annotations:
+              summary: "MAM session cookie has expired — refresh `mam_id` in Vault servarr/mam_id"
+          - alert: MAMRatioBelowOne
+            expr: mam_ratio < 1.0
             for: 24h
             labels:
               severity: warning
             annotations:
-              summary: "MAM ratio is {{ $value | printf \"%.2f\" }} (must be >= 1.0)"
+              summary: "MAM ratio is {{ $value | printf \"%.2f\" }} for 24h (target: >= 1.0)"
+          - alert: MAMFarmingStuck
+            expr: |
+              increase(mam_farming_grabbed[4h]) == 0
+              and mam_farming_total_seeding < 150
+              and mam_ratio >= 1.2
+            for: 4h
+            labels:
+              severity: warning
+            annotations:
+              summary: "Grabber has added 0 torrents in 4h despite healthy ratio ({{ $value | printf \"%.2f\" }})"
+          - alert: MAMJanitorStuckBacklog
+            expr: mam_janitor_skipped_active > 400
+            for: 6h
+            labels:
+              severity: warning
+            annotations:
+              summary: "Janitor is skipping {{ $value | printf \"%.0f\" }} in-progress torrents — queue not draining"
           - alert: QBittorrentDisconnected
             expr: qbt_connected == 0
-            for: 5m
+            for: 10m
             labels:
               severity: critical
             annotations:
@@ -1977,6 +2008,37 @@ serverFiles:
               severity: warning
             annotations:
               summary: "Authentik outpost restarted {{ $value | printf \"%.0f\" }} times in 30m — check for OOM or crash loop"
+          - alert: AuthentikOutpostDevShmFull
+            # Direct filesystem measure of the /dev/shm emptyDir sizeLimit.
+            # The 2026-04-18 incident went undetected for 40h because working-set
+            # memory lags tmpfs fill (files count against memory but not always
+            # against working set). This rule catches the underlying cause.
+            # See docs/post-mortems/2026-04-18-authentik-outpost-shm-full.md.
+            expr: container_fs_usage_bytes{namespace="authentik", pod=~"ak-outpost-.*"} / container_fs_limit_bytes{namespace="authentik", pod=~"ak-outpost-.*"} > 0.8
+            for: 5m
+            labels:
+              severity: critical
+            annotations:
+              summary: "Authentik outpost filesystem at {{ $value | humanizePercentage }} on {{ $labels.pod }} — session files filling tmpfs, forward-auth imminent failure"
+          - alert: AuthentikOutpostForwardAuth400Spike
+            # Sudden 400 spike from the outpost means forward-auth is broken
+            # for all protected services. The /dev/shm ENOSPC class of failures
+            # manifests as the outpost returning 400 on /outpost.goauthentik.io/auth/traefik.
+            expr: sum by (service) (increase(traefik_service_requests_total{code="400", service=~"authentik-authentik-outpost.*"}[5m])) > 10
+            for: 2m
+            labels:
+              severity: critical
+            annotations:
+              summary: "Authentik outpost returning {{ $value | printf \"%.0f\" }} 400s in 5m on {{ $labels.service }} — forward-auth broken for all 43 protected services"
+          - alert: AuthentikServerReplicasMismatch
+            # With 3 replicas + PDB minAvailable=2, a sustained drop to <3
+            # means a node is unschedulable, image pull failing, or quota hit.
+            expr: (kube_deployment_spec_replicas{namespace="authentik", deployment="goauthentik-server"} - kube_deployment_status_replicas_available{namespace="authentik", deployment="goauthentik-server"}) > 0
+            for: 15m
+            labels:
+              severity: warning
+            annotations:
+              summary: "Authentik server has {{ $value }} unavailable replica(s) for 15m — check pod events"
           # Mailserver Dovecot alerts were removed with the exporter in
           # code-1ik (viktorbarzin/dovecot_exporter incompatible with
           # Dovecot 2.3 stats architecture). Re-add the rule group if a

@@ -86,6 +86,15 @@ module "qbittorrent" {
   homepage_credentials = local.homepage_credentials
 }
+module "mam_farming" {
+  source    = "./mam-farming"
+  namespace = kubernetes_namespace.servarr.metadata[0].name
+
+  depends_on = [
+    kubernetes_manifest.external_secret,
+    module.qbittorrent,
+  ]
+}
 module "flaresolverr" {
   source          = "./flaresolverr"
   tls_secret_name = var.tls_secret_name

@@ -0,0 +1,163 @@
"""
MAM bonus-point spender: tier-aware, pay-what-we-owe.

MAM's bonusBuy.php API enforces a hard 50 GiB minimum per purchase
("Automated spenders are limited to buying at least 50 GB... due to log
spam"). Valid API tiers are 50, 100, 200, 500 GiB (@ 500 BP/GiB). That
means the "pay exactly what we owe" approach from the recovery plan
rounds UP to 50 GiB for the first purchase; small buys can only be done
via the web UI, not the API.

Logic: pick the smallest valid tier that both (a) satisfies the ratio
deficit and (b) we can afford without burning the BP reserve. Skip if
nothing fits; the cron will retry in 6 h once BP grows.
"""
import math
import os
import sys
import tempfile
import time

import requests

PUSHGW = "http://prometheus-prometheus-pushgateway.monitoring:9091"
COOKIE_FILE = "/data/mam_id"
TARGET_RATIO = float(os.environ.get("TARGET_RATIO", "2.0"))
RESERVE_BP = int(os.environ.get("RESERVE_BP", "500"))
BP_PER_GB = int(os.environ.get("BP_PER_GB", "500"))
# MAM-enforced minimum purchase for API callers: 50 GiB.
API_TIERS_GIB = (50, 100, 200, 500)

CLASS_CODES = {
    "Mouse": 0,
    "Vole": 1,
    "User": 2,
    "Power User": 3,
    "Elite": 4,
    "Torrent Master": 5,
    "Power TM": 6,
    "Elite TM": 7,
    "VIP": 8,
}


def save_cookie(resp):
    for c in resp.cookies:
        if c.name == "mam_id":
            fd, tmp = tempfile.mkstemp(dir="/data")
            os.write(fd, c.value.encode())
            os.close(fd)
            os.rename(tmp, COOKIE_FILE)
            return


def push(metrics):
    try:
        requests.post(
            f"{PUSHGW}/metrics/job/mam-bp-spender", data=metrics, timeout=10
        )
    except Exception as e:
        print(f"pushgateway error: {e}", file=sys.stderr)


def load_cookie():
    if os.path.exists(COOKIE_FILE):
        return open(COOKIE_FILE).read().strip()
    return os.environ.get("MAM_ID", "")


def main():
    mam_id = load_cookie()
    if not mam_id:
        print("No mam_id available", file=sys.stderr)
        sys.exit(1)

    s = requests.Session()
    s.cookies.set("mam_id", mam_id, domain=".myanonamouse.net")
    r = s.get("https://www.myanonamouse.net/jsonLoad.php", timeout=15)
    if r.status_code != 200:
        push("mam_farming_cookie_expired 1\n")
        print(f"Cookie expired: {r.status_code}", file=sys.stderr)
        sys.exit(1)
    save_cookie(r)

    profile = r.json()
    ratio = float(profile.get("ratio", 0) or 0)
    classname = profile.get("classname", "Mouse")
    class_code = CLASS_CODES.get(classname, 0)
    # MAM returns `downloaded`/`uploaded` as pretty strings ("715.55 MiB");
    # `*_bytes` are the authoritative integer fields.
    downloaded = int(profile.get("downloaded_bytes", 0) or 0)
    uploaded = int(profile.get("uploaded_bytes", 0) or 0)
    bp = int(float(profile.get("seedbonus", 0) or 0))

    deficit_bytes = max(0, int(downloaded * TARGET_RATIO) - uploaded)
    needed_gib = math.ceil(deficit_bytes / (1024**3)) + 1 if deficit_bytes > 0 else 0
    affordable_gib = max(0, (bp - RESERVE_BP) // BP_PER_GB)

    # Pick the smallest API tier that satisfies the deficit AND fits the
    # budget. If even the smallest tier is too expensive, skip — the cron
    # will retry in 6 h once BP has grown.
    buy_gib = 0
    for tier in API_TIERS_GIB:
        if tier >= needed_gib and tier <= affordable_gib:
            buy_gib = tier
            break
    if buy_gib == 0 and needed_gib > 0 and affordable_gib >= API_TIERS_GIB[0]:
        # Deficit exceeds all tiers we can afford — buy the largest
        # tier that fits to make progress.
        for tier in reversed(API_TIERS_GIB):
            if tier <= affordable_gib:
                buy_gib = tier
                break

    print(
        f"Profile: ratio={ratio} class={classname} "
        f"DL={downloaded / 1024**3:.2f} GiB UL={uploaded / 1024**3:.2f} GiB "
        f"BP={bp} | deficit={deficit_bytes / 1024**3:.2f} GiB "
        f"needed={needed_gib} affordable={affordable_gib} buy={buy_gib}"
    )

    spent_gib = 0
    if buy_gib >= API_TIERS_GIB[0]:
        time.sleep(3)
        url = (
            "https://www.myanonamouse.net/json/bonusBuy.php"
            f"?spendtype=upload&amount={buy_gib}"
        )
        r2 = s.get(url, timeout=15)
        save_cookie(r2)
        try:
            body = r2.json()
        except ValueError:
            body = {}
        ok = r2.status_code == 200 and body.get("success") is True
        print(
            f"Buy {buy_gib} GiB -> {r2.status_code} "
            f"success={body.get('success')} {r2.text[:160]}"
        )
        if ok:
            spent_gib = buy_gib

    metrics = (
        "mam_farming_cookie_expired 0\n"
        f"mam_ratio {ratio}\n"
        f'mam_class_code{{classname="{classname}"}} {class_code}\n'
        f"mam_downloaded_bytes {downloaded}\n"
        f"mam_uploaded_bytes {uploaded}\n"
        f"mam_bp_balance {bp}\n"
        f"mam_bp_spent_gb {spent_gib}\n"
        f"mam_bp_needed_gib {needed_gib}\n"
        f"mam_bp_affordable_gib {affordable_gib}\n"
    )
    push(metrics)
    print(
        f"Done: BP={bp}, spent={spent_gib} GiB (needed={needed_gib}, "
        f"affordable={affordable_gib})"
    )


if __name__ == "__main__":
    main()


@@ -0,0 +1,264 @@
"""
MAM freeleech grabber: demand-first, ratio-guarded.

Selects small-but-popular freeleech titles to grow the account's upload
credit. Refuses to grab while the account is in Mouse class or ratio is
below 1.2, because MAM rejects peer-list announces under those conditions
and new grabs only deepen the ratio hole.

Cleanup is handled by `mam-farming-janitor.py`, which runs unconditionally.
"""
import json
import math
import os
import random
import sys
import tempfile
import time

import requests

QB_URL = "http://qbittorrent.servarr.svc.cluster.local"
PUSHGW = "http://prometheus-prometheus-pushgateway.monitoring:9091"
COOKIE_FILE = "/data/mam_id"
GRABBED_IDS_FILE = "/data/grabbed_ids.txt"

MIN_MB = int(os.environ.get("MIN_MB", "50"))
MAX_MB = int(os.environ.get("MAX_MB", "1024"))
LEECHER_FLOOR = int(os.environ.get("LEECHER_FLOOR", "1"))
SEEDER_CEILING = int(os.environ.get("SEEDER_CEILING", "50"))
GRAB_PER_RUN = int(os.environ.get("GRAB_PER_RUN", "5"))
MAX_TORRENTS = int(os.environ.get("MAX_TORRENTS", "500"))
RATIO_FLOOR = float(os.environ.get("RATIO_FLOOR", "1.2"))
REQUEST_SLEEP = float(os.environ.get("REQUEST_SLEEP", "3"))

CLASS_CODES = {
    "Mouse": 0,
    "Vole": 1,
    "User": 2,
    "Power User": 3,
    "Elite": 4,
    "Torrent Master": 5,
    "Power TM": 6,
    "Elite TM": 7,
    "VIP": 8,
}


def parse_size(s):
    units = {"B": 1, "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4}
    parts = s.split()
    if len(parts) != 2:
        return 0
    return int(float(parts[0]) * units.get(parts[1], 1))


def save_cookie(resp):
    for c in resp.cookies:
        if c.name == "mam_id":
            fd, tmp = tempfile.mkstemp(dir="/data")
            os.write(fd, c.value.encode())
            os.close(fd)
            os.rename(tmp, COOKIE_FILE)
            return


def push(metrics):
    try:
        requests.post(
            f"{PUSHGW}/metrics/job/mam-freeleech-grabber", data=metrics, timeout=10
        )
    except Exception as e:
        print(f"pushgateway error: {e}", file=sys.stderr)


def load_cookie():
    if os.path.exists(COOKIE_FILE):
        return open(COOKIE_FILE).read().strip()
    return os.environ.get("MAM_ID", "")


def exit_cookie_expired(status):
    push("mam_farming_cookie_expired 1\n")
    print(f"Cookie expired: {status}", file=sys.stderr)
    sys.exit(1)


def main():
    mam_id = load_cookie()
    if not mam_id:
        print("No mam_id available", file=sys.stderr)
        sys.exit(1)

    s = requests.Session()
    s.cookies.set("mam_id", mam_id, domain=".myanonamouse.net")
    r = s.get("https://www.myanonamouse.net/jsonLoad.php", timeout=15)
    if r.status_code != 200:
        exit_cookie_expired(r.status_code)
    save_cookie(r)

    profile = r.json()
    ratio = float(profile.get("ratio", 0) or 0)
    classname = profile.get("classname", "Mouse")
    # `*_bytes` are authoritative integers; `downloaded`/`uploaded` are
    # pretty strings like "715.55 MiB".
    downloaded = int(profile.get("downloaded_bytes", 0) or 0)
    uploaded = int(profile.get("uploaded_bytes", 0) or 0)
    class_code = CLASS_CODES.get(classname, 0)

    profile_metrics = (
        f"mam_farming_cookie_expired 0\n"
        f"mam_ratio {ratio}\n"
        f'mam_class_code{{classname="{classname}"}} {class_code}\n'
        f"mam_downloaded_bytes {downloaded}\n"
        f"mam_uploaded_bytes {uploaded}\n"
    )

    if ratio < RATIO_FLOOR or classname == "Mouse":
        reason = "mouse_class" if classname == "Mouse" else "low_ratio"
        print(
            f"Skip grab: ratio={ratio} class={classname} (floor={RATIO_FLOOR}) "
            f"reason={reason}"
        )
        push(
            profile_metrics
            + f'mam_grabber_skipped_reason{{reason="{reason}"}} 1\n'
            + f"mam_farming_grabbed 0\n"
        )
        return

    time.sleep(REQUEST_SLEEP)
    r = s.get("https://t.myanonamouse.net/json/dynamicSeedbox.php", timeout=15)
    save_cookie(r)
    print(f"Seedbox: {r.text[:80]}")

    grabbed_ids = set()
    if os.path.exists(GRABBED_IDS_FILE):
        raw = open(GRABBED_IDS_FILE).read().strip()
        grabbed_ids = set(raw.split("\n")) if raw else set()

    try:
        all_torrents = requests.get(
            f"{QB_URL}/api/v2/torrents/info", timeout=10
        ).json()
    except Exception as e:
        print(f"qBittorrent unreachable: {e}", file=sys.stderr)
        push(profile_metrics + "mam_farming_grabbed 0\n")
        sys.exit(1)

    farming = [t for t in all_torrents if t.get("category") == "mam-farming"]
    all_names_lower = {t["name"].lower() for t in all_torrents}
    total_size = sum(t.get("size", 0) for t in farming)
    print(
        f"Profile: ratio={ratio} class={classname} | "
        f"Farming: {len(farming)}, {total_size / (1024**3):.1f} GiB, "
        f"tracked IDs: {len(grabbed_ids)}"
    )

    grabbed = 0
    if len(farming) >= MAX_TORRENTS:
        print(f"At max torrents ({MAX_TORRENTS}), skipping grab")
    else:
        time.sleep(REQUEST_SLEEP)
        offset = random.randint(0, 1400)
        params = {
            "tor[searchType]": "fl",
            "tor[searchIn]": "torrents",
            "tor[perpage]": "50",
            "tor[startNumber]": str(offset),
        }
        r = s.get(
            "https://www.myanonamouse.net/tor/js/loadSearchJSONbasic.php",
            params=params,
            timeout=15,
        )
        save_cookie(r)
        data = r.json()
        results = data.get("data", []) or []
        print(
            f"Search offset={offset}, found={data.get('found', 0)}, "
            f"page_results={len(results)}"
        )

        candidates = []
        for t in results:
            tid = str(t.get("id", ""))
            if tid in grabbed_ids:
                continue
            title = t.get("title", "")
            if any(title.lower() in n for n in all_names_lower):
                grabbed_ids.add(tid)
                continue
            size = parse_size(t.get("size", "0 B"))
            if size < MIN_MB * 1024**2 or size > MAX_MB * 1024**2:
                continue
            seeders = int(t.get("seeders", 999) or 999)
            leechers = int(t.get("leechers", 0) or 0)
            if leechers < LEECHER_FLOOR:
                continue
            if seeders > SEEDER_CEILING:
                continue
            wedge_bonus = (
                200 if (t.get("free") == 1 or t.get("personal_freeleech") == 1) else 0
            )
            score = leechers * 3 - seeders * 0.5 + wedge_bonus
            candidates.append((score, t))
        candidates.sort(key=lambda x: -x[0])

        for score, t in candidates[:GRAB_PER_RUN]:
            time.sleep(REQUEST_SLEEP)
            tid = t["id"]
            r = s.get(
                f"https://www.myanonamouse.net/tor/download.php?tid={tid}", timeout=15
            )
            save_cookie(r)
            if not r.content.startswith(b"d"):
                print(f"Bad torrent body for tid={tid}")
                grabbed_ids.add(str(tid))
                continue
            add_resp = requests.post(
                f"{QB_URL}/api/v2/torrents/add",
                files={
                    "torrents": (
                        f"{tid}.torrent",
                        r.content,
                        "application/x-bittorrent",
                    )
                },
                data={
                    "savepath": "/downloads/mam-farming",
                    "category": "mam-farming",
                    "tags": "mam,freeleech",
                },
                timeout=20,
            )
            ok = add_resp.status_code == 200 and add_resp.text.strip() != "Fails."
            print(
                f"{'Added' if ok else 'FAILED'} (score={score:.1f}): "
                f"{t['title'][:60]} ({t['size']}, S:{t.get('seeders')} "
                f"L:{t.get('leechers')}) -> {add_resp.status_code}"
            )
            grabbed_ids.add(str(tid))
            if ok:
                grabbed += 1

    fd, tmp = tempfile.mkstemp(dir="/data")
    os.write(fd, "\n".join(grabbed_ids).encode())
    os.close(fd)
    os.rename(tmp, GRABBED_IDS_FILE)

    metrics = (
        profile_metrics
        + f"mam_farming_grabbed {grabbed}\n"
        + f"mam_farming_total_seeding {len(farming) + grabbed}\n"
        + f"mam_farming_size_bytes {total_size}\n"
    )
    push(metrics)
    print(f"Done: grabbed={grabbed}")


if __name__ == "__main__":
    main()


@ -0,0 +1,177 @@
"""
MAM farming janitor H&R-aware cleanup.
Runs every 15 minutes independently of the grabber's ratio guard: stuck
torrents accumulate fastest precisely when the grabber is skipping. Never
deletes a torrent that's inside MAM's 72-hour Hit-and-Run window.
Set DRY_RUN=1 to log candidates without deleting (used for the first
24 hours after rollout to sanity-check the rules against live state).
"""
import json
import os
import sys
import time
import requests
QB_URL = "http://qbittorrent.servarr.svc.cluster.local"
PUSHGW = "http://prometheus-prometheus-pushgateway.monitoring:9091"
DRY_RUN = os.environ.get("DRY_RUN", "0") == "1"
HNR_SEED_SECONDS = int(os.environ.get("HNR_SEED_SECONDS", str(72 * 3600)))
NEVER_STARTED_AGE = int(os.environ.get("NEVER_STARTED_AGE", str(24 * 3600)))
STALLED_AGE = int(os.environ.get("STALLED_AGE", str(3 * 86400)))
SATISFIED_SEED_AGE = int(os.environ.get("SATISFIED_SEED_AGE", str(3 * 86400)))
SATISFIED_SEEDER_FLOOR = int(os.environ.get("SATISFIED_SEEDER_FLOOR", "5"))
GRACEFUL_SEED_AGE = int(os.environ.get("GRACEFUL_SEED_AGE", str(14 * 86400)))
ZERO_DEMAND_AGE = int(os.environ.get("ZERO_DEMAND_AGE", str(7 * 86400)))
UNREG_KEYWORDS = ("unregistered", "torrent not found", "info hash not authorized")
REASONS = (
"never_started",
"stalled_old",
"satisfied_redundant",
"graceful_retire",
"zero_demand",
"unregistered",
)
def classify(t, now, tracker_msg):
age = now - int(t.get("added_on", 0) or 0)
progress = float(t.get("progress", 0) or 0)
downloaded = int(t.get("downloaded", 0) or 0)
uploaded = int(t.get("uploaded", 0) or 0)
seed_time = int(t.get("seeding_time", 0) or 0)
state = t.get("state", "")
num_complete = int(t.get("num_complete", 0) or 0)
if tracker_msg and any(k in tracker_msg.lower() for k in UNREG_KEYWORDS):
return "unregistered"
if progress < 1.0:
if age > NEVER_STARTED_AGE and downloaded == 0:
return "never_started"
if state == "stalledDL" and age > STALLED_AGE:
return "stalled_old"
return None
if seed_time < HNR_SEED_SECONDS:
return "hnr_window"
if seed_time > GRACEFUL_SEED_AGE:
return "graceful_retire"
if (
seed_time >= HNR_SEED_SECONDS
and uploaded == 0
and age > ZERO_DEMAND_AGE
):
return "zero_demand"
if seed_time > SATISFIED_SEED_AGE and num_complete > SATISFIED_SEEDER_FLOOR:
return "satisfied_redundant"
return None
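# Worked examples of the rule order above (defaults: H&R window 72h,
# graceful retire 14d, zero-demand 7d, satisfied floor 5 seeders), for a
# completed torrent:
#   seed_time  2d                                -> "hnr_window" (never deleted)
#   seed_time 15d                                -> "graceful_retire"
#   seed_time  5d, uploaded=0,  age 8d           -> "zero_demand"
#   seed_time  4d, uploaded>0,  num_complete=12  -> "satisfied_redundant"
#   seed_time  4d, uploaded>0,  num_complete=3   -> None (keep seeding)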
def fetch_tracker_msg(hash_):
try:
resp = requests.get(
f"{QB_URL}/api/v2/torrents/trackers",
params={"hash": hash_},
timeout=10,
)
trackers = resp.json() or []
except Exception:
return ""
    for tr in trackers:
        url = tr.get("url", "")
        # qBittorrent reports DHT/PeX/LSD as pseudo-tracker entries whose url
        # looks like "** [DHT] **"; skip them and read only real tracker rows.
        if url.startswith("** ["):
            continue
msg = tr.get("msg", "")
if msg:
return msg
return ""
def push(metrics):
try:
requests.post(
f"{PUSHGW}/metrics/job/mam-farming-janitor", data=metrics, timeout=10
)
except Exception as e:
print(f"pushgateway error: {e}", file=sys.stderr)
def main():
try:
all_torrents = requests.get(
f"{QB_URL}/api/v2/torrents/info", timeout=15
).json()
except Exception as e:
print(f"qBittorrent unreachable: {e}", file=sys.stderr)
sys.exit(1)
farming = [t for t in all_torrents if t.get("category") == "mam-farming"]
now = int(time.time())
deleted = {r: 0 for r in REASONS}
preserved_hnr = 0
skipped_active = 0
delete_hashes = []
    # Only inspect the tracker message on torrents with a peer problem; this
    # avoids hundreds of extra API calls when things are healthy. Trade-off:
    # the unregistered rule can only fire before completion, since completed
    # torrents never have their tracker message fetched.
for t in farming:
state = t.get("state", "")
progress = float(t.get("progress", 0) or 0)
tracker_msg = ""
if progress < 1.0 and state in ("stalledDL", "metaDL", "missingFiles"):
tracker_msg = fetch_tracker_msg(t["hash"])
verdict = classify(t, now, tracker_msg)
if verdict is None:
skipped_active += 1
elif verdict == "hnr_window":
preserved_hnr += 1
else:
deleted[verdict] += 1
delete_hashes.append((t["hash"], verdict, t.get("name", "")[:60]))
for hash_, reason, name in delete_hashes:
if DRY_RUN:
print(f"[DRY_RUN] would delete ({reason}): {name}")
continue
try:
requests.post(
f"{QB_URL}/api/v2/torrents/delete",
data={"hashes": hash_, "deleteFiles": "true"},
timeout=20,
)
print(f"Deleted ({reason}): {name}")
except Exception as e:
print(f"Delete failed for {name}: {e}", file=sys.stderr)
    # Build all per-reason series and push them in one request: a pushgateway
    # POST replaces every existing metric with the same name in the job group,
    # so pushing inside the loop would keep only the last reason's series.
    reason_metrics = ""
    for reason in REASONS:
        reason_metrics += (
            f'mam_janitor_deleted_per_run{{reason="{reason}"}} '
            f"{deleted[reason] if not DRY_RUN else 0}\n"
            f'mam_janitor_dry_run_candidates{{reason="{reason}"}} '
            f"{deleted[reason] if DRY_RUN else 0}\n"
        )
    push(reason_metrics)
push(
f"mam_janitor_preserved_hnr {preserved_hnr}\n"
f"mam_janitor_skipped_active {skipped_active}\n"
f"mam_janitor_dry_run {1 if DRY_RUN else 0}\n"
f"mam_janitor_last_run_timestamp {now}\n"
)
total = sum(deleted.values())
print(
f"Done: deleted={total} preserved_hnr={preserved_hnr} "
f"skipped_active={skipped_active} dry_run={DRY_RUN}"
)
print(f" per reason: {deleted}")
if __name__ == "__main__":
main()


@@ -0,0 +1,281 @@
variable "namespace" {
type = string
default = "servarr"
}
locals {
python_image = "docker.io/library/python:3.12-alpine"
pip_prefix = "pip install -q requests > /dev/null 2>&1; python3 /tmp/script.py"
data_pvc = "mam-farming-data-proxmox"
# Dry-run window was satisfied by a one-shot test on 2026-04-19 that
# produced 466 `never_started` candidates and 0 matches in any other
# reason bucket, consistent with Phase B's expected 495 stuck torrents.
# Enforcing from here on.
janitor_dry_run = "0"
}
# ------------------------------- PVC -------------------------------
# Shared scratch volume for cookie + grabbed-ID dedup list. The existing
# in-cluster PVC (kubectl-applied 2026-04-14) is adopted via an `import {}`
# block declared in the root module (servarr/main.tf); Terraform 1.5+
# rejects imports inside child modules.
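# For reference, the root-module block looks roughly like this (the module
# address is illustrative; the id is the provider's namespace/name format):
#
#   import {
#     to = module.mam_farming.kubernetes_persistent_volume_claim.mam_data
#     id = "servarr/mam-farming-data-proxmox"
#   }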
resource "kubernetes_persistent_volume_claim" "mam_data" {
wait_until_bound = false
metadata {
name = local.data_pvc
namespace = var.namespace
annotations = {
"resize.topolvm.io/threshold" = "80%"
"resize.topolvm.io/increase" = "100%"
"resize.topolvm.io/storage_limit" = "5Gi"
}
}
spec {
access_modes = ["ReadWriteOnce"]
storage_class_name = "proxmox-lvm"
resources {
requests = {
storage = "1Gi"
}
}
}
}
# --------------------------- Grabber ---------------------------------
# Every 30 minutes: skip while ratio < 1.2 or class == Mouse; otherwise
# grab up to 5 small-but-popular freeleech torrents. Existing ConfigMap
# + CronJob are adopted via imports in the parent stack.
resource "kubernetes_config_map" "grabber_script" {
metadata {
name = "mam-freeleech-grabber-script"
namespace = var.namespace
}
data = {
"script.py" = file("${path.module}/files/freeleech-grabber.py")
}
}
resource "kubernetes_cron_job_v1" "grabber" {
metadata {
name = "mam-freeleech-grabber"
namespace = var.namespace
}
spec {
schedule = "*/30 * * * *"
concurrency_policy = "Forbid"
successful_jobs_history_limit = 3
failed_jobs_history_limit = 3
job_template {
metadata {}
spec {
backoff_limit = 2
ttl_seconds_after_finished = 300
template {
metadata {}
spec {
restart_policy = "Never"
container {
name = "freeleech-grabber"
image = local.python_image
command = ["/bin/sh", "-c", local.pip_prefix]
env {
name = "MAM_ID"
value_from {
secret_key_ref {
name = "servarr-secrets"
key = "mam_id"
}
}
}
resources {
requests = { memory = "64Mi", cpu = "10m" }
limits = { memory = "128Mi" }
}
volume_mount {
name = "script"
mount_path = "/tmp/script.py"
sub_path = "script.py"
}
volume_mount {
name = "data"
mount_path = "/data"
}
}
volume {
name = "script"
config_map {
name = kubernetes_config_map.grabber_script.metadata[0].name
}
}
volume {
name = "data"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim.mam_data.metadata[0].name
}
}
}
}
}
}
}
lifecycle {
# KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
ignore_changes = [spec[0].job_template[0].spec[0].template[0].spec[0].dns_config]
}
}
# --------------------------- BP Spender ------------------------------
# Every 6 hours: compute the upload deficit against TARGET_RATIO and buy
# exactly what we need (+1 GiB margin), capped by BP reserve. Existing
# ConfigMap + CronJob are adopted via imports in the parent stack.
resource "kubernetes_config_map" "bp_spender_script" {
metadata {
name = "mam-bp-spender-script"
namespace = var.namespace
}
data = {
"script.py" = file("${path.module}/files/bp-spender.py")
}
}
resource "kubernetes_cron_job_v1" "bp_spender" {
metadata {
name = "mam-bp-spender"
namespace = var.namespace
}
spec {
schedule = "0 */6 * * *"
concurrency_policy = "Forbid"
successful_jobs_history_limit = 3
failed_jobs_history_limit = 3
job_template {
metadata {}
spec {
backoff_limit = 2
ttl_seconds_after_finished = 300
template {
metadata {}
spec {
restart_policy = "Never"
container {
name = "bp-spender"
image = local.python_image
command = ["/bin/sh", "-c", local.pip_prefix]
env {
name = "MAM_ID"
value_from {
secret_key_ref {
name = "servarr-secrets"
key = "mam_id"
}
}
}
resources {
requests = { memory = "64Mi", cpu = "10m" }
limits = { memory = "128Mi" }
}
volume_mount {
name = "script"
mount_path = "/tmp/script.py"
sub_path = "script.py"
}
volume_mount {
name = "data"
mount_path = "/data"
}
}
volume {
name = "script"
config_map {
name = kubernetes_config_map.bp_spender_script.metadata[0].name
}
}
volume {
name = "data"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim.mam_data.metadata[0].name
}
}
}
}
}
}
}
lifecycle {
# KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
ignore_changes = [spec[0].job_template[0].spec[0].template[0].spec[0].dns_config]
}
}
# ----------------------------- Janitor -------------------------------
# New: every 15 minutes, independent of grabber ratio guard. Deletes
# stuck/unregistered/redundant torrents in category=mam-farming while
# preserving torrents inside the 72h H&R window.
resource "kubernetes_config_map" "janitor_script" {
metadata {
name = "mam-farming-janitor-script"
namespace = var.namespace
}
data = {
"script.py" = file("${path.module}/files/mam-farming-janitor.py")
}
}
resource "kubernetes_cron_job_v1" "janitor" {
metadata {
name = "mam-farming-janitor"
namespace = var.namespace
}
spec {
schedule = "*/15 * * * *"
concurrency_policy = "Forbid"
successful_jobs_history_limit = 3
failed_jobs_history_limit = 3
job_template {
metadata {}
spec {
backoff_limit = 2
ttl_seconds_after_finished = 300
template {
metadata {}
spec {
restart_policy = "Never"
container {
name = "farming-janitor"
image = local.python_image
command = ["/bin/sh", "-c", local.pip_prefix]
env {
name = "DRY_RUN"
value = local.janitor_dry_run
}
resources {
requests = { memory = "64Mi", cpu = "10m" }
limits = { memory = "128Mi" }
}
volume_mount {
name = "script"
mount_path = "/tmp/script.py"
sub_path = "script.py"
}
}
volume {
name = "script"
config_map {
name = kubernetes_config_map.janitor_script.metadata[0].name
}
}
}
}
}
}
}
lifecycle {
# KYVERNO_LIFECYCLE_V1: Kyverno admission webhook mutates dns_config with ndots=2
ignore_changes = [spec[0].job_template[0].spec[0].template[0].spec[0].dns_config]
}
}


@@ -113,6 +113,15 @@ resource "kubernetes_deployment" "qbittorrent" {
name = "audiobooks"
mount_path = "/audiobooks"
}
+            resources {
+              requests = {
+                memory = "512Mi"
+                cpu    = "50m"
+              }
+              limits = {
+                memory = "1Gi"
+              }
+            }
}
volume {
name = "data"
@@ -289,21 +298,26 @@ tracker_stats = defaultdict(lambda: {
})
for t in torrents:
category = (t.get("category") or "").lower()
tracker_url = t.get("tracker", "")
-        if not tracker_url:
-            domain = "unknown"
-        else:
+        domain = ""
+        if tracker_url:
            try:
-                domain = urlparse(tracker_url).hostname or "unknown"
+                domain = (urlparse(tracker_url).hostname or "").lower()
            except Exception:
-                domain = "unknown"
+                domain = ""
-        if "myanonamouse" in domain or "mam" in domain.lower():
+        # Category is the only signal for queuedDL torrents whose announces
+        # haven't happened yet (tracker field is empty). Map those first so
+        # hundreds of MAM torrents don't collect under "unknown".
+        if category == "mam-farming" or "myanonamouse" in domain or "mam" in domain:
            label = "mam"
-        elif "audiobookbay" in domain or "abb" in domain.lower():
+        elif category.startswith("abb") or "audiobookbay" in domain or "abb" in domain:
            label = "audiobookbay"
-        else:
+        elif domain:
            label = domain.replace(".", "_")
+        else:
+            label = "unknown"
s = tracker_stats[label]
s["uploaded"] += t.get("uploaded", 0)