Commit graph

813 commits

Author SHA1 Message Date
root
a5a54aebe3 Woodpecker CI deploy [CI SKIP] 2026-05-10 11:12:38 +00:00
Viktor Barzin
72013a0890 n8n: real-time training loop + decoupled posting
instagram-approval: after every tap, immediately fetch /candidates?limit=1
and send the next photo as a fresh inline-keyboard message — the user's
tap chains back into this same workflow, so the loop is user-paced.
When the pool is exhausted, send an 'all caught up' summary with the
backlog count + cumulative training stats.

instagram-discover: cron throttled from every-30-min to daily 09:00.
The chain handles ongoing training; the daily run only kickstarts a
session if the user hasn't been tapping. Limit reduced from 3 → 1 so
each kickstart sends a single photo (chain takes over).
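The per-tap chain step can be sketched as a small payload builder (a sketch only — field names beyond the /candidates endpoint mentioned above, such as `image_url` and `asset_id`, are hypothetical):

```python
def next_photo_message(candidate: dict, chat_id: str) -> dict:
    """After each tap, fetch /candidates?limit=1 and send the next photo
    as a fresh inline-keyboard message; the tap on that message chains
    back into the same workflow, so the loop is user-paced."""
    return {
        "chat_id": chat_id,
        "photo": candidate["image_url"],  # hypothetical field name
        "reply_markup": {
            "inline_keyboard": [[
                {"text": "Approve",
                 "callback_data": f"approve:{candidate['asset_id']}"},
                {"text": "Reject",
                 "callback_data": f"reject:{candidate['asset_id']}"},
            ]]
        },
    }
```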
2026-05-10 11:12:38 +00:00
Viktor Barzin
ff2f32a33e ig-poster b17a9737 + n8n discover rewritten to use /candidates with CLIP scoring 2026-05-10 11:12:38 +00:00
Viktor Barzin
94e2f34e2a ig-poster: bump to 3b862fe4 (EXIF orientation + auto-pending /candidates) 2026-05-10 11:12:38 +00:00
Viktor Barzin
29bb434e1e ig-poster: 69e395f2 + sync IMMICH_PG_* via ESO for CLIP scoring; postiz publish-notify n8n workflow 2026-05-10 11:12:38 +00:00
Viktor Barzin
cb83972b79 ig-poster: bump to cac6fa97 + sync POSTIZ_INTEGRATION_ID via ESO 2026-05-10 11:12:37 +00:00
Viktor Barzin
40ca011bd6 postiz: expose /uploads publicly so Meta IG fetcher can pull JPEGs
Stories+feed posts via Postiz failed with state=ERROR and Postiz
mistranslated the cause as 'Invalid Instagram image resolution
max: 1920x1080px'. Real cause: Postiz hands Meta an upload URL
under https://postiz.viktorbarzin.me/uploads/... and Meta gets a
302 to the Authentik login page instead of bytes. Meta returns
error 36001 (image not fetchable) which Postiz maps to that
misleading resolution string.

Split the ingress: /uploads/* on a public ingress (matches the
instagram-poster /image+/original pattern), everything else
remains behind Authentik forward-auth. /uploads contents are
random UUIDs, low blast radius if scraped.
2026-05-10 11:12:37 +00:00
Viktor Barzin
ce9bf5b676 postiz: wire INSTAGRAM_APP_ID/SECRET via ESO for IG-standalone provider
Standalone provider (instagram-standalone OAuth flow) is what the user
is trying after the FB-Login path was blocked by their Business Account
ad-policy flag. Uses modern scope names (instagram_business_*), so no
JS patch needed unlike the FB-Login provider.
2026-05-10 11:12:37 +00:00
Viktor Barzin
9c1df3ad96 chore: remove decommissioned registry.viktorbarzin.me ingress
The old port-5050 R/W private registry was decommissioned on 2026-05-07
(forgejo-registry-consolidation Phase 4). The reverse-proxy ingress
+ ExternalName service + Cloudflare DNS record kept pointing at the
dead backend, returning 502 to anyone hitting registry.viktorbarzin.me.

This was driving 3 monitoring artifacts, all cleared as part of the cleanup:
- Uptime Kuma external monitor #586 (deleted)
- Pushgateway stale registry-integrity-probe metrics (deleted)
- ExternalAccessDivergence + RegistryIntegrityProbeStale alerts
2026-05-10 11:12:37 +00:00
Viktor Barzin
8c09543391 fix: restore pvc-autoresizer by allow-listing kubelet_volume_stats_available_bytes
The Prometheus scrape config for the kubernetes-nodes job kept
capacity_bytes + used_bytes but dropped available_bytes. pvc-autoresizer
computes utilization from available/capacity, so without that metric it
was silent for every PVC in the cluster — including mailserver, which
filled to 89% (1.7G/2.0G) and started rejecting all inbound mail with
'452 4.3.1 Insufficient system storage' (15+ hours, all real senders:
Brevo, Gmail, Facebook).

Also bumps the floors of mailserver (2Gi -> 5Gi, limit 10Gi) and forgejo
(15Gi -> 30Gi) PVCs to recover from the immediate outage, and adds
ignore_changes on requests.storage so future autoresizer expansions
don't cause TF drift.
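The failure mode reduces to one ratio; a minimal sketch (threshold value illustrative, not pvc-autoresizer's actual code) shows why dropping available_bytes silences the resizer entirely:

```python
def needs_resize(available_bytes, capacity_bytes, threshold=0.10):
    """pvc-autoresizer computes utilization from available/capacity.
    If available_bytes is dropped by the scrape config, there is
    nothing to compare — the resizer is silently inert."""
    if available_bytes is None:  # metric relabel-dropped upstream
        return False
    return available_bytes / capacity_bytes < threshold
```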
2026-05-10 11:12:37 +00:00
Viktor Barzin
c44d855960 ig-poster: pivot to Telegram-only delivery (manual IG upload)
User dropped Postiz/Instagram OAuth (Meta Business Account flagged
+ Postiz scope drift). New pipeline ends at Telegram — full-quality
JPEG delivered to the bot chat, manually uploaded to IG by the user.

- Image bumped to 25e46efd: adds /deliver/{asset_id} endpoint that
  multipart-uploads to Telegram (URL-fetch fails through Cloudflare
  for >5MB), then tags 'posted' in Immich.
- ESO now syncs telegram_bot_token + telegram_chat_id from Vault.
- Public ingress paths grow to ['/image', '/original'] (Authentik
  bypass on /original is harmless — files are user-tagged, low blast
  radius — and useful for ad-hoc browser downloads).
- Memory limit 512Mi -> 1500Mi: full-resolution Pillow HEIC decode
  was OOMing on 12MP+ phone photos.
- discover.json simplified to scan -> deliver per item; approval and
  post workflows already deactivated. Telegram bot webhook removed.
2026-05-10 11:12:37 +00:00
Viktor Barzin
bd8dbbc76f postiz: wire FACEBOOK_APP_ID/SECRET via ESO for IG-Business integration 2026-05-10 11:12:37 +00:00
Viktor Barzin
02e28294e9 postiz: idempotent Job to drop default Text search attributes (Temporal SQL visibility caps at 3 Text attrs; auto-setup ships with 2, Postiz adds 2 more — gitroomhq/postiz-app#1504) 2026-05-10 11:12:37 +00:00
Viktor Barzin
16e408ee59 postiz: bump memory limit to 4Gi (was OOMing during NestJS startup) 2026-05-10 11:12:37 +00:00
Viktor Barzin
888df84fb5 postiz: add Temporal sidecar; lock both stacks behind Authentik
Postiz backend was crashlooping on connect ECONNREFUSED ::1:7233 —
Postiz needs Temporal for cron/scheduled posts and the Helm chart
doesn't bundle it. Added a single-replica temporalio/auto-setup:1.28.1
Deployment in the postiz namespace, backed by the bundled
postiz-postgresql (separate `temporal` + `temporal_visibility`
databases pre-created via init container), ENABLE_ES=false (Postiz
only uses the workflow engine, not visibility search). Skips
DYNAMIC_CONFIG_FILE_PATH because that file isn't bundled in
auto-setup.

Auth audit:
- postiz: ingress now `protected = true` (Authentik forward-auth).
  Postiz also has its own login on top, but registration is no
  longer exposed to the open internet.
- instagram-poster: split into two ingresses on the same host.
  `/image/*` stays public (Meta + Telegram fetch the 9:16
  derivatives). Everything else (/healthz, /queue, /scan,
  /enqueue, /reject, /post-next) sits behind Authentik. The
  protected ingress sets dns_type=none — the public one already
  created the CF DNS record.
2026-05-10 11:12:37 +00:00
Viktor Barzin
c6939c3d53 postiz + n8n: real DB URL + webhook-trigger approval
- postiz: set DATABASE_URL/REDIS_URL pointing at the bundled subcharts;
  the chart does NOT auto-wire even when postgresql.enabled=true, so
  the prisma db:push was failing with empty DATABASE_URL.
- n8n approval workflow: swap telegramTrigger -> webhook node so it
  works without an n8n-stored Telegram credential. Telegram bot's
  webhook is set via setWebhook to https://n8n.viktorbarzin.me/webhook/instagram-approval.
  Parse-callback Code node tolerates both shapes ({body:{callback_query:...}}
  vs {callback_query:...}) so a future move back to telegramTrigger doesn't break.
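The shape-tolerant parse amounts to one fallback lookup; a sketch of the Code-node logic (function name illustrative):

```python
def parse_callback(payload: dict):
    """Tolerate both delivery shapes: the webhook node wraps the Telegram
    update as {'body': {'callback_query': ...}}, while telegramTrigger
    delivers {'callback_query': ...} at the top level."""
    update = payload.get("body", payload)  # unwrap if webhook-shaped
    return update.get("callback_query")
```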
2026-05-10 11:12:37 +00:00
Viktor Barzin
5057341d09 postiz + instagram-poster: deploy fixes after first apply
- postiz: pin chart name to 'postiz-app' (was 'postiz', wrong path)
  and override bundled bitnami subchart images to bitnamilegacy/* —
  Bitnami removed bitnami/postgresql + bitnami/redis from DockerHub
  in Aug 2025 (Broadcom acquisition).
- postiz: enable initial registration (DISABLE_REGISTRATION=false)
  so first admin user can be created in UI; tighten after.
- instagram-poster: add securityContext (fsGroup/runAsUser=10001)
  so kubelet chowns the PVC mount for the non-root 'poster' user;
  was crashing on alembic with 'unable to open database file'.
- instagram-poster: bump image_tag to 24935ab4 (uvicorn now binds
  to port 8000 to match Service contract; was 8080 -> probe 404).
2026-05-10 11:12:37 +00:00
Viktor Barzin
2d1dfa49f6 instagram-poster: pin image tag to 23f8b4ed (initial push) 2026-05-10 11:12:37 +00:00
Viktor Barzin
73eb01f994 add postiz + instagram-poster stacks for IG Stories pipeline
New stacks:
- stacks/postiz/ — Postiz scheduler (Helm chart v1.0.5, image v2.21.7)
  with bundled PG/Redis, /uploads PVC on proxmox-lvm, JWT_SECRET
  via ESO from secret/instagram-poster.
- stacks/instagram-poster/ — custom Python service that polls Immich
  for the 'instagram' tag, reformats photos to 9:16 with blurred-bg
  letterbox, exposes /image/<asset_id> publicly so Postiz can fetch.
  Image: forgejo.viktorbarzin.me/viktor/instagram-poster.

n8n: 3 new workflows (discover, approval, post) for the Telegram
inline-button approval UX. Adds ExternalSecret + env vars for
TELEGRAM_BOT_TOKEN, TELEGRAM_CHAT_ID, IMMICH_API_KEY, plus static
URLs for the new service.

Vault: seed secret/instagram-poster with telegram_bot_token,
telegram_chat_id, immich_api_key, postiz_api_token,
postiz_jwt_secret before applying.
2026-05-10 11:12:37 +00:00
Viktor Barzin
badc341669 openclaw: regenerate kubeconfig at pod start using projected SA tokenFile
The previously-baked kubeconfig at /home/node/.openclaw/kubeconfig retained
a service-account token bound to the original (long-dead) pod, so kubectl
calls from inside the openclaw container failed with "the server has asked
for the client to provide credentials" even though the openclaw SA has
cluster-admin and kubelet projects a fresh token at
/var/run/secrets/kubernetes.io/serviceaccount/token.

Add init-container "setup-kubeconfig" that writes a kubeconfig with
tokenFile + certificate-authority paths pointing at the projected
SA volume — kubelet auto-rotates the token, kubectl always reads
fresh creds, no Vault K8s-creds-engine refresh needed.
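A kubeconfig of the shape the init container writes might look like this (the volume paths are the standard projected-SA locations; cluster/user/context names are illustrative):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: in-cluster
  cluster:
    server: https://kubernetes.default.svc
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
users:
- name: sa
  user:
    # tokenFile is re-read on each request, so kubelet's token rotation
    # is picked up automatically — no baked-in credential to go stale
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
contexts:
- name: in-cluster
  context:
    cluster: in-cluster
    user: sa
current-context: in-cluster
```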

Verified end-to-end: agent ran `kubectl get nodes -o wide` inside the
pod and delivered a correct one-line summary to Telegram via
openai-codex/gpt-5.4-mini.
2026-05-10 11:12:37 +00:00
Viktor Barzin
a39893bb60 [woodpecker] Re-fix null_resource trigger after lint reverted it
The helm provider in this Terraform version doesn't support
list-index access on helm_release.metadata[0]. Switch the
woodpecker_server_host_alias trigger to {helm_version, sha256(values)}
which works regardless of provider quirks. (Original fix landed
2026-05-07; got reverted by a linter pass.)
2026-05-10 11:12:36 +00:00
Viktor Barzin
564c64f4c7 f1-stream: register HmembedsExtractor in registry
Companion commit to 92474254 — the new extractor wasn't being
registered, only the file was added. Add the import + register call
in create_registry().

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 11:12:36 +00:00
Viktor Barzin
18604d808e f1-stream: hmembeds offline decoder — reverse-engineered the JW Player trap
Four-agent parallel investigation finally pinned down what's happening
with the hmembeds.one streams. The TL;DR is unexpected: there is no
fingerprint check, no decoder failure, no broken JS — the obfuscated
decoder is trivial to reproduce, but the upstream origin is dead.

Findings (saved at /tmp/jwre/{findings.md, blob-analysis.md,
fingerprint-gap.md, trace-summary.md}):

1. **The "ZpQw9XkLmN8c3vR3" blob is a decoy.** It's an Adcash adblock-
   bypass config — not the stream URL. The actual stream URL is in a
   different inline `<script>` block of the embed HTML.

2. **The real decoder is base64 + XOR with a hardcoded key**; the key
   appears literally in the HTML (e.g. `var k="bux7ver6mow4trh1"`).
   No browser-derived inputs. We can run it in Python in 50µs.

3. **The decoded URL is JWT-bound to the /24 of the requester's IP**. JWT
   payload: `{stream, ip:"176.12.22.0/24", session_id, exp}`. From our
   cluster (egress 176.12.22.76) the JWT IP-binding is satisfied.

4. **The origin still returns 404 (GET) / 403 (HEAD).** Tested both
   curated embeds (Sky F1 888520f3..., DAZN F1 fc3a5463...) — same
   404. Origin landing page (`/`) returns 200, so the host is up;
   the `/sec/<JWT>/<embed_id>.m3u8` endpoint specifically refuses.

5. **No fingerprint surface trips this.** Runtime trace via
   chrome-service hooks confirmed: decoder reads navigator.userAgent
   (heavy), screen dimensions, and a single WebGL getParameter call.
   No canvas, audio, fonts, fetch-to-fingerprint-API. JW Player setup
   is given a valid file URL — the playlist stays empty because JW
   can't fetch the manifest from the (dead) origin.
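Finding 2 is small enough to reproduce inline; a sketch of the decoder (the key value is the example quoted from the embed HTML; function name illustrative):

```python
import base64

def decode_embed_url(blob_b64: str, key: str) -> str:
    """base64-decode the inline blob, then XOR with the repeating
    hardcoded key from the HTML — no browser-derived inputs involved."""
    data = base64.b64decode(blob_b64)
    kb = key.encode()
    return bytes(b ^ kb[i % len(kb)] for i, b in enumerate(data)).decode()
```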

Verdict: **the legacy curated hmembeds embeds (`888520f3...` Sky F1,
`fc3a5463...` DAZN F1) are upstream-dead.** No browser-side fix is
possible. The community uses these IDs as "24/7 channels" but they're
in a perpetually-offline state right now.

This commit ships the offline decoder anyway, registered as a new
extractor. Two reasons:
- If those origins come back online, no code change needed.
- Future curated hmembeds IDs (added by hand or discovered via
  subreddit posts) will resolve through the same path.

Files added: `extractors/hmembeds.py` (~120 lines incl. the decoder
and a `decode_embed(html) -> str | None` helper that's reusable).
Registered in `__init__.py`. The existing CuratedExtractor stays
disabled; this replaces its mechanism with one that can absorb new
embed IDs without code changes.

Bonus from the agent work:
- Confirmed our stealth.js is sufficient — the runtime trace showed
  the decoder reads only the surfaces we already cover.
- Identified ~10 fingerprint surfaces we don't spoof (platform,
  userAgentData, hardwareConcurrency, deviceMemory, timezone,
  AudioContext, ICE candidates) but proved they're not what's
  blocking us, so no change needed for now.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-10 11:12:36 +00:00
Viktor Barzin
afd78f8d3e kms: replace inline ConfigMap nginx with custom Hugo image
The kms-web-page deployment now pulls
forgejo.viktorbarzin.me/viktor/kms-website:${var.image_tag} (source
in the new Forgejo repo viktor/kms-website). The ConfigMap-mounted
index.html is gone — the new site is a Hugo build with full GVLK
catalog for every Microsoft KMS-eligible Windows + Office edition,
copy-to-clipboard, dark/light themes.

The container image tag is managed by CI (kubectl set image), so
add lifecycle ignore_changes on container[0].image alongside the
existing dns_config (Kyverno) ignore.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:35 +00:00
Viktor Barzin
4518aff71c f1-stream: Stremio addon extractor — TvVoo + StremVerse Sky F1 / DAZN F1
5 parallel research agents surveyed Stremio addons, F1 TV / Sky / DAZN
official APIs, IPTV M3U lists, and free-to-air broadcasters. The clean
finding: two community Stremio addons already index Sky Sports F1 +
DAZN F1 via their public HTTP APIs — no Stremio client required, just
GET /stream/<type>/<id>.json on the addon's hosted instance.

New `stremio.py` extractor pulls from:
- **TvVoo** (`https://tvvoo.hayd.uk/manifest.json`) — wraps Vavoo IPTV.
  Lists Sky Sports F1 UK + Sky Sports F1 HD + Sky Sport F1 IT + Sky
  Sport F1 HD DE + DAZN F1 ES. Returns 2 IP-bound m3u8 URLs per
  channel. Source: github.com/qwertyuiop8899/tvvoo. Vavoo's CDN SSL
  certs are currently expired so most clients fail verification today
  — addon framework is right but delivery is degraded.
- **StremVerse** (`https://stremverse.onrender.com/manifest.json`) —
  Returns 11+ streams per id (`stremevent_591` = F1, `stremevent_866`
  = MotoGP). Mix of DRM-walled DASH, JW-broken-chain JWT URLs, and
  HuggingFace-Space proxies that 404 without a per-instance api_password.

The extractor surfaces 15 candidate URLs per run; verifier filters to
the playable subset. Today that subset is 0 (Vavoo cert expiry + JW
chain + proxy auth), but the wiring is correct: as the addons fix
delivery or rotate to fresh URLs, candidates will start passing.
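The addon API surface is simple enough to sketch (function names are illustrative; the endpoint shape and ids are the ones cited above):

```python
def stream_endpoint(base: str, media_type: str, media_id: str) -> str:
    """Hosted Stremio addons expose streams over plain HTTP:
    GET <base>/stream/<type>/<id>.json — no Stremio client needed."""
    return f"{base.rstrip('/')}/stream/{media_type}/{media_id}.json"

def candidate_urls(response: dict) -> list:
    """Each response is {'streams': [...]}; keep only entries with a
    direct URL so the verifier has something to probe."""
    return [s["url"] for s in response.get("streams", []) if "url" in s]
```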

Other agent findings worth noting (not coded but documented):
- F1 TV Pro live = Widevine DASH; impossible without a CDM. VOD is
  clean HLS but only post-session.
- Sky Go / DAZN / Viaplay / Canal+ = all Widevine + geo-fenced + active
  DMCA enforcement. Pursuing not feasible.
- ServusTV AT (free F1 race weekends) = clean public HLS at
  rbmn-live.akamaized.net/hls/live/2002825/geoSTVATweb/master.m3u8 but
  geo-fenced; needs an Austrian-IP egress proxy/VPN.
- iptv-org/iptv has an F1 Channel (Pluto TV IE) at
  jmp2.uk/plu-6661739641af6400080cd8f1.m3u8 — 24/7 free, BG works,
  but only historic races + shoulder programming. Worth adding as a
  curated entry later.
- boxboxbox.* (community-favourite F1 race-weekend domain) is dead
  across all known TLDs as of today.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:35 +00:00
Viktor Barzin
d832a33039 [woodpecker] Bump WOODPECKER_FORGE_TIMEOUT 3s → 30s
The default forge-API timeout is 3 seconds. The config-loader makes
4-6 sequential calls per pipeline trigger (probing for .woodpecker dir
then each .woodpecker.{yaml,yml} variant), and Forgejo responses on
this cluster spike to 1-2s under load — easy to trip the cumulative
3s deadline. Result: 'could not load config from forge: context
deadline exceeded' on virtually every pipeline trigger.

This was the actual root cause of the 'Woodpecker forge-API bug'
that v3.13 → v3.14 was supposed to fix — turns out v3.14 didn't
change the timeout default, and the v3.13 successes I saw earlier
were warm-cache flukes.
2026-05-07 23:29:35 +00:00
Viktor Barzin
108bef7b1a f1-stream: subreddit extractor scans r/motorsportsstreams2 (active sub)
User asked specifically for r/motorsportstreams. Reddit banned that sub
years ago; the active 12.5k-subscriber successor is r/motorsportsstreams2.
Added it to SUBREDDITS plus r/f1streams (709 subs, public).

Also extended:
- SEARCH_QUERIES with three Sky Sports F1 / live-stream phrases that
  catch the `[F1 STREAM]` post pattern the community uses on race
  weekends (titles like "[F1 STREAM] Bahrain GP - Live Race | No Buffer
  | Mobile Friendly" linking to boxboxbox.pro/stream-1).
- _INTERESTING_HOSTS allowlist with boxboxbox.{pro,live,lol},
  pitsport.live, ppv.to, streamed.pk, acestrlms/aceztrims, and the
  Super Formula direct CDNs (racelive.jp, cdn.sfgo.jp) — all observed
  in last-50-posts on r/motorsportsstreams2.
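A minimal sketch of the allowlist filter (host list abbreviated to entries named above; the URL regex is illustrative, not the extractor's actual one):

```python
import re
from urllib.parse import urlparse

_INTERESTING_HOSTS = ("boxboxbox.", "pitsport.", "streamed.pk",
                      "ppv.to", "racelive.jp")

def interesting_links(selftext: str) -> list:
    """Pull URLs out of a post body and keep only allowlisted hosts,
    so YouTube/Discord/gofile noise never reaches the verifier."""
    urls = re.findall(r"https?://[^\s)\]]+", selftext)
    return [u for u in urls
            if any(h in urlparse(u).netloc for h in _INTERESTING_HOSTS)]
```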

Where this leaves us, honestly:
- The r/motorsportsstreams2 megathread "Where to watch every F1 race"
  recommends EXACTLY the four sites we already pull from: pitsport.xyz,
  streamed.pk, ppv.to, acestrlms. The community has the same broken JW
  Player chain we have for Sky Sports F1 24/7 streams. There is no
  free-and-working alternative they know about.
- boxboxbox.pro (the most-promoted F1 stream domain in race-weekend
  posts) is currently NXDOMAIN; .live is parked, .lol unreachable. The
  domain rotates after takedowns; Reddit posts will surface fresh ones
  when posters share them.
- For F1 specifically: extractor surfaces 2 motomundo.net candidates
  (MotoGP wrappers) and lights up to ~6+ during F1 race weekends as
  posters share fresh boxboxbox/equivalent URLs.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:35 +00:00
Viktor Barzin
e110b40a4a monitoring(wealth): monthly contrib-vs-mkt as line chart, not bars
User asked for two lines instead of side-by-side bars at monthly
granularity. Converts panel 25 from barchart to timeseries:

  * type: barchart -> timeseries
  * format: table -> time_series, SELECT month::timestamp AS time
  * drawStyle line, lineWidth 2, fillOpacity 0, showPoints auto
  * Same blue (contributions) / green (market gain) colour overrides

Where the green line rises above the blue line is the visual cue that
the market out-earned new contributions for that month -- the trend
the user wants to track.

Diff is small (15 ins / 28 del) because the bar-chart-only fields
(barRadius, barWidth, groupWidth, stacking, xField, xTickLabelRotation)
are dropped.
2026-05-07 23:29:35 +00:00
Viktor Barzin
84fd752747 monitoring(wealth): monthly contributions vs market gain bar chart
Goal stated by user: see when monthly market gain starts to exceed
monthly contributions, i.e. the inflection point where the market is
out-earning savings rather than the other way around.

New panel id=25 between the annual decomposition (13) and per-account
ROI (14): bar chart with two side-by-side bars per month --
contributions (blue) and market gain (green). Same calculation as
panel 13 but month-grain instead of year-grain. Months where the
green bar dwarfs the blue one are visible at a glance.

SQL: same endpoints CTE pattern as panel 13, with date_trunc('month',
valuation_date) as the grouping key. Uses max_complete cutoff so
partial-today doesn't skew the latest month.

Layout: panels at y >= 75 shifted down by 11 (chart height). New
chart at y=75; panel 14 (per-account ROI) -> y=86; panel 10
(activity log) -> y=96.

Spot check (recent months from PG):
  2025-07: contrib +£5,601    market +£42,295   <- big market month
  2025-09: contrib +£1,501    market +£24,206
  2026-02: contrib +£35,501   market +£41,382
  2026-03: contrib +£5,501    market -£38,483   <- correction
  2026-04: contrib +£73,267   market +£21,448
2026-05-07 23:29:34 +00:00
Viktor Barzin
f1d69b0a7a [wealthfolio] Flip wealthfolio-sync CronJob image to Forgejo
The CronJob has been broken since registry-private lost the
wealthfolio-sync image (last successful run 36+ days ago). The image
is built from /home/wizard/code/broker-sync (the brokerage data sync —
Trading 212, Schwab, Fidelity, IMAP-CSV → wealthfolio).

Set up: viktor/broker-sync repo on Forgejo with .woodpecker/build.yml
that pushes to forgejo.viktorbarzin.me/viktor/wealthfolio-sync. Until
Woodpecker recognises the new repo's webhook, the image was bootstrapped
via 'docker pull viktorbarzin/broker-sync:latest && docker tag … &&
docker push forgejo.viktorbarzin.me/viktor/wealthfolio-sync:latest' so
the CronJob unblocks immediately.
2026-05-07 23:29:34 +00:00
Viktor Barzin
d942a21d93 [woodpecker] Bump server + agent v3.13.0 → v3.14.0
Fixes the 'could not load config from forge: context deadline exceeded'
issue that blocked every Forgejo-triggered pipeline during the
forgejo-registry-consolidation cutover. Helm chart 3.5.1 stays
(no 3.6 yet); only the image tag overrides change.
2026-05-07 23:29:34 +00:00
Viktor Barzin
59885c21d0 [claude-memory] Restore truncated main.tf — apply Phase 3 image flip on full file
The Phase 3 commit 3148d15d ran into a disk-full ENOSPC during edit
of stacks/claude-memory/main.tf, and the file was committed truncated
at line 286 mid-string ('Cor instead of 'Core Platform' / closing
braces). terraform validate failed with 'Unterminated template string'.
Restoring the trailing 2 lines + re-applying the
viktorbarzin/claude-memory-mcp:17 → forgejo.viktorbarzin.me/viktor/
claude-memory-mcp:17 cutover that Phase 3 was meant to do.
2026-05-07 23:29:34 +00:00
Viktor Barzin
3f3e5fc954 chrome-service: open NP for Traefik → noVNC sidecar (port 6080)
Existing NetworkPolicy only admitted port 3000 (Playwright WS) from
labelled client namespaces, blocking Traefik's traffic to the noVNC
sidecar on port 6080. The chrome.viktorbarzin.me ingress would hang
forever — page never loads, eventually times out.

Adds a second ingress rule allowing TCP/6080 from the traefik
namespace only. Authentik forward-auth still gates external access
at the Traefik layer.

Also reconciles the noVNC image to the new Forgejo registry path
(:v4 unchanged) — already declared in TF, just live-state drift from
the Phase 3 registry consolidation.

Updates the architecture doc; the previous text still described the
old nginx static health stub that noVNC replaced.
2026-05-07 23:29:34 +00:00
Viktor Barzin
a91bbe189e f1-stream: subreddit extractor finds Reddit '[Watch / Download]' threads
Two fixes for the previously-dormant subreddit extractor + a chrome-browser TARGETS pivot to MotoGP weekend live URLs.

1. **Reddit fetch was 403'd by `Accept: application/json`**. Cluster IP +
   that header trips Reddit's anti-bot fingerprint and returns HTML 403.
   Removing the explicit Accept (default `*/*`) restores HTTP 200 with
   JSON. Confirmed via direct httpx test from the f1-stream pod.

2. **Search the right things**. The community uses a stable
   `[Watch / Download] <Series> <Year> - <Round> | <Event>` post pattern
   with selftext links to admin-curated WordPress sites (motomundo.net
   for MotoGP, sister sites for F1 when active). New extractor:
   - Hits both /new.json and /search.json across r/MotorsportsReplays
     and three smaller motorsport subs.
   - Filters posts where title contains `[watch`, `watch online`, or
     flair = `live`.
   - Extracts URLs from selftext (regex), filters to a positive
     `_INTERESTING_HOSTS` allowlist (motomundo, freemotorsports,
     pitsport, rerace, dd12, etc.) so we don't drown the verifier in
     YouTube/Discord/gofile links.
   - Returns each as embed-type so the chrome-service verifier visits.

3. **chrome_browser.TARGETS pivoted** to the live MotoMundo MotoGP
   French GP iframes (motomundo.top/e/<id> + motomundo.upns.xyz/#<id>)
   while the weekend is on. The previous DD12 NASCAR + Acestrlms F1
   targets were both broken JW Player paths anyway.
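Fix 1 amounts to not sending an explicit Accept header; a sketch of the request construction (User-Agent string illustrative — the extractor uses httpx, urllib here just to show the header set):

```python
import urllib.request

def reddit_listing_request(sub: str) -> urllib.request.Request:
    """Build the /new.json request WITHOUT an explicit Accept header —
    'Accept: application/json' from a cluster IP trips Reddit's anti-bot
    check (HTML 403); the default Accept: */* gets JSON 200."""
    return urllib.request.Request(
        f"https://www.reddit.com/r/{sub}/new.json",
        headers={"User-Agent": "f1-stream/0.1"},  # deliberately no Accept
    )
```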

State after deploy:
- /streams: 3 verified live (WRC Rally Portugal, NASCAR 24/7, Premier
  League Darts) — Darts is currently active because the UK is mid-match.
- Subreddit extractor surfaces the live MotoMundo URL but the verifier
  marks the WordPress wrapper page playable=False (no top-level <video>
  element; the m3u8 lives in nested iframes). Next iteration: drill the
  verifier into iframe contentDocument and capture from there.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:34 +00:00
Viktor Barzin
4ec40ea804 [forgejo] Phases 3+4+5: cutover, decommission, docs sweep
End of forgejo-registry-consolidation. After Phase 0/1 already landed
(Forgejo ready, dual-push CI, integrity probe, retention CronJob,
images migrated via forgejo-migrate-orphan-images.sh), this commit
flips everything off registry.viktorbarzin.me onto Forgejo and
removes the legacy infrastructure.

Phase 3 — image= flips:
* infra/stacks/{payslip-ingest,job-hunter,claude-agent-service,
  fire-planner,freedify/factory,chrome-service,beads-server}/main.tf
  — image= now points to forgejo.viktorbarzin.me/viktor/<name>.
* infra/stacks/claude-memory/main.tf — also moved off DockerHub
  (viktorbarzin/claude-memory-mcp:17 → forgejo.viktorbarzin.me/viktor/...).
* infra/.woodpecker/{default,drift-detection}.yml — infra-ci pulled
  from Forgejo. build-ci-image.yml dual-pushes still until next
  build cycle confirms Forgejo as canonical.
* /home/wizard/code/CLAUDE.md — claude-memory-mcp install URL updated.

Phase 4 — decommission registry-private:
* registry-credentials Secret: dropped registry.viktorbarzin.me /
  registry.viktorbarzin.me:5050 / 10.0.20.10:5050 auths entries.
  Forgejo entry is the only one left.
* infra/stacks/infra/main.tf cloud-init: dropped containerd
  hosts.toml entries for registry.viktorbarzin.me +
  10.0.20.10:5050. (Existing nodes already had the file removed
  manually by `setup-forgejo-containerd-mirror.sh` rollout — the
  cloud-init template only fires on new VM provision.)
* infra/modules/docker-registry/docker-compose.yml: registry-private
  service block removed; nginx 5050 port mapping dropped. Pull-
  through caches for upstream registries (5000/5010/5020/5030/5040)
  stay on the VM permanently.
* infra/modules/docker-registry/nginx_registry.conf: upstream
  `private` block + port 5050 server block removed.
* infra/stacks/monitoring/modules/monitoring/main.tf: registry_
  integrity_probe + registry_probe_credentials resources stripped.
  forgejo_integrity_probe is the only manifest probe now.

Phase 5 — final docs sweep:
* infra/docs/runbooks/registry-vm.md — VM scope reduced to pull-
  through caches; forgejo-registry-breakglass.md cross-ref added.
* infra/docs/architecture/ci-cd.md — registry component table +
  diagram now reflect Forgejo. Pre-migration root-cause sentence
  preserved as historical context with a pointer to the design doc.
* infra/docs/architecture/monitoring.md — Registry Integrity Probe
  row updated to point at the Forgejo probe.
* infra/.claude/CLAUDE.md — Private registry section rewritten end-
  to-end (auth, retention, integrity, where the bake came from).
* prometheus_chart_values.tpl — RegistryManifestIntegrityFailure
  alert annotation simplified now that only one registry is in
  scope.

Operational follow-up (cannot be done from a TF apply):
1. ssh root@10.0.20.10 — edit /opt/registry/docker-compose.yml to
   match the new template AND `docker compose up -d --remove-orphans`
   to actually stop the registry-private container. Memory id=1078
   confirms cloud-init won't redeploy on TF apply alone.
2. After 1 week of no incidents, `rm -rf /opt/registry/data/private/`
   on the VM (~2.6GB freed).
3. Open the dual-push step in build-ci-image.yml and drop
   registry.viktorbarzin.me:5050 from the `repo:` list — at that
   point the post-push integrity check at line 33-107 also needs
   to be repointed at Forgejo or removed (the per-build verify is
   redundant with the every-15min Forgejo probe).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:34 +00:00
Viktor Barzin
874f80ecbe [woodpecker] Persist hostAliases patch via null_resource (chart doesn't expose it)
Helm chart 3.5.1 has no `server.hostAliases` field, so the YAML
addition I made earlier was a no-op. Apply via kubectl patch in a
null_resource keyed on helm revision so it re-asserts on every
chart upgrade. Same pattern as the CoreDNS replicas/affinity patch
in stacks/technitium/.

Without this, every helm upgrade on woodpecker reverts the
hostAliases fix and the Forgejo pipeline triggers start failing
with context-deadline-exceeded again.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:34 +00:00
Viktor Barzin
ff19d86557 [woodpecker] Pin forgejo.viktorbarzin.me to in-cluster Traefik LB
Pipeline triggers from Forgejo were failing with "could not load
config from forge: context deadline exceeded" — Woodpecker's
forge-API fetch path was round-tripping through Cloudflare via the
public IP, hitting 30s deadline timeouts on cold connections. The
in-cluster path via the Traefik LB (10.0.20.200) is consistently
sub-100ms.

Same trick we use for the containerd hosts.toml redirect on each
node — Traefik serves the *.viktorbarzin.me wildcard cert so SNI
verification still passes. OAuth callbacks still use the public
hostname (correct, those come from the user's browser).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:34 +00:00
Viktor Barzin
a0b70482fe [forgejo] Bump webhook DELIVER_TIMEOUT 5s -> 30s
Forgejo→Woodpecker webhooks were timing out on first request after
pod restart. The default 5s deadline is too tight for the cold
Cloudflare-tunnel TLS handshake (observed 6-8s). 30s comfortably
covers retries.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:34 +00:00
Viktor Barzin
83496f6e0c [forgejo] Allow webhook delivery to ci.viktorbarzin.me + *.viktorbarzin.me
The Forgejo→Woodpecker webhook (so Woodpecker fires on each push to
viktor/<repo>) was being blocked by the existing ALLOWED_HOST_LIST
of *.svc.cluster.local — ci.viktorbarzin.me resolves to the public IP
because Cloudflare proxying wasn't covering that path. Without this
fix, no Woodpecker pipeline ran on push, the dual-push bake never
started, and Forgejo's package catalog stayed empty.

Add ci.viktorbarzin.me explicitly + *.viktorbarzin.me as a future-
proofing wildcard. The list still excludes arbitrary external hosts,
so this is not a security regression — just unblocking the webhook
to our own CI.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:34 +00:00
Viktor Barzin
413ceec35c [forgejo] securityContext.fsGroup=1000 so /data is writable to forgejo
Phase 0 enabled packages but the pod crashloops on
`mkdir /data/tmp: permission denied` — Forgejo loads the chunked
upload path (default /data/tmp/package-upload) before s6-overlay
gets a chance to chown /data. fsGroup tells kubelet to recursively
set group ownership on the volume to GID 1000 at mount time, which fixes it.
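The fix is a one-field pod-spec change — a sketch, assuming the standard Deployment layout:

```yaml
spec:
  securityContext:
    fsGroup: 1000  # kubelet applies GID 1000 group ownership to the volume on mount
```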

The Forgejo deployment from 23 days ago shipped with packages off, so
this code path never ran.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:34 +00:00
Viktor Barzin
3fb05825d8 [forgejo] Drop the FORGEJO__packages__CHUNKED_UPLOAD_PATH override
Setting it to /data/tmp/package-upload triggers a CrashLoopBackOff
because /data is the volume mount root and is owned by root, not
the forgejo user (uid 1000) — Forgejo can't `mkdir /data/tmp`.

The default value resolves under the AppDataPath (a subdir Forgejo
itself owns), which works fine. Keep the ENABLED=true override; v11
ships with packages on by default, but being explicit is safer.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:34 +00:00
Viktor Barzin
d67e8ddaf8 f1-stream: add chrome-browser, subreddit, dd12 extractors; fix streamed.pk
User asked to broaden the source pipeline so f1-stream can find F1 (and
adjacent motorsport) streams from Sky Sports / DAZN / Reddit / etc.,
using the in-cluster chrome-service headed browser where needed. Four
changes:

1. **streamed.py**: BASE_URL streamed.su → streamed.pk. The .su domain
   stopped serving the API host in 2026 (only the marketing page is
   left); .pk hosts the JSON API now. Adds 3 events/round (currently
   all routed through embedsports.top — see #2 caveat).

2. **chrome_browser.py** (new): generic chrome-service-driven extractor.
   Connects to the existing chrome-service WS (CHROME_WS_URL +
   CHROME_WS_TOKEN env), navigates a list of TARGETS, captures any HLS
   playlist URL the page fetches at runtime, returns one ExtractedStream
   per discovery. Uses the same stealth init script as the verifier so
   anti-bot checks don't trip the page. Handles iframes (DD12-style
   /nas → /new-nas/jwplayer) and probes child-frame <video>/source
   elements after settle. Caveat: most aggregator sites (pooembed,
   embedsports, hmembeds, even DD12's JW Player path) use a broken
   runtime decoder that produces no m3u8 in our environment, so the
   TARGETS list is currently 0-yielding; the framework is the
   contribution and concrete sites can be added as they're discovered.

3. **subreddit.py** (new): scans r/MotorsportsReplays, r/motorsports,
   r/formula1, r/motogp via the public old.reddit.com JSON API for
   posts whose flair/title indicates a live stream. Discovered URLs
   are returned as embed-type streams; the verifier visits each via
   chrome-service to confirm playability. Note: Reddit currently HTTP
   403's our cluster outbound IP for anonymous JSON requests; the
   extractor returns 0 in that state and logs a debug message. Will
   work from any IP Reddit isn't blocking.

4. **dd12.py** (new): inline-HTML scraper for DD12Streams. The site
   embeds `playerInstance.setup({file: "..."})` directly in HTML — no
   JS decoder needed. Currently surfaces NASCAR Cup Series 24/7 (clean
   BunnyCDN-hosted HLS at w9329432hnf3h34.b-cdn.net/pdfs/master.m3u8);
   add new `(path, label, title)` tuples to CHANNELS as DD12 expands.

Result: /streams now shows 2 verified live streams (Rally TV via
pitsport + DD12 NASCAR Cup 24/7). When the next F1 weekend (Canadian
GP, May 22-24) goes live, pitsport will surface F1 sessions
automatically via the existing pushembdz path.
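The flair/title matching described in item 3 can be sketched as a pure predicate over reddit's JSON post objects — the names (LIVE_HINTS, looks_live) and the exact hint list are illustrative, not subreddit.py's actual code:

```python
# Illustrative hint list; the real extractor's keywords may differ.
LIVE_HINTS = ("live", "stream", "hd", "multicam")

def looks_live(post: dict) -> bool:
    """Heuristic: does a reddit JSON post advertise a live stream?"""
    flair = (post.get("link_flair_text") or "").lower()
    title = (post.get("title") or "").lower()
    return any(h in flair for h in LIVE_HINTS) or any(h in title for h in LIVE_HINTS)

posts = [
    {"title": "Monaco GP live stream (HD)", "link_flair_text": None},
    {"title": "Season discussion thread", "link_flair_text": "Discussion"},
]
print([looks_live(p) for p in posts])  # → [True, False]
```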

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:34 +00:00
Viktor Barzin
70ea1cf6fd [forgejo] Tolerate missing Vault keys during Phase 0 bootstrap
Wrap the three new Vault key reads in try(...) so the first apply
succeeds even when forgejo_pull_token / forgejo_cleanup_token /
secret/ci/global haven't been populated yet. Without this, CI
auto-apply blocks on the very push that introduces the references —
chicken-and-egg with the runbook order (which is: apply Forgejo bumps,
then create users + PATs, then apply the rest).

Empty tokens are intentionally visible-broken (auth fails, probe
reports auth failure, cleanup CronJob errors) — that's the signal
to run the bootstrap runbook. Subsequent apply picks up the real
values.
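The try(...) wrapping looks roughly like this (resource and key names assumed):

```hcl
# try() degrades a missing Vault key to "" instead of failing the plan
locals {
  forgejo_pull_token = try(
    data.vault_generic_secret.viktor.data["forgejo_pull_token"],
    ""
  )
}
```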

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:33 +00:00
Viktor Barzin
f793a5f50b [forgejo] Phase 0 of registry consolidation: prepare Forgejo OCI registry
Stage 1 of moving private images off the registry:2 container at
registry.viktorbarzin.me:5050 (which has hit distribution#3324 corruption
3x in 3 weeks) onto Forgejo's built-in OCI registry. No cutover risk —
pods still pull from the existing registry until Phase 3.

What changes:
* Forgejo deployment: memory 384Mi→1Gi, PVC 5Gi→15Gi (cap 50Gi).
  Explicit FORGEJO__packages__ENABLED + CHUNKED_UPLOAD_PATH (defensive,
  v11 default-on).
* ingress_factory: max_body_size variable was declared but never wired
  in after the nginx→Traefik migration. Now creates a per-ingress
  Buffering middleware when set; default null = no limit (preserves
  existing behavior). Forgejo ingress sets max_body_size=5g to allow
  multi-GB layer pushes.
* Cluster-wide registry-credentials Secret: 4th auths entry for
  forgejo.viktorbarzin.me, populated from Vault secret/viktor/
  forgejo_pull_token (cluster-puller PAT, read:package). Existing
  Kyverno ClusterPolicy syncs cluster-wide — no policy edits.
* Containerd hosts.toml redirect: forgejo.viktorbarzin.me → in-cluster
  Traefik LB 10.0.20.200 (avoids hairpin NAT for in-cluster pulls).
  Cloud-init for new VMs + scripts/setup-forgejo-containerd-mirror.sh
  for existing nodes.
* Forgejo retention CronJob (0 4 * * *): keeps newest 10 versions per
  package + always :latest. First 7 days dry-run (DRY_RUN=true);
  flip the local in cleanup.tf after log review.
* Forgejo integrity probe CronJob (*/15): same algorithm as the
  existing registry-integrity-probe. Existing Prometheus alerts
  (RegistryManifestIntegrityFailure et al) made instance-aware so
  they cover both registries during the bake.
* Docs: design+plan in docs/plans/, setup runbook in docs/runbooks/.

Operational note — the apply order is non-trivial because the new
Vault keys (forgejo_pull_token, forgejo_cleanup_token,
secret/ci/global/forgejo_*) must exist BEFORE terragrunt apply in the
kyverno + monitoring + forgejo stacks. The setup runbook documents
the bootstrap sequence.

Phase 1 (per-project dual-push pipelines) follows in subsequent
commits. Bake clock starts when the last project goes dual-push.
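The retention rule above (keep the newest 10 versions per package, plus :latest unconditionally) can be sketched as follows — the function name and version-record shape are assumptions, not the CronJob's actual script:

```python
KEEP = 10  # newest N versions retained per package

def versions_to_delete(versions: list[dict]) -> list[str]:
    """versions: [{'name': tag, 'created_at': iso-timestamp}] in any order."""
    ordered = sorted(versions, key=lambda v: v["created_at"], reverse=True)
    keep = {v["name"] for v in ordered[:KEEP]} | {"latest"}
    return [v["name"] for v in ordered if v["name"] not in keep]

vs = [{"name": f"v{i}", "created_at": f"2026-05-{i:02d}"} for i in range(1, 13)]
vs.append({"name": "latest", "created_at": "2026-04-01"})
print(versions_to_delete(vs))  # → ['v2', 'v1']
```

A dry-run mode (per the commit's DRY_RUN=true) would log this list instead of issuing deletes.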

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:33 +00:00
Viktor Barzin
00614a3302 f1-stream: drop broken curated, dedupe streams, accept all pitsport categories
User feedback: every stream on /watch shows ads but the player fails
to load. Three causes, three fixes:

1. CuratedExtractor's two hmembeds 24/7 channels (Sky F1, DAZN F1)
   sat at the top of the list and ALWAYS failed: they load the
   upstream's ad overlay then JW Player throws error 102630 (empty
   playlist; the obfuscated decoder produces no fileURL in our
   environment). Disabled the registration in extractors/__init__.py
   until/unless we find a working bypass — leaving the existing
   `CURATED_BYPASS = {"curated"}` shim in service.py so the swap is
   reversible.

2. Pitsport surfaces every WRC stage / MotoGP session as its own
   /watch UUID, but they all resolve to the same upstream m3u8 URL
   (e.g. RallyTV one master.m3u8 across all 22 Rally de Portugal
   stages). Added URL-keyed dedupe in service.run_extraction so the
   /streams response shows one row per actual stream.

3. The pitsport category filter was still narrowed to motorsport.
   Pitsport.xyz only lists curated sports broadcasts (WRC, MotoGP,
   IndyCar, NASCAR, Premier League Darts, Premier League football…),
   so the site's own selection is the right filter. Replaced the
   hand-maintained MOTORSPORT_KEYWORDS list with `bool(category or
   title)` — anything pitsport returns goes through. Streams that
   aren't actually live get filtered out downstream when the embed
   API returns an empty manifest.

Frontend: hls.js `lowLatencyMode` was on by default but RallyTV (and
most non-LL-HLS providers) don't ship the LL-HLS extensions, which
broke playback in real browsers. Default to `lowLatencyMode: false`.

Result: /streams is now 1 verified live entry (Rally TV WRC stage
currently airing); was 24 with the top 2 always broken + 22 dupes.
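The URL-keyed dedupe in fix 2 amounts to first-seen-wins over the resolved m3u8 URL — a sketch with illustrative names, not the actual service.run_extraction code:

```python
def dedupe_by_url(streams: list[dict]) -> list[dict]:
    """Keep the first stream seen for each distinct upstream URL."""
    seen: set[str] = set()
    out = []
    for s in streams:
        if s["url"] not in seen:
            seen.add(s["url"])
            out.append(s)
    return out

# 22 rally stages all resolving to the same master.m3u8 collapse to one row
stages = [{"title": f"SS{i}", "url": "https://example.com/master.m3u8"} for i in range(1, 23)]
print(len(dedupe_by_url(stages)))  # → 1
```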

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:33 +00:00
Viktor Barzin
18d96712c7 f1-stream: pitsport extractor — broaden categories + new safeStream payload
The previous extractor only surfaced Formula 1/2/3 and never returned
anything outside race weekends. Two fixes:

1. Broadened category filter from {formula 1/2/3} to a motorsport set
   (MotoGP/Moto2/Moto3, WRC/WEC/IndyCar/NASCAR + the F1 series).
   Replaces the NON_F1_KEYWORDS exclusion list with a positive-match
   MOTORSPORT_KEYWORDS set; removes the F1-specific filter on title
   keywords. Old `_is_f1_*` aliases retained as compat shims.

2. Updated `_parse_stream_config` for the current pushembdz.store embed
   payload — Next.js now serves `safeStream` (just title + method) and
   the actual stream URL is fetched at runtime from
   `pushembdz.store/api/stream/<slug>`. Extractor now hits that endpoint
   when the inline link is missing. Treats `method=jwp` as HLS and
   accepts URLs ending in `.css` (pushembdz disguises some HLS playlists
   with a `.css` extension).

End-to-end result: /streams went from 2 (curated, broken JW decoder) to
24 streams marked `is_live=True`. The verifier confirms each via
`manifest_parsed_codec_missing_in_verifier` (Playwright Chromium has no
H.264 — manifest fetch alone is the codec-independent positive signal).
Currently surfaces Rally de Portugal SS1–SS22 (WRC); MotoGP starts
appearing once the French GP weekend goes live tomorrow.
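The acceptance rules in fix 2 — method=jwp counts as HLS, and .css-suffixed URLs are accepted as disguised playlists — sketched as a predicate (names are illustrative, not the extractor's actual code):

```python
def is_hls_candidate(url: str, method=None) -> bool:
    """Accept jwp-method streams and URLs that look like (possibly disguised) HLS."""
    if method == "jwp":
        return True
    return url.endswith((".m3u8", ".css"))

print(is_hls_candidate("https://cdn.example/stream/master.m3u8", None))  # → True
print(is_hls_candidate("https://cdn.example/stream/chunk.css", None))    # → True
print(is_hls_candidate("https://cdn.example/page.html", None))           # → False
```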

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:33 +00:00
Viktor Barzin
8146d05191 chrome-service: replace static health stub with noVNC view
The static nginx stub at chrome.viktorbarzin.me wasn't useful for
debugging anti-bot interactions. Swap it for a live noVNC HTML5 view
of the headed Chromium session: x11vnc taps Xvfb's :99 over localhost
TCP (added `-listen tcp -ac` to Xvfb), websockify wraps it as a WS
endpoint, and noVNC's vendored web client serves it on :6080.

The ingress chain is unchanged — chrome.viktorbarzin.me stays
Authentik-gated, dns_type=proxied, port 3000 (the Playwright WS) stays
internal-only behind the NetworkPolicy + token. Custom image
`registry.viktorbarzin.me/chrome-service-novnc:v4` (ubuntu:24.04 +
x11vnc + websockify + novnc apt packages) needs imagePullSecrets, so
also added registry-credentials reference to the deployment spec.

x11vnc flags: `-noshm -noxdamage -nopw -shared -forever`. SHM is
disabled because each container has its own /dev/shm so the X server
can't grant access; XDAMAGE isn't compiled into the noble Xvfb. The
sidecar entrypoint waits up to 30s for both Xvfb (:6099) and x11vnc
(:5900) to bind before exec'ing websockify.
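The sidecar's wait-for-bind step, sketched in Python rather than the entrypoint's actual shell (names are illustrative):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP listener accepts connections, or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    return False

# entrypoint analogue: wait for Xvfb (:6099) and x11vnc (:5900), then exec websockify
```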

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:33 +00:00
Viktor Barzin
f18cd1d314 chrome-service: in-cluster headed Chromium pool for f1-stream verifier
The f1-stream verifier's in-process headless Chromium kept tripping
hmembeds' disable-devtool.js Performance detector (CDP latency on
console.log vs console.table) and getting redirected to google.com.

This adds a single-replica chrome-service stack running Playwright
launch-server under Xvfb so callers can connect via WS+token to a
shared headed browser. f1-stream's _ensure_browser now prefers
chromium.connect(CHROME_WS_URL/CHROME_WS_TOKEN) and adds a vendored
stealth init script (webdriver/plugins/languages/Permissions/WebGL
spoofs + querySelector hijack to disarm disable-devtool-auto) on
every new context. Falls back to in-process headless if the env
vars aren't set.

Encrypted PVC for profile + npm cache, NetworkPolicy to TCP/3000
gated by client-namespace label, 6h tar.gz backup CronJob to NFS,
Authentik-gated nginx sidecar at chrome.viktorbarzin.me for human
liveness checks. Image pinned to playwright:v1.48.0-noble in
lockstep with the Python client's playwright==1.48.0.
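The connect-vs-fallback decision reduces to a check on the two env vars — a sketch with an illustrative name, not _ensure_browser's actual code:

```python
def browser_mode(env: dict) -> str:
    """Prefer the shared chrome-service when both env vars are set."""
    if env.get("CHROME_WS_URL") and env.get("CHROME_WS_TOKEN"):
        return "connect"   # chromium.connect(ws_url) with token
    return "headless"      # in-process headless launch

print(browser_mode({"CHROME_WS_URL": "ws://chrome:3000", "CHROME_WS_TOKEN": "t"}))  # → connect
print(browser_mode({}))  # → headless
```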

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 23:29:32 +00:00
Viktor Barzin
41655096c7 openclaw: realtime usage dashboard via Prometheus exporter sidecar
Stdlib-only Python exporter reads ~/.openclaw/agents/*/sessions/*.jsonl
(assistant messages with usage) plus auth-profiles.json (OAuth expiry,
Plus-tier label) and exposes Prometheus text format on :9099/metrics.
Container is python:3.12-slim; pod template gets prometheus.io/scrape
annotations so the existing kubernetes-pods job picks it up — no
ServiceMonitor needed.

Metrics exported:
  openclaw_codex_messages_total{provider,model,session_kind}    counter
  openclaw_codex_input/output/cache_read/cache_write_tokens_total
  openclaw_codex_message_errors_total{reason}
  openclaw_codex_active_sessions{kind}                          gauge
  openclaw_codex_oauth_expiry_seconds{provider,account,plan}    gauge
  openclaw_codex_last_run_timestamp                             gauge

Grafana dashboard "OpenClaw — Codex Usage" (Applications folder, 30s
refresh): messages/5h vs Plus rate-card, % of 1,200 floor, tokens/5h,
cache hit %, OAuth expiry days, active sessions, last-turn age, errors,
plus per-model timeseries + bar gauge + error table.

Plus rate-card thresholds in the gauge are conservative (1,200/5h floor;
real cap is dynamic 1,200–7,000). Re-baseline if throttling shows up
below 80%.
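The exporter's core loop — fold assistant-message usage records from the session JSONL into counters, then render Prometheus text format — can be sketched as (the JSONL field names here are assumptions):

```python
import json
from collections import Counter

def aggregate(lines: list[str]) -> Counter:
    """Sum usage fields across assistant messages in a session JSONL."""
    totals: Counter = Counter()
    for line in lines:
        msg = json.loads(line)
        usage = msg.get("usage")
        if msg.get("role") == "assistant" and usage:
            totals["input_tokens"] += usage.get("input_tokens", 0)
            totals["output_tokens"] += usage.get("output_tokens", 0)
            totals["messages"] += 1
    return totals

def render(totals: Counter) -> str:
    """Emit Prometheus text exposition lines, one per counter."""
    return "\n".join(
        f"openclaw_codex_{k}_total {v}" for k, v in sorted(totals.items())
    )

lines = [
    '{"role": "assistant", "usage": {"input_tokens": 10, "output_tokens": 5}}',
    '{"role": "user"}',
]
print(render(aggregate(lines)))
```

The real exporter additionally carries {provider,model,session_kind} labels and serves the text on :9099/metrics via http.server.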
2026-05-07 23:29:32 +00:00
Viktor Barzin
115ca184ff openclaw: switch primary to ChatGPT Plus OAuth (openai-codex/gpt-5.4-mini)
Bumps image 2026.2.26 → 2026.5.4 (openai-codex provider plugin landed in
2026.4.21+). Auth profile is OAuth via the device-pairing flow against the
Codex backend (account ancaelena98@gmail.com); token persists in
/home/node/.openclaw/agents/main/agent/auth-state.json on NFS so it survives
pod restarts. Plus tier accepts gpt-5.4-mini (1,200–7,000 local msgs/5h);
gpt-5-mini and gpt-5.1-codex-mini both return errors on Plus, so we pin
gpt-5.4-mini explicitly. doctor --fix auto-promotes the highest-tier model
(gpt-5-pro) after model discovery, so the container command pins the mini
back as default after doctor runs but before gateway start.
2026-05-07 23:29:32 +00:00