- Replace $nextTick with setTimeout(100) for D3 render (same fix as dashboard)
- Re-render graph when switching back to tab (not just on first load)
- Fall back to the parent element's width when the container's clientWidth is 0
- Use contextvars to resolve user identity from Authorization header
in SSE connections, replacing hardcoded "default" user_id
- memory_recall now includes shared memories (individual + tag-based)
with deduplication and shared_by attribution
- memory_list now includes shared memories with same approach
- All 11 MCP tool functions use _current_user contextvar
- Add "New Memory" button with collapsible form (content, category, tags, importance)
- Add share memory with user typeahead via new GET /api/users endpoint
- Fix dashboard charts not rendering (replace $nextTick with setTimeout for x-if timing)
- Redesign theme: warm walnut/amber palette, Playfair Display + JetBrains Mono fonts
- Update Chart.js and D3 graph colors to match warm theme
Alpine.js + Tailwind CSS (CDN) served by FastAPI StaticFiles — no build step,
zero Docker changes. Features: memory browser with inline edit/delete, full-text
search with debounce, D3.js force-directed graph, Chart.js stats dashboard.
New endpoints: GET /api/stats, GET /api/categories, pagination on GET /api/memories.
- Document 7 new MCP tools for memory sharing
- Add Memory Sharing section with usage examples and permission levels
- Update API Reference with 7 new sharing endpoints + PUT update
- Update migrations list with 004_add_sharing
- Add permissions.py to project structure
- 7 new tools: memory_share, memory_unshare, memory_share_tag,
memory_unshare_tag, memory_update, memory_shared_with_me, memory_my_shares
- Tools only advertised when API key is configured (HTTP_ONLY or HYBRID mode)
- SQLite-only mode stays unchanged — sharing requires the API server
- Tool handlers proxy to REST API endpoints
- New migration 004: memory_shares and tag_shares tables with indexes
- Share individual memories or entire tags with other users (read/write)
- Tag shares are live rules: future memories with shared tags auto-visible
- Recall query merges own + shared memories via UNION, returns shared_by field
- Owner-only delete enforcement (403 for non-owners, even with write access)
- PUT /api/memories/{id} update endpoint with permission checks
- 5 new MCP SSE tools: memory_share, memory_unshare, memory_share_tag,
memory_unshare_tag, memory_update
- Permission helper checks ownership, individual shares, and tag shares
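The layered check could look roughly like this; the real permissions.py consults the memory_shares and tag_shares tables, modeled here as plain dicts, and all function and variable names are illustrative:

```python
def can_access(memory: dict, user_id: str,
               memory_shares: dict, tag_shares: dict,
               write: bool = False) -> bool:
    # 1. Owners can always read and write (delete stays owner-only
    #    and is enforced separately).
    if memory["user_id"] == user_id:
        return True
    # 2. Individual share on this memory id.
    level = memory_shares.get((memory["id"], user_id))
    if level == "write" or (level == "read" and not write):
        return True
    # 3. Tag share: a live rule covering any of the memory's tags.
    for tag in memory.get("tags", []):
        level = tag_shares.get((memory["user_id"], tag, user_id))
        if level == "write" or (level == "read" and not write):
            return True
    return False
```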
Adds SSE endpoint at /mcp/sse so Claude Code can connect over HTTPS
without needing a local Python bridge script. Benefits:
- No local files or sandbox permission issues
- Works from any machine (OpenClaw, DevVM)
- No startup delay or stderr suppression hack
- Auth via Bearer token in request headers
- Decouple push and pull in _sync_once() so pull always runs even if push fails
- Add startup full resync to catch drift from other agents and schema changes
- Add periodic full resync every ~10 minutes for continuous drift correction
- Add auth failure detection (401/403) with graceful SQLite-only degradation
- Add /api/auth-check endpoint for lightweight key validation
- Add retry cap (5 attempts) on pending ops to prevent infinite queue buildup
- Add orphan reconciliation: push local-only records with content dedup
- Add memory_count MCP tool for sync diagnostics
- Add version-based SQLite schema migration (PRAGMA user_version)
- Fix API key in ~/.claude.json to match server
- Update README with sync resilience docs, test structure, project layout
- Add 30 new tests covering all new behaviors (155 total, all passing)
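The PRAGMA user_version approach above can be sketched like this; the migration bodies are illustrative, not the project's actual schema:

```python
import sqlite3

MIGRATIONS = [
    # Index 0 brings the schema to version 1, index 1 to version 2, etc.
    "CREATE TABLE IF NOT EXISTS memories (id INTEGER PRIMARY KEY, content TEXT)",
    "ALTER TABLE memories ADD COLUMN deleted_at TEXT",
]

def migrate(conn: sqlite3.Connection) -> int:
    # user_version starts at 0 on a fresh database.
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version, sql in enumerate(MIGRATIONS, start=1):
        if version > current:
            conn.execute(sql)
            # Record progress so a partial run resumes correctly.
            conn.execute(f"PRAGMA user_version = {version}")
    conn.commit()
    return conn.execute("PRAGMA user_version").fetchone()[0]
```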
Add detailed Search Algorithm section covering FTS5/BM25 (SQLite) and
tsvector/ts_rank (PostgreSQL) backends, query construction, semantic
expansion at store time, and a comparison table. Also add Sensitive
Memory & Secrets section, Auto-Learn details, and Background Sync Engine
documentation.
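For reference, a toy FTS5/BM25 query of the kind the Search Algorithm section documents; the table and column names are examples, not the app's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE mem_fts USING fts5(content)")
conn.executemany(
    "INSERT INTO mem_fts (content) VALUES (?)",
    [("postgres connection pooling tips",),
     ("sqlite pragma tuning",),
     ("pooling strategies for asyncpg",)],
)
# bm25() returns lower scores for better matches, so sort ascending.
rows = conn.execute(
    "SELECT content, bm25(mem_fts) FROM mem_fts "
    "WHERE mem_fts MATCH ? ORDER BY bm25(mem_fts)",
    ("pooling",),
).fetchall()
```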
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add type annotations to all FastAPI endpoints in api/app.py
- Fix bare list/dict generics in sync.py and app.py
- Fix no-any-return in vault_client.py and sync.py
- Remove mypy || true from GitHub Actions CI — mypy is now clean
- Add type annotations to all _sqlite_* private methods
- Type sqlite_conn as sqlite3.Connection | None with assert narrowing
- Fix _api_request return type (dict[str, Any])
- Add return type to _init_sqlite
- Fix tool_name type in handle_tools_call
- Import sqlite3 at module level
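The Optional-narrowing pattern from these points, roughly (class and method names here are illustrative):

```python
import sqlite3

class MemoryServer:
    def __init__(self) -> None:
        # Typed as Optional: None until SQLite mode is initialized.
        self.sqlite_conn: sqlite3.Connection | None = None

    def _sqlite_count(self) -> int:
        # assert narrows the Optional for mypy and fails fast if the
        # connection was never initialized.
        assert self.sqlite_conn is not None
        row = self.sqlite_conn.execute(
            "SELECT count(*) FROM sqlite_master").fetchone()
        return int(row[0])
```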
- GHA ci.yml: add build + deploy jobs (push to DockerHub, trigger Woodpecker)
- Drop test matrix to single Python 3.12, preserve mypy step
- Build/deploy gated on push to main (PRs still run tests only)
- Woodpecker: deploy.yml (manual event, kubectl set image + slack notify)
- Old pipeline preserved as build-fallback.yml (manual trigger)
The delete endpoint intentionally returns 200 (not 404) when a memory
is already deleted, to prevent retry loops in old clients. Tests were
asserting 404 incorrectly.
- Blend BM25/ts_rank relevance with importance instead of sorting by one
dimension only. Default mode: 40% relevance + 60% importance. Relevance
mode: 70% relevance + 30% importance.
- Try AND-match first for precise results, fall back to OR-match when too
few results are found. Prevents single-word matches from flooding results.
- Applied to both SQLite (local) and PostgreSQL (API) search paths.
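A sketch of both behaviors. The weights come from the commit; the 0..1 normalization of the raw relevance score and the fallback threshold of 3 are assumptions:

```python
def blended_score(relevance: float, importance: float,
                  mode: str = "default") -> float:
    # Both inputs assumed normalized to 0..1.
    if mode == "relevance":
        return 0.7 * relevance + 0.3 * importance
    return 0.4 * relevance + 0.6 * importance

def search(run_query, terms, min_results: int = 3):
    # Precise pass first: every term must match.
    results = run_query(" AND ".join(terms))
    if len(results) < min_results:
        # Too few hits: widen to any-term matching.
        results = run_query(" OR ".join(terms))
    return results
```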
- Add MAX_MEMORY_CHARS=800 Pydantic validation on MemoryStore.content
- Update auto-learn judge prompts: "ONE topic per event", 100-500 chars,
include the WHY not just the WHAT
- Split 9 mega-memories (800-2400ch) into 70 focused memories (100-500ch)
via migration script
Before: median 331ch, 11 memories >800ch, recall wastes 84% of returned tokens
After: median 213ch, 2 memories >800ch (dense single-topic refs), recall returns
only the relevant knowledge
Trade-off research: PostgreSQL ts_rank gives the same score regardless of
document size, so a 2400-char memory with 12 topics gets recalled for any
of its 12 topics but wastes context with the other 11. Focused memories
(100-500ch) give higher signal-to-noise per recall.
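A plain-Python sketch of the cap; the real change enforces it with a Pydantic validator on MemoryStore.content, and the error message here is illustrative:

```python
MAX_MEMORY_CHARS = 800

def validate_content(content: str) -> str:
    if not content.strip():
        raise ValueError("content must not be empty")
    if len(content) > MAX_MEMORY_CHARS:
        # Nudge callers toward focused single-topic memories.
        raise ValueError(
            f"content exceeds {MAX_MEMORY_CHARS} chars; "
            "split into focused single-topic memories")
    return content
```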
Old sync clients without 404 handling get stuck in infinite retry
loops when trying to delete an already-deleted memory. Making the
endpoint idempotent (returning success regardless) fixes this for
all existing clients without requiring client upgrades.
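The idempotent behavior in miniature, framework-agnostically (the real endpoint is a FastAPI route; the store dict stands in for the database):

```python
from datetime import datetime, timezone

def delete_memory(store: dict, memory_id: int) -> tuple[int, dict]:
    memory = store.get(memory_id)
    if memory is not None and memory.get("deleted_at") is None:
        # Soft delete: mark rather than remove, so sync can propagate it.
        memory["deleted_at"] = datetime.now(timezone.utc).isoformat()
    # 200 either way: deleting an already-deleted (or unknown) memory
    # is a no-op success, which keeps old clients out of retry loops.
    return 200, {"status": "deleted", "id": memory_id}
```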
Three fixes for high 4xx alert rate:
1. Catch urllib.error.HTTPError (not just RuntimeError) for 404 on
DELETE in pending_ops — was causing infinite retry loop for
already-deleted memories
2. try_sync_delete: treat 404 as success instead of enqueuing
a retry that will also 404 forever
3. URL-encode the `since` query param to prevent `+` in timezone
offset being decoded to a space (the asyncpg-string-timestamp
pattern applied to the sync client)
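Fix 3 in miniature: quoting the since timestamp so the '+' survives the round trip (the endpoint path is illustrative):

```python
from urllib.parse import parse_qs, quote, urlsplit

since = "2024-05-01T12:00:00+00:00"
# safe="" percent-encodes '+' (and ':'); an unencoded '+' would be
# decoded to a space on the server.
url = "/api/memories/sync?since=" + quote(since, safe="")
# Server side: a proper decode now recovers the original '+'.
decoded = parse_qs(urlsplit(url).query)["since"][0]
```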
- claude CLI: run from /tmp to avoid internet-mode-used marker prompts
- ollama: try small local models (qwen2.5:3b, llama3.2:3b, etc.)
- heuristic: pattern matching for corrections, preferences, decisions
- better JSON extraction: handles markdown fences and surrounding text
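The fence-tolerant JSON extraction might look like this; a sketch, not the exact implementation:

```python
import json
import re

def extract_json(text: str) -> dict:
    # Small local models often wrap JSON in ```json fences or prose.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    candidate = fenced.group(1) if fenced else None
    if candidate is None:
        # No fence: take the outermost-looking brace span.
        brace = re.search(r"\{.*\}", text, re.DOTALL)
        candidate = brace.group(0) if brace else "{}"
    return json.loads(candidate)
```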
- Deep extraction every 5 turns: reads last 5 exchanges for debugging
insights, workarounds, architectural patterns, and operational knowledge
- Single-turn extraction on every other turn (cheap, corrections/prefs only)
- State tracking per session: turn counter, content hashes for dedup
- Writes to both memory API/SQLite AND auto-memory markdown files
- Expanded judge prompt: now catches debugging (error→cause→fix),
workarounds, and operational knowledge — not just corrections/facts
- Auto-cleanup of state files older than 24 hours
The '+' in '+00:00' timezone offsets gets URL-decoded to a space,
causing datetime.fromisoformat() to fail with ValueError. Replace
the space with '+' before parsing.
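A minimal sketch of the repair, assuming the only stray space is the decoded '+' in front of a trailing HH:MM offset:

```python
import re
from datetime import datetime

def parse_since(raw: str) -> datetime:
    # A trailing " HH:MM" is a '+HH:MM' offset whose '+' was decoded
    # to a space; put the '+' back before parsing.
    fixed = re.sub(r" (\d{2}:\d{2})$", r"+\1", raw)
    return datetime.fromisoformat(fixed)
```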
Lead with value prop and install command, add conversation example,
restructure around features/hooks/auto-behaviors, consolidate env
vars and API reference into clean tables, move API server docs below
plugin setup since it's optional.
Consolidates plugin components (hooks, commands, skills, MCP config)
into this repo so it can be installed with:
claude plugins install github:ViktorBarzin/claude-memory-mcp
- .claude-plugin/plugin.json: manifest with all hook events
- mcp/memory-mcp.json: MCP server config using existing src/
- hooks/: compaction survival, auto-recall, auto-learn, auto-approve
- commands/: /remember and /recall slash commands
- skills/: memory-management skill
- Bump MCP server to v2.0.0 with metaclaw migration fallback
- Update README with quick install and plugin hooks docs
- Add SyncEngine for background sync between local SQLite cache and
remote API with pending_ops queue for offline resilience
- Refactor MCP server to support three modes: SQLite-only, hybrid
(local cache + sync, new default), and HTTP-only (legacy)
- Add GET /api/memories/sync endpoint for incremental sync
- Change DELETE to soft delete (set deleted_at) for sync support
- Add deleted_at IS NULL filters to all read queries
- Scale API deployment to 2 replicas with pod anti-affinity, PDB,
and startup probe for high availability
- Add migration 003 for deleted_at column and updated_at index
- Add comprehensive tests for sync engine and API sync endpoint
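The soft-delete flow in miniature, using a toy in-memory table; the real schema has more columns, and the timestamps are hardcoded here for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE memories (
    id INTEGER PRIMARY KEY, content TEXT,
    updated_at TEXT, deleted_at TEXT)""")
conn.execute("INSERT INTO memories VALUES (1, 'keep', '2024-01-01', NULL)")
conn.execute("INSERT INTO memories VALUES (2, 'drop', '2024-01-01', NULL)")

# DELETE becomes an UPDATE so the tombstone can sync to other caches.
conn.execute(
    "UPDATE memories SET deleted_at = '2024-01-02', "
    "updated_at = '2024-01-02' WHERE id = 2")

# Reads filter tombstones; incremental sync deliberately does not,
# so clients learn about deletions too.
visible = conn.execute(
    "SELECT id FROM memories WHERE deleted_at IS NULL").fetchall()
changed = conn.execute(
    "SELECT id FROM memories WHERE updated_at > '2024-01-01'").fetchall()
```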