When content exceeds 500 chars, it's automatically split into multiple
memories on paragraph boundaries. Each chunk gets the same category,
tags (with a part-N-of-M suffix), keywords, and importance. This removes
the old 800-character hard limit from the Pydantic model.
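The chunking described above can be sketched as follows. This is an illustrative sketch, not the actual code: the function name `split_content` and the constant `MAX_CHARS` are assumptions.

```python
# Hypothetical sketch of paragraph-boundary chunking with part-N-of-M tags.
MAX_CHARS = 500  # assumed threshold from the commit message

def split_content(content: str, tags: list[str]) -> list[dict]:
    """Split long content into chunks on paragraph boundaries,
    adding a part-N-of-M tag to each chunk."""
    if len(content) <= MAX_CHARS:
        return [{"content": content, "tags": list(tags)}]

    paragraphs = content.split("\n\n")
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        # Start a new chunk when adding this paragraph would exceed the limit.
        # (A single paragraph longer than the limit is kept whole in this sketch.)
        if current and len(current) + len(para) + 2 > MAX_CHARS:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)

    total = len(chunks)
    return [
        {"content": chunk, "tags": list(tags) + [f"part-{i}-of-{total}"]}
        for i, chunk in enumerate(chunks, start=1)
    ]
```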
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
With 375 memories and a 1M-token context window, low default limits just
hide results.
Agents can still pass a smaller limit when they want fewer results.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
All memories are now visible to all users in recall/list/count queries.
Each memory still has an owner (user_id) who retains exclusive delete
rights. This removes the need for explicit sharing: wizard and emo
automatically see each other's memories.
Changes:
- recall/list: single query without the user_id filter; owner field added
- count: counts all memories globally
- REST categories/tags: show all users' data
- Delete/update: unchanged (owner-only or write-share)
- Sync: unchanged (stays user-scoped)
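The new read path can be sketched with a minimal SQLite query: the `user_id` filter is dropped and the owner is surfaced alongside each row. Schema and column names here are simplified illustrations.

```python
# Sketch of the global-visibility read path (schema is illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE memories (id INTEGER PRIMARY KEY, user_id TEXT, content TEXT)"
)
conn.executemany(
    "INSERT INTO memories (user_id, content) VALUES (?, ?)",
    [("wizard", "spellbook location"), ("emo", "favorite song")],
)

# Before: ... WHERE user_id = ?
# After: no user filter; the owner is returned with each memory.
rows = conn.execute(
    "SELECT content, user_id AS owner FROM memories ORDER BY id"
).fetchall()
```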
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Decouple push and pull in _sync_once() so pull always runs even if push fails
- Add startup full resync to catch drift from other agents and schema changes
- Add periodic full resync every ~10 minutes for continuous drift correction
- Add auth failure detection (401/403) with graceful SQLite-only degradation
- Add /api/auth-check endpoint for lightweight key validation
- Add retry cap (5 attempts) on pending ops to prevent infinite queue buildup
- Add orphan reconciliation: push local-only records with content dedup
- Add memory_count MCP tool for sync diagnostics
- Add version-based SQLite schema migration (PRAGMA user_version)
- Fix API key in ~/.claude.json to match server
- Update README with sync resilience docs, test structure, project layout
- Add 30 new tests covering all new behaviors (155 total, all passing)
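The push/pull decoupling in the first bullet can be sketched as below. The class shape and callback names are assumptions; the real engine also handles the queue, resync, and auth behaviors listed above.

```python
# Minimal sketch of decoupled push/pull in _sync_once(): a failed push
# no longer aborts the cycle, so the pull always runs.
import logging

logger = logging.getLogger("sync")

class SyncEngine:
    def __init__(self, push, pull):
        self._push = push  # pushes pending local ops to the remote API
        self._pull = pull  # pulls remote changes into the SQLite cache

    def _sync_once(self) -> None:
        try:
            self._push()
        except Exception:
            # Log and continue: the pull runs even when the push fails.
            logger.exception("push failed; continuing with pull")
        self._pull()
```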
The delete endpoint intentionally returns 200 (not 404) when a memory
is already deleted, preventing retry loops in old clients. The tests
incorrectly asserted 404.
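The idempotent-delete behavior can be sketched as below. The in-memory store and handler are illustrative stand-ins for the real endpoint.

```python
# Sketch of the idempotent delete: repeating a delete on an already-deleted
# (or missing) memory still returns 200 so old clients don't retry forever.
store: dict[str, dict] = {"m1": {"content": "hello", "deleted": False}}

def delete_memory(memory_id: str) -> tuple[int, dict]:
    mem = store.get(memory_id)
    if mem is None or mem["deleted"]:
        # Already gone: succeed anyway to keep the operation idempotent.
        return 200, {"status": "already_deleted"}
    mem["deleted"] = True
    return 200, {"status": "deleted"}
```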
- Add SyncEngine for background sync between local SQLite cache and
remote API with pending_ops queue for offline resilience
- Refactor MCP server to support three modes: SQLite-only, hybrid
(local cache + sync, new default), and HTTP-only (legacy)
- Add GET /api/memories/sync endpoint for incremental sync
- Change DELETE to soft delete (set deleted_at) for sync support
- Add deleted_at IS NULL filters to all read queries
- Scale API deployment to 2 replicas with pod anti-affinity, PDB,
and startup probe for high availability
- Add migration 003 for deleted_at column and updated_at index
- Add comprehensive tests for sync engine and API sync endpoint
- API recall and list endpoints now return {"memories": [...]} matching
the format expected by the MCP server
- Rewrote README with comprehensive setup instructions for new agents,
accurate API reference, migration docs, and multi-user setup guide
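The soft-delete change above can be sketched with a simplified SQLite schema: DELETE becomes an UPDATE that sets `deleted_at`, and reads gain a `deleted_at IS NULL` filter so tombstones survive for sync. Column names mirror the commit but the schema is otherwise illustrative.

```python
# Sketch of soft delete with deleted_at tombstones (schema simplified).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT, "
    "updated_at TEXT, deleted_at TEXT)"
)
conn.execute("INSERT INTO memories (content) VALUES ('keep'), ('drop')")

# Soft delete: write a tombstone instead of DELETE FROM.
conn.execute(
    "UPDATE memories SET deleted_at = datetime('now') WHERE content = 'drop'"
)

# Every read query gains the deleted_at IS NULL filter; the incremental
# sync endpoint can still see the tombstoned row for propagation.
live = conn.execute(
    "SELECT content FROM memories WHERE deleted_at IS NULL"
).fetchall()
```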