ClawMem

Upgrading ClawMem

Guide for upgrading between released versions. Current: v0.10.4.

ClawMem upgrades are designed to be drop-in: pull the new version, restart any long-lived processes, and the SQLite schema auto-migrates on first open. This guide documents per-version specifics for upgrades that have additional considerations beyond the quick path below.

Quick path

# Option A: npm / bun global install
bun update -g clawmem   # or: npm update -g clawmem

# Option B: source install
cd ~/clawmem && git pull

# Restart long-lived processes to pick up the new code
systemctl --user restart clawmem-watcher.service  # if installed as a user unit

Hooks (spawned fresh per Claude Code invocation) and the MCP stdio server (respawned per Claude Code session) pick up new code automatically on their next invocation — no restart required for those. Only persistent daemons like clawmem watch, clawmem serve, and the systemd embed/watcher/curator units need to be restarted.

What auto-applies on first open

All schema changes from v0.7.1 → v0.9.0 are additive and idempotent.

The first time any v0.7.1+ process opens an existing vault, the migrations run silently. Hook invocations alone are sufficient — you do not need a manual upgrade command.

What you do NOT need to run


v0.10.3 → v0.10.4

v0.10.4 fixes issue #11: clawmem setup openclaw previously hardcoded ~/.openclaw/extensions/clawmem and ignored OPENCLAW_STATE_DIR, breaking installs into custom OpenClaw profiles. The fix is non-breaking: the vault on disk is byte-identical, there are no schema changes, no env-var changes for users on the default profile, and no retrieval-pipeline or hook changes. A pure bun update -g clawmem is all you need.

Behavior changes you should know about

Custom profile users (the headline win)

If you run OpenClaw with a non-default profile, the upgrade is now a one-liner:

# Pre-v0.10.4 (broken): installed into ~/.openclaw regardless of OPENCLAW_STATE_DIR
OPENCLAW_STATE_DIR=~/.openclaw-dev clawmem setup openclaw

# v0.10.4+: installs into ~/.openclaw-dev/extensions/clawmem as expected
OPENCLAW_STATE_DIR=~/.openclaw-dev clawmem setup openclaw

If you previously worked around the bug by manually copying the plugin directory into your profile, run clawmem setup openclaw --remove (in the affected profile, via OPENCLAW_STATE_DIR) followed by a fresh clawmem setup openclaw to land cleanly. Alternatively, leave the manual copy in place: v0.10.4 is backwards-compatible, and --remove falls back to manual cleanup if the CLI uninstall fails because the install isn’t in OpenClaw’s records.

Quick path

# Source install
cd ~/clawmem && git pull

# Or via npm/bun
bun update -g clawmem

# Re-run setup so any new behavior takes effect (idempotent)
clawmem setup openclaw

# Restart long-lived processes (only if you re-installed the plugin)
sudo systemctl restart openclaw-gateway.service   # or your gateway unit

No schema migration. No vault changes. The OpenClaw plugin source under src/openclaw/ is unchanged from v0.10.3 — the change is entirely in cmdSetupOpenClaw (and a new src/openclaw-paths.ts helper module that mirrors OpenClaw’s path-resolution semantics for the CLI-absent fallback). See docs/guides/openclaw-plugin.md for the full Install reference and docs/troubleshooting.md for the symptom→fix entries this release closes.
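The fix described above amounts to respecting OPENCLAW_STATE_DIR when resolving the plugin install directory. As an illustration only — the function name and exact fallback rules are assumptions, not ClawMem's actual src/openclaw-paths.ts API — the resolution logic might look like:

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Hypothetical sketch of the v0.10.4 behavior: honor OPENCLAW_STATE_DIR when
// set (and non-blank), otherwise fall back to the default ~/.openclaw profile.
function resolveExtensionsDir(env: Record<string, string | undefined> = process.env): string {
  const stateDir = env.OPENCLAW_STATE_DIR?.trim();
  const root = stateDir ? stateDir : join(homedir(), ".openclaw");
  return join(root, "extensions", "clawmem");
}
```

Pre-v0.10.4 behaved as if the fallback branch were unconditional, which is exactly why custom-profile installs landed in the wrong place.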

v0.9.0 → v0.10.0

v0.10.0 is the OpenClaw pure-memory migration (§14.3). It changes how the ClawMem plugin registers with OpenClaw and updates clawmem setup openclaw to produce a layout that the v2026.4.11+ plugin discoverer actually finds. There are no schema changes, no new env vars, and no changes to the retrieval pipeline, hook set, or agent tools. The vault on disk is byte-identical to v0.9.0. Claude Code users who do not run OpenClaw can upgrade with no action beyond git pull.

Why the migration, in one paragraph. OpenClaw and Hermes have converged on a two-surface plugin model — one slot for memory plugins (cross-session, retrieval-first) and a separate slot for context-engine plugins (in-session, compaction-first). Under that model ClawMem is a memory layer, not a context engine, and Hermes has always had it plugged in correctly via MemoryProvider. Pre-v0.10.0 the OpenClaw integration occupied the context-engine slot only because OpenClaw had no separate memory slot at the time. v0.10.0 moves ClawMem to the OpenClaw memory slot and frees the context-engine slot for genuine compression/compaction plugins like lossless-claw. You can now run both at once. See RELEASE_NOTES.md and docs/guides/openclaw-plugin.md for the full rationale.

OpenClaw users MUST upgrade to OpenClaw v2026.4.11+ before running v0.10.0’s clawmem setup openclaw. The new discovery contract and the new install layout both depend on behavior that only exists in OpenClaw v2026.4.11 and later.

Quick path

# Source install
cd ~/clawmem && git pull

# Restart long-lived daemons (still needed if you run the watcher / serve as a user unit)
systemctl --user restart clawmem-watcher.service

# OpenClaw users: re-run setup so the plugin dir switches from symlink to recursive copy
clawmem setup openclaw

# Multi-user installs only: chown the new plugin dir to the gateway user
#   (see docs/guides/openclaw-plugin.md for the full ownership gotcha)
sudo chown -R <openclaw-gateway-user>:<gateway-group> ~/.openclaw/extensions/clawmem

# Then restart the gateway so it re-discovers the plugin
sudo systemctl restart openclaw-gateway.service   # or whatever your gateway unit is called

Hooks and the stdio MCP server pick up the new binary automatically on their next invocation — no clawmem setup hooks re-run needed.

What changed under the hood

Rollback

Rolling back to v0.9.0 is a git checkout v0.9.0 && clawmem setup openclaw --link away. The --link flag produces the old symlink layout, which is what v0.9.0 expected. If you are rolling back because you are on an OpenClaw version older than v2026.4.11, the symlink layout will still work (pre-v2026.4.11 discovery did not require the package.json file and did not have the dirent.isDirectory() gate). You do not need to downgrade OpenClaw.

If you rolled back and still see context engine 'clawmem' is not registered, remove the stale slot config: openclaw config set plugins.slots.memory "" and openclaw config set plugins.slots.contextEngine clawmem, then restart the gateway.

What you do NOT need to run


v0.8.5 → v0.9.0

v0.9.0 adds two new context-surfacing features — <vault-facts> KG injection and session-scoped focus topic boost — and is drop-in safe. One idempotent expression-index migration, no breaking API changes, no schema rewrites, no reindex/embed/graph-build needed. All behavior changes on existing code paths are additive and fail-open: if the new stages don’t fire (no entity seeds from the prompt, no focus file set), the <vault-context> output is byte-identical to v0.8.5.

Quick path

bun update -g clawmem   # or: npm update -g clawmem
# Or source install:
cd ~/clawmem && git pull

# Restart long-lived daemons so they pick up the new context-surfacing stages
systemctl --user restart clawmem-watcher.service
# If you run `clawmem serve` or `clawmem watch` in systemd, restart those too.

Hooks + MCP stdio pick up new code automatically on next invocation — no restart needed for those.

What changes on first open

New <vault-facts> block in <vault-context>

When the user’s prompt mentions entities already known to the vault (via entity_nodes), context-surfacing now appends a token-bounded <vault-facts> block of raw SPO triple lines to <vault-context>, alongside the existing <facts> / <relationships> blocks. This feeds the model current-state knowledge about entities the user is talking about, without requiring the agent to call kg_query explicitly.

No configuration required. You can verify the block appears by running echo "tell me about <some entity from entity_nodes>" | clawmem surface --context --stdin after upgrade.
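The token-bounding described above can be sketched as a simple budgeted append. This is an illustrative assumption — the function name and the rough 4-characters-per-token estimate are not ClawMem's actual implementation:

```typescript
// Append SPO triple lines until an approximate token budget is spent,
// then wrap the survivors in a <vault-facts> block (empty input → no block).
function buildVaultFacts(tripleLines: string[], maxTokens: number): string {
  const lines: string[] = [];
  let tokens = 0;
  for (const line of tripleLines) {
    const cost = Math.ceil(line.length / 4); // rough token estimate (assumption)
    if (tokens + cost > maxTokens) break;
    lines.push(line);
    tokens += cost;
  }
  return lines.length ? `<vault-facts>\n${lines.join("\n")}\n</vault-facts>` : "";
}
```

The fail-open property follows directly: with no entity seeds there are no triple lines, the function returns an empty string, and <vault-context> is unchanged.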

New clawmem focus CLI — session-scoped topic boost

Three new subcommands write a per-session focus file that steers context-surfacing for that session only:

clawmem focus set "authentication flow"                       # uses CLAUDE_SESSION_ID env var
clawmem focus set "authentication flow" --session-id abc123   # explicit
clawmem focus show --session-id abc123
clawmem focus clear --session-id abc123

When a focus topic is set:

Session isolation contract: the focus file is keyed by sessionId and never writes to SQLite, never mutates confidence / status / snoozed_until / any lifecycle column. Concurrent sessions on the same host cannot cross-contaminate each other’s topic biasing. CLAWMEM_SESSION_FOCUS env var is a debug-only override that does NOT provide per-session scoping — do not rely on it in multi-session deployments. CLAWMEM_FOCUS_ROOT override is available for hermetic testing.
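The isolation contract above is just per-session file keying. A minimal sketch, assuming a sanitized filename layout under a focus root (the path scheme and function name are illustrative, not ClawMem's real on-disk layout):

```typescript
import { tmpdir } from "node:os";
import { join } from "node:path";

// Key the focus file by sessionId so concurrent sessions on the same host
// read and write disjoint files; CLAWMEM_FOCUS_ROOT overrides the root dir.
function focusFilePath(
  sessionId: string,
  root: string = process.env.CLAWMEM_FOCUS_ROOT ?? tmpdir(),
): string {
  // Sanitize so a hostile session id cannot escape the focus root.
  const safe = sessionId.replace(/[^A-Za-z0-9_-]/g, "_");
  return join(root, "clawmem-focus", `${safe}.json`);
}
```

Because the key is the session id, two sessions can never resolve to the same file, which is what makes cross-contamination impossible without touching SQLite at all.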

What you do NOT need to run

Rollback

If you need to roll back to v0.8.5, the idx_entity_nodes_lower_name index is harmless on pre-v0.9.0 code — SQLite will simply ignore it. No data cleanup required, no compatibility shim needed.


v0.8.4 → v0.8.5

v0.8.5 is a drop-in fix for the SPO triple extraction bug cluster (see RELEASE_NOTES.md entry for the full bug list). entity_triples stayed at zero on pre-v0.8.5 vaults regardless of activity, making kg_query return empty for every entity. v0.8.5 fixes the population path end-to-end. No schema changes, no new env vars, no new dependencies, no breaking API changes. All behavior changes are additive.

Quick path

bun update -g clawmem   # or: npm update -g clawmem
# Or source install:
cd ~/clawmem && git pull

# Restart long-lived daemons so they pick up the new decision-extractor pipeline
systemctl --user restart clawmem-watcher.service

Claude Code hooks and the stdio MCP server pick up the new code automatically on their next invocation — no clawmem setup hooks re-run needed. The hook command in ~/.claude/settings.json is ${binPath} hook ${name}, which resolves at runtime to the upgraded binary.

What changed behaviorally

Do I need to clean up dead pre-v0.8.5 data?

Optional. Pre-v0.8.5 runs left two kinds of harmless dead data in SQLite:

  1. entity_nodes rows with entity_type='auto' — minted by the old regex-based triple path. 'auto' is not a valid compatibility bucket, so these entities never resolve via kg_query — they cost a few KB of storage and nothing else.
  2. entity_triples rows with schema-placeholder source_fact values (e.g. "Individual atomic fact", "canonical entity name") — the 1.7B observer occasionally echoed example text from the old prompt into real facts.

Neither affects correctness or query results going forward, because v0.8.5 writes canonical IDs only and the new prompt + parser filter placeholder strings before persistence. Clean them if you want a tidy store:

sqlite3 ~/.cache/clawmem/index.sqlite "
  DELETE FROM entity_triples WHERE source_fact LIKE '%atomic fact%' OR source_fact LIKE '%canonical entity name%';
  DELETE FROM entity_nodes WHERE entity_type='auto';
"

The troubleshooting guide has the full set of diagnostic queries and the symptom-by-symptom checklist for confirming you were on pre-v0.8.5: see the “kg_query returns empty for every entity” entry in docs/troubleshooting.md.

Do I need to re-run clawmem reindex --enrich?

No. v0.8.5 does not introduce new enrichment stages, so --enrich is not required to benefit from the fix. New Stop-hook activity from v0.8.5 onward is the cleanest source of triples.

Running --enrich anyway is harmless but unnecessary: it re-extracts entities against the same entity cap as v0.8.3 and does not re-fire the decision-extractor hook. Re-enrichment of already-persisted _clawmem/observations/*.md files also cannot recover observations lost to the pre-v0.8.5 path-collision bug — the source transcripts were consumed by the Stop hook in their original session and are permanently gone. Only future sessions generate new triples.

Confirming the fix is live

After a real Claude Code session that fires the Stop hook:

# Should show reconstructed "subject predicate object" strings, not placeholder echoes or JSON blobs
sqlite3 ~/.cache/clawmem/index.sqlite \
  "SELECT source_fact FROM entity_triples ORDER BY created_at DESC LIMIT 5;"

# Should show only real bucket types (project/service/tool/concept/person/org/location), never 'auto'
sqlite3 ~/.cache/clawmem/index.sqlite \
  "SELECT DISTINCT entity_type FROM entity_nodes
   WHERE entity_id IN (SELECT subject_id FROM entity_triples ORDER BY created_at DESC LIMIT 50);"

# Should increment after each real session — not after every indexing tick
sqlite3 ~/.cache/clawmem/index.sqlite "SELECT COUNT(*) FROM entity_triples;"

If entity_triples still shows zero after multiple Stop-hook-firing sessions, the Stop hook itself is not firing — check ~/.claude/settings.json for a ClawMem entry under hooks.Stop, and run clawmem doctor to verify hook installation.


v0.8.2 → v0.8.3

v0.8.3 is a drop-in patch release. No schema changes, no new env vars, no new dependencies. All changes are transparent behavior fixes that take effect automatically after upgrading the package and restarting any long-lived clawmem processes.

Quick path

bun update -g clawmem   # or: npm update -g clawmem
# Or source install:
cd ~/clawmem && git pull

# Restart long-lived daemons so they pick up the new entity extraction + self-loop guard
systemctl --user restart clawmem-watcher.service

What changed behaviorally

Do I need to re-run clawmem reindex --enrich to pick up the new entity cap?

Only if you want previously-enriched long-form documents to re-extract entities against the new cap. A-MEM enrichment is tied to an input_hash of (title + body), so re-enrichment is skipped when the document content is unchanged. To force re-extraction against the new cap on already-indexed docs:

# Re-enrich all documents (LLM call per doc — expect latency on large vaults)
clawmem reindex --enrich

This is purely opt-in. New and modified documents pick up the new cap automatically on their next enrichment pass — no manual step needed for those.
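The input_hash gate described above can be sketched as a content-addressed skip check. The hash recipe (SHA-256 over title + newline + body) and function names are assumptions for illustration:

```typescript
import { createHash } from "node:crypto";

// Hash the enrichment input so unchanged documents skip the per-doc LLM call.
function inputHash(title: string, body: string): string {
  return createHash("sha256").update(title + "\n" + body).digest("hex");
}

// Re-enrich only when the stored hash is missing or the content changed.
function needsEnrichment(title: string, body: string, storedHash: string | null): boolean {
  return storedHash !== inputHash(title, body);
}
```

This is why plain clawmem reindex --enrich is cheap on unchanged vaults: every doc whose (title + body) hash matches its stored value is skipped before any LLM call.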

Do I need to rebuild graphs?

No. The self-loop guard only affects new writes. Existing self-loops in memory_relations (if any) are not scrubbed on upgrade. To check for and clean pre-existing self-loops:

sqlite3 ~/.cache/clawmem/index.sqlite \
  "SELECT COUNT(*) FROM memory_relations WHERE source_id = target_id;"

# If non-zero and you want to clean them:
sqlite3 ~/.cache/clawmem/index.sqlite \
  "DELETE FROM memory_relations WHERE source_id = target_id;"

This is optional housekeeping — the guard ensures no new self-loops are created, and existing ones have no observed impact beyond graph noise.
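The self-loop guard is conceptually a one-line filter on the write path. A minimal sketch (type and function names are illustrative, not ClawMem's internal API):

```typescript
// A relation row as it heads toward memory_relations (illustrative shape).
type Relation = { sourceId: string; targetId: string; type: string };

// Drop relations whose source and target are the same node before they
// reach the database — mirroring the SQL cleanup above, but preventively.
function filterSelfLoops(relations: Relation[]): Relation[] {
  return relations.filter((r) => r.sourceId !== r.targetId);
}
```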


v0.8.1 → v0.8.2

v0.8.2 is a pure code release: no schema changes, no new dependencies, no new env vars. The only behavior change is operational — the long-lived clawmem watch process now hosts the consolidation and heavy maintenance lane workers in addition to cmdMcp, and the light lane gained the same DB-backed worker_leases exclusivity the heavy lane already had. See docs/concepts/architecture.md#dual-host-worker-architecture-v082 for the architectural walkthrough.

Quick path

git pull   # or: bun update -g clawmem / npm update -g clawmem
systemctl --user restart clawmem-watcher.service  # if installed as a user unit

Move worker hosting from cmdMcp (per-session) to cmdWatch (long-lived, canonical) by setting the env vars on your watcher service unit instead of the wrapper. Example systemd drop-in:

systemctl --user edit clawmem-watcher.service

Then paste:

[Service]
Environment=CLAWMEM_ENABLE_CONSOLIDATION=true
Environment=CLAWMEM_HEAVY_LANE=true
Environment=CLAWMEM_HEAVY_LANE_WINDOW_START=2
Environment=CLAWMEM_HEAVY_LANE_WINDOW_END=6

Then systemctl --user restart clawmem-watcher.service. The watcher process now runs both lanes 24/7 — the heavy lane sees the configured 02:00-06:00 quiet window every night regardless of whether any Claude Code session is open at the time, and the light lane drains the enrichment backlog continuously.
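The quiet-window gate reduces to an hour-range check. As a sketch only — the half-open interval convention and wrap-around handling here are assumptions, not guaranteed to match ClawMem's exact boundary behavior:

```typescript
// True when the local hour falls inside [start, end); a wrap-around window
// such as 22 → 4 is treated as "late evening or early morning".
function inQuietWindow(hour: number, start: number, end: number): boolean {
  return start <= end
    ? hour >= start && hour < end
    : hour >= start || hour < end;
}
```

With the drop-in above (WINDOW_START=2, WINDOW_END=6), the heavy lane fires only when the watcher's tick lands between 02:00 and 06:00 local time.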

cmdMcp remains a supported fallback host for users who do not run clawmem watch (e.g. macOS users running everything via Claude Code launchd). When CLAWMEM_HEAVY_LANE=true is set on a stdio MCP host, cmdMcp emits a one-line warning to stderr advising operators to move heavy-lane hosting to the watcher.

What changed under the hood

Multi-host safety

Running BOTH clawmem watch (with env vars) AND a per-session clawmem mcp (with env vars) against the same vault is supported in v0.8.2. The worker_leases table arbitrates: only one host wins each tick, the other journals a skip (heavy lane) or logs “lease held” (light lane). For the cleanest setup, set the env vars on clawmem-watcher.service only and leave cmdMcp unset.

What you do NOT need to do

Verify the upgrade

# Confirm watcher is running latest code
systemctl --user status clawmem-watcher.service

# After enabling the env vars + restart, watcher should log both worker
# startup banners. The exact intervals depend on the env vars you set —
# defaults are 5-min light lane and 30-min heavy lane.
journalctl --user -u clawmem-watcher.service -n 50 --no-pager | \
  grep -E "Starting (consolidation|heavy)"
# Expected:
#   [watch] Starting consolidation worker (light lane, interval=...)
#   [consolidation] Worker started
#   [watch] Starting heavy maintenance lane worker
#   [heavy-lane] Starting worker (interval=..., window=..., ...)

For the full operator guide — what to expect over the first hour, the per-usage-pattern tuning matrix, the complete monitoring query set, and rollback steps — see docs/guides/systemd-services.md.


v0.7.0 → v0.8.1

Schema migrations (automatic)

Version   Change                   Tables / columns added
v0.7.1    Contradiction gate       memory_relations.contradict_confidence
v0.8.0    Heavy maintenance lane   maintenance_runs + worker_leases tables
v0.8.1    Multi-turn lookback      context_usage.query_text

Applied on first open of any v0.7.1+ process against the vault. No action required.

Optional: legacy contradicts taxonomy cleanup

The v0.7.1 P0 taxonomy cleanup standardized on the A-MEM plural form contradicts across the codebase. Prior code mixed contradict and contradicts, so v0.7.0 vaults may contain orphaned rows with relation_type = 'contradict' (singular) that v0.7.1+ queries cannot reach.

Check whether your vault has any:

sqlite3 -readonly ~/.cache/clawmem/index.sqlite \
  "SELECT relation_type, COUNT(*) FROM memory_relations \
   WHERE relation_type LIKE 'contradict%' GROUP BY relation_type"

If the output shows only contradicts, there is nothing to do. If it shows contradict (singular) rows, rescue them:

sqlite3 ~/.cache/clawmem/index.sqlite \
  "UPDATE memory_relations SET relation_type='contradicts' \
   WHERE relation_type='contradict'"

Cosmetic cleanup — orphaned rows are harmless but invisible to contradict-aware features (the merge-time contradiction gate, Phase 3 deductive dedupe linking, and intent_search WHY-graph traversal over contradicts edges).

Opt-in features

None of these auto-enable. They are new capabilities gated behind environment variables on the long-lived clawmem process.

Heavy maintenance lane (v0.8.0): CLAWMEM_HEAVY_LANE=true + CLAWMEM_HEAVY_LANE_WINDOW_START/END for quiet hours. Second consolidation worker gated by query-rate, scoped exclusively via DB-backed worker_leases, stale-first batching, journaled in maintenance_runs.

Surprisal selector (v0.8.0): CLAWMEM_HEAVY_LANE_SURPRISAL=true. Seeds Phase 2 with k-NN anomaly-ranked doc ids; falls back to stale-first (surprisal-fallback-stale metric) on vaults without embeddings.

Post-import conversation synthesis (v0.7.2): clawmem mine <dir> --synthesize flag. One-shot, not persistent. Runs a two-pass LLM pipeline over freshly imported conversation docs to extract structured decision / preference / milestone / problem facts with cross-fact relations.

Consolidation worker (v0.7.1): CLAWMEM_ENABLE_CONSOLIDATION=true (flag exists pre-v0.7.1). v0.7.1 attaches the new safety gates (Ext 1/2/3) to the existing Phase 2/3 path — enabling the flag picks up the gates automatically.

Contradiction policy (v0.7.1): CLAWMEM_CONTRADICTION_POLICY=link (default: keep both rows + insert contradicts edge) or supersede (mark old row status='inactive').

Merge guard dry-run (v0.7.1): CLAWMEM_MERGE_GUARD_DRY_RUN=true logs merge-safety rejections without enforcing — useful for calibration on older vaults before flipping the gate on. Leave false (default) to enforce.

Full environment variable reference: docs/reference/cli.md.

Auto-active (no opt-in, no action)

These activate automatically as soon as the new code runs:

Verify the upgrade

# Schema check — confirms v0.7.1 / v0.8.0 / v0.8.1 migrations applied
sqlite3 -readonly ~/.cache/clawmem/index.sqlite \
  "SELECT name FROM sqlite_master WHERE type='table' AND name IN \
   ('maintenance_runs','worker_leases','recall_stats','recall_events')"
# expect: all four tables listed

sqlite3 -readonly ~/.cache/clawmem/index.sqlite "PRAGMA table_info(context_usage)" \
  | grep query_text
# expect: a row with query_text TEXT

sqlite3 -readonly ~/.cache/clawmem/index.sqlite "PRAGMA table_info(memory_relations)" \
  | grep contradict_confidence
# expect: a row with contradict_confidence REAL

# Full health check
clawmem doctor

Per-version feature summary

v0.7.1 — Safety release

v0.7.1 adds five independent safety gates around consolidation and context-surfacing.

See docs/concepts/architecture.md for the architectural walkthrough.

v0.7.2 — Post-import conversation synthesis

Two-pass LLM pipeline over freshly imported content_type='conversation' docs. Pass 1 extracts structured facts; Pass 2 resolves cross-fact links via a local alias map with SQL fallback. Gated behind clawmem mine <dir> --synthesize. Idempotent on reruns.

v0.8.0 — Quiet-window heavy maintenance lane

Second consolidation worker gated by a configurable quiet-hour window and query-rate, scoped exclusively via DB-backed worker_leases with atomic INSERT ... ON CONFLICT DO UPDATE ... WHERE expires_at <= ? acquisition and 16-byte fencing tokens. Journals every attempt (including skips) in maintenance_runs. Stale-first batching by default; optional surprisal selector degrades gracefully on vaults without embeddings.
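The lease semantics above — acquisition succeeds only when the slot is free or expired, and every successful acquisition mints a fresh fencing token — can be simulated in memory. This sketch mirrors the semantics, not the SQLite upsert; all names here are illustrative:

```typescript
import { randomBytes } from "node:crypto";

type Lease = { holder: string; token: string; expiresAt: number };
const leases = new Map<string, Lease>();

// Returns a fencing token on success, or null when the lease is still held
// (the real worker journals that skip in maintenance_runs).
function acquireLease(lane: string, holder: string, ttlMs: number, now = Date.now()): string | null {
  const cur = leases.get(lane);
  if (cur && cur.expiresAt > now) return null; // lease held by another host
  const token = randomBytes(16).toString("hex"); // 16-byte fencing token
  leases.set(lane, { holder, token, expiresAt: now + ttlMs });
  return token;
}
```

The fencing token matters because a host that stalls past its TTL may wake up and try to write with a stale token; downstream writes tagged with the old token can then be rejected.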

See docs/concepts/architecture.md for the architectural walkthrough.

v0.8.1 — Multi-turn prior-query lookback

Context-surfacing hook joins the current prompt with up to 2 recent same-session priors from the new context_usage.query_text column for discovery queries (vector, FTS, expansion). Rerank, composite scoring, chunk selection, snippet extraction, file-path FTS supplements, and recall attribution all stay on the raw current prompt. Privacy-conscious persistence split: gated skip paths (slash commands, heartbeats, too-short prompts) persist query_text = NULL to keep agent noise out of future lookback; post-retrieval empty paths still persist so a follow-up turn can reuse the intent.
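The lookback join above is a small transformation: keep only non-NULL priors, take the two most recent, and prepend them to the current prompt. A sketch under those assumptions (function name and the newline join are illustrative):

```typescript
// Build the discovery-query text from up to two recent same-session priors.
// NULL priors (persisted by the gated skip paths) are excluded, which is
// exactly how agent noise stays out of lookback.
function lookbackQuery(current: string, priors: (string | null)[]): string {
  const recent = priors.filter((p): p is string => p !== null).slice(-2);
  return [...recent, current].join("\n");
}
```

Note the split the release describes: this joined text feeds only the discovery stages (vector, FTS, expansion); rerank and everything downstream still see the raw current prompt.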

See docs/concepts/architecture.md for the architectural walkthrough.

v0.8.5 — SPO triple extraction fix

The knowledge graph finally populates. Pre-v0.8.5, decision-extractor wrote zero rows to entity_triples on production vaults because of a bug cluster:

  1. The observation-type gate was too narrow, rejecting ~77% of real observations.
  2. The regex-based triple extractor expected sentence shape, but facts are usually descriptive phrases.
  3. Entity IDs were written with the invalid 'auto' type.
  4. Same-type observations in one session collided on filename, and the second was silently dropped.
  5. A weak model occasionally echoed schema placeholder text ("Individual atomic fact") into real triples.

v0.8.5 replaces the regex path with observer-LLM-emitted <triples> blocks using a tight 13-predicate vocabulary (adopted, migrated_to, deployed_to, runs_on, replaced, depends_on, integrates_with, uses, prefers, avoids, caused_by, resolved_by, owned_by); canonical vault:type:slug entity IDs via ensureEntityCanonical (shared namespace with A-MEM, never writes 'auto'); ambiguity-safe type inheritance via resolveEntityTypeExact (zero or multiple bucket matches default to concept); a widened observation-type gate (decision/preference/milestone/problem/discovery/feature); an 8-char SHA256 hash slice in observation paths for collision-free persistence; placeholder defense at both prompt and parser; a kg_query canonical-ID round-trip; and source_doc_id provenance on every triple. The fix went through a 4-turn Codex review.
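Two of the naming schemes in this fix are easy to illustrate: canonical vault:type:slug entity IDs, and the 8-char SHA256 slice that makes same-session observation paths collision-free. The slug rules and filename layout below are assumptions for illustration, not ClawMem's exact code:

```typescript
import { createHash } from "node:crypto";

// Canonical entity ID: lowercase slug, non-alphanumerics collapsed to "-".
function canonicalEntityId(vault: string, type: string, name: string): string {
  const slug = name.toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-|-$/g, "");
  return `${vault}:${type}:${slug}`;
}

// Observation filename: an 8-char content-hash slice keeps two same-type
// observations from the same session on distinct paths.
function observationFilename(sessionId: string, obsType: string, body: string): string {
  const hash = createHash("sha256").update(body).digest("hex").slice(0, 8);
  return `${sessionId}-${obsType}-${hash}.md`;
}
```

The hash slice is exactly what closes bug 4 above: before v0.8.5, the second same-type observation in a session landed on the first one's path and was silently dropped.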

See docs/troubleshooting.md “kg_query returns empty for every entity” for diagnostic symptoms and optional cleanup SQL.


Downgrading

Schema migrations are additive — downgrading to an earlier version leaves new tables and columns in place, and the older code simply does not touch them. No data corruption risk.

cd ~/clawmem
git checkout v0.7.0   # or your target tag
systemctl --user restart clawmem-watcher.service

To fully remove v0.7.1+ schema additions (destructive, loses any v0.7.1+ data written to the new tables):

sqlite3 ~/.cache/clawmem/index.sqlite << 'SQL'
DROP TABLE IF EXISTS maintenance_runs;
DROP TABLE IF EXISTS worker_leases;
SQL

context_usage.query_text and memory_relations.contradict_confidence cannot be removed without rebuilding the table in SQLite. They are nullable and cost nothing to keep. Not recommended.