Common issues when running ClawMem with hooks, the MCP server, or the OpenClaw plugin. Organized by subsystem.
Snap Bun: EPERM on stdin (hooks return empty)
Snap-packaged Bun (/snap/bin/bun) cannot read stdin due to snap’s confinement sandbox. Hooks receive no prompt input and silently return empty context.
Fix: install native Bun (curl -fsSL https://bun.sh/install | bash), which places it at ~/.bun/bin/bun. The bin/clawmem wrapper prefers ~/.bun/bin/bun over the system bun for this reason. If hooks return empty on a snap-based system, verify that ~/.bun/bin/bun exists and is executable.
Two Bun binaries on PATH
When both snap Bun (/snap/bin/bun) and native Bun (~/.bun/bin/bun) are installed, which bun may return the snap version, so direct bun -e or bun run commands use the wrong binary.
Fix: the bin/clawmem wrapper handles this automatically. For manual commands, use ~/.bun/bin/bun explicitly or add ~/.bun/bin to PATH before /snap/bin.
“Local model download blocked” error
Cause: CLAWMEM_NO_LOCAL_MODELS=true is set, which blocks local model downloads.
Fix: set CLAWMEM_NO_LOCAL_MODELS=false to allow in-process fallback.
“Remote LLM in cooldown, falling back to in-process generation”
When the remote LLM enters cooldown, generate() and expandQuery() use local node-llama-cpp. Remote is retried automatically after the cooldown expires. HTTP errors (400, 500) and AbortError do NOT trigger cooldown — remote is retried on the next call. Set CLAWMEM_NO_LOCAL_MODELS=true to prevent local fallback (returns null / passthrough instead).
Unexpectedly slow inference (in-process fallback)
Cause: the remote LLM is in cooldown and inference fell back to in-process node-llama-cpp (logged as a cooldown message). With GPU acceleration (Metal on Apple Silicon, Vulkan on supported hardware), the fallback is fast. On CPU-only systems, inference is significantly slower.
Fix: keep the remote server running (e.g. via a systemd unit with Restart=on-failure). Or set CLAWMEM_NO_LOCAL_MODELS=true to fail fast instead of falling back.
Query expansion always fails or returns garbage
Some documents never get embedded (stuck after multiple sweeps)
Fix: run clawmem embed --force (which calls clearAllEmbeddings()) to reset all embed state and retry everything. Check getEmbedStats() via the MCP status tool for pending/synced/failed counts. Failed documents should be retried — seq=0 is required for surprisal scoring, the semantic graph, and health checks.
Vector search returns no results but BM25 works
Fix: run clawmem embed or wait for the daily embed timer.
Vector search: “Dimension mismatch” error
Cause: the remote embedding server is unreachable, so node-llama-cpp falls back to the default model, which has different dimensions.
Fix: verify the embedding server is reachable (curl http://host:8088/health). Or set CLAWMEM_NO_LOCAL_MODELS=true to fail fast instead of falling back to a mismatched model. If you switched models, re-embed the vault with clawmem embed --force.
Embedding fails with “input is too large to process”
Cause: a full document fragment exceeds the model’s token context (2048 tokens for EmbeddingGemma).
API key + localhost warning
Cause: CLAWMEM_EMBED_API_KEY is set but CLAWMEM_EMBED_URL points to localhost.
context-surfacing hook returns empty
Cause: the prompt is a slash command starting with /, or no docs score above the threshold.
Fix: check clawmem status for doc counts. Check clawmem embed for embedding coverage.
intent_search returns weak results for WHY/ENTITY
Fix: run build_graphs to add the temporal backbone + semantic edges.
search returns results but query returns nothing
Cause: query applies stricter scoring (composite + MMR + expansion). If the expansion LLM is down, the pipeline may return empty.
Fix: use search or vsearch as a fallback.
kg_query returns empty for every entity
entity_triples is populated by the decision-extractor Stop hook from observer-emitted <triples> blocks. Zero rows typically means either (a) the Stop hook has never fired in this vault, or (b) the observer LLM is not emitting <triples> blocks.
Check the triple count: sqlite3 ~/.cache/clawmem/index.sqlite "SELECT COUNT(*) FROM entity_triples". Then check for observations: sqlite3 ~/.cache/clawmem/index.sqlite "SELECT COUNT(*) FROM documents WHERE collection='_clawmem' AND content_type='observation' AND active=1". If zero, decision-extractor has never fired successfully.
The observer must emit <triples> blocks alongside <facts>. Older prompts may not include this schema — re-run clawmem setup hooks to refresh the hook binaries, and verify the observer is a recent build. After the next session, confirm the entity_triples count increments.
Before v0.8.5, entity_triples stayed at 0 regardless of activity due to regex+gate issues in decision-extractor. v0.8.5 fixes all of this — if you’re on an older version, upgrade. Signs you hit the old bug: (1) entity_triples is 0 or near-0 despite tens of observations in activity; (2) SELECT COUNT(*) FROM entity_nodes WHERE entity_type='auto' returns > 0 — the old regex path minted nodes with entity_type='auto', which is not a valid bucket, so those entities never resolve via kg_query; (3) SELECT path FROM documents WHERE collection='_clawmem' AND path LIKE 'observations/%' shows at most one row per (date, session, obs_type) — the old path scheme had no hash disambiguator, so multiple same-type observations in one session collided on UNIQUE(collection, path) and were silently dropped (the second observation’s triples lost with it); (4) any surviving entity_triples.source_fact values look like 'Individual atomic fact' or similar schema-placeholder strings that leaked from the observer prompt.
Cleanup: (a) leaving the polluted rows is mostly harmless — entity_type='auto' rows never resolve via kg_query. (b) For a cleaner slate, delete just the polluted rows and let A-MEM re-enrich on the next activity: sqlite3 ~/.cache/clawmem/index.sqlite "DELETE FROM entity_triples WHERE source_fact LIKE '%atomic fact%' OR source_fact LIKE '%canonical entity name%'; DELETE FROM entity_nodes WHERE entity_type='auto';". A full clawmem reindex --enrich is only needed if you want entity extraction to re-fire across the whole vault.
To verify health afterwards: sqlite3 ~/.cache/clawmem/index.sqlite "SELECT source_fact FROM entity_triples ORDER BY created_at DESC LIMIT 5" should show reconstructed subject predicate object strings (e.g. ClawMem depends_on Bun), never JSON or placeholder text. SELECT DISTINCT entity_type FROM entity_nodes WHERE entity_id IN (SELECT subject_id FROM entity_triples ORDER BY created_at DESC LIMIT 50) should show only real bucket types (project, service, tool, concept, person, org, location), never auto.
Watcher fires events but collections show 0 docs
Cause: Bun.Glob does not support brace expansion ({a,b,c}) in collection patterns.
Watcher fires events but wrong collection processes them
reindex --force crashes with “UNIQUE constraint failed”
reindex --force after v0.2.0 upgrade shows no entity extraction
Cause: reindex --force treats existing documents as updates (isNew=false). The A-MEM pipeline skips entity extraction, link generation, and memory evolution for updates to avoid churn on routine reindexes.
Fix: use clawmem reindex --enrich instead. The --enrich flag forces the full enrichment pipeline (entity extraction + canonical resolution + co-occurrence tracking + link generation + memory evolution) on all documents, including unchanged ones. --force alone only refreshes A-MEM notes (keywords, tags, context). --enrich is needed after major upgrades that add new enrichment stages (e.g. 0.1.x → 0.2.0 added entity resolution).
Also run reindex through the bin/clawmem wrapper, not bun run src/clawmem.ts directly. The wrapper sets GPU endpoint defaults (CLAWMEM_EMBED_URL, CLAWMEM_LLM_URL, CLAWMEM_RERANK_URL); bypassing the wrapper causes fallback to slow in-process node-llama-cpp inference.
“UserPromptSubmit hook error” (intermittent)
Cause 1: watcher lock contention on transcript .jsonl files. Prior to v0.1.6, the watcher processed all .jsonl file changes (not just Beads .beads/*.jsonl), triggering database opens and brief write locks on every transcript update. If the context-surfacing hook fired during a lock, it exceeded its timeout. From v0.1.6, the watcher only processes .jsonl files within .beads/ directories (Dolt backend). Claude Code transcript .jsonl files are ignored entirely, eliminating the main source of lock contention and memory bloat. The watcher also runs PRAGMA wal_checkpoint(PASSIVE) every 5 minutes to keep the WAL small. If the watcher has been running since before the fix, restart it (systemctl --user restart clawmem-watcher.service). Check systemctl --user status clawmem-watcher.service for memory usage — healthy is under 100MB, bloated is 400MB+.
Cause 2: the hook’s SQLite busy_timeout was 500ms — too tight. During A-MEM enrichment or heavy indexing, the watcher can hold write locks for 500ms+, causing the hook’s DB open to fail with SQLITE_BUSY. Raised to 5000ms (matches the MCP server). The hook’s 8s outer timeout still leaves 3s for actual work after a 5s busy wait.
Cause 3: shell timeout wrappers (e.g., timeout 8 clawmem hook context-surfacing) kill the process with exit 124 and no stderr — Claude Code reports “Failed with non-blocking status code: No stderr output”. This affects all hook events (UserPromptSubmit, Stop, SessionStart, PreCompact), not just Stop hooks. Fix: remove the shell timeout from all hook commands and use Claude Code’s native timeout property instead. Run clawmem setup hooks to reinstall with correct config (v0.3.1+), or manually update ~/.claude/settings.json — see setup-hooks.
Watcher memory bloat (400MB+)
Cause: prior to v0.1.6, the watcher tracked transcript .jsonl files changing on every keystroke during active conversations. Each event opened the database briefly, and over hours of active use, memory grew to 400-800MB. From v0.1.6, transcript .jsonl files are no longer watched; memory stays under 100MB during normal operation.
If memory still grows, watch the live event stream (journalctl --user -u clawmem-watcher -f). Common remaining causes are below.
Diagnosing watcher memory issues:
journalctl --user -u clawmem-watcher -f
Each [change] or [rename] line shows the collection and file. High-frequency entries point to the source.
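To find the noisiest sources quickly, the journal output can be tallied with standard tools — a small sketch (the [change]/[rename] line format is assumed from the description above; adjust the grep if your version logs differently):

```shell
# Tally watcher events per file path from journal output on stdin.
# Assumes lines containing "[change] <path>" or "[rename] <path>".
top_event_sources() {
  grep -oE '\[(change|rename)\] [^ ]+' | sort | uniq -c | sort -rn | head -5
}
```

Usage: journalctl --user -u clawmem-watcher | top_event_sources — the highest counts are the files driving the churn.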
Broad path, narrow pattern: if a collection has a broad path (e.g. path: ~/Projects) but a narrow pattern (e.g. pattern: "specific-file.md"), the watcher receives fs.watch events for every .md change under that entire tree — even files that don’t match the pattern. Prior to v0.1.7, each event still triggered indexCollection() and opened the database.
From v0.1.7, the pattern is checked before indexCollection(); non-matching files are silently skipped with no DB access. Git churn: git pull, git checkout, or git merge in a directory covered by a **/*.md collection can change hundreds of .md files at once. Each triggers a watcher event, and even with debouncing (2s), batches of changes arrive in rapid succession.
If memory bloats after a large git operation, restart the watcher: systemctl --user restart clawmem-watcher.service. Editor temp files: some editors write .md~, .md.tmp, or shadow copies during autosave. These don’t match the .md extension check in the watcher, but frequent filesystem churn in the same directory can cause fs.watch callback overhead on some platforms (especially WSL2, where filesystem events cross the Linux/Windows boundary).
If memory grows without corresponding [change] log entries, the overhead is in fs.watch itself, not in ClawMem’s handler. Consider reducing the number of watched directories by consolidating collections or excluding directories with heavy non-.md file churn.
Inotify exhaustion (pre-v0.2.3): the watcher used fs.watch(dir, { recursive: true }), which registers an OS-level inotify watch on every subdirectory in the tree — including excluded directories like gits/, node_modules/, .git/. The shouldExclude() filter only prevented processing events from excluded paths but couldn’t prevent the kernel from allocating inotify handles for them. A collection path like ~/Projects with 67,000 subdirectories would exhaust inotify limits and eventually hang WSL or Linux.
From v0.2.3, the watcher walks the tree itself (honoring the same EXCLUDED_DIRS list as the indexer) and watches each non-excluded directory individually (non-recursive). A safety cap of 500 directories per collection path prevents FD exhaustion from overly broad collection paths. The cap logs a warning: WARNING: /path has N dirs — capping at 500 to prevent FD exhaustion. Consider narrowing the collection path.
Symptoms of inotify exhaustion: ls /proc/<watcher_pid>/fd | wc -l shows 100K+ file descriptors; hook timeouts increase; system memory usage climbs without visible cause. Check with ls /proc/$(pgrep -f "clawmem.*watch")/fd | wc -l. Healthy is under 15,000. If over 50,000, update ClawMem to v0.2.3+.
A collection with path: ~/Projects watching a single file is wasteful — move the file to a subdirectory or create a dedicated directory for it. Check the watcher startup log for the [watcher] lines showing dir counts per collection.
Also check the system watch limit: cat /proc/sys/fs/inotify/max_user_watches (Linux default: 8192). If the watcher reports ENOSPC errors, increase it: echo 65536 | sudo tee /proc/sys/fs/inotify/max_user_watches. Watch journalctl --user -u clawmem-watcher for rapid-fire events during initialization (common when collections contain recently-changed files that trigger immediate indexing).
Quick recovery: systemctl --user restart clawmem-watcher.service — clears accumulated state, resets memory.
Safe to do at any time; the watcher re-discovers its watch targets on startup.
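The FD and inotify checks mentioned above can be bundled into one snapshot function — a sketch assuming Linux /proc (the pgrep pattern matching the watcher process name is an assumption; adjust it to your setup):

```shell
# Print open-FD count for a process (default: the clawmem watcher)
# alongside the system inotify watch limit. Linux-only (/proc).
watcher_fds() {
  local pid="${1:-$(pgrep -f 'clawmem.*watch' | head -1)}"
  if [ -z "$pid" ]; then echo "watcher not running"; return 0; fi
  echo "fds: $(ls "/proc/$pid/fd" 2>/dev/null | wc -l)"
  echo "max_user_watches: $(cat /proc/sys/fs/inotify/max_user_watches)"
}
```

Healthy is under 15,000 FDs; over 50,000 points at the pre-v0.2.3 recursive-watch behavior.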
“Stop hook error: Failed with non-blocking status code: No stderr output”
Cause: shell timeout wrappers (e.g., timeout 10 clawmem hook ...) killing the process with exit 124 before LLM inference completes. The Stop hooks (decision-extractor, handoff-generator, feedback-loop) call an LLM, which routinely takes 8-15s with real transcripts.
Fix: remove the shell timeout from hook commands and use Claude Code’s native timeout property instead. Run clawmem setup hooks to reinstall with correct config (v0.3.0+), or manually update ~/.claude/settings.json — see setup-hooks.
Hooks hang or timeout
Check that the GPU/LLM server is reachable (curl http://host:8088/health). Hook timeouts are 8s for context-surfacing, 5s for SessionStart/PreCompact hooks, 30s for Stop hooks. See setup-hooks for the full table.
Hooks slow or near timeout (4-6s per invocation)
Cause: when hooks import node-llama-cpp (in-process models), the native addon import alone costs ~3.5s, leaving very little headroom in the 8s timeout for actual search and scoring. This happens whenever no llama-server is running and the hook falls back to in-process inference. The MCP server (long-lived process) does not have this problem — node-llama-cpp loads once at startup and stays warm.
| Setup | Hook latency | Notes |
|---|---|---|
| llama-server running (local or remote) | ~200ms | Hooks use HTTP calls, never import node-llama-cpp. Recommended. |
| In-process Metal (Apple Silicon) | ~4-5s | node-llama-cpp addon import + model load. Under 8s but tight. |
| In-process Vulkan (discrete GPU) | ~4-5s | Same as Metal — addon import is the bottleneck, not inference. |
| In-process CPU-only (no Metal, no Vulkan) | >8s | Will timeout. Use speed profile or cloud embedding. |
| Cloud embedding (CLAWMEM_EMBED_API_KEY set) | ~500ms | HTTP call to cloud provider, no node-llama-cpp needed. |
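To see which row of the table a given setup actually lands in, hook latency can be measured directly — a sketch using GNU date's nanosecond %N (the payload and hook name in the example are illustrative):

```shell
# Time a command and print elapsed milliseconds (requires GNU date %N).
hook_ms() {
  local start end
  start=$(date +%s%N)
  "$@" >/dev/null 2>&1
  end=$(date +%s%N)
  echo "$(( (end - start) / 1000000 ))ms"
}
# Example: echo '{}' | hook_ms clawmem hook context-surfacing
```

Run it a few times: the first invocation includes cold-start costs (module import, model load), later ones show the warm path.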
Fixes, roughly in order of preference:
Run llama-server locally — even on the same machine, a persistent server eliminates the per-invocation import. See GPU Services for setup. This is what most users will do in practice.
CLAWMEM_PROFILE=speed — disables vector search in hooks entirely, pure BM25, never loads node-llama-cpp. Hooks complete in under 500ms.
CLAWMEM_EMBED_API_KEY + CLAWMEM_EMBED_URL + CLAWMEM_EMBED_MODEL — query embedding via cloud API, no local models needed in the hook path.
CLAWMEM_NO_LOCAL_MODELS=true — prevents node-llama-cpp from loading at all. Hooks degrade to BM25-only when GPU servers are unreachable, instead of blocking for 3.5s on a fallback import.
To raise the hook timeout, edit ~/.claude/settings.json. Use the native timeout property (in seconds), not a shell timeout wrapper:
```json
{
  "type": "command",
  "command": "/path/to/clawmem hook context-surfacing",
  "timeout": 12
}
```
Or re-run clawmem setup hooks (v0.3.1+) which generates correct config automatically.
Tradeoffs of longer timeouts:
| Timeout | Effect |
|---|---|
| 8s (default) | Good balance for llama-server setups (~200ms) and Metal/Vulkan in-process (~4-5s). Tight for CPU-only. |
| 10-12s | Accommodates in-process Metal/Vulkan with SQLite contention headroom. Adds a noticeable delay before Claude sees your prompt when the hook is slow — you’ll see the spinner for up to 12s on cold starts. |
| 15s+ | Not recommended. Claude Code waits for the hook before processing the prompt. A 15s hook makes the agent feel unresponsive on every first prompt and after any Bun cache invalidation. If you need this, run llama-server instead. |
| 5s or less | Only viable with llama-server running or CLAWMEM_PROFILE=speed. In-process models will always timeout. |
The timeout applies per invocation. A slow first prompt (cold start) doesn’t mean subsequent prompts will be slow — Bun caches modules after the first load, and node-llama-cpp model files are cached on disk after the first download. Subsequent prompts in the same session are typically faster.
Stop hooks (decision-extractor, handoff-generator, feedback-loop) default to 30s (v0.3.1+, was 10s prior) because they run LLM inference (observer model). These run at session end, so latency doesn’t block the user.
“Stop hook error: Failed with non-blocking status code: No stderr output”
Cause: a custom Stop hook exited without emitting JSON on stdout (Claude Code expects something like {"continue":true,"suppressOutput":false}). If you add your own Stop hook alongside ClawMem’s, every code path must output JSON — including early returns, error handling, and default/fallback paths.
```bash
# BAD — exits 0 with no stdout on the early-return path
if [[ -z "$some_var" ]]; then
  exit 0
fi

# GOOD — always output JSON
OK='{"continue":true,"suppressOutput":false}'
if [[ -z "$some_var" ]]; then
  echo "$OK"; exit 0
fi
```
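Putting the pattern together, a minimal Stop-hook skeleton in which every exit path emits JSON might look like this (the grep-based transcriptPath extraction is an illustrative simplification, not how ClawMem parses the payload):

```shell
# Minimal Stop-hook skeleton: every code path writes JSON to stdout.
stop_hook() {
  local OK='{"continue":true,"suppressOutput":false}'
  local input transcript
  input="$(cat)"   # Claude Code passes the hook payload on stdin
  # naive field extraction, for illustration only
  transcript="$(printf '%s' "$input" | grep -o '"transcriptPath":"[^"]*"' | cut -d'"' -f4)"
  if [ -z "$transcript" ] || [ ! -f "$transcript" ]; then
    echo "$OK"; return 0   # early return still emits JSON
  fi
  # ... real work on "$transcript" would go here ...
  echo "$OK"
}
```

The key property: there is no path through the function that returns without printing a JSON object first.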
If your hook needs to block the stop ({"continue":false}), use {"continue":false,"stopReason":"..."} to provide context. Test your hook in isolation: echo '{"transcriptPath":"/path/to/transcript.jsonl","sessionId":"test"}' | bash ~/.claude/scripts/your-hook.sh — verify it outputs JSON.
Hook fires but returns empty context
Cause: the prompt is a slash command starting with /, or matches the heartbeat/greeting filter.
Fix: check clawmem status for doc counts and clawmem embed for embedding coverage. Verify the embedding server is reachable if using a remote GPU. Try CLAWMEM_PROFILE=deep, which lowers the score threshold from 0.45 to 0.25 and adds budget-aware query expansion + reranking. For vaults with older documents, the balanced profile’s 0.45 threshold may filter out everything — deep compensates with a wider net.
Context-surfacing returns results on balanced but not speed
Cause: the speed profile disables vector search entirely and uses a higher minimum score (0.55). Documents that rank well via hybrid search (BM25+vector) may not score high enough on BM25 alone.
Fix: use balanced or deep for richer retrieval.
Duplicate observations after every session
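The dedup at work in this section — a 30-minute window keyed on a hash of normalized content — can be sketched as follows (illustrative only; the normalization rules and file-based state are assumptions, not ClawMem's actual saveMemory() implementation, which keeps this state in SQLite):

```shell
# Sketch: skip saving content whose normalized hash was seen < 30 min ago.
DEDUP_DIR="$(mktemp -d)"   # stand-in for persistent state
save_memory() {
  local content="$1" now hash marker
  now="$(date +%s)"
  # normalize: lowercase + collapse whitespace runs, then hash
  hash="$(printf '%s' "$content" | tr '[:upper:]' '[:lower:]' \
          | tr -s '[:space:]' ' ' | sha256sum | cut -d' ' -f1)"
  marker="$DEDUP_DIR/$hash"
  if [ -f "$marker" ] && [ $(( now - $(cat "$marker") )) -lt 1800 ]; then
    echo "duplicate"; return 0
  fi
  echo "$now" > "$marker"
  echo "saved"
}
```

Because hashing happens after normalization, trivially reworded duplicates ("Hello  World" vs "hello world") collapse to the same key within the window.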
The saveMemory() API enforces a 30-minute normalized content hash dedup window; upgrade if your version predates it.
clawmem setup openclaw installs into the wrong profile / ignores OPENCLAW_STATE_DIR (ClawMem v0.10.0–v0.10.3)
Symptom: running OPENCLAW_STATE_DIR=~/.openclaw-dev clawmem setup openclaw (or running ClawMem setup while OpenClaw is configured for a non-default profile via --profile / OPENCLAW_STATE_DIR) installs the plugin into ~/.openclaw/extensions/clawmem instead of the profile-specific extensions directory. The default profile picks the plugin up; the active profile does not see it.
Cause: setup hard-coded ~/.openclaw/extensions/clawmem and never consulted OPENCLAW_STATE_DIR or the OpenClaw CLI’s profile resolution. Tracked as Yoloshii/ClawMem#11.
Fix (v0.10.4+): setup delegates to openclaw plugins install when the OpenClaw CLI is on PATH (which respects OPENCLAW_STATE_DIR, OPENCLAW_CONFIG_PATH, and the --profile flag) and falls back to a direct-copy install honoring OPENCLAW_STATE_DIR when the CLI is absent. See docs/guides/openclaw-plugin.md for the full env-var contract.
clawmem setup openclaw --help runs setup instead of printing help (ClawMem ≤ v0.10.3)
Symptom: clawmem setup openclaw --help performs the install instead of printing usage. Reported as part of yoloshii/ClawMem#11.
Fix (v0.10.4+): setup checks for --help / -h before any spawn or filesystem work and prints the full flag and env-var reference.
“plugin not found: clawmem” / gateway ready line omits clawmem (OpenClaw v2026.4.11+)
Cause 1: symlinked plugin directory. OpenClaw’s plugin discovery uses readdirSync({ withFileTypes: true }) with dirent.isDirectory(), and symlinks to directories report isDirectory() === false on that API shape. A symlinked <extensions>/clawmem is silently skipped during discovery. Pre-ClawMem-v0.10.0 setup always produced a symlink, which worked on OpenClaw v2026.3.x but started failing silently when OpenClaw released v2026.4.11. Note: from v0.10.4, the delegated path’s --link mode uses OpenClaw’s plugins.load.paths (a load-path entry, not a filesystem symlink) and is not affected by this discovery skip.
Fix: re-run clawmem setup openclaw. v0.10.0 changed the default to a recursive copy (cpSync(..., { recursive: true, dereference: true })); v0.10.4 added profile-aware delegation that further sidesteps the issue when openclaw is on PATH. The result on disk is a real directory, not a link.
Cause 2: missing package.json. OpenClaw v2026.4.11+ reads package.json for the openclaw.extensions: ["./index.ts"] field as part of discovery; openclaw.plugin.json alone is not enough. v0.10.0 ships src/openclaw/package.json with the required manifest, and setup refuses to install if it is missing. If you are on v0.10.0 and the file is there (verify with ls ~/.openclaw/extensions/clawmem/package.json) but discovery still fails, see the next two entries.
“blocked plugin candidate: suspicious ownership” (multi-user installs)
Symptom: the gateway journal shows [plugins] clawmem: blocked plugin candidate: suspicious ownership (/home/<user>/.openclaw/extensions/clawmem, uid=1001, expected uid=997 or root). OpenClaw v2026.4.11+ enforces that plugin directories be owned by the current runtime user or root. This is a security feature that prevents a privileged gateway process from loading code that a less-privileged user dropped into its extensions directory.
Cause: the gateway runs as a dedicated user (e.g. openclaw) that is different from the user who ran clawmem setup openclaw (e.g. your admin account). Setup copies the plugin as the installer user, so the new directory is owned by the installer and rejected by the gateway.
Fix: sudo chown -R <gateway-user>:<gateway-group> ~/.openclaw/extensions/clawmem. Then restart the gateway: sudo systemctl restart openclaw-gateway.service (or whatever your gateway unit is called).
Note: if you run openclaw plugins inspect clawmem from a different shell user than the gateway, the CLI’s own ownership check may reject the plugin (expected uid=<your uid> or root) even though the gateway is loading it fine at runtime. Trust the gateway journal over the CLI inspect output when they disagree — the journal is the authoritative runtime state.
Gateway fails to start with “Missing config. Run openclaw setup or set gateway.mode=local”
Cause: when the gateway’s config lives under the installer’s home rather than ~/<gateway-user's-home>/.openclaw/, the gateway cannot traverse into its own config directory if that directory is 700 (drwx------). The error is misleading — the config file itself is readable (correctly chowned by the systemd ExecStartPre step), but the parent directory has no group-execute bit, so the gateway user cannot even cd into it to open the file.
Diagnose: sudo stat /home/<installer>/.openclaw | grep Access — if it shows (0700/drwx------), this is the cause.
Fix: sudo chmod 750 /home/<installer>/.openclaw (owner rwx, group rx). The gateway user must be a member of the owning group — check with id <gateway-user> and confirm <installer-group> is listed. On Debian-family systems with a sciros:sciros-owned home directory and an openclaw:openclaw gateway that is also in the sciros group, chmod 750 is enough.
“plugins.entries.clawmem: plugin not found (stale config entry ignored)”
Cause: the gateway found a plugins.entries.clawmem: { enabled: true } entry in its config but could not find a corresponding plugin directory under ~/.openclaw/extensions/. This typically means the plugin directory was deleted or moved without running openclaw config unset plugins.entries.clawmem.
Fix: re-run clawmem setup openclaw to restore the plugin directory, and the stale entry resolves. Or, if you intentionally uninstalled the plugin and want to keep it gone, run openclaw config unset plugins.entries.clawmem + openclaw config unset plugins.slots.memory (which restores the default memory-core).
Older OpenClaw version notes
OpenClaw v2026.3.x: plugins.slots.contextEngine was silently dropped during config processing (openclaw/openclaw#64192). Only relevant on ClawMem < v0.10.0, which used the contextEngine slot. ClawMem v0.10.0+ uses the memory slot and is not affected by #64192.
OpenClaw v2026.4.11+ introduced the stricter plugin discovery (readdirSync({ withFileTypes: true }) + dirent.isDirectory()) and the plugin ownership check described above. Required for ClawMem v0.10.0+. Upgrade OpenClaw with sudo npm i -g openclaw@latest.
REST API tools return no results
Cause: the clawmem serve process may not be running. The plugin auto-starts it, but it doesn’t survive plugin crashes.
Fix: check curl http://localhost:7438/health. If unreachable, either restart OpenClaw or run clawmem serve as a systemd service for persistence.
Agent tools silently fail but hooks still work
Check the REST server with curl http://localhost:7438/health. Start it manually (./bin/clawmem serve) or via systemd.
Plugin registers but hooks don’t fire
OpenClaw agent doesn’t use ClawMem tools
Check the plugin config: enableTools must be true and servePort must match the running server port (default 7438).
clawmem update crashes with “Binding expected string, TypedArray, boolean, number, bigint or null”
Cause: YAML frontmatter values are coerced by gray-matter (via js-yaml): title: 2023-09-27 becomes a JS Date object, title: true becomes a boolean, title: null stays null. Bun’s SQLite driver rejects Date objects as bind parameters, crashing the indexer. Affected fields: title, domain, workstream, content_type, review_by.
Fix: parseDocument() runtime-checks all frontmatter string fields, with defense-in-depth guards in insertDocument(), updateDocument(), and reactivateDocument(). On older versions, quote the value (title: "2023-09-27") as a workaround.
“Unknown vault” error
Cause: the vault is not defined in config.yaml or CLAWMEM_VAULTS.
Fix: add the vault to ~/.config/clawmem/config.yaml or set the CLAWMEM_VAULTS env var.
Vault path with ~ doesn’t resolve
Cause: the vault path was not run through ~ expansion.
High memory usage in long-running MCP process