ClawMem

Troubleshooting

Common issues when running ClawMem with hooks, the MCP server, or the OpenClaw plugin, organized by subsystem.

Bun runtime

Snap Bun: EPERM on stdin (hooks return empty)

Two Bun binaries on PATH

Embedding & GPU

“Local model download blocked” error

“Remote LLM in cooldown, falling back to in-process generation”

Unexpectedly slow inference (in-process fallback)

Query expansion always fails or returns garbage

Some documents never get embedded (stuck after multiple sweeps)

Vector search returns no results but BM25 works

Vector search: “Dimension mismatch” error

Embedding fails with “input is too large to process”

API key + localhost warning

Search & retrieval

context-surfacing hook returns empty

intent_search returns weak results for WHY/ENTITY

search returns results but query returns nothing

kg_query returns empty for every entity

Indexing

Watcher fires events but collections show 0 docs

Watcher fires events but wrong collection processes them

reindex --force crashes with “UNIQUE constraint failed”

reindex --force after v0.2.0 upgrade shows no entity extraction

Hooks

“UserPromptSubmit hook error” (intermittent)

Watcher memory bloat (400MB+)

Diagnosing watcher memory issues:

  1. Identify what’s triggering events. Watch the journal in real time:
    journalctl --user -u clawmem-watcher -f
    Each [change] or [rename] line shows the collection and file. High-frequency entries point to the source.

  2. Broad collection paths with narrow patterns. If a collection has a broad path (e.g. path: ~/Projects) but a narrow pattern (e.g. pattern: "specific-file.md"), the watcher receives fs.watch events for every .md change under that entire tree — even files that don’t match the pattern. Prior to v0.1.7, each event still triggered indexCollection() and opened the database.
    • Fixed in v0.1.7: the watcher pre-checks if the changed file could match the collection pattern before calling indexCollection(). Non-matching files are silently skipped with no DB access.
    • If you’re on an older version: narrow the collection path to the smallest directory that contains the files you actually want indexed.
  3. Git operations in watched directories. git pull, git checkout, or git merge in a directory covered by a **/*.md collection can change hundreds of .md files at once. Each triggers a watcher event, and even with debouncing (2s), batches of changes arrive in rapid succession.
    • Not a bug — the watcher is doing its job (re-indexing changed docs). But if this causes contention with hooks, restart the watcher afterward: systemctl --user restart clawmem-watcher.service.
  4. Editor autosave and temp files. Some editors (VS Code, JetBrains) write .md~, .md.tmp, or shadow copies during autosave. These don’t match the .md extension check in the watcher, but frequent filesystem churn in the same directory can cause fs.watch callback overhead on some platforms (especially WSL2 where filesystem events cross the Linux/Windows boundary).
    • Fix: If memory grows without visible [change] log entries, the overhead is in fs.watch itself, not in ClawMem’s handler. Consider reducing the number of watched directories by consolidating collections or excluding directories with heavy non-.md file churn.
  5. Too many watched directories / inotify FD exhaustion (v0.2.3 fix).
    • Prior to v0.2.3, the watcher used fs.watch(dir, { recursive: true }) which registers an OS-level inotify watch on every subdirectory in the tree — including excluded directories like gits/, node_modules/, .git/. The shouldExclude() filter only prevented processing events from excluded paths but couldn’t prevent the kernel from allocating inotify handles for them. A collection path like ~/Projects with 67,000 subdirectories would exhaust inotify limits and eventually hang WSL or Linux.
    • Fixed in v0.2.3: The watcher now walks each collection directory at startup, skips excluded subtrees (using the same EXCLUDED_DIRS list as the indexer), and watches each non-excluded directory individually (non-recursive). A safety cap of 500 directories per collection path prevents FD exhaustion from overly broad collection paths. The cap logs a warning: WARNING: /path has N dirs — capping at 500 to prevent FD exhaustion. Consider narrowing the collection path.
    • Symptoms: WSL hangs or becomes unresponsive during long sessions. ls /proc/<watcher_pid>/fd | wc -l shows 100K+ file descriptors. Hook timeouts increase. System memory usage climbs without visible cause.
    • Diagnosis: Check FD count: ls /proc/$(pgrep -f "clawmem.*watch")/fd | wc -l. Healthy is under 15,000. If over 50,000, update ClawMem to v0.2.3+.
    • If still high after v0.2.3: Narrow collection paths. A collection with path: ~/Projects watching a single file is wasteful — move the file to a subdirectory or create a dedicated directory for it. Check the watcher startup log for the [watcher] lines showing dir counts per collection.
    • Check max inotify watches: cat /proc/sys/fs/inotify/max_user_watches (Linux default: 8192). If the watcher reports ENOSPC errors, increase: echo 65536 | sudo tee /proc/sys/fs/inotify/max_user_watches.
    • macOS: FSEvents has no hard limit but memory scales with watched directory depth.
  6. Healthy baseline. After a fresh restart, the watcher should stabilize under 100MB within 30 seconds. If it immediately spikes above 200MB during startup, check journalctl --user -u clawmem-watcher for rapid-fire events during initialization (common when collections contain recently-changed files that trigger immediate indexing).
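The event-frequency check in step 1 can be scripted. A minimal sketch, assuming journal lines carry the `[change]`/`[rename]` markers shown above (in practice, pipe `journalctl --user -u clawmem-watcher --since "-10 min" --no-pager` into the same pipeline; adjust the pattern if your log format differs):

```shell
#!/bin/sh
# Aggregate watcher events: count identical log lines, noisiest first.
# Hypothetical sample input standing in for journalctl output:
sample='[change] notes: daily.md
[change] notes: daily.md
[change] notes: daily.md
[rename] projects: plan.md'
printf '%s\n' "$sample" | grep -E '\[(change|rename)\]' \
  | sort | uniq -c | sort -rn | head -n 20
```

The file hit three times surfaces at the top of the output, which is usually enough to identify the collection responsible for the churn.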

Quick recovery: systemctl --user restart clawmem-watcher.service — clears accumulated state, resets memory. Safe to do at any time; the watcher re-discovers its watch targets on startup.
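The baseline and FD checks above can be combined into one quick probe. A sketch assuming a Linux /proc filesystem and the same `clawmem.*watch` process match used earlier:

```shell
#!/bin/sh
# Report watcher RSS and open-FD count against the healthy baselines above
# (under ~100MB shortly after restart, under ~15,000 FDs).
pid=$(pgrep -f "clawmem.*watch" | head -n 1)
if [ -z "$pid" ]; then
  echo "watcher not running"
else
  rss_kb=$(awk '/VmRSS/ {print $2}' "/proc/$pid/status")
  fds=$(ls "/proc/$pid/fd" | wc -l)
  echo "RSS: $((rss_kb / 1024)) MB, open FDs: $fds"
  if [ "$fds" -gt 15000 ]; then echo "WARN: high FD count (see inotify notes above)"; fi
  if [ "$((rss_kb / 1024))" -gt 200 ]; then echo "WARN: memory above healthy baseline"; fi
fi
```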

“Stop hook error: Failed with non-blocking status code: No stderr output”

Hooks hang or timeout

Hooks slow or near timeout (4-6s per invocation)

| Setup | Hook latency | Notes |
| --- | --- | --- |
| llama-server running (local or remote) | ~200ms | Hooks use HTTP calls, never import node-llama-cpp. Recommended. |
| In-process Metal (Apple Silicon) | ~4-5s | node-llama-cpp addon import + model load. Under 8s but tight. |
| In-process Vulkan (discrete GPU) | ~4-5s | Same as Metal — addon import is the bottleneck, not inference. |
| In-process CPU-only (no Metal, no Vulkan) | >8s | Will timeout. Use speed profile or cloud embedding. |
| Cloud embedding (CLAWMEM_EMBED_API_KEY set) | ~500ms | HTTP call to cloud provider, no node-llama-cpp needed. |
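To confirm hooks are on the fast HTTP path rather than the in-process fallback, a quick reachability check against llama-server's /health endpoint. The URL and port are assumptions; substitute whatever endpoint your configuration actually points at:

```shell
#!/bin/sh
# Probe the (assumed) llama-server endpoint hooks would call.
URL="${LLAMA_SERVER_URL:-http://127.0.0.1:8080}"
if curl -fsS --max-time 2 "$URL/health" >/dev/null 2>&1; then
  echo "llama-server reachable at $URL: hooks take the ~200ms HTTP path"
else
  echo "llama-server unreachable: hooks fall back to in-process model loading"
fi
```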


Hook fires but returns empty context

Context-surfacing returns results on balanced but not speed

Duplicate observations after every session

OpenClaw

clawmem setup openclaw installs into the wrong profile / ignores OPENCLAW_STATE_DIR (ClawMem v0.10.0–v0.10.3)

clawmem setup openclaw --help runs setup instead of printing help (ClawMem ≤ v0.10.3)

“plugin not found: clawmem” / gateway ready line omits clawmem (OpenClaw v2026.4.11+)

“blocked plugin candidate: suspicious ownership” (multi-user installs)

Gateway fails to start with “Missing config. Run openclaw setup or set gateway.mode=local”

“plugins.entries.clawmem: plugin not found (stale config entry ignored)”

Older OpenClaw version notes

REST API tools return no results

Agent tools silently fail but hooks still work

Plugin registers but hooks don’t fire

OpenClaw agent doesn’t use ClawMem tools

Indexing

clawmem update crashes with “Binding expected string, TypedArray, boolean, number, bigint or null”

General

“Unknown vault” error

Vault path with ~ doesn’t resolve

High memory usage in long-running MCP process