Inside the Mémoire harness.
TL;DR
- Mémoire is a harness, not a service. Five layers: CLI, harness (Hermes + Mirofish), workspace, MCP server, Studio.
- The CLI (memi) is the entry point. Every Studio operation, every MCP tool, and every agent action eventually calls the same internals.
- Everything is open source. This post is the source-walking tour. Read it, then read the code.
The map
A walk through the engine repo. Every directory has one job.
src/engine/ Core orchestrator, project detection, registry, sync,
token-differ, code-watcher, pipeline.
src/figma/ Figma WebSocket bridge (auto-discovers the plugin on
ports 9223-9232), tokens, stickies.
src/research/ Research engine: Excel, web, stickies, all of it
normalized into typed insights.
src/specs/ Spec types (component, page, dataviz, design, ia) plus
Zod validation for every shape.
src/codegen/ Code generation: shadcn mapper, dataviz, pages, all of
it routed into atomic-design folders.
src/notes/ Mémoire Notes: skill-pack loader, resolver, installer.
src/preview/ Localhost preview gallery (HTML + API server +
pipeline/sync/agent dashboards).
src/agents/ Agent orchestrator, multi-agent registry, task queue,
agent bridge, agent workers.
src/mcp/ MCP server (stdio transport): 14 tools, 3 resources.
src/tui/ Terminal UI (Ink/React).
src/commands/ CLI (Commander.js).
skills/ Built-in skill definitions (ship inside the npm package).
Why this layout? Because every layer is a unit you can replace. Swap codegen, swap the bridge, swap the harness. The engine boundary is what stays.
Layer 1. The CLI surface.
memi is a Commander.js binary installed via:
npm i -g @sarveshsea/memoire
Every command lands in src/commands/. They share the same getEngine() factory, so calling memi pull from the shell or pull_design_system from MCP both end up in the same code path.
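As a rough sketch of that shared-factory pattern (the names and shape here are illustrative, not the real src/engine/ API): a memoized factory means the CLI process, the MCP server, and the Studio sidecar each construct the engine once and every surface hits the same instance.

```typescript
// Hypothetical sketch of a shared engine factory; the real
// getEngine() in src/engine/ differs in detail.
interface Engine {
  pullDesignSystem(): Promise<string>;
}

let engine: Engine | null = null;

function getEngine(): Engine {
  // Memoize so `memi pull`, the MCP `pull_design_system` tool,
  // and Studio all share one instance per process.
  if (!engine) {
    engine = {
      async pullDesignSystem() {
        return "tokens pulled";
      },
    };
  }
  return engine;
}
```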
| Command | Purpose |
|---|---|
| memi connect | Connect to Figma (auto-discovers plugin on 9223-9232) |
| memi connect --role <role> | Connect as a named agent with a role |
| memi pull | Extract design system from Figma |
| memi spec component\|page\|dataviz <name> | Create a spec |
| memi generate [name] | Generate code from specs into atomic folders |
| memi research from-file\|from-stickies\|synthesize\|report | Research pipeline |
| memi tokens | Export design tokens (DTCG JSON, Tailwind v4) |
| memi compose "<intent>" | Agent orchestrator: classify, plan, execute, report |
| memi preview | Start localhost preview server |
| memi dashboard | Launch the operator dashboard |
| memi ia extract\|create\|show\|validate\|list | Information architecture tools |
| memi watch | Watch specs for changes, auto-regenerate code |
| memi watch --code | Also watch generated code for hand edits |
| memi sync | Full sync pipeline (Figma to design system to specs to code) |
| memi sync --live | Keep running and sync on every change |
| memi sync --direction <dir> | figma-to-code, code-to-figma, bidirectional |
| memi sync --conflicts | Show and resolve pending sync conflicts |
| memi daemon start\|stop\|status\|restart | Start the auto-pull/spec/generate daemon |
| memi mcp start | Start Mémoire as an MCP server (stdio) |
| memi mcp config | Print MCP config for Claude Code / Cursor |
| memi agent spawn <role> | Spawn a persistent agent worker |
| memi agent list\|kill <id>\|status | Manage agents and the task queue |
| memi notes install <source> | Install a Note (local path or github:user/repo) |
| memi notes list\|remove\|create\|info | Manage Notes |
The CLI is not a wrapper around a service. There is no service. Every command is the same TypeScript that Studio and the MCP server call.
Layer 2. The harness.
The harness is the orchestration layer between intent and execution. Two pieces: Hermes (routing) and Mirofish (sandboxed execution).
Hermes routing
Hermes picks a model per task. The decision matrix is opinionated, not hard-coded; you can override per command with --model.
| Task class | Default model | Why |
|---|---|---|
| Spec generation, design rationale | Claude | Long-form structured output, low hallucination on shape |
| Fast extraction, command generation | Codex / GPT | Speed, deterministic output |
| Tool-heavy sandboxed runs | Mirofish | Tight permission gates, JSONL trace |
| Local privacy, offline | Ollama | Stays on the machine |
| Image / vision | Claude with vision | Best at design comprehension |
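The routing decision can be sketched as a lookup with an override, mirroring the matrix above. The function name and task-class labels are assumptions for this post, not the real Hermes API:

```typescript
// Illustrative task classes and defaults, mirroring the matrix above.
type TaskClass = "spec" | "extract" | "sandboxed" | "offline" | "vision";

const defaults: Record<TaskClass, string> = {
  spec: "claude",
  extract: "codex",
  sandboxed: "mirofish",
  offline: "ollama",
  vision: "claude-vision",
};

// The --model flag on a command overrides the default per task.
function routeModel(task: TaskClass, override?: string): string {
  return override ?? defaults[task];
}
```

The point of the shape: the matrix is data, not branching logic, which is what makes it opinionated but overridable.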
Hermes calls are recorded to .memoire/.agent-bus/runs/<run-id>.jsonl. Every prompt, every tool call, every response is on disk. Replayable.
Mirofish sandbox
Mirofish is the execution side. When the harness asks the model to “run the code generator,” the model returns a tool call. Mirofish:
- Validates the tool name against an allowlist.
- Validates arguments against a Zod schema.
- Runs the tool inside a sandboxed shell with permission gates (filesystem scoped to the workspace, no network unless explicitly allowed).
- Streams output back to the model.
- Appends a JSONL line to the run trace.
{"t":"2026-05-12T18:01:42Z","kind":"tool.call","name":"writeFile","args":{"path":"components/ui/Button.tsx","bytes":1284}}
{"t":"2026-05-12T18:01:42Z","kind":"tool.result","ok":true,"durMs":18}
{"t":"2026-05-12T18:01:43Z","kind":"model.message","role":"assistant","tokens":412}
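The gate sequence reads like this in code. A hand-rolled validator stands in for the engine's Zod schemas here, and the tool names and workspace rules are illustrative:

```typescript
// Sketch of Mirofish's gates; the real engine validates
// arguments with Zod schemas rather than hand-written checks.
const allowedTools = new Set(["writeFile", "readFile"]);

interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

function gate(call: ToolCall, workspace: string): void {
  // 1. Tool name must be on the allowlist.
  if (!allowedTools.has(call.name)) {
    throw new Error(`tool not allowed: ${call.name}`);
  }
  // 2. Arguments must match the expected shape.
  if (call.name === "writeFile" && typeof call.args.path !== "string") {
    throw new Error("writeFile.path must be a string");
  }
  // 3. Filesystem access stays scoped to the workspace.
  const path = String(call.args.path ?? "");
  if (path.startsWith("/") || path.includes("..")) {
    throw new Error(`path escapes workspace ${workspace}: ${path}`);
  }
}
```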
The agent loop
classify(intent) -> route + plan
for step in plan:
    pick model via Hermes
    call model with workspace context
    if tool calls:
        Mirofish.execute(call)
        append trace
    if step.gates.approval:
        pause, surface in TUI / Studio, wait
    yield step.result
report(plan, trace)
Approval gates are first-class. Anything that mutates Figma, runs
git, or writes outside the workspace requires an explicit yes. The default is: the harness can read everything, but only writes after you approve.
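The gate predicate itself is small. A sketch, with field names that are assumptions rather than the harness's real step type:

```typescript
// Hypothetical step shape; the real harness tracks these
// capabilities on its own plan-step type.
interface Step {
  mutatesFigma?: boolean;
  runsGit?: boolean;
  writesOutsideWorkspace?: boolean;
}

// Reads are always allowed; any mutation pauses for an explicit yes.
function needsApproval(step: Step): boolean {
  return Boolean(
    step.mutatesFigma || step.runsGit || step.writesOutsideWorkspace
  );
}
```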
Layer 3. The workspace.
Every project that uses Mémoire gets a .memoire/ directory: a simple structure, plain text wherever possible.
.memoire/
manifest.json Project manifest, version-pinned.
specs/ Component, page, dataviz, design specs (JSON, Zod-validated).
tokens/ Design tokens. DTCG JSON + Tailwind v4 @theme CSS.
references/ Public design-reference corpus snapshots.
research/ Insights, transcripts, scenario outputs.
generated/ Code output from the codegen pipeline.
notes/ Installed skill packs. Each is `<name>/note.json` + skills/*.md.
sync-conflicts.json Pending bidirectional-sync conflicts.
.agent-bus/
agents.json Registered agents + heartbeats.
tasks/ Pending and in-flight tasks.
runs/ JSONL trace per run.
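For illustration, a pending entry in sync-conflicts.json might look like this. The shape is an assumption for this post, not the engine's actual schema:

```json
{
  "conflicts": [
    {
      "entityId": "color.primary",
      "figmaHash": "9f2a…",
      "codeHash": "4b1c…",
      "detectedAt": "2026-05-12T18:02:10Z",
      "resolution": "pending"
    }
  ]
}
```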
The workspace is the source of truth. Studio reads it. The CLI reads it. The MCP server reads it. They never disagree because they’re all reading the same files.
Layer 4. The MCP server.
memi mcp start exposes the engine over the Model Context Protocol via stdio. Claude Code, Cursor, and Windsurf can drop Mémoire into their tool list with a one-line config.
Tools
| Tool | What it does |
|---|---|
| pull_design_system | Pull tokens, components, and pages from connected Figma. |
| get_specs | List all specs in the workspace. |
| get_spec | Read one spec by name. |
| create_spec | Create a new spec from typed args. |
| generate_code | Generate code for one or more specs. |
| get_tokens | Read the active token export. |
| update_token | Mutate one token; triggers diff + sync. |
| capture_screenshot | Screenshot the live preview at a route. |
| get_selection | Read the current Figma selection (node IDs, styles). |
| compose | Run the agent orchestrator with a natural-language intent. |
| run_audit | Run the design-system audit (token discipline, atomic levels). |
| get_research | Read research insights and transcripts. |
| figma_execute | Send a low-level command to the Figma plugin. |
| get_page_tree | Read the IA tree for the active project. |
Resources
memoire://design-system The full design system, tokens + components.
memoire://specs/{name} One spec by name.
memoire://project Project metadata, manifest, versions.
.mcp.json snippet
{
"mcpServers": {
"memoire": {
"command": "memi",
"args": ["mcp", "start"],
"env": {}
}
}
}
Run memi mcp config --target claude-code and it prints the config you paste into Claude Code's settings. No copy-pasting from docs.
Layer 5. Studio.
Studio is the macOS app. Tauri shell, native windows, signed and notarized DMG. Internally it’s a thin operator surface over the same engine the CLI uses.
apps/studio/
src-tauri/ Tauri Rust shell. Window mgmt, IPC, sidecar.
src/ React UI. Run cockpit, Figma cockpit, traces.
capabilities/ Tauri capability allowlist (filesystem scopes).
The Tauri shell starts a sidecar daemon that wraps the Node engine. The frontend talks to the sidecar over an IPC bridge. When you click “Pull tokens” in Studio, the same memi pull code path runs. Same workspace, same trace.
Distribution: signed by Humyn LLC, Team Z4ZUZ884U3. One DMG. macOS 13 Ventura or later. Apple silicon and Intel.
Bidirectional sync
Design-code drift is the oldest problem in design systems. Mémoire fixes it with per-entity SHA-256 hashes on both sides.
- Figma to code. Variable / component changes are detected via granular plugin events on the WebSocket bridge.
- Code to Figma. Token changes are pushed via the pushTokens bridge command.
- Conflict detection. When both sides change within a 1-second window, the entity is added to sync-conflicts.json and the user resolves manually with memi sync --conflicts.
- Echo guard. Before a push, the orchestrator sets a guard flag. Inbound events tagged with the guard are ignored. No loops.
// Pseudocode sketch of the figma-to-code half of the loop;
// readFigmaState, localHashes, and writeLocal stand in for engine internals.
import { createHash } from "node:crypto";

const sha256 = (value: unknown): string =>
  createHash("sha256").update(JSON.stringify(value)).digest("hex");

async function syncFigmaToCode() {
  const remote = await readFigmaState();
  for (const entity of remote) {
    // Rewrite the local copy only when the per-entity hash drifted.
    if (sha256(entity) !== localHashes[entity.id]) writeLocal(entity);
  }
}
Multi-agent orchestration
When memi compose "<intent>" runs, the orchestrator picks a role (or several) from the registry.
| Role | Owns |
|---|---|
token-engineer | Token extraction, diffs, exports. |
component-architect | Atomic levels, composition, props. |
layout-designer | Page templates, IA, navigation. |
dataviz-specialist | Charts, dashboards, data binding. |
code-generator | shadcn/ui mapping, file emission. |
accessibility-checker | Contrast, ARIA, keyboard order. |
design-auditor | Token discipline, atomic violations. |
research-analyst | Insights, transcripts, synthesis. |
general | Catch-all, lowest-priority claimer. |
Lifecycle:
- Spawn. memi agent spawn <role> writes a registration to .agent-bus/agents.json.
- Heartbeat. Every 10 seconds the worker updates lastSeen. Stale workers (>30s) are evicted.
- Claim. Workers poll the task queue, claim with a lock file. Other workers see the lock and skip.
- Execute. The worker runs the task. JSONL trace appended.
- Report. Result is posted back to the queue. The orchestrator advances.
- Reclaim. Tasks unclaimed for >120s are automatically requeued.
The dispatch layer always checks for external agents first. If a worker with the right role is alive, it claims the task. If not, the orchestrator falls back to internal execution. You can run zero workers and everything still works.
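The claim step reduces to first-writer-wins. This sketch uses an in-memory map; the real engine uses lock files under .agent-bus/tasks/ so claims survive across processes:

```typescript
// In-memory stand-in for the on-disk lock files in .agent-bus/tasks/.
const locks = new Map<string, string>();

function claim(taskId: string, agentId: string): boolean {
  // First claimer wins; later workers see the lock and skip.
  if (locks.has(taskId)) return false;
  locks.set(taskId, agentId);
  return true;
}

function release(taskId: string): void {
  locks.delete(taskId);
}
```

Releasing on completion (or after the >120s reclaim timeout) is what lets a stalled worker's task flow back into the queue.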
Trust by design
Approvals, traces, artifacts.
- Approvals. Mutating Figma, running shell commands, and writing outside the workspace all gate on explicit approval. Configurable per project in manifest.json.
- Traces. Every run produces a JSONL trace at .memoire/.agent-bus/runs/<run-id>.jsonl. Replay with memi compose --replay <run-id>.
- Artifacts. Codegen outputs go to .memoire/generated/. The workspace is the audit log.
No telemetry leaves your machine. The only outbound traffic is the AI provider calls you configured. See Legal for the short version.
What you can read
All of this is open. Read the source. File issues. Send PRs.
- Engine repo: github.com/sarveshsea/m-moire
- Notes pack: github.com/sarveshsea/memoire-notes
- License: MIT
- Issues: github.com/sarveshsea/m-moire/issues
If something in this post is wrong, the source is the source of truth. Tell me where; I’ll fix it.