Stop re-explaining your codebase every Monday
Luminaria captures every Claude session — Code, Desktop, claude.ai — into a private knowledge graph that lives in the EU. Search what you've already discussed. Pick up where you left off. Stop re-explaining yourself.
Every conversation you have with Claude — across Claude Code, Claude Desktop, and claude.ai — captured automatically. Indexed into a private knowledge graph. Available the next time you ask Claude a question, so it picks up exactly where you left off.
By meaning, not just keywords. Find the specific exchange you half-remember.
What you said in Claude Desktop is available in Code. And vice versa.
Compute, storage, AI inference, auth — all EU jurisdictions, by design.
Capture, index, retrieve. The whole pipeline stays inside the EU and inside your tenant boundary — by construction, not policy.
Claude Code, Claude Desktop, claude.ai. Every session you finish syncs to your private graph automatically. No copy-paste, no manual export.
$ luminaria sync
→ 12 sessions
→ 847 atoms captured
→ 412 insights extracted
✓ synced in 2.3s
Sessions become connected atoms in Neo4j (Belgium). Mistral embeddings for semantic recall, full-text indexing for keyword precision. Every query is tenant-scoped.
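To make "semantic recall plus keyword precision, tenant-scoped" concrete, here is a minimal sketch of hybrid retrieval. The atom layout, the toy two-dimensional embeddings, and the blend weight are all illustrative assumptions, not Luminaria's actual schema or scoring; the point is the shape: filter to the caller's tenant first, then rank by a mix of embedding similarity and keyword overlap.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query_vec, query_terms, atoms, tenant_id, alpha=0.7):
    """Rank atoms by a blend of semantic and keyword scores.

    Atoms outside the caller's tenant are excluded before any
    scoring happens, mirroring tenant-scoped queries. `atoms` is
    a list of dicts with 'tenant', 'text', and an 'embedding'
    (illustrative record layout, not the real schema).
    """
    results = []
    for atom in atoms:
        if atom["tenant"] != tenant_id:  # tenant isolation first
            continue
        semantic = cosine(query_vec, atom["embedding"])
        words = atom["text"].lower().split()
        keyword = sum(t in words for t in query_terms) / max(len(query_terms), 1)
        results.append((alpha * semantic + (1 - alpha) * keyword, atom["text"]))
    return [text for score, text in sorted(results, reverse=True)]

# Toy data: two atoms for tenant "t1", one belonging to another tenant.
atoms = [
    {"tenant": "t1", "text": "we chose stateless auth", "embedding": [1.0, 0.0]},
    {"tenant": "t1", "text": "deferred refresh tokens", "embedding": [0.0, 1.0]},
    {"tenant": "t2", "text": "another tenant's atom", "embedding": [1.0, 0.0]},
]
ranked = hybrid_search([1.0, 0.1], ["auth"], atoms, "t1")
```

The other tenant's atom never enters the candidate set, regardless of how well it matches the query.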
streams: 247 · atoms: 18,341 · insights: 412 · decisions: 89 · patterns: 23
Standard Model Context Protocol. 12 tools live inside Claude. Search, recall, inject — your AI starts each session knowing what you already discussed.
> mcp.find("auth refactor")
← 4 results · 0.42s
→ injected 3 atoms
→ context primed

Anyone who works with Claude builds a paper trail. Luminaria gives that trail back to you — searchable, structured, yours.
Why did we pick stateless? When did we defer refresh tokens? Decision logs extracted automatically — searchable, taggable, available as context the next time you open Claude.
Why did we agree to that liability cap last year? What was the language Henderson finally accepted? Months of redlines, calls, and counsel chats — searchable by meaning, not just keyword.
Why did you cut chapter 4? What was the agreed voice for the antagonist? When did you rule out the framing device? Every craft conversation kept, organised, retrievable.
Every paper you read with Claude, every methodology you debated, every finding you synthesised — kept and searchable. The discussion from February that resolved your Q3 direction, on demand.
Background conversations on a story — every angle you considered, every quote you tested, every pivot. When the editor asks "why this framing?", the receipts are there.
Every interaction captured, tagged, exportable — ready for client review or compliance. Sessions stay tenant-isolated; only your team sees your team's work.
Same question, asked of two different memory systems. One reinterprets; the other retrieves.
One developer. One question. Four ways to find the answer.
Real test from a Luminaria user's session log. Reproduce: install Luminaria, sync your sessions, ask Claude the same question.
Append-only session capture. Tenant-isolated by design. EU-resident. On the path to hash-chained tamper-evidence and certified AI Act Article 12 conformity.
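Hash-chained tamper-evidence is on the roadmap, not shipped; as background, here is a minimal sketch of the general technique. Each log entry stores the hash of the previous entry, so editing any earlier event breaks every link after it. This is an illustration of the concept, not Luminaria's implementation.

```python
import hashlib
import json

def append_event(chain, event):
    """Append an audit event, linking it to the previous entry's hash.

    Modifying any earlier entry changes its hash and breaks every
    subsequent link, which is what makes the log tamper-evident.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"event": event, "prev": prev_hash, "hash": entry_hash})
    return chain

def verify(chain):
    """Recompute every link; True only if no entry was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256((prev_hash + body).encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True
```

Note that a GDPR Article 17 deletion can itself be appended as a new event, so erasure and tamper-evidence are compatible.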
↳ Read /audit

Context reuse cuts redundant LLM inference by 30–50% on recurring topics. Less compute, less energy, less waste. Reproducible benchmarking methodology in development with EU sustainable-AI research partners.
↳ Read /sustainability

Graph-based context adapts to your individual history without ML retraining. Engineer, novelist, lawyer, researcher — each gets context shaped by their own work, not a population average.
↳ Read /product

Claude memory tells Claude who you are.
Luminaria tells Claude what you've built together.
Sub-processor list and DPA template at /sub-processors.
Sessions sync automatically when they end. Search from the terminal.
$ npm i -g luminaria
$ luminaria setup
✓ ready
Add as a custom connector. 12 MCP tools live inside Claude.
https://api.luminaria.so/mcp → one-click OAuth → search · context · tag · note
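Under the hood, MCP frames tool invocations as JSON-RPC 2.0 requests with the `tools/call` method. A minimal sketch of building such a request is below; the tool name `search` and its `query` argument are hypothetical stand-ins, since Luminaria's actual tool names are not listed here.

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request body for MCP's tools/call method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool name and argument shape, for illustration only.
req = mcp_tool_call(1, "search", {"query": "auth refactor"})
payload = json.dumps(req)
```

Claude constructs and sends these requests for you; the sketch only shows why any standard MCP client can talk to the connector.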
Browse, search, manage every session. Upload Claude exports. Tag, annotate, export your data anytime.
A lightweight script syncs each session when you finish. Works with Claude Code, Claude Desktop, and claude.ai (via MCP).
Yes. No copy-paste, no manual export.
Anytime. JSON or markdown, full content, your tags and annotations included.
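As a rough illustration of what a markdown export could look like, here is a sketch that renders one session record. The field names (title, tags, messages, annotations) are assumptions for the example; the real export schema may differ.

```python
def session_to_markdown(session):
    """Render one captured session as a markdown document.

    The record layout used here is illustrative, not the
    actual export schema.
    """
    lines = [f"# {session['title']}", ""]
    if session.get("tags"):
        lines.append("Tags: " + ", ".join(session["tags"]))
        lines.append("")
    for msg in session["messages"]:
        lines.append(f"**{msg['role']}:** {msg['text']}")
    for note in session.get("annotations", []):
        lines.append(f"> note: {note}")
    return "\n".join(lines)
```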
EU only. Compute in France, graph database in Belgium, auth in Frankfurt.
Only you. Tenant boundary is enforced by automated CI lint, not just policy.
Yes — full GDPR Article 17 right-to-erasure. Deletions themselves are audit-logged events.
No.
Yes. DPA available for any paid plan; sub-processor list public at /sub-processors.
On the path to Article 12 conformity. Today: append-only logs, EU residency, tenant isolation. Funded buildout: hash-chained tamper-evidence, ISO 27001, SOC 2.
12–18 months post-funding. Roadmap on the funder page.
Free forever for the basics. Pro for Smart Search and AI chat. EU-resident by default, audit-aligned by design, your data exportable anytime.