loci: Local-first AI memory

Search everything you've ever said to an AI. Organise it into rooms. Never start a session from scratch.

The problem loci solves

You talk to AI constantly. Claude, ChatGPT, Cursor, Claude Code. Hundreds of conversations. Thousands of hours of reasoning.

Then you close the tab and lose it.

loci fixes this. It indexes every conversation locally, makes it searchable in milliseconds, and organises it into persistent workspaces called rooms. When you return to a room, your context returns with you.

No cloud. No accounts. No sync that breaks. The index lives on your machine, in ~/.loci/. You own it completely.

Three ways to use loci

loci has three user tiers. They share the same local index but expose different surfaces.

Scholar

You want search. Install the browser extension, let it index your conversations, and search across everything from a single interface. Five minutes to set up. No configuration required.

Scholar is for people who use AI tools daily and keep losing track of useful conversations. It gives you a personal search engine for your AI memory.

Wizard

You want the full system. Rooms, crystals, garden, MCP integration, local LLM routing. The "palace": a structured local-first workspace that persists context across sessions and agents.

Wizard is for power users who work with AI agents, build with MCP, and need their context to survive beyond a single conversation window.

LLMAGE

You want the primitive. loci as a CLI tool and MCP server. No GUI, no extension, just the indexing and retrieval layer. Embed it into your own tooling or use it from Claude Code.

LLMAGE is for AI engineers building LLM-native systems who need reliable local context persistence without cloud dependencies.
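To get a feel for the embedding path, here is a minimal TypeScript sketch that connects to the MCP server and lists its tools, using the official @modelcontextprotocol/sdk client. The endpoint path (/mcp), the "search" tool name, and its arguments are illustrative assumptions, not loci's documented surface; start from listTools() to see what the server actually exposes.

```typescript
// Sketch: embedding loci's MCP server into your own tooling.
// Assumes the server speaks Streamable HTTP on port 3721; if it uses
// stdio instead, swap in the SDK's stdio transport.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  const transport = new StreamableHTTPClientTransport(
    new URL("http://localhost:3721/mcp"), // endpoint path is an assumption
  );
  const client = new Client({ name: "my-tool", version: "0.1.0" });
  await client.connect(transport);

  // Discover what the server actually exposes before guessing tool names.
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Hypothetical search call; adjust to the real tool schema.
  const result = await client.callTool({
    name: "search",
    arguments: { query: "vector clocks", limit: 5 },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```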

Where to start

If you just want to search your AI conversations, start with Scholar. Five minutes.

If you want the full palace (rooms, crystals, persistent context across tools), start with Wizard. Twenty minutes.

If you're building tooling and want loci as a primitive, start with LLMAGE. Five minutes to integrate.

All three tiers use the same local index. You can start as Scholar and graduate to Wizard without losing data.

Architecture

loci is built from four components:

| Component | Purpose | Location |
| --- | --- | --- |
| Browser extension | Indexes conversations from Claude.ai, ChatGPT | Chrome (MV3) |
| Desktop app | Search interface, room management | Mac / Windows (Tauri v2) |
| MCP server | Exposes rooms as tools to your IDE | Local (Rust, port 3721) |
| Local index | MiniSearch + SQLite | ~/.loci/ |
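Because the index is plain MiniSearch + SQLite on disk, you can in principle read it without loci running at all. The sketch below is hypothetical: the index.json file name and the field list are assumptions about how the serialized MiniSearch index is stored, so inspect ~/.loci/ for the real layout before relying on it.

```typescript
// Sketch: loading the serialized MiniSearch index directly from ~/.loci/.
// File name and field names are guesses, not loci's documented format.
import { readFile } from "node:fs/promises";
import { homedir } from "node:os";
import { join } from "node:path";
import MiniSearch from "minisearch";

const raw = await readFile(join(homedir(), ".loci", "index.json"), "utf8");

// loadJSON must be given the same options the index was built with.
const index = MiniSearch.loadJSON(raw, {
  fields: ["title", "text"], // hypothetical field names
});

for (const hit of index.search("rate limiting").slice(0, 5)) {
  console.log(hit.id, hit.score);
}
```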

Read the full architecture for implementation details.

Built by Hux × Vesper · Apache 2.0