# Getting Started: LLMAGE

For: AI engineers who need a local context-persistence layer in LLM-native workflows. Setup time: 5 minutes. Requirements: Git and an MCP-compatible client (Claude Code, Cursor, or Zed).


## What you get

The LLMAGE tier is loci stripped to its primitives:

- CLI-first: no GUI required
- MCP server: expose context as tools
- Namespace isolation: rooms stay separate
- Zero cloud: no external network calls in the core loop
- Headless mode: run as a daemon

## Install

```bash
git clone https://github.com/huximaxi/Loci.git
cd Loci
mkdir -p ~/.loci   # create the config directory on first install
cp config.example.json ~/.loci/config.json
```

Edit `~/.loci/config.json` to match your setup.


## Full config reference

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `version` | string | `"0.1.0"` | Config schema version |
| `tier` | string | `"llmage"` | User tier: `scholar`, `wizard`, or `llmage` |
| `index.auto_sync` | boolean | `true` | Automatically sync index on content change |
| `index.sync_interval` | string | `"5m"` | Interval between index syncs (e.g., `"1m"`, `"30s"`) |
| `index.full_text` | boolean | `true` | Enable full-text search indexing |
| `index.semantic` | boolean | `false` | Enable semantic/embedding-based search (requires LLM) |
| `llm.provider` | string | `"local"` | Provider: `local`, `anthropic`, `openai` |
| `llm.endpoint` | string | `"http://localhost:11434"` | Endpoint URL (for the `local` provider) |
| `llm.model` | string | `"llama3"` | Model identifier |
| `mcp.expose_rooms` | array | `["dev"]` | Rooms to expose as MCP tools |
| `mcp.port` | number | `3721` | MCP server port |
| `mcp.auth` | string | `null` | Optional auth token for MCP server |
| `chronicle.enabled` | boolean | `false` | Enable session logging |
| `chronicle.path` | string | `"~/.loci/chronicle/"` | Chronicle storage location |
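
Assembled from the defaults above, a complete `config.json` looks like the sketch below. Note the nesting is an assumption: the table uses dotted names (`index.auto_sync`, etc.), which we read here as nested JSON objects.

```json
{
  "version": "0.1.0",
  "tier": "llmage",
  "index": {
    "auto_sync": true,
    "sync_interval": "5m",
    "full_text": true,
    "semantic": false
  },
  "llm": {
    "provider": "local",
    "endpoint": "http://localhost:11434",
    "model": "llama3"
  },
  "mcp": {
    "expose_rooms": ["dev"],
    "port": 3721,
    "auth": null
  },
  "chronicle": {
    "enabled": false,
    "path": "~/.loci/chronicle/"
  }
}
```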

## Starting the MCP server

```bash
loci serve
```

The server starts on the port specified in the config (default: `3721`).

### With explicit port

```bash
loci serve --port 3800
```

### Headless mode (daemon)

```bash
loci serve --daemon
```

The process detaches and writes logs to `~/.loci/logs/mcp.log`. Stop with:

```bash
loci stop
```
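
Once the server is up, register it with your MCP client. As a sketch, here is what that looks like with the Claude Code CLI, assuming loci exposes an SSE endpoint at the server root (verify the actual transport and path against loci's docs):

```bash
# register the local loci server with Claude Code (SSE transport assumed)
claude mcp add --transport sse loci http://localhost:3721
```

Cursor and Zed have equivalent MCP server entries in their own configuration files.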

## Available MCP tools

| Tool | Description | Parameters | Returns |
| --- | --- | --- | --- |
| `loci_search` | Full-text search across indexed content | `query: string`, `room?: string`, `limit?: number` | Array of matched results with excerpts |
| `room_{name}` | Load room context (soul + context.md + crystals) | None | Room context as structured text |
| `loci_get` | Retrieve a specific crystal or locus | `id: string` | Crystal content or error |
| `loci_garden` | Retrieve a garden plant | `slug: string` | Plant content with metadata |
| `loci_write` | Write content to a room's context | `room: string`, `content: string`, `append?: boolean` | Success confirmation |
| `loci_crystal` | Create a new crystal | `room: string`, `id: string`, `content: string`, `type?: string` | Crystal metadata |
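
Under the hood, an MCP client invokes these tools with a standard `tools/call` JSON-RPC request. For example, a search scoped via the optional `room` parameter would be carried as the following payload (the shape comes from the MCP specification; framing depends on your client's transport):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "loci_search",
    "arguments": { "query": "kafka vs sqs", "room": "dev", "limit": 5 }
  }
}
```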

## Querying from your IDE

### Example 1: Search for past discussions

```
@loci loci_search "kafka vs sqs"
```

Output:

```json
{
  "results": [
    {
      "id": "conv-20260501-142200",
      "room": "dev",
      "excerpt": "...decided on SQS for the payment webhook pipeline. Lower ops overhead, sufficient throughput...",
      "score": 0.92
    }
  ],
  "total": 1
}
```

### Example 2: Load room context

```
@loci room_dev
```

Output:

```markdown
# Dev Room: Soul

You are operating inside the Dev Room...

---

## Recent context

Last session: 2026-05-03
State: Implementing auth middleware

## Crystals

- auth-decision: Using Clerk for authentication
- db-choice: PostgreSQL via Supabase
```

### Example 3: Retrieve a specific crystal

```
@loci loci_get auth-decision
```

Output:

```markdown
---
id: "auth-decision"
created: "2026-05-01T09:00:00Z"
room: "dev"
type: "decision"
---

# Auth provider decision

We use Clerk for authentication...
```

## Namespace isolation

Each room is an isolated context namespace. When you invoke `room_dev`, you get Dev Room context only; Research Room content does not appear.

This isolation is enforced at the MCP tool level:

- `loci_search` defaults to all rooms but accepts a `room` parameter for scoped queries
- `room_{name}` tools only return content from that room
- Crystals are namespaced by room in the filesystem

Filesystem structure:

```text
~/.loci/
├── config.json
├── rooms/
│   ├── dev/
│   │   ├── CLAUDE.md
│   │   ├── context.md
│   │   └── crystals/
│   └── research/
│       ├── CLAUDE.md
│       ├── context.md
│       └── crystals/
└── index/
    └── metadata.db
```
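
Since crystals are plain files under per-room directories, you can inspect a room's namespace straight from the shell (path taken from the tree above):

```bash
# list the Dev Room's crystals
ls ~/.loci/rooms/dev/crystals/
```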

## Snapshot export

Export room state for backup or migration:

```bash
loci export --room dev --format json
```

Output: `~/.loci/exports/dev-2026-05-04.json`

### Export options

| Flag | Description |
| --- | --- |
| `--room` | Room to export (required) |
| `--format` | Output format: `json`, `markdown`, `archive` |
| `--include-chronicle` | Include session logs |
| `--output` | Custom output path |

### Export as markdown (for version control)

```bash
loci export --room dev --format markdown --output ./docs/dev-room/
```

Creates a folder of markdown files suitable for committing to git.
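
A minimal sketch of a scheduled backup built on the flags above plus standard git (the repository path is illustrative):

```bash
#!/usr/bin/env bash
# export the dev room as markdown and commit the snapshot
set -euo pipefail

REPO=~/loci-backups   # illustrative: any git repository works
loci export --room dev --format markdown --output "$REPO/dev-room/"
cd "$REPO"
git add dev-room/
git commit -m "loci snapshot: dev room, $(date +%F)" || true   # no-op when nothing changed
```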


## Provider backends

loci supports three LLM provider backends.

### Local (Ollama)

```json
{
  "llm": {
    "provider": "local",
    "endpoint": "http://localhost:11434",
    "model": "llama3"
  }
}
```

Requires Ollama to be running locally. No external network calls are made.
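
To confirm Ollama is serving the model named in `llm.model` before starting loci:

```bash
# pull the default model, then check that the Ollama API answers
ollama pull llama3
curl -s http://localhost:11434/api/tags   # lists locally available models
```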

### Claude API

```json
{
  "llm": {
    "provider": "anthropic",
    "model": "claude-sonnet-4-20250514"
  }
}
```

Set the API key via an environment variable:

```bash
export ANTHROPIC_API_KEY="sk-ant-..."
```

### OpenAI API

```json
{
  "llm": {
    "provider": "openai",
    "model": "gpt-4o"
  }
}
```

Set the API key via an environment variable:

```bash
export OPENAI_API_KEY="sk-..."
```

> **WARNING:** When using cloud providers, LLM calls leave your machine. The core index/search/serve loop remains local: only explicit LLM operations (semantic search, summarization) contact external APIs.


## Headless mode

For server or CI environments, run loci as a daemon:

```bash
loci serve --daemon --port 3721
```
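
In CI you typically need to block until the daemon is ready. Here is a sketch built on the `loci status` command shown below; it assumes `loci status` exits non-zero while the server is still starting, which is an assumption, not documented behavior:

```bash
# start the daemon, then poll until it reports healthy (or give up after ~10s)
loci serve --daemon --port 3721
for _ in $(seq 1 10); do
  loci status && break   # assumption: non-zero exit until the server is ready
  sleep 1
done
```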

### Daemon management

```bash
# Check status
loci status

# View logs
tail -f ~/.loci/logs/mcp.log

# Stop daemon
loci stop

# Restart
loci restart
```

### Systemd service (Linux)

```ini
[Unit]
Description=loci MCP server
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/loci serve --port 3721
Restart=on-failure
User=youruser

[Install]
WantedBy=multi-user.target
```
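
Installing the unit follows the standard systemd workflow (the file name `loci.service` is a choice, not a requirement):

```bash
# install, enable at boot, and start immediately
sudo cp loci.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now loci
systemctl status loci   # confirm it is running
```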

## Architecture notes

For LLMAGE-tier use:

- Search index: MiniSearch 7.x, serialized via `toJSON()`/`loadJSON()`. No daemon required for the index itself.
- Content store: SQLite at `~/.loci/index/metadata.db`. Append-only, never modified in place.
- MCP server: Rust binary, stateless between calls.
- Room contexts: served from `~/.loci/rooms/{name}/context.md`: plaintext, diffable, version-controllable.
- No cloud path: zero external network calls in the core loop.
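
Because the content store is plain SQLite, you can inspect it read-only with the stock sqlite3 CLI (the schema is internal and may change between versions):

```bash
# open the store read-only and list its tables
sqlite3 -readonly ~/.loci/index/metadata.db '.tables'
```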

Full details: see the Architecture documentation.


## Next steps

Built by Hux × Vesper · Apache 2.0