>_LizardBrain
v1.0.0 · Open source · MIT License

Turn team conversations into structured knowledge

Decisions, expertise, tasks — buried in chats, meetings, and support threads. LizardBrain extracts them using any LLM and stores everything in searchable SQLite.

terminal
$ node src/cli.js init --profile team
$ node src/cli.js extract --limit 200
Extracted 12 members, 47 facts, 5 decisions, 11 tasks
Superseded 2 contradicted facts, linked 8 entity pairs
Updated 2 decisions (proposed→agreed), 3 tasks (open→done)

# your agent can now search everything
$ node src/cli.js search "who handles deployments?"
hybrid (fts5+vec) | 5 results | 23ms

# expose to any MCP-compatible agent
$ node src/cli.js serve
MCP server listening (stdio) | 9 tools registered

See it in action

Six messages in. Four structured entities out. Zero manual work.

#dev-team

Extracted knowledge

member

Sarah

Python, deployments / deployment lead

decision

Migrate to Kubernetes

Status: agreed / Deadline: Q2

task

Document deploy process

Owner: Sarah / Due: Friday

event

Architecture review

Thursday 2pm / Bring migration notes


Reads from

Telegram
Slack
Discord
WhatsApp
OpenClaw

Powered by

Anthropic
OpenAI
Gemini
Groq
Ollama
Mistral
OpenRouter

MCP clients

Claude Desktop
Claude Code
Cursor

Stop losing knowledge in the noise

Group chats, meeting transcripts, support threads — your team's knowledge is scattered across conversations. LizardBrain turns it into a searchable knowledge base your agent can use.

Agents that actually know your team

Your agent learns who the experts are, what decisions were made, and what's in progress. Cross-referenced entities let it trace a task back to the decision it implements.

7 entity types, linked and cross-referenced

Instant answers from thousands of conversations

Hybrid search combines keyword matching and vector similarity to find exactly what you need. Results in milliseconds, not minutes of scrolling.

<25ms typical search response
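One common way to merge keyword (FTS5) and vector result lists into a single ranking is reciprocal rank fusion. The sketch below is illustrative, not necessarily the exact formula LizardBrain uses:

```javascript
// Reciprocal rank fusion: documents ranked high in either list score well;
// documents ranked high in both float to the top. k dampens rank differences.
function fuseRanks(keywordIds, vectorIds, k = 60) {
  const scores = new Map();
  const addList = (ids) =>
    ids.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  addList(keywordIds); // ids ordered by BM25 relevance
  addList(vectorIds);  // ids ordered by cosine similarity
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).map(([id]) => id);
}
```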

Runs on autopilot for pennies

Set it on a cron and forget about it. Contradictions get superseded, expired facts drop out, new links form — all automatically. Cheap models work great.

$0.05 per 1M tokens with the cheapest models

One tool, tuned to your use case

Pick the profile that fits how your team works. Each one extracts only the entities that matter -- no noise, no wasted tokens.

knowledge

Your agent knows who the experts are

For open-source communities, Discord servers, interest groups. Tracks expertise, captures insights, remembers discussions.

Members · Facts · Topics
team

Your agent tracks every decision and task

For teams and workplaces. Know who decided what, who's doing what, and what the team has learned.

Members · Facts · Topics · Decisions · Tasks
project

Your agent knows every open question

For client work and project teams. Capture decisions, track deliverables, surface unanswered questions.

Members · Facts · Decisions · Tasks · Questions
full

Your agent captures everything

All 7 entity types. Members, facts, topics, decisions, tasks, questions, and events. Nothing slips through.

Members · Facts · Topics · Decisions · Tasks · Questions · Events

Or build a custom profile -- pick any combination of the 7 entity types.
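A custom profile might look like this in lizardbrain.json. The exact key names here are an assumption; check the shipped examples for the real schema:

```json
{
  "profile": {
    "entities": ["members", "facts", "questions"]
  }
}
```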

From conversation noise to structured knowledge

Run it on a cron every few hours. Each run sees what was extracted before, so decisions get confirmed, tasks get closed, and questions get answered.

Read
Group by conversation, batch with overlap
Enrich
Fetch metadata for shared links
Context
Inject existing knowledge into LLM
Extract
LLM pulls entities + updates
Store
FTS + dedup, contradictions, index
Search
Hybrid keyword + semantic, per-conversation
Serve
MCP server for agent access

What your agent gets

7 entity types with status tracking, cross-references, contradiction detection, temporal validity, and hybrid search. Filter by conversation.

Members

Know who the experts are

Alice -- RAG, LangChain | builds: pipeline

Facts

Temporal validity, contradiction-safe

"LangChain works well with chunk size 512"

Topics

Track what your group discusses

"RAG Pipeline Comparison" -- Alice, Bob

Decisions

Track from proposed to agreed

"Use PostgreSQL" proposed → agreed

Tasks

Auto-update open → done

"Migrate user service" -- Bob, done

Questions

Know when questions get answered

"Best way to handle migrations?" -- answered

Events

Remember what happened when

"Architecture Review" -- Apr 1, Zoom

What's new in v1.0

Contradiction detection, temporal validity, entity cross-references — plus production-hardened operations.

Contradiction detection

"Team uses Postgres" then "Team migrated to MySQL" — the old fact gets superseded automatically. The LLM spots contradictions after extraction and marks stale knowledge so your agent never sees outdated info.
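The supersede step described above can be sketched in a few lines. This is an illustrative sketch with assumed field names, not LizardBrain's actual implementation: the stale fact is marked rather than deleted, and search only returns unmarked facts.

```javascript
// pairs: [{ oldId, newId }] contradictions flagged by the LLM after extraction.
// The old fact is kept for audit but hidden from search results.
function applyContradictions(facts, pairs) {
  const byId = new Map(facts.map((f) => [f.id, f]));
  for (const { oldId, newId } of pairs) {
    const stale = byId.get(oldId);
    if (stale) stale.supersededBy = newId;
  }
  return facts.filter((f) => f.supersededBy == null); // what search sees
}
```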

Temporal validity

Sprint goals, standup notes, and release dates expire naturally. The LLM assigns a durability to each fact — ephemeral, short, medium, or durable — and expired facts drop out of search automatically.
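In code, durability tiers could translate into expiry like this. The four tier names come from the text above; the concrete durations are invented for illustration and are not LizardBrain's actual values:

```javascript
// Illustrative TTLs per durability tier (durations are assumptions).
const TTL_DAYS = { ephemeral: 2, short: 14, medium: 90, durable: Infinity };
const DAY_MS = 24 * 60 * 60 * 1000;

function isExpired(fact, now = Date.now()) {
  const ttl = TTL_DAYS[fact.durability] ?? Infinity; // unknown tier: keep
  return now - fact.extractedAt > ttl * DAY_MS;
}
```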

Entity cross-references

Link entities to each other — a task implements a decision, a fact supports another fact, a question blocks a task. Directional, typed relationships the LLM creates during extraction.
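A directional, typed link as described above might have a shape like this. Only the three relation names mentioned in the text are shown (the full set has five), and the field names are assumptions:

```javascript
// Relations named in the text; the real set is larger.
const RELATIONS = new Set(['implements', 'supports', 'blocks']);

// from/to are entity references, e.g. { type: 'task', id: 42 }.
function makeLink(from, relation, to) {
  if (!RELATIONS.has(relation)) throw new Error(`unknown relation: ${relation}`);
  return { from, relation, to };
}
```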

Start simple, scale up

Node.js + sqlite3 to start. Add MCP, vector search — each tier is one command away.

Security hardened

SQL/FTS injection protection, credential leakage blocking. Keys never reach the DB.

Production-ready ops

Health checks, embedding pruning, cursor reset, LLM retry with backoff. Built to run unattended.

Works wherever your team talks

Point LizardBrain at any conversation source. Some work out of the box, others need a lightweight adapter.

built-in

Group Chats

Slack, Telegram, Discord

Point at your chat database or export. Built-in SQLite and JSONL adapters handle the common formats. Extract expertise, decisions, and tasks automatically.

custom adapter

Meeting Transcripts

Zoom, Google Meet, Otter

Pipe transcript files through stdin or write a 10-line adapter. Decisions and action items are extracted the same way as from chat.
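A hypothetical transcript adapter core could be this small. It converts "Speaker: utterance" lines into message objects; the output field names (sender, text, conversation) are assumptions, so check the project's examples for the exact JSONL schema. Wrap it in a readline loop over process.stdin and print one JSON object per line to emit JSONL:

```javascript
// Turn one transcript line into a message object, or null to skip it.
function transcriptLineToMessage(line, conversation = 'weekly-sync') {
  const match = line.match(/^(.+?):\s+(.*)$/); // "Sarah: let's ship Friday"
  if (!match) return null; // skip blank lines and stage directions
  return { sender: match[1], text: match[2], conversation };
}
```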

custom adapter

Support Threads

Zendesk, Intercom, email

Extract recurring issues, solutions, and customer expertise from support conversations. JSONL export or custom adapter.

built-in

Agent Memory

MCP write-back

Agents write structured knowledge or raw text via MCP tools. LizardBrain becomes persistent memory that survives across sessions.

How LizardBrain compares

Structured extraction beats brute-force approaches.

| | Manual | RAG | LizardBrain |
|---|---|---|---|
| Setup effort | N/A | Medium | 3 commands |
| Data quality | Varies | Noisy, duplicates | Clean, deduplicated |
| Search speed | Minutes | Seconds | <25ms (FTS5) |
| Cost per query | Free but slow | $0.01-0.10 | Free (local) |
| Structured entities | No | No | 7 types (Zod) |
| Works offline | Yes | No | Yes (Ollama) |
| Context-aware | No | No | Yes (cross-run) |
| Security hardened | N/A | Varies | Yes |
| Agent-ready output | No | Partial | Yes |
| Contradiction detection | No | No | Yes (auto-supersede) |
| Temporal expiry | No | No | Yes (4 tiers) |
| Entity cross-references | No | No | Yes (5 link types) |
| MCP server | No | No | Yes (9 tools) |
Built for OpenClaw

The memory layer for OpenClaw agents

LizardBrain was built with OpenClaw in mind. It reads directly from OpenClaw's chat database, extracts knowledge your agents can use, and generates compact rosters that fit in any context window. Give your OpenClaw agents persistent memory across conversations.

Native integration

Point at your OpenClaw SQLite database. Column mapping is pre-configured -- just set the path.

Agent-ready output

Generate member rosters and search results formatted for agent context windows. Drop into any system prompt.

Continuous learning

Run on a cron alongside your OpenClaw agents. Knowledge stays fresh as conversations evolve.
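An illustrative crontab entry for that loop might look like this. The install path is a placeholder, and in practice the API key should come from a secret store rather than being inlined:

```shell
# Every 3 hours: extract new knowledge and append the run log.
0 */3 * * * cd /opt/lizardbrain && LIZARDBRAIN_LLM_API_KEY=sk-... node src/cli.js extract --limit 200 >> logs/extract.log 2>&1
```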

Up and running in 60 seconds

Three steps. Any LLM. No infrastructure.

1. Clone and configure

git clone https://github.com/pandore/lizardbrain
cd lizardbrain && npm install
cp examples/lizardbrain.json lizardbrain.json

2. Point at your chat and pick an LLM

lizardbrain.json
{
  "profile": "team",
  "llm": {
    "baseUrl": "https://api.openai.com/v1",
    "model": "gpt-5-nano"
  },
  "source": {
    "type": "sqlite",
    "path": "./chat.db"
  }
}

3. Extract and search

node src/cli.js init --profile team
LIZARDBRAIN_LLM_API_KEY=sk-... node src/cli.js extract --limit 200
node src/cli.js search "who handles deployments?"
node src/cli.js health

Bring any LLM

| Provider | Model | Cost / 1M tokens |
|---|---|---|
| Anthropic | claude-haiku-4-5 | $0.80 / $4.00 |
| OpenAI | gpt-5-nano | $0.05 / $0.40 |
| Gemini | gemini-2.5-flash-lite | $0.10 / $0.40 |
| Groq | llama-3.3-70b | $0.10 / $0.32 |
| Mistral | ministral-3b | $0.10 / $0.10 |
| Ollama | qwen2.5:7b | free (local) |
| OpenRouter | llama-4-scout | $0.08 / $0.30 |

Via Vercel AI SDK — any OpenAI-compatible endpoint. Zod-validated structured output.

Connect agents via MCP

npm install better-sqlite3
node src/cli.js serve
# Claude Desktop, Cursor, Claude Code — any MCP client

9 tools: search, get_context, who_knows, add_knowledge, ingest, add_link, get_links, and more.
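To register the server with Claude Desktop, an entry like this in claude_desktop_config.json should work (the repository path is a placeholder; other MCP clients use a similar shape):

```json
{
  "mcpServers": {
    "lizardbrain": {
      "command": "node",
      "args": ["/path/to/lizardbrain/src/cli.js", "serve"]
    }
  }
}
```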

Want semantic search? Add vector support.

npm install sqlite-vec
# Add embedding config to lizardbrain.json
LIZARDBRAIN_EMBEDDING_API_KEY=sk-... \
  node src/cli.js embed --backfill

Optional. Keyword search works great on its own.

Frequently asked questions

Everything you need to know about running LizardBrain.