Turn team conversations into structured knowledge
Decisions, expertise, tasks — buried in chats, meetings, and support threads. LizardBrain extracts them using any LLM and stores everything in searchable SQLite.
$ node src/cli.js init --profile team
$ node src/cli.js extract --limit 200
Extracted 12 members, 47 facts, 5 decisions, 11 tasks
Superseded 2 contradicted facts, linked 8 entity pairs
Updated 2 decisions (proposed→agreed), 3 tasks (open→done)
# your agent can now search everything
$ node src/cli.js search "who handles deployments?"
hybrid (fts5+vec) | 5 results | 23ms
# expose to any MCP-compatible agent
$ node src/cli.js serve
MCP server listening (stdio) | 9 tools registered
See it in action
Six messages in. Four structured entities out. Zero manual work.
Extracted knowledge
Sarah
Python, deployments / deployment lead
Migrate to Kubernetes
Status: agreed / Deadline: Q2
Document deploy process
Owner: Sarah / Due: Friday
Architecture review
Thursday 2pm / Bring migration notes
[Integration strip: reads from your chat sources · powered by any LLM · serves MCP clients]
Stop losing knowledge in the noise
Group chats, meeting transcripts, support threads — your team's knowledge is scattered across conversations. LizardBrain turns it into a searchable knowledge base your agent can use.
Agents that actually know your team
Your agent learns who the experts are, what decisions were made, and what's in progress. Cross-referenced entities let it trace a task back to the decision it implements.
Instant answers from thousands of conversations
Hybrid search combines keyword matching and vector similarity to find exactly what you need. Results in milliseconds, not minutes of scrolling.
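One common way to fuse a keyword ranking and a vector ranking into a single result list is reciprocal rank fusion. The sketch below shows the idea; whether LizardBrain uses RRF internally is an assumption, and `rrf` is a hypothetical helper:

```javascript
// Reciprocal rank fusion: documents ranked highly by either the
// FTS5 keyword search or the vector search rise to the top.
function rrf(rankings, k = 60) {
  const scores = new Map();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).map(([id]) => id);
}

const keyword = ["msg7", "msg2", "msg9"]; // FTS5 hits, best first
const vector = ["msg2", "msg5", "msg7"]; // nearest embeddings, best first
console.log(rrf([keyword, vector])); // msg2 and msg7 appear in both, so they lead
```

Messages that both searches agree on outrank ones that only one search found, which is what makes the hybrid mode robust to both typos and paraphrases.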
Runs on autopilot for pennies
Set it on a cron and forget about it. Contradictions get superseded, expired facts drop out, new links form — all automatically. Cheap models work great.
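A schedule might look like this crontab entry; the path, interval, and log file are placeholders, and the key can live in the crontab or an env file:

```shell
# extract every 4 hours (illustrative schedule)
LIZARDBRAIN_LLM_API_KEY=sk-...
0 */4 * * * cd /opt/lizardbrain && node src/cli.js extract --limit 200 >> extract.log 2>&1
```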
One tool, tuned to your use case
Pick the profile that matches how your team communicates. Each one extracts exactly the entities that matter -- no noise, no wasted tokens.
Your agent knows who the experts are
For open-source communities, Discord servers, interest groups. Tracks expertise, captures insights, remembers discussions.
Your agent tracks every decision and task
For teams and workplaces. Know who decided what, who's doing what, and what the team has learned.
Your agent knows every open question
For client work and project teams. Capture decisions, track deliverables, surface unanswered questions.
Your agent captures everything
All 7 entity types. Members, facts, topics, decisions, tasks, questions, and events. Nothing slips through.
Or build a custom profile -- pick any combination of the 7 entity types.
From conversation noise to structured knowledge
Run it on a cron every few hours. Each run sees what was extracted before, so decisions get confirmed, tasks get closed, and questions get answered.
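The reconciliation on each run can be pictured as status transitions gated by a small rule table. This is a minimal sketch, not the shipped merge logic; `TRANSITIONS` and `applyUpdate` are hypothetical names:

```javascript
// Allowed status transitions per entity type (illustrative, not the real rules).
const TRANSITIONS = {
  decision: { proposed: ["agreed", "rejected"] },
  task: { open: ["done", "cancelled"] },
  question: { open: ["answered"] },
};

// Apply an LLM-proposed status change only if it is a legal transition.
function applyUpdate(entity, newStatus) {
  const allowed = TRANSITIONS[entity.type]?.[entity.status] ?? [];
  return allowed.includes(newStatus) ? { ...entity, status: newStatus } : entity;
}

const task = { type: "task", status: "open", title: "Document deploy process" };
console.log(applyUpdate(task, "done").status); // "done"
console.log(applyUpdate(task, "proposed").status); // illegal transition, stays "open"
```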
What your agent gets
7 entity types with status tracking, cross-references, contradiction detection, temporal validity, and hybrid search. Filter by conversation.
Members
Know who the experts are
Alice -- RAG, LangChain | builds: pipeline
Facts
Temporal validity, contradiction-safe
"LangChain works well with chunk size 512"
Topics
Track what your group discusses
"RAG Pipeline Comparison" -- Alice, Bob
Decisions
Track from proposed to agreed
"Use PostgreSQL" proposed → agreed
Tasks
Auto-update open → done
"Migrate user service" -- Bob, done
Questions
Know when questions get answered
"Best way to handle migrations?" -- answered
Events
Remember what happened when
"Architecture Review" -- Apr 1, Zoom
What's new in v1.0
Contradiction detection, temporal validity, entity cross-references — plus production-hardened operations.
Contradiction detection
"Team uses Postgres" then "Team migrated to MySQL" — the old fact gets superseded automatically. The LLM spots contradictions after extraction and marks stale knowledge so your agent never sees outdated info.
Temporal validity
Sprint goals, standup notes, and release dates expire naturally. The LLM assigns a durability to each fact — ephemeral, short, medium, or durable — and expired facts drop out of search automatically.
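Expiry can be thought of as a time-to-live keyed by durability class. The windows below are a guess, and `isLive` is a hypothetical helper, but the filtering works roughly like this:

```javascript
// Illustrative TTLs per durability class (the real windows are not documented here).
const TTL_DAYS = { ephemeral: 2, short: 14, medium: 90, durable: Infinity };

// A fact drops out of search once its durability window has elapsed.
function isLive(fact, now = Date.now()) {
  const ttl = TTL_DAYS[fact.durability] ?? Infinity;
  return (now - fact.extractedAt) / 86_400_000 <= ttl;
}

const fact = {
  text: "Standup moved to 9:30",
  durability: "short",
  extractedAt: Date.now() - 30 * 86_400_000, // extracted 30 days ago
};
console.log(isLive(fact)); // false: 30 days old, 14-day window
```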
Entity cross-references
Link entities to each other — a task implements a decision, a fact supports another fact, a question blocks a task. Directional, typed relationships the LLM creates during extraction.
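The "trace a task back to its decision" lookup amounts to following a typed edge. The record shape here is an assumption, but the link types come straight from the examples above:

```javascript
// Directional, typed link records between extracted entities.
const links = [
  { from: "task:12", type: "implements", to: "decision:3" },
  { from: "fact:88", type: "supports", to: "fact:41" },
  { from: "question:7", type: "blocks", to: "task:12" },
];

// Trace a task back to the decision it implements.
function decisionFor(taskId) {
  return links.find((l) => l.from === taskId && l.type === "implements")?.to;
}

console.log(decisionFor("task:12")); // "decision:3"
```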
Start simple, scale up
Node.js + sqlite3 to start. Add MCP, vector search — each tier is one command away.
Security hardened
SQL/FTS injection protection, credential leakage blocking. Keys never reach the DB.
Production-ready ops
Health checks, embedding pruning, cursor reset, LLM retry with backoff. Built to run unattended.
Works wherever your team talks
Point LizardBrain at any conversation source. Some work out of the box, others need a lightweight adapter.
Group Chats
Slack, Telegram, Discord
Point at your chat database or export. Built-in SQLite and JSONL adapters handle the common formats. Extract expertise, decisions, and tasks automatically.
Meeting Transcripts
Zoom, Google Meet, Otter
Pipe transcript files through stdin or write a 10-line adapter. Decisions and action items are extracted the same way as from chat.
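A transcript adapter really is about this small. The sketch below turns speaker turns into JSONL; the exact message shape LizardBrain's JSONL adapter expects is an assumption here:

```javascript
// One speaker turn in, one message object out.
function parseTranscriptLine(line) {
  // Matches lines like "14:02 Sarah: we should migrate to Kubernetes"
  const m = line.match(/^(\d{2}:\d{2})\s+([^:]+):\s+(.*)$/);
  if (!m) return null;
  const [, ts, author, text] = m;
  return { author: author.trim(), text, ts };
}

const transcript = [
  "14:02 Sarah: we should migrate to Kubernetes",
  "14:03 Bob: agreed, I can document the deploy process",
];
const jsonl = transcript
  .map(parseTranscriptLine)
  .filter(Boolean)
  .map((m) => JSON.stringify(m))
  .join("\n");
console.log(jsonl); // two JSONL records, ready to feed into extract
```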
Support Threads
Zendesk, Intercom, email
Extract recurring issues, solutions, and customer expertise from support conversations. JSONL export or custom adapter.
Agent Memory
MCP write-back
Agents write structured knowledge or raw text via MCP tools. LizardBrain becomes persistent memory that survives across sessions.
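A write-back is a standard MCP `tools/call` request to the `add_knowledge` tool; the argument names in this sketch are an assumption:

```json
{
  "method": "tools/call",
  "params": {
    "name": "add_knowledge",
    "arguments": {
      "type": "fact",
      "text": "Sarah is the deployment lead",
      "source": "agent-session-2024-04-01"
    }
  }
}
```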
How LizardBrain compares
Structured extraction beats brute-force approaches.
The memory layer for OpenClaw agents
LizardBrain was built with OpenClaw in mind. It reads directly from OpenClaw's chat database, extracts knowledge your agents can use, and generates compact rosters that fit in any context window. Give your OpenClaw agents persistent memory across conversations.
Native integration
Point at your OpenClaw SQLite database. Column mapping is pre-configured -- just set the path.
Agent-ready output
Generate member rosters and search results formatted for agent context windows. Drop into any system prompt.
Continuous learning
Run on a cron alongside your OpenClaw agents. Knowledge stays fresh as conversations evolve.
Up and running in 60 seconds
Three steps. Any LLM. No infrastructure.
1. Clone and configure
git clone https://github.com/pandore/lizardbrain
cd lizardbrain && npm install
cp examples/lizardbrain.json lizardbrain.json
2. Point at your chat and pick an LLM
{
"profile": "team",
"llm": {
"baseUrl": "https://api.openai.com/v1",
"model": "gpt-5-nano"
},
"source": {
"type": "sqlite",
"path": "./chat.db"
}
}
3. Extract and search
node src/cli.js init --profile team
LIZARDBRAIN_LLM_API_KEY=sk-... node src/cli.js extract --limit 200
node src/cli.js search "who handles deployments?"
node src/cli.js health
Bring any LLM
| Provider | Model | Cost / 1M tokens |
|---|---|---|
| Anthropic | claude-haiku-4-5 | $0.80 / $4.00 |
| OpenAI | gpt-5-nano | $0.05 / $0.40 |
| Gemini | gemini-2.5-flash-lite | $0.10 / $0.40 |
| Groq | llama-3.3-70b | $0.10 / $0.32 |
| Mistral | ministral-3b | $0.10 / $0.10 |
| Ollama | qwen2.5:7b | free (local) |
| OpenRouter | llama-4-scout | $0.08 / $0.30 |
Via Vercel AI SDK — any OpenAI-compatible endpoint. Zod-validated structured output.
Connect agents via MCP
npm install better-sqlite3
node src/cli.js serve
# Claude Desktop, Cursor, Claude Code — any MCP client
9 tools: search, get_context, who_knows, add_knowledge, ingest, add_link, get_links, and more.
Want semantic search? Add vector support.
npm install sqlite-vec
# Add embedding config to lizardbrain.json
LIZARDBRAIN_EMBEDDING_API_KEY=sk-... \
node src/cli.js embed --backfill
Optional. Keyword search works great on its own.
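The embedding block might look like this; the key names mirror the `llm` block above and are an assumption, and the model is just an example:

```json
{
  "embedding": {
    "baseUrl": "https://api.openai.com/v1",
    "model": "text-embedding-3-small"
  }
}
```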
Frequently asked questions
Everything you need to know about running LizardBrain.