Repository: openmaster-ai/clawmaster
Author: AIwork4me
Applications — a new application module that transforms how users accumulate and retrieve knowledge through OpenClaw agents.
Today, knowledge management in OpenClaw follows a RAG-like pattern: the agent retrieves relevant chunks from raw sources at query time and synthesizes an answer from scratch. This works for simple lookups, but it has a fundamental limitation — nothing accumulates. Ask a subtle question that requires synthesizing five documents, and the agent must find and piece together the relevant fragments every single time. It pays the extraction cost on every question, cannot cross-reference across sessions, and cannot build structured knowledge over time.
Andrej Karpathy recently described an alternative pattern he calls LLM Wiki: instead of just retrieving from raw documents at query time, the LLM incrementally builds and maintains a persistent wiki — a structured, interlinked collection of markdown files that sits between the user and the raw sources. When you add a new source, the LLM doesn't just index it for later retrieval. It reads it, extracts key information, and integrates it into the existing wiki — updating entity pages, revising topic summaries, noting where new data contradicts old claims. The knowledge is compiled once and then kept current, not re-derived on every query.
The key insight: the wiki is a persistent, compounding artifact. The cross-references are already there. The contradictions have already been flagged. The synthesis already reflects everything you've read. The wiki keeps getting richer with every source you add and every question you ask.
This is particularly relevant for ClawMaster users who manage multiple LLM providers and want a unified knowledge layer across agent sessions, need their agents to build domain expertise over time (research, competitive analysis, documentation), or want knowledge that compounds rather than resets with each conversation.
ClawMaster already has the infrastructure to make this work — PowerMem for semantic search, OCR for document parsing, multi-LLM provider support, and a plugin system. What's missing is the wiki layer that turns these building blocks into a compounding knowledge engine.
Proposed solution: add a new LLM Wiki module to ClawMaster, consisting of a frontend, a backend plugin, and three core operations: Ingest, Query, and Lint. The module uses a three-layer architecture: raw sources, wiki pages generated and maintained by the LLM, and a schema that defines conventions and rules.
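To make the three-layer split concrete, here is a minimal data-model sketch. All names (`RawSource`, `WikiPage`, `Schema`, and their fields) are hypothetical illustrations, not existing ClawMaster types:

```python
from dataclasses import dataclass, field

@dataclass
class RawSource:
    """Layer 1: an original document, untouched after ingestion."""
    source_id: str
    uri: str       # file path or URL of the original document
    content: str   # parsed text (e.g. OCR output)

@dataclass
class WikiPage:
    """Layer 2: a markdown page generated and maintained by the LLM."""
    title: str
    body: str                                         # markdown content
    links: list[str] = field(default_factory=list)    # outgoing page titles
    sources: list[str] = field(default_factory=list)  # source_ids cited as evidence

@dataclass
class Schema:
    """Layer 3: conventions and rules that the Lint operation checks."""
    required_sections: list[str]   # e.g. sections every page must contain
    max_orphan_pages: int = 0      # pages unreachable via links are flagged
```

The important design property is that wiki pages reference sources by ID rather than embedding them, so a Lint pass can verify every claim still traces back to evidence.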
Core operation Ingest: the LLM parses the source; extracts entities, concepts, and key claims with evidence; creates structured wiki pages with bidirectional links; and updates existing related pages.
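The ingest flow above can be sketched as a pipeline. This is a hypothetical illustration, not ClawMaster code: `extract_fn` stands in for the LLM extraction call, and the in-memory `wiki` dict stands in for the page store; the bidirectional-link maintenance is the part the sketch is meant to show:

```python
def ingest(source_text, wiki, extract_fn):
    """Integrate one source into the wiki.

    wiki: dict mapping page title -> {'body': str, 'links': set[str]}
    extract_fn: callable returning
        {'entities': [titles], 'claims': {title: [claim strings]}}
        (stand-in for the LLM extraction step)
    """
    extraction = extract_fn(source_text)

    # Create or update a page per extracted entity, appending new claims.
    for entity in extraction['entities']:
        page = wiki.setdefault(entity, {'body': '', 'links': set()})
        for claim in extraction['claims'].get(entity, []):
            page['body'] += f"- {claim}\n"

    # Maintain bidirectional links among entities co-mentioned in this source.
    for a in extraction['entities']:
        for b in extraction['entities']:
            if a != b:
                wiki[a]['links'].add(b)
                wiki[b]['links'].add(a)
    return wiki
```

A real implementation would also diff new claims against existing page bodies to flag contradictions, which is where the compounding value of the wiki comes from; that step is elided here.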