Scouttlo
GitHub · B2B · devtools

Universal AI conversation management platform that indexes, searches, and analyzes all AI coding sessions across different tools and environments

Scouted 5 hours ago

7.0 / 10
Overall score



Score breakdown

Urgency: 7.0
Market size: 8.0
Feasibility: 7.0
Competition: 6.0
Pain point

Developers lose visibility and control over AI-assisted coding sessions that happen outside their main workflow tools

Who'd pay for this

Development teams, tech companies, and individual developers using AI coding assistants

Source signal

"A developer might spend hours in ad-hoc Claude Code sessions across multiple projects, and that context is completely lost to Panopticon"

Original post

Conversation discovery — index sessions inside and outside Panopticon

Repository: eltmon/panopticon-cli
Author: eltmon

## Problem

Panopticon only knows about conversations it spawned. Conversations started manually via `claude` in a terminal — debugging sessions, quick fixes, exploratory chats — are invisible. There's no way to review what happened, correlate cost, or learn from past sessions unless Panopticon managed them.

This is a significant blind spot. A developer might spend hours in ad-hoc Claude Code sessions across multiple projects, and that context is completely lost to Panopticon.

## Feature: Conversation Discovery & Indexing

A new subsystem that discovers, indexes, and exposes Claude Code conversations from anywhere on the system — whether Panopticon-managed or not. Includes full-text and semantic search, tiered LLM enrichment, and optional vector embedding.

### Three Discovery Modes

#### 1. `pan conversations scan <dir> [<dir>...]` — Targeted directory scan

- Scan one or more directories for Claude Code session artifacts
- Walks `~/.claude/projects/` hashed directories, resolves hashes back to workspace paths, filters to those under the given dirs
- Pure filesystem ops — **zero LLM calls**

#### 2. `pan conversations scan --watched` — Watched directories

- Scan all directories configured in `config.yaml` under `conversations.watchDirs`
- Default: `["~/Projects"]`
- Same mechanics as targeted scan, just reads dirs from config

#### 3. `pan conversations scan --system` — Full system scan

- Discovers ALL Claude Code sessions on the system by scanning `~/.claude/projects/` exhaustively
- Resolves every hashed project dir back to its original workspace path
- **Performance-adaptive parallelism** (see below)
- Pure filesystem + scripted extraction — **zero LLM calls for discovery**

### What Gets Indexed

For each discovered session, store a reference row in SQLite (new `discovered_sessions` table):

```sql
CREATE TABLE discovered_sessions (
  id TEXT PRIMARY KEY,                  -- Claude Code session UUID
  project_hash TEXT NOT NULL,           -- ~/.claude/projects/<hash>
  workspace_path TEXT,                  -- Resolved original path (NULL if unresolvable)
  jsonl_path TEXT NOT NULL,             -- Full path to .jsonl file
  file_size INTEGER NOT NULL,           -- Bytes (for change detection)
  file_mtime INTEGER NOT NULL,          -- Epoch seconds (for change detection)
  message_count INTEGER,                -- Total messages in session
  first_message_at TEXT,                -- ISO timestamp of first message
  last_message_at TEXT,                 -- ISO timestamp of last message
  duration_seconds INTEGER,             -- last - first message timestamps
  model_primary TEXT,                   -- Most-used model in session
  models_used TEXT,                     -- JSON array of all models
  token_input INTEGER DEFAULT 0,        -- Total input tokens
  token_output INTEGER DEFAULT 0,       -- Total output tokens
  estimated_cost REAL DEFAULT 0,        -- USD estimate from token counts
  panopticon_managed INTEGER DEFAULT 0, -- 1 if linked to a pan conversation/agent
  pan_issue_id TEXT,                    -- Linked issue ID if managed
  pan_agent_id TEXT,                    -- Linked agent ID if managed
  summary TEXT,                         -- LLM-generated summary (NULL until enriched)
  summary_detailed TEXT,                -- Deep enrichment summary (NULL until deep-enriched)
  tags TEXT,                            -- JSON array of LLM-assigned tags (NULL until enriched)
  tools_used TEXT,                      -- JSON array of tools invoked (extracted from JSONL)
  files_touched TEXT,                   -- JSON array of file paths read/written (extracted from JSONL)
  enrichment_level INTEGER DEFAULT 0,   -- 0=none, 1=quick, 2=deep, 3=custom
  enrichment_model TEXT,                -- Model used for last enrichment
  has_embedding INTEGER DEFAULT 0,      -- 1 if vector embedding exists
  indexed_at TEXT NOT NULL,             -- When this row was created/updated
  enriched_at TEXT,                     -- When LLM enrichment ran (NULL if not yet)
  UNIQUE(jsonl_path)
);

CREATE INDEX idx_ds_workspace ON discovered_sessions(workspace_path);
CREATE INDEX idx_ds_last_message ON discovered_sessions(last_message_at);
CREATE INDEX idx_ds_managed ON discovered_sessions(panopticon_managed);
CREATE INDEX idx_ds_project_hash ON discovered_sessions(project_hash);
CREATE INDEX idx_ds_enrichment ON discovered_sessions(enrichment_level);
CREATE INDEX idx_ds_tags ON discovered_sessions(tags); -- for LIKE-based tag search
```

**Full-text search table** (SQLite FTS5):

```sql
CREATE VIRTUAL TABLE sessions_fts USING fts5(
  id UNINDEXED,     -- join key back to discovered_sessions
  summary,          -- quick enrichment summary
  summary_detailed, -- deep enrichment summary
  tags,             -- space-separated tags for FTS
  workspace_path,   -- searchable path fragments
  tools_used,       -- searchable tool names
  files_touched,    -- searchable file paths
  content=discovered_sessions,
  content_rowid=rowid
);
```

**Vector embeddings table** (optional, for semantic search):

```sql
CREATE TABLE session_embeddings (
  session_id TEXT PRIMARY KEY REFERENCES discovered_sessions(id),
  embedding BLOB NOT NULL,        -- Float32 array, serialized
  embedding_model TEXT NOT NULL,  -- e.g. 'text-embedding-3-small'
  embedding_dim INTEGER NOT NULL, -- e.g. 1536
  created_at TEXT NOT NULL
);
```

**Discovery populates everything except `summary*`, `tags`, and embeddings** — those are NULL until enrichment.

### Scripted Extraction (Zero LLM)

The scanner extracts metadata from JSONL without any LLM calls:

1. **Session discovery** — `readdir` on `~/.claude/projects/*/` for `*.jsonl` files
2. **Hash resolution** — Read `sessions-index.json` or reverse-map from Claude Code's project hash algorithm
3. **Metadata extraction** — Stream-parse JSONL (first line, last line, line count, model fields, token fields)
4. **Tool & file extraction** — Parse tool_use blocks to extract `tools_used` and `files_touched` arrays (pure JSON parsing, no LLM)
5. **Cost estimation** — Apply known per-model token pricing (already have this in `model-capabilities.ts`)
6. **Panopticon correlation** — Match against existing `conversations` and `cost_events` tables by session_id
7. **Change detection** — Compare file size + mtime against stored values; skip unchanged files

This should handle thousands of sessions in seconds. The JSONL parser already exists in `src/lib/cost-parsers/jsonl-parser.ts`.

### Performance-Adaptive Parallelism (System Scan)

The system scan can encounter hundreds of project directories. To maximize throughput:

**System capability probe (runs once, cached):**

```typescript
interface SystemCapabilities {
  cpuCores: number;                     // os.cpus().length
  driveType: 'ssd' | 'hdd' | 'unknown'; // rotational flag from lsblk
  driveReadMBps: number;                // Quick sequential read benchmark (read 10MB, measure time)
  availableMemoryMB: number;            // os.freemem()
}
```

**Parallelism strategy:**

| Drive Type | CPU Cores | Concurrent Reads | Rationale |
|-----------|-----------|------------------|-----------|
| SSD | any | `min(cpuCores, 16)` | SSD handles random reads well; CPU-bound on parsing |
| HDD | any | `2` | Random seeks kill HDD; serialize to preserve sequential access |
| Unknown | any | `4` | Conservative default |

The scanner uses a work-stealing pool: N workers each pop a project directory, scan its sessions, parse metadata, and write to a batch insert queue. SQLite writes are serialized through a single writer (already the pattern in `database/index.ts`).

**Progress reporting:**

```
Scanning ~/.claude/projects/...
[████████████░░░░░░░░] 847/1,203 dirs | 2,341 sessions found | 12.3s
```

---

## Search

Conversations are only useful if you can find them.
The search system has three tiers, each progressively more powerful.

### Tier 1: Structured Filters (Zero LLM, instant)

Filter on indexed metadata columns. These compose with AND logic:

```bash
pan conversations search --workspace ~/Projects/myn  # by workspace
pan conversations search --model claude-opus-4-6     # by model used
pan conversations search --managed                   # panopticon-managed only
pan conversations search --unmanaged                 # ad-hoc only
pan conversations search --since 7d                  # relative time
pan conversations search --before 2026-03-01         # absolute time
pan conversations search --after 2026-02-15
pan conversations search --min-cost 0.50             # cost filters
pan conversations search --max-cost 5.00
pan conversations search --min-messages 20           # conversation length
pan conversations search --tag debugging             # tag match (requires enrichment)
pan conversations search --tool Edit                 # used a specific tool
pan conversations search --file src/lib/convoy.ts    # touched a specific file
pan conversations search --issue PAN-449             # linked to issue
pan conversations search --enriched                  # has been enriched
pan conversations search --not-enriched              # needs enrichment
```

Filters are composable:

```bash
pan conversations search --workspace ~/Projects/myn --since 30d --min-cost 1.00 --tag performance
```

Output formats:

```bash
pan conversations search ... --format table  # default: tabular summary
pan conversations search ... --format json   # machine-readable
pan conversations search ... --format brief  # one-line-per-result
pan conversations search ... --format ids    # session IDs only (pipe-friendly)
```

### Tier 2: Full-Text Search (Zero LLM, fast)

Uses SQLite FTS5 for ranked text search across summaries, tags, file paths, and tool names:

```bash
pan conversations search "redis cache invalidation"   # free-text query
pan conversations search "Effect service extraction" --since 14d
pan conversations search "Flyway migration" --workspace ~/Projects/myn
```

FTS5 features leveraged:

- **BM25 ranking** — results ordered by relevance
- **Prefix matching** — `"refact*"` matches "refactor", "refactoring"
- **Phrase matching** — `"\"error handling\""` matches exact phrase
- **Boolean operators** — `"redis AND cache NOT test"`
- **Column weighting** — summary_detailed weighted 2x, summary 1.5x, tags 1x, files 0.5x
- **Snippet extraction** — show matching context in results

FTS is populated automatically when enrichment runs (triggers on INSERT/UPDATE to discovered_sessions). Unenriched sessions are still searchable by workspace_path, tools_used, and files_touched (which are populated during scan).

### Tier 3: Semantic Search (Optional, requires embeddings)

For "find conversations similar to X" queries that keyword search can't handle:

```bash
pan conversations search --semantic "debugging a race condition in the event loop"
pan conversations search --semantic "how did I fix the auth middleware last month"
pan conversations search --similar <session-id>   # find sessions similar to this one
```

**How it works:**

1. Query text is embedded using the same model as stored embeddings
2. Cosine similarity computed against all session embeddings
3. Top-K results returned, optionally intersected with structured filters

**Embedding generation** happens during enrichment (opt-in, see below). Not a separate step.

**Implementation:** Pure SQLite — no external vector DB. Cosine similarity in a custom SQLite function (native addon or WASM).
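As a rough sketch of what that brute-force pass could look like (the `decodeBlob`/`topK` helper names and the row shape are illustrative, not the actual implementation):

```typescript
// Decode a stored Float32 blob (as held in session_embeddings.embedding)
// back into a typed array — a zero-copy view over the raw bytes.
function decodeBlob(blob: Uint8Array): Float32Array {
  return new Float32Array(blob.buffer, blob.byteOffset, blob.byteLength / 4);
}

// Cosine similarity between two equal-length vectors.
function cosine(a: Float32Array, b: Float32Array): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface EmbeddingRow {
  sessionId: string;
  blob: Uint8Array; // serialized Float32 embedding, as stored in SQLite
}

// Brute-force top-K: score every row, sort descending by similarity, take K.
function topK(query: Float32Array, rows: EmbeddingRow[], k: number) {
  return rows
    .map((r) => ({ sessionId: r.sessionId, score: cosine(query, decodeBlob(r.blob)) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

At thousands of rows and ~1.5K dimensions this is a few million multiply-adds per query, which is why the spec argues an index is unnecessary at this scale.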
For the expected scale (thousands of sessions, not millions), brute-force cosine over Float32 blobs is fast enough. If scale demands it later, add an IVF index — but don't prematurely optimize.

**Embedding model selection:**

- OpenAI `text-embedding-3-small` (1536 dims, $0.02/M tokens) — preferred if API key available
- Voyage AI `voyage-code-3` (1024 dims) — better for code-heavy sessions
- Local via Ollama `nomic-embed-text` — free, no API key, slower
- Configurable in `config.yaml` under `conversations.embeddingModel`

### Dashboard Search

The Mission Control conversations panel includes:

- Search bar with structured filter dropdowns
- Free-text search box (FTS5-backed)
- Semantic search toggle (if embeddings enabled)
- Faceted results: group by workspace, model, time period, cost range
- Click-to-filter on any facet value
- Saved searches / pinned queries

---

## Tiered Enrichment

Enrichment is **never automatic** — always user-initiated. Three enrichment levels, each building on the previous.

### Level 1: Quick Enrich (cheap, batch)

```bash
pan conversations enrich                             # all un-enriched sessions
pan conversations enrich --limit 50                  # cap at 50
pan conversations enrich --workspace ~/Projects/foo  # scope to workspace
pan conversations enrich --since 7d                  # scope to recent
```

**What it does:**

- Sends **first message, last message, and 1 sampled middle message** to cheapest available model
- Model priority: Haiku 4.5 → gemini-flash → gpt-4o-mini
- Generates: `summary` (1-2 sentences), `tags` (JSON array)
- Sets `enrichment_level = 1`
- Estimated cost: ~$0.001/session

**Subagent parallelism:** spawn `min(sessions_to_enrich, maxParallel)` workers, each processing ~20 sessions sequentially.
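The first/last/middle sampling used for quick enrich could be sketched as follows (the helper name is hypothetical; only the sampling rule comes from the spec):

```typescript
// Level-1 sampling: first message, one middle message, last message.
// Sessions with 3 or fewer messages are sent as-is.
function sampleForQuickEnrich<T>(messages: T[]): T[] {
  if (messages.length <= 3) return [...messages];
  const mid = Math.floor(messages.length / 2);
  return [messages[0], messages[mid], messages[messages.length - 1]];
}
```

Keeping the sample to three messages is what holds the per-session cost near the quoted ~$0.001 even for long transcripts.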
### Level 2: Deep Enrich (moderate, per-session or batch)

```bash
pan conversations enrich --deep                # deep-enrich all level-1 sessions
pan conversations enrich --deep <session-id>   # deep-enrich specific session
pan conversations enrich --deep --since 7d     # deep-enrich recent
```

**What it does:**

- Sends a **larger sample**: first 3 messages, last 3 messages, 5 evenly-sampled middle messages, plus tool_use summaries
- Uses a mid-tier model: Sonnet 4.6 → gemini-pro → gpt-4o
- Generates: `summary_detailed` (paragraph-length analysis — what was attempted, what worked, what failed, key decisions made), updated `tags` with finer granularity
- Optionally generates embedding (if `conversations.embeddings: true` in config)
- Sets `enrichment_level = 2`
- Estimated cost: ~$0.01-0.03/session

### Level 3: Custom Enrich (user-selected model, interactive)

```bash
pan conversations enrich --with claude-opus-4-6 <session-id>         # use specific model
pan conversations enrich --with kimi-k2.5 <session-id>               # any configured model
pan conversations enrich --with claude-opus-4-6 --since 3d           # batch with specific model
pan conversations enrich --with claude-opus-4-6 --full <session-id>  # send FULL transcript
```

**What it does:**

- User picks the model — any model configured in Panopticon's model list
- By default sends the deep-enrich sample size, but `--full` sends the **entire transcript** (with a cost warning)
- Generates: overwrites `summary_detailed` with the new model's analysis, updates `tags`, regenerates embedding
- Custom prompt injection via `--prompt "Focus on the architectural decisions"` — appended to the system prompt
- Sets `enrichment_level = 3`, records `enrichment_model`
- Cost varies by model and `--full` flag; CLI shows estimate before proceeding:

```
Session abc123 — 847 messages, ~125K tokens
Enriching with claude-opus-4-6 (full transcript)
Estimated cost: $3.75
Proceed? [y/N]
```

### Re-enrichment

Any session can be re-enriched at any level at any time. Higher levels don't destroy lower-level data:

- `summary` (level 1) is always preserved — it's the cheap quick-reference
- `summary_detailed` gets overwritten by level 2 or 3
- `tags` get overwritten by the highest enrichment level
- Embeddings get regenerated on level 2+ enrichment

```bash
# Re-enrich a previously enriched session with a better model
pan conversations enrich --with claude-opus-4-6 <session-id>

# Re-enrich and also regenerate embedding
pan conversations enrich --deep --embed <session-id>

# Bulk re-enrich: upgrade all level-1 sessions to level-2
pan conversations enrich --deep --upgrade
```

### Embedding Management

```bash
# Generate embeddings for all enriched sessions that don't have one
pan conversations embed

# Regenerate all embeddings (e.g., after switching embedding model)
pan conversations embed --regenerate

# Stats
pan conversations embed --status
# Output: 1,247 sessions indexed | 892 with embeddings | model: text-embedding-3-small
```

**Config:**

```yaml
conversations:
  watchDirs: ["~/Projects"]
  embeddings: false                         # opt-in, default off
  embeddingModel: "text-embedding-3-small"  # or voyage-code-3, nomic-embed-text
  embeddingAutoOnDeep: true                 # auto-embed when deep-enriching
```

---

## CLI Interface (Complete)

```bash
# ── Discovery (pure scripted, no LLM) ──
pan conversations scan ~/Projects                      # Scan specific dirs
pan conversations scan --watched                       # Scan configured watch dirs
pan conversations scan --system                        # Full system scan
pan conversations scan --system --dry-run              # Show what would be scanned

# ── Search ──
pan conversations search "query text"                  # FTS5 full-text search
pan conversations search --workspace ~/Projects/foo    # Structured filter
pan conversations search --tag debugging --since 7d    # Combined filters
pan conversations search --semantic "race condition"   # Vector similarity (requires embeddings)
pan conversations search --similar <session-id>        # Find similar sessions
pan conversations search "query" --format json         # Output format

# ── Enrichment ──
pan conversations enrich                               # Quick enrich (level 1) all
pan conversations enrich --limit 50                    # Quick enrich up to 50
pan conversations enrich --deep                        # Deep enrich (level 2) all level-1
pan conversations enrich --deep <session-id>           # Deep enrich one session
pan conversations enrich --with <model> <session-id>   # Custom model (level 3)
pan conversations enrich --with <model> --full <id>    # Full transcript + custom model
pan conversations enrich --with <model> --prompt "Focus on X" <id>
pan conversations enrich --deep --upgrade              # Upgrade all level-1 → level-2

# ── Embeddings ──
pan conversations embed                                # Embed all enriched, un-embedded
pan conversations embed --regenerate                   # Regenerate all embeddings
pan conversations embed --status                       # Show embedding stats

# ── Browsing ──
pan conversations list                                 # Recent sessions, tabular
pan conversations list --managed                       # Only Panopticon-managed
pan conversations list --unmanaged                     # Only ad-hoc sessions
pan conversations show <session-id>                    # Full detail + summaries
pan conversations cost                                 # Cost breakdown
pan conversations cost --by workspace                  # Cost by workspace
pan conversations cost --by model                      # Cost by model
```

---

## Dashboard Integration

New "Conversations" panel in Mission Control:

- Timeline view of all indexed sessions (managed + unmanaged)
- Search bar: free-text (FTS5) + structured filter dropdowns + semantic toggle
- Faceted results: group by workspace, model, time period, cost range, enrichment level
- Click-to-filter on any facet value
- Session detail view: messages, cost, models, duration, tools, files, summary at each level
- Visual distinction between Panopticon-managed (linked to issues) and ad-hoc sessions
- Enrichment controls: enrich/deep-enrich/custom-enrich buttons on session detail
- Aggregate cost dashboard across all sessions
- Saved searches / pinned queries

---

## Implementation Phases

**Phase 1: Core scanner + SQLite storage + structured search**

- `discovered_sessions` table + FTS5 table + migrations
- JSONL metadata extractor (extend existing parser)
- Tool & file extraction from JSONL tool_use blocks
- Hash resolution logic
- `pan conversations scan <dir>` command
- `pan conversations search` with structured filters
- `pan conversations list` + `pan conversations show`

**Phase 2: System scan + adaptive parallelism**

- System capability probe (CPU cores, drive type via lsblk, memory)
- Work-stealing parallel scanner
- Progress reporting
- `--system` and `--watched` modes
- Change detection (skip unchanged files)

**Phase 3: Tiered enrichment + FTS**

- Level 1 quick enrichment (cheapest model, 3 sampled messages)
- Level 2 deep enrichment (mid-tier model, larger sample)
- Level 3 custom enrichment (user-selected model, optional full transcript, custom prompt)
- FTS5 population on enrichment (triggers or explicit sync)
- `pan conversations enrich` with all flags
- Cost estimation + confirmation for expensive operations

**Phase 4: Vector embeddings + semantic search**

- `session_embeddings` table
- Embedding generation (OpenAI, Voyage, or local Ollama)
- Cosine similarity function (SQLite native addon or WASM)
- `pan conversations embed` command
- `--semantic` and `--similar` search modes
- Config for embedding model selection

**Phase 5: Dashboard UI**

- Conversations panel in Mission Control
- Search bar with FTS + filters + semantic toggle
- Faceted result display
- Session detail view with enrichment controls
- Cost aggregation views
- Domain events for real-time scan/enrich progress

---

## Testing

### Scanner Tests

```
- discovers sessions in target directory
- resolves project hash to workspace path
- extracts message count, timestamps, models from JSONL
- extracts tools_used and files_touched from tool_use blocks
- calculates cost estimate from token counts
- correlates with existing Panopticon conversations
- skips unchanged files (same size + mtime)
- handles corrupt/empty JSONL gracefully
- handles missing sessions-index.json
```

### System Scan Tests

```
- detects SSD vs HDD via lsblk
- adjusts parallelism based on drive type
- work-stealing pool processes all directories
- progress callback fires with correct counts
- handles permission-denied directories gracefully
- respects maxParallel limit
```

### Search Tests

```
- structured filters compose with AND logic
- --since and --before/--after parse correctly
- --tag matches JSON array elements
- --tool and --file match against extracted arrays
- FTS5 query returns BM25-ranked results
- FTS5 prefix matching works (refact*)
- FTS5 phrase matching works ("error handling")
- FTS5 boolean operators work (redis AND cache NOT test)
- semantic search returns cosine-similar results
- --similar finds sessions with related content
- filters + FTS compose correctly (FTS within filtered set)
- --format json/table/brief/ids all produce correct output
```

### Enrichment Tests

```
- level 1 selects cheapest available model
- level 1 sends only 3 sampled messages
- level 2 sends larger sample (11 messages + tool summaries)
- level 2 uses mid-tier model
- level 3 uses user-specified model
- --full sends entire transcript
- --prompt appends to system prompt
- cost estimation matches actual token count (within 20%)
- cost confirmation prompt shown for expensive operations
- re-enrichment preserves level-1 summary
- re-enrichment overwrites summary_detailed and tags
- --upgrade batch converts level-1 to level-2
- enrichment populates FTS5 index
- handles model API failure gracefully (marks session, moves on)
- batches sessions across subagents
```

### Embedding Tests

```
- generates embedding with configured model
- stores correct dimensions in session_embeddings
- --regenerate overwrites existing embeddings
- --status reports correct counts
- cosine similarity function returns correct rankings
- embedding auto-generated on deep-enrich when config enabled
- handles embedding API failure gracefully
- works with OpenAI, Voyage, and local Ollama models
```

### Integration Tests

```
- full pipeline: scan → enrich → embed → search (all three tiers)
- re-scan updates changed sessions, skips unchanged
- managed session links to issue_id and agent_id
- cost aggregation matches sum of individual sessions
- FTS index stays in sync after re-enrichment
- semantic search results improve after enrichment
```

---

## Settings UI — Embedding Provider Configuration

Hosted embedding models all require an API key. The Settings panel needs first-class UI for enabling embeddings and managing provider credentials — without it, semantic search is undiscoverable and users have to hand-edit `config.yaml` + `~/.panopticon.env`.

### Required Settings panel fields

Under a new **"Conversations & Search"** section in Settings:

1. **Enable embeddings** — toggle (writes `conversations.embeddings: true|false`). Off by default.
2. **Embedding provider** — dropdown:
   - OpenAI (`text-embedding-3-small`, `text-embedding-3-large`)
   - Voyage AI (`voyage-code-3`, `voyage-3-large`)
   - Cohere (`embed-v4`) — future
   - Ollama (local — `nomic-embed-text`, `mxbai-embed-large`)
3. **Embedding model** — dropdown, scoped to selected provider
4. **API key** — password-style input, **conditional on provider**:
   - OpenAI → `OPENAI_API_KEY`
   - Voyage → `VOYAGE_API_KEY`
   - Cohere → `COHERE_API_KEY`
   - Ollama → hidden (no key needed)
5. **Ollama base URL** — text input, only shown when Ollama is selected. Default `http://localhost:11434`.
6. **Auto-embed on deep enrich** — toggle (writes `conversations.embeddingAutoOnDeep`). Default on.
7. **Test connection** button — issues a single embedding request against the configured provider/model/key and reports success/failure + latency. Prevents silent misconfiguration.

### Where keys are stored

API keys go to **`~/.panopticon.env`** (alongside other secrets), NOT `config.yaml`. Rationale: `config.yaml` may be checked into a dotfiles repo; the env file is gitignored by convention.
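The provider-to-key mapping above implies a simple save-time check; a minimal sketch (the mapping constant and `validateEmbeddingSettings` name are illustrative, not the actual Settings code):

```typescript
type Provider = 'openai' | 'voyage' | 'cohere' | 'ollama';

// Env var required per hosted provider; Ollama is local and needs no key.
const REQUIRED_KEY: Record<Provider, string | null> = {
  openai: 'OPENAI_API_KEY',
  voyage: 'VOYAGE_API_KEY',
  cohere: 'COHERE_API_KEY',
  ollama: null,
};

// Returns an error message to surface inline in Settings, or null if the
// configuration is saveable.
function validateEmbeddingSettings(
  enabled: boolean,
  provider: Provider,
  secrets: Record<string, string | undefined>,
): string | null {
  if (!enabled) return null; // embeddings off: nothing to check
  const key = REQUIRED_KEY[provider];
  if (key && !secrets[key]) return `${key} is required for provider "${provider}"`;
  return null;
}
```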
The Settings UI writes keys via the existing secrets-management API path, never inlines them into yaml.

### Validation rules

- If **Enable embeddings = on** and the provider is hosted, the corresponding API key must be present — block save and surface the error inline.
- If the provider switches, the model dropdown resets to that provider's default model.
- If embeddings are disabled, semantic search options in the conversations panel become disabled with a tooltip ("Enable embeddings in Settings to use semantic search").
- Changing the embedding model after sessions are already embedded surfaces a warning: "Existing embeddings were generated with `<old-model>` (dim=N). Run `pan conversations embed --regenerate` to re-embed all sessions with the new model — they cannot be mixed."

### CLI parity

Everything settable in the UI must also be settable via `pan config set`:

```bash
pan config set conversations.embeddings true
pan config set conversations.embeddingProvider openai
pan config set conversations.embeddingModel text-embedding-3-small
pan config set conversations.embeddingAutoOnDeep true
pan config set conversations.ollamaBaseUrl http://localhost:11434

# Keys never go through `pan config set` — use the secrets path:
pan secrets set OPENAI_API_KEY sk-...
```

### Tests

```
- Settings panel renders all fields
- API key field is conditional on provider (hidden for Ollama)
- Ollama base URL field only shown for Ollama provider
- Model dropdown resets when provider changes
- Save is blocked when embeddings enabled but required key is missing
- Test connection button hits provider with a 1-token embed and reports result
- Keys are written to ~/.panopticon.env, never to config.yaml
- Disabling embeddings disables semantic search UI in conversations panel
- Changing embedding model surfaces re-embed warning when sessions already embedded
- CLI `pan config set` updates the same values shown in the UI
```

## Priority

Medium — this is a visibility/observability feature, not blocking any workflow. But it directly supports the goal of Panopticon being the single pane of glass for all AI-assisted development activity.

🤖 Generated with [Claude Code](https://claude.com/claude-code)