# AI Chat
The AI Chat feature lets you ask natural language questions about your codebase and get accurate, source-referenced answers.
## What It Does
AI Chat uses Retrieval-Augmented Generation (RAG) to answer questions about your code. Instead of relying solely on the LLM's training data, it retrieves relevant code from your actual repository and uses it as context for generating answers.
This means you can ask questions like:
- "How does authentication work in this codebase?"
- "What functions handle database connections?"
- "Explain the error handling pattern used in this project"
- "Where are the API routes defined?"
- "What does the `process_scan` function do?"
## How RAG Works
In simple terms:
- Your codebase is parsed into functions, classes, and modules during graph building
- Each code symbol is converted into a numerical representation (an embedding) that captures its meaning
- When you ask a question, your question is also converted into an embedding
- The system finds the code snippets whose embeddings are most similar to your question
- Those snippets are sent to the LLM along with your question as context
- The LLM generates an answer grounded in your actual code, not generic knowledge
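The retrieval step above can be sketched in a few lines. This is a toy illustration only: real deployments use a neural embedding model rather than bag-of-words vectors, and the snippet names below are hypothetical.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words vector. A real system uses a
    # neural model, but the similarity-ranking math is the same.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question: str, snippets: dict, top_k: int = 2):
    # Rank code snippets by similarity to the question embedding.
    q = embed(question)
    ranked = sorted(snippets.items(),
                    key=lambda kv: cosine_similarity(q, embed(kv[1])),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical indexed symbols and their one-line summaries.
snippets = {
    "auth.verify_token": "fn verify_token checks the jwt signature and expiry",
    "db.connect": "fn connect opens a database connection pool",
    "routes.register": "fn register defines the api routes",
}
top = retrieve("how does jwt token verification work", snippets)
# The top-ranked snippets are sent to the LLM as context for the answer.
```

The key design point is that retrieval narrows the whole codebase down to a handful of relevant snippets, so the LLM answers from your code rather than from memory.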
## Getting Started
- Navigate to AI Chat in the sidebar
- Select a repository from the grid of cards
- If embeddings have not been built yet, click Build Embeddings and wait for the process to complete
- Once the status shows Embeddings ready, type your question and press Enter
::: tip
Rebuild embeddings after significant code changes to ensure the AI has access to the latest version of your codebase.
:::
## Source References
Below each AI response, you will see source references showing exactly which code was used to generate the answer:
- Symbol name -- the qualified name of the function, class, or module
- File path -- where the code is located, with line range
- Code snippet -- the first several lines of the relevant code
- Relevance score -- how closely the code matched your question (0.0 to 1.0)
Source references let you verify the AI's answer against the actual code and navigate directly to the relevant files.
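As an illustration of how a 0.0 to 1.0 relevance score can arise (the exact scoring function is an implementation detail; this sketch assumes cosine similarity between the question and snippet embeddings, clamped to the non-negative range):

```python
import math

def relevance_score(question_vec, snippet_vec):
    # Assumed scoring: cosine similarity between the two embeddings,
    # clamped so the result falls in the 0.0-1.0 range shown in the UI.
    dot = sum(q * s for q, s in zip(question_vec, snippet_vec))
    norm = (math.sqrt(sum(q * q for q in question_vec))
            * math.sqrt(sum(s * s for s in snippet_vec)))
    return max(0.0, dot / norm) if norm else 0.0

print(relevance_score([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(relevance_score([1.0, 0.0], [0.0, 1.0]))  # unrelated -> 0.0
```

Scores near 1.0 mean the snippet closely matches your question; low scores indicate the retrieval found only loosely related code, which is a hint to rephrase the question.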
## Tips for Better Results
- Be specific -- "How does the JWT validation middleware work?" is better than "Tell me about auth"
- Reference filenames -- "What does `server.rs` do?" helps the retrieval find relevant code
- Ask about patterns -- "What error handling pattern does this project use?" works well with RAG
- Use follow-ups -- the chat maintains conversation history within a session, so you can ask follow-up questions