docs: rewrite user docs, fix modal scroll, webhook URL, and sccache
Some checks failed
CI / Clippy (push) Failing after 2m49s
CI / Security Audit (push) Has been skipped
CI / Tests (push) Has been skipped
CI / Detect Changes (push) Has been skipped
CI / Format (pull_request) Successful in 3s
CI / Clippy (pull_request) Failing after 2m52s
CI / Security Audit (pull_request) Has been skipped
CI / Tests (pull_request) Has been skipped
CI / Format (push) Successful in 3s
CI / Deploy Agent (push) Has been skipped
CI / Deploy Dashboard (push) Has been skipped
CI / Deploy Docs (push) Has been skipped
CI / Deploy MCP (push) Has been skipped
CI / Detect Changes (pull_request) Has been skipped
CI / Deploy Agent (pull_request) Has been skipped
CI / Deploy Dashboard (pull_request) Has been skipped
CI / Deploy Docs (pull_request) Has been skipped
CI / Deploy MCP (pull_request) Has been skipped
Rewrite all public documentation to be user-facing only:

- Remove deployment, configuration, and self-hosting sections
- Add guide pages for SBOM, issues, webhooks & PR reviews
- Add reference pages for glossary and tools/scanners
- Add 12 screenshots from live dashboard
- Explain MCP, LLM triage, false positives, human-in-the-loop

Fix edit repository modal not scrollable (max-height + overflow-y).
Show full webhook URL using window.location.origin instead of path.
Unset RUSTC_WRAPPER in agent cargo commands to avoid sccache errors.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -1,41 +1,12 @@
-# AI Chat (RAG)
+# AI Chat
 
-The AI Chat feature lets you ask natural language questions about your codebase. It uses Retrieval-Augmented Generation (RAG) to find relevant code and provide accurate, source-referenced answers.
+The AI Chat feature lets you ask natural language questions about your codebase and get accurate, source-referenced answers.
 
-## How It Works
+## What It Does
 
-1. **Code graph** is built for the repository (functions, classes, modules)
-2. **Embeddings** are generated for each code symbol using an LLM embedding model
-3. When you ask a question, your query is **embedded** and compared against code embeddings
-4. The **top 8 most relevant** code snippets are retrieved
-5. These snippets are sent as context to the LLM along with your question
-6. The LLM generates a response **grounded in your actual code**
+AI Chat uses Retrieval-Augmented Generation (RAG) to answer questions about your code. Instead of relying solely on the LLM's training data, it retrieves relevant code from your actual repository and uses it as context for generating answers.
 
-## Getting Started
-
-### 1. Select a Repository
-
-Navigate to **AI Chat** in the sidebar. You'll see a grid of repository cards. Click one to open the chat interface.
-
-### 2. Build Embeddings
-
-Before chatting, you need to build embeddings for the repository:
-
-1. Click **Build Embeddings**
-2. Wait for the process to complete — a progress bar shows `X/Y chunks`
-3. Once the status shows **Embeddings ready**, the chat input is enabled
-
-::: info
-Embedding builds require:
-
-- A code graph already built for the repository (via the Graph feature)
-- A configured embedding model (`LITELLM_EMBED_MODEL`)
-
-The default model is `text-embedding-3-small`.
-:::
-
-### 3. Ask Questions
-
-Type your question in the input area and press Enter (or click Send). Examples:
+This means you can ask questions like:
 
 - "How does authentication work in this codebase?"
 - "What functions handle database connections?"
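The removed Build Embeddings steps mention a progress bar that reads `X/Y chunks`. As a minimal sketch of that kind of chunk-by-chunk build loop (all names here are hypothetical, and `embed_fn` is a stand-in for a real embedding call):

```python
def build_embeddings(chunks, embed_fn, report):
    # Embed chunk-by-chunk, reporting "X/Y chunks" style progress after each one.
    vectors = []
    total = len(chunks)
    for i, chunk in enumerate(chunks, start=1):
        vectors.append(embed_fn(chunk))
        report(f"{i}/{total} chunks")
    return vectors

# Toy usage: len() stands in for the embedding model, a list collects progress.
progress = []
vecs = build_embeddings(["fn a() {}", "fn b() {}"], embed_fn=len, report=progress.append)
```

In the real dashboard the report callback would drive the progress bar; once every chunk is embedded, the status flips to **Embeddings ready**.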
@@ -43,37 +14,42 @@ Type your question in the input area and press Enter (or click Send). Examples:
 - "Where are the API routes defined?"
 - "What does the `process_scan` function do?"
 
-## Understanding Responses
+## How RAG Works
 
-### Answer
+In simple terms:
 
-The AI response is a natural language answer to your question, grounded in the actual source code of your repository.
+1. Your codebase is parsed into functions, classes, and modules during graph building
+2. Each code symbol is converted into a numerical representation (an embedding) that captures its meaning
+3. When you ask a question, your question is also converted into an embedding
+4. The system finds the code snippets whose embeddings are most similar to your question
+5. Those snippets are sent to the LLM along with your question as context
+6. The LLM generates an answer grounded in your actual code, not generic knowledge
 
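The retrieval steps described in this section can be sketched in code. The following is a toy illustration of embedding-based top-k retrieval, not the project's implementation: `embed` is a stand-in bag-of-characters vectorizer (a real deployment calls the configured embedding model), and the snippet list is invented. The `k = 8` default mirrors the "top 8 most relevant" figure from the old text.

```python
import math

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model (e.g. text-embedding-3-small).
    # Here: a toy bag-of-characters vector so the example is runnable.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, snippets: list[str], k: int = 8) -> list[str]:
    # Rank every code snippet by similarity to the question, keep the top k.
    q = embed(question)
    return sorted(snippets, key=lambda s: cosine(q, embed(s)), reverse=True)[:k]

# Hypothetical snippets; in the real system these come from the code graph.
snippets = [
    "fn validate_jwt(token: &str) -> Result<Claims, AuthError>",
    "fn connect_db(url: &str) -> Pool",
    "fn render_sidebar(items: &[NavItem]) -> Html",
]
top = retrieve("how does jwt auth validation work", snippets, k=2)
```

The retrieved snippets are then passed to the LLM as context alongside the question, which is what keeps the answer grounded in the repository rather than in generic training data.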
-### Source References
+## Getting Started
 
-Below each response, you'll see source references showing exactly which code was used to generate the answer:
+1. Navigate to **AI Chat** in the sidebar
+2. Select a repository from the grid of cards
+3. If embeddings have not been built yet, click **Build Embeddings** and wait for the process to complete
+4. Once the status shows **Embeddings ready**, type your question and press Enter
 
-- **Symbol name** — The qualified name of the function/class/module
-- **File path** — Where the code is located, with line range
-- **Code snippet** — The first ~10 lines of the relevant code
-- **Relevance score** — How closely the code matched your question (0.0 to 1.0)
+::: tip
+Rebuild embeddings after significant code changes to ensure the AI has access to the latest version of your codebase.
+:::
 
-## Conversation Context
+## Source References
 
-The chat maintains conversation history within a session. You can ask follow-up questions that reference previous answers. The system sends the last 10 messages as context to maintain coherence.
+Below each AI response, you will see source references showing exactly which code was used to generate the answer:
 
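The removed Conversation Context paragraph states that the system resends the last 10 messages as context for follow-up questions. A minimal sketch of that sliding window (the function name and message shape are hypothetical):

```python
MAX_CONTEXT_MESSAGES = 10  # per the docs: the last 10 messages are resent

def build_context(history: list[dict]) -> list[dict]:
    # Keep only the most recent messages so follow-ups stay coherent
    # without the prompt growing without bound.
    return history[-MAX_CONTEXT_MESSAGES:]

# Toy session with 25 prior messages; only the newest 10 are sent.
history = [{"role": "user", "content": f"q{i}"} for i in range(25)]
context = build_context(history)
```

A fixed window like this trades long-range memory for a bounded prompt size, which is why very old turns eventually stop influencing answers.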
-## Configuration
+- **Symbol name** -- the qualified name of the function, class, or module
+- **File path** -- where the code is located, with line range
+- **Code snippet** -- the first several lines of the relevant code
+- **Relevance score** -- how closely the code matched your question (0.0 to 1.0)
 
-| Variable | Description | Default |
-|----------|-------------|---------|
-| `LITELLM_URL` | LiteLLM proxy URL | `http://localhost:4000` |
-| `LITELLM_API_KEY` | API key for the LLM provider | — |
-| `LITELLM_MODEL` | Model for chat responses | `gpt-4o` |
-| `LITELLM_EMBED_MODEL` | Model for code embeddings | `text-embedding-3-small` |
+Source references let you verify the AI's answer against the actual code and navigate directly to the relevant files.
 
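For reference, the defaults in the removed Configuration table could be consumed like this. The `Settings` class is purely illustrative and not code from this repository; only the variable names and defaults come from the table:

```python
import os

class Settings:
    # Defaults taken from the Configuration table; LITELLM_API_KEY has no default.
    def __init__(self, env=None):
        env = os.environ if env is None else env
        self.litellm_url = env.get("LITELLM_URL", "http://localhost:4000")
        self.litellm_api_key = env.get("LITELLM_API_KEY")
        self.litellm_model = env.get("LITELLM_MODEL", "gpt-4o")
        self.litellm_embed_model = env.get("LITELLM_EMBED_MODEL", "text-embedding-3-small")

# With an empty environment, every value falls back to its documented default.
settings = Settings(env={})
```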
-## Tips
+## Tips for Better Results
 
-- **Be specific** — "How does the JWT validation middleware work?" is better than "Tell me about auth"
-- **Reference filenames** — "What does `server.rs` do?" helps the retrieval find relevant code
-- **Ask about patterns** — "What error handling pattern does this project use?" works well with RAG
-- **Rebuild after changes** — If the repository has been updated significantly, rebuild embeddings to include new code
+- **Be specific** -- "How does the JWT validation middleware work?" is better than "Tell me about auth"
+- **Reference filenames** -- "What does `server.rs` do?" helps the retrieval find relevant code
+- **Ask about patterns** -- "What error handling pattern does this project use?" works well with RAG
+- **Use follow-ups** -- the chat maintains conversation history within a session, so you can ask follow-up questions