# Help Chat Assistant
The Help Chat is a floating assistant available on every page of the dashboard. It answers questions about the Compliance Scanner using the project documentation as its knowledge base.
## How It Works
1. Click the **?** button in the bottom-right corner of any page
2. Type your question and press Enter
3. The assistant responds with answers grounded in the project documentation
The chat supports multi-turn conversations -- you can ask follow-up questions and the assistant will remember the context of your conversation.
## What You Can Ask
- **Getting started**: "How do I add a repository?" / "How do I trigger a scan?"
- **Features**: "What is SBOM?" / "How does the code knowledge graph work?"
- **Configuration**: "How do I set up webhooks?" / "What environment variables are needed?"
- **Scanning**: "What does the scan pipeline do?" / "How does LLM triage work?"
- **DAST & Pentesting**: "How do I run a pentest?" / "What DAST tools are available?"
- **Integrations**: "How do I connect to GitHub?" / "What is MCP?"
## Technical Details
The help chat loads all project documentation (README, guides, feature docs, reference) at startup and caches them in memory. When you ask a question, it sends your message along with the full documentation context to the LLM via LiteLLM, which generates a grounded response.
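That flow can be sketched as a message-assembly step before the LiteLLM call. The function name `build_messages`, the system-prompt wording, and the `docs_context` string are illustrative assumptions, not the actual implementation:

```python
def build_messages(docs_context, history, user_message):
    """Assemble the chat messages sent to the LLM via LiteLLM.

    `docs_context` is the cached project documentation, `history` holds the
    prior turns of the conversation, and `user_message` is the new question.
    """
    system_prompt = (
        "You are the Help Chat assistant for the Compliance Scanner. "
        "Answer using only the documentation below.\n\n" + docs_context
    )
    # System prompt first, then the prior turns, then the new question.
    return (
        [{"role": "system", "content": system_prompt}]
        + list(history)
        + [{"role": "user", "content": user_message}]
    )

messages = build_messages(
    docs_context="# README\nProject overview and quick start...",
    history=[
        {"role": "user", "content": "previous question"},
        {"role": "assistant", "content": "previous answer"},
    ],
    user_message="How do I add a repository?",
)
```

Sending the full documentation in the system prompt (rather than retrieving snippets per question) is what keeps the responses grounded across the whole conversation.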
### API Endpoint
```
POST /api/v1/help/chat
Content-Type: application/json

{
  "message": "How do I add a repository?",
  "history": [
    { "role": "user", "content": "previous question" },
    { "role": "assistant", "content": "previous answer" }
  ]
}
```
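A minimal Python client for this endpoint might look as follows. The base URL `http://localhost:8000` is an assumption (check your deployment), and since the response schema is not documented here, the body is returned as raw text:

```python
import json
from urllib import request

def build_payload(message, history=None):
    """Serialize the request body in the shape the endpoint expects."""
    return json.dumps({"message": message, "history": history or []})

def ask_help_chat(message, history=None, base_url="http://localhost:8000"):
    """POST a question to /api/v1/help/chat and return the raw response body."""
    req = request.Request(
        base_url + "/api/v1/help/chat",
        data=build_payload(message, history).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

For multi-turn conversations, append each question/answer pair to `history` before the next call.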
### Configuration
The help chat uses the same LiteLLM configuration as other LLM features:
| Environment Variable | Description | Default |
|----------------------|-------------|---------|
| `LITELLM_URL` | LiteLLM API base URL | `http://localhost:4000` |
| `LITELLM_MODEL` | Model for chat responses | `gpt-4o` |
| `LITELLM_API_KEY` | API key (optional) | -- |
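In Python, these variables might be read with the documented defaults applied. This is a sketch; the actual configuration loader may differ:

```python
import os

def litellm_config():
    """Read the LiteLLM settings, falling back to the documented defaults."""
    return {
        "url": os.getenv("LITELLM_URL", "http://localhost:4000"),
        "model": os.getenv("LITELLM_MODEL", "gpt-4o"),
        # The API key is optional and has no default.
        "api_key": os.getenv("LITELLM_API_KEY"),
    }
```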
### Documentation Sources
The assistant indexes the following documentation at startup:
- `README.md` -- Project overview and quick start
- `docs/guide/` -- Getting started, repositories, findings, SBOM, scanning, issues, webhooks
- `docs/features/` -- AI Chat, DAST, Code Graph, MCP Server, Pentesting, Help Chat
- `docs/reference/` -- Glossary, tools reference
If documentation files are not found at startup (e.g., in a minimal Docker deployment), the assistant falls back to general knowledge about the project.
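The startup loading and fallback behaviour can be sketched like this. The function name and return shape are illustrative, not the real implementation:

```python
from pathlib import Path

# Documentation sources indexed at startup.
DOC_SOURCES = ["README.md", "docs/guide", "docs/features", "docs/reference"]

def load_documentation(root="."):
    """Read every documented source into memory.

    Returns a mapping of file path -> contents. The mapping is empty when
    no documentation is present (e.g. a minimal Docker deployment), in
    which case the caller falls back to general project knowledge.
    """
    docs = {}
    for source in DOC_SOURCES:
        path = Path(root) / source
        if path.is_file():
            docs[source] = path.read_text(encoding="utf-8")
        elif path.is_dir():
            for md in sorted(path.rglob("*.md")):
                docs[str(md)] = md.read_text(encoding="utf-8")
    return docs
```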