# Help Chat Assistant
The Help Chat is a floating assistant available on every page of the dashboard. It answers questions about the Compliance Scanner using the project documentation as its knowledge base.
## How It Works
- Click the ? button in the bottom-right corner of any page
- Type your question and press Enter
- The assistant responds with answers grounded in the project documentation
The chat supports multi-turn conversations: you can ask follow-up questions, and the assistant remembers the context of the exchange.
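As a minimal sketch of how a client could carry that context, the snippet below keeps a running history list and sends it with each new message. The helper names (`build_request`, `record_turn`) are hypothetical; the field names match the request shape documented under Technical Details.

```python
# Hypothetical client-side sketch: keep a running history list and send it
# with each message so the assistant sees the whole conversation.
# "message"/"history"/"role"/"content" match the documented request body.

def build_request(message, history):
    """Build the JSON body for one help-chat turn."""
    return {"message": message, "history": list(history)}

def record_turn(history, question, answer):
    """Append a completed user/assistant exchange to the history."""
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": answer})
    return history

history = []
first = build_request("How do I add a repository?", history)
record_turn(history, "How do I add a repository?", "Open the Repositories page.")
followup = build_request("Can I trigger a scan for it?", history)
```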
## What You Can Ask
- Getting started: "How do I add a repository?" / "How do I trigger a scan?"
- Features: "What is SBOM?" / "How does the code knowledge graph work?"
- Configuration: "How do I set up webhooks?" / "What environment variables are needed?"
- Scanning: "What does the scan pipeline do?" / "How does LLM triage work?"
- DAST & Pentesting: "How do I run a pentest?" / "What DAST tools are available?"
- Integrations: "How do I connect to GitHub?" / "What is MCP?"
## Technical Details
The help chat loads all project documentation (README, guides, feature docs, reference) at startup and caches them in memory. When you ask a question, it sends your message along with the full documentation context to the LLM via LiteLLM, which generates a grounded response.
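The flow above can be sketched in a few lines. This is an illustrative approximation, not the actual implementation: `load_docs` and `build_messages` are hypothetical names, and the system-prompt wording is an assumption.

```python
# Hypothetical sketch of the "docs as context" flow: read documentation once
# at startup, cache it in memory, and prepend it to every chat request.
from pathlib import Path

def load_docs(roots):
    """Read every markdown file under the given roots into an in-memory cache."""
    cache = {}
    for root in roots:
        for path in Path(root).rglob("*.md"):
            cache[str(path)] = path.read_text(encoding="utf-8")
    return cache

def build_messages(cache, question, history):
    """Prepend the full documentation context, then the conversation so far."""
    context = "\n\n".join(f"# {name}\n{text}" for name, text in cache.items())
    system = "Answer using only this documentation:\n\n" + context
    return [{"role": "system", "content": system},
            *history,
            {"role": "user", "content": question}]
```

The resulting messages list is what would be sent to LiteLLM's OpenAI-compatible chat completions endpoint to generate the grounded response.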
## API Endpoint
```
POST /api/v1/help/chat
Content-Type: application/json

{
  "message": "How do I add a repository?",
  "history": [
    { "role": "user", "content": "previous question" },
    { "role": "assistant", "content": "previous answer" }
  ]
}
```
## Configuration
The help chat uses the same LiteLLM configuration as other LLM features:
| Environment Variable | Description | Default |
|---|---|---|
| `LITELLM_URL` | LiteLLM API base URL | `http://localhost:4000` |
| `LITELLM_MODEL` | Model for chat responses | `gpt-4o` |
| `LITELLM_API_KEY` | API key (optional) | -- |
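As a sketch, the same variables can be exported in the server environment; the values shown are the documented defaults:

```shell
export LITELLM_URL="http://localhost:4000"
export LITELLM_MODEL="gpt-4o"
export LITELLM_API_KEY=""   # optional; leave unset if LiteLLM needs no key
```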
## Documentation Sources
The assistant indexes the following documentation at startup:
- `README.md` -- Project overview and quick start
- `docs/guide/` -- Getting started, repositories, findings, SBOM, scanning, issues, webhooks
- `docs/features/` -- AI Chat, DAST, Code Graph, MCP Server, Pentesting, Help Chat
- `docs/reference/` -- Glossary, tools reference
If documentation files are not found at startup (e.g., in a minimal Docker deployment), the assistant falls back to general knowledge about the project.
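The fallback can be sketched as a simple branch on the documentation cache. This is an assumed shape, not the real code; `FALLBACK_PROMPT` and `system_prompt` are illustrative names.

```python
# Hypothetical sketch of the documented fallback: if no documentation files
# were found at startup, answer from general knowledge instead.
FALLBACK_PROMPT = ("You are the Compliance Scanner help assistant. "
                   "Answer from general knowledge of the project.")

def system_prompt(cache):
    """Pick the system prompt based on whether any docs were cached."""
    if not cache:  # e.g. a minimal Docker image shipped without docs/
        return FALLBACK_PROMPT
    return "Answer using only this documentation:\n\n" + "\n\n".join(cache.values())
```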