16 Commits

Author SHA1 Message Date
Sharang Parnerkar
be019b5d4c refactor: remove login landing page, redirect to Keycloak directly
All checks were successful
Unauthenticated users are now redirected straight to /auth (Keycloak)
instead of seeing a custom landing page. Removed LoginPage component
and all associated CSS.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 09:02:22 +01:00
Sharang Parnerkar
1bf25525d8 docs: add MCP server documentation
All checks were successful
New feature page covering architecture, available tools, local/HTTP
usage, Docker deployment, Coolify setup, dashboard management, and
example queries. Updated environment variable reference and
configuration guide with MCP_PORT. Added sidebar nav entry.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 22:20:53 +01:00
Sharang Parnerkar
e4495e405d feat: add MCP servers dashboard page with CRUD and token management
Some checks failed
New page at /mcp-servers to register, view, and manage MCP server
instances. Shows endpoint config, enabled tools, and access tokens
with reveal/regenerate controls. Includes McpServerConfig model in
compliance-core, MongoDB collection accessor, server functions for
list/add/delete/regenerate-token, sidebar nav entry, and full CSS.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 22:16:26 +01:00
Sharang Parnerkar
abd6f65d55 feat: add MCP server for exposing compliance data to LLMs
All checks were successful
New `compliance-mcp` crate providing a Model Context Protocol server
with 7 tools: list/get/summarize findings, list SBOM packages, SBOM
vulnerability report, list DAST findings, and DAST scan summary.
Supports stdio (local dev) and Streamable HTTP (deployment via MCP_PORT).
Includes Dockerfile, CI clippy check, and Coolify deploy job.
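The transport split described above (stdio for local development, Streamable HTTP when MCP_PORT is set) can be sketched as a small selection function. This is an illustrative pure-std sketch, not the actual `compliance-mcp` code; the `Transport` type and `select_transport` name are assumptions, and the real binary wires the choice into the rmcp server.

```rust
// Transport selection mirroring the commit: stdio for local dev,
// Streamable HTTP when MCP_PORT is set. Names here are illustrative.
#[derive(Debug, PartialEq)]
enum Transport {
    Stdio,
    Http(u16),
}

fn select_transport(mcp_port: Option<&str>) -> Transport {
    // A present, parseable MCP_PORT selects the HTTP transport;
    // anything else falls back to stdio.
    match mcp_port.and_then(|p| p.parse::<u16>().ok()) {
        Some(port) => Transport::Http(port),
        None => Transport::Stdio,
    }
}

fn main() {
    // With MCP_PORT=8090 (the Dockerfile default below) we get HTTP.
    assert_eq!(select_transport(Some("8090")), Transport::Http(8090));
    // Without it, fall back to stdio for local development.
    assert_eq!(select_transport(None), Transport::Stdio);
}
```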

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 21:30:59 +01:00
Sharang Parnerkar
d13cef94cb Add Coolify deploy jobs with path-based change detection
All checks were successful
Deploys agent, dashboard, and docs independently based on which
files changed. Only triggers on main after tests pass.
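The gating logic amounts to: a service redeploys only if some changed file falls under one of its watched path prefixes. A minimal pure-std sketch of that idea (the workflow itself uses anchored `grep -qE` alternations, which also cover patterns like `Cargo\.(toml|lock)`; plain prefix matching here is a simplification, and the function names are assumptions):

```rust
// A service redeploys only when at least one changed file starts
// with one of its watched prefixes (simplified from the workflow's
// anchored regex alternations).
fn should_deploy(changed: &[&str], prefixes: &[&str]) -> bool {
    changed
        .iter()
        .any(|file| prefixes.iter().any(|p| file.starts_with(p)))
}

fn main() {
    let docs_prefixes = ["docs/", "Dockerfile.docs"];
    // A docs edit triggers the docs deploy...
    assert!(should_deploy(&["docs/guide/index.md"], &docs_prefixes));
    // ...but an agent-only change does not.
    assert!(!should_deploy(&["compliance-agent/src/main.rs"], &docs_prefixes));
}
```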

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 19:22:56 +01:00
Sharang Parnerkar
3a01a28591 Redesign sidebar user section to fix overlap issue
All checks were successful
Restructured layout: avatar, truncated username, and logout icon
in a single row. Collapsed state stacks vertically. Logout button
uses a subtle icon-only style with red hover. Proper text ellipsis
prevents name overflow.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 18:24:58 +01:00
Sharang Parnerkar
d490359591 Add polished login landing page with feature highlights
All checks were successful
Dark-themed login page with shield logo, feature grid, gradient
sign-in button, subtle grid background, and glow effect.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 17:51:41 +01:00
Sharang Parnerkar
b95ce44fb9 Bind dashboard to 0.0.0.0 for container accessibility
All checks were successful
Dioxus defaults to 127.0.0.1, which is unreachable from outside the
container. Hardcode the 0.0.0.0 binding so reverse proxies can reach it.
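The binding logic amounts to defaulting to 0.0.0.0 unless an IP is supplied, so the server listens on all interfaces and is reachable through the container boundary. A pure-std sketch under those assumptions (the actual dashboard wires the address into Dioxus; `bind_addr` is an illustrative name, and the `ip` parameter stands in for the `IP` environment variable):

```rust
use std::net::SocketAddr;

// Default to 0.0.0.0 so the server is reachable from outside the
// container; an explicit IP (e.g. from an `IP` env var) overrides it.
fn bind_addr(ip: Option<String>, port: u16) -> SocketAddr {
    let ip = ip.unwrap_or_else(|| "0.0.0.0".to_string());
    format!("{ip}:{port}").parse().expect("valid socket address")
}

fn main() {
    // No override: bind all interfaces.
    assert_eq!(bind_addr(None, 8080).to_string(), "0.0.0.0:8080");
    // Explicit loopback override, e.g. for local-only runs.
    assert_eq!(
        bind_addr(Some("127.0.0.1".to_string()), 8080).to_string(),
        "127.0.0.1:8080"
    );
}
```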

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 17:39:25 +01:00
Sharang Parnerkar
175d303dc4 Set IP=0.0.0.0 in dashboard Dockerfile for container networking
All checks were successful
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 17:29:32 +01:00
Sharang Parnerkar
5a4af292fc Fix OTLP HTTP exporter: use reqwest-rustls for HTTPS support
All checks were successful
The reqwest-client feature doesn't include TLS support, causing a
NoHttpClient error when connecting to HTTPS endpoints.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 17:07:41 +01:00
Sharang Parnerkar
04c8084943 Switch OTLP exporter from gRPC/tonic to HTTP/reqwest
Some checks failed
gRPC requires special reverse-proxy configuration for HTTP/2, while HTTP
works behind standard HTTPS proxies such as Traefik or Caddy on port 4318.
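With OTLP over HTTP, each signal is posted to a fixed path on the collector (traces go to `/v1/traces` on port 4318 per the OTLP spec), so the exporter only needs a base endpoint URL. A small sketch of deriving the traces URL from a base such as `OTEL_EXPORTER_OTLP_ENDPOINT` (`traces_url` is an illustrative name, not part of the exporter's API):

```rust
// Derive the OTLP/HTTP traces URL from the collector base endpoint.
// The /v1/traces suffix is the fixed path defined by the OTLP spec.
fn traces_url(base: &str) -> String {
    format!("{}/v1/traces", base.trim_end_matches('/'))
}

fn main() {
    assert_eq!(
        traces_url("http://localhost:4318"),
        "http://localhost:4318/v1/traces"
    );
    // A trailing slash on the base is tolerated.
    assert_eq!(
        traces_url("https://otel.example.com/"),
        "https://otel.example.com/v1/traces"
    );
}
```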

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 17:01:02 +01:00
Sharang Parnerkar
d67a51db18 Add nginx config for VitePress SPA routing
All checks were successful
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 15:09:42 +01:00
7e12d1433a docs: added vite-press docs (#4)
All checks were successful
Co-authored-by: Sharang Parnerkar <parnerkarsharang@gmail.com>
Reviewed-on: #4
2026-03-08 13:59:50 +00:00
65abc55915 feat: opentelemetry-tracing (#3)
All checks were successful
Co-authored-by: Sharang Parnerkar <parnerkarsharang@gmail.com>
Reviewed-on: #3
2026-03-07 23:51:20 +00:00
0cb06d3d6d feat: add Keycloak authentication for dashboard and API endpoints (#2)
Some checks failed
Dashboard: OAuth2/OIDC login flow with PKCE, session-based auth middleware
protecting all server function endpoints, check-auth server function for
frontend auth state, login page gate in AppShell, user info in sidebar.

Agent API: JWT validation middleware using Keycloak JWKS endpoint,
conditionally enabled when KEYCLOAK_URL and KEYCLOAK_REALM are set.
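The "conditionally enabled" gate above boils down to: install the JWT middleware only when both Keycloak variables are present and non-empty. A pure-std sketch of that check (the function name is illustrative; in the agent the two values would come from `KEYCLOAK_URL` and `KEYCLOAK_REALM`):

```rust
// JWT validation middleware is only installed when both Keycloak
// settings are present and non-empty; otherwise the API runs open.
fn auth_enabled(keycloak_url: Option<&str>, keycloak_realm: Option<&str>) -> bool {
    matches!(
        (keycloak_url, keycloak_realm),
        (Some(url), Some(realm)) if !url.is_empty() && !realm.is_empty()
    )
}

fn main() {
    // Both set: middleware active.
    assert!(auth_enabled(Some("http://localhost:8080"), Some("compliance")));
    // Realm missing: middleware skipped.
    assert!(!auth_enabled(Some("http://localhost:8080"), None));
    // Empty values also count as unset.
    assert!(!auth_enabled(Some(""), Some("compliance")));
}
```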

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Co-authored-by: Sharang Parnerkar <parnerkarsharang@gmail.com>
Reviewed-on: #2
2026-03-07 23:50:56 +00:00
42cabf0582 feat: rag-embedding-ai-chat (#1)
All checks were successful
Co-authored-by: Sharang Parnerkar <parnerkarsharang@gmail.com>
Reviewed-on: #1
2026-03-06 21:54:15 +00:00
125 changed files with 11151 additions and 339 deletions


@@ -37,3 +37,14 @@ GIT_CLONE_BASE_PATH=/tmp/compliance-scanner/repos
# Dashboard
DASHBOARD_PORT=8080
AGENT_API_URL=http://localhost:3001
# Keycloak (required for authentication)
KEYCLOAK_URL=http://localhost:8080
KEYCLOAK_REALM=compliance
KEYCLOAK_CLIENT_ID=compliance-dashboard
REDIRECT_URI=http://localhost:8080/auth/callback
APP_URL=http://localhost:8080
# OpenTelemetry (optional - omit to disable)
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
# OTEL_SERVICE_NAME=compliance-agent


@@ -70,6 +70,8 @@ jobs:
        run: cargo clippy -p compliance-dashboard --features server --no-default-features -- -D warnings
      - name: Clippy (dashboard web)
        run: cargo clippy -p compliance-dashboard --features web --no-default-features -- -D warnings
      - name: Clippy (mcp)
        run: cargo clippy -p compliance-mcp -- -D warnings
      - name: Show sccache stats
        run: sccache --show-stats
        if: always()
@@ -124,3 +126,119 @@ jobs:
      - name: Show sccache stats
        run: sccache --show-stats
        if: always()

  # ---------------------------------------------------------------------------
  # Stage 3: Deploy (only on main, after tests pass)
  # Each service only deploys when its relevant files changed.
  # ---------------------------------------------------------------------------
  detect-changes:
    name: Detect Changes
    runs-on: docker
    if: github.ref == 'refs/heads/main'
    needs: [test]
    container:
      image: alpine:latest
    outputs:
      agent: ${{ steps.changes.outputs.agent }}
      dashboard: ${{ steps.changes.outputs.dashboard }}
      docs: ${{ steps.changes.outputs.docs }}
      mcp: ${{ steps.changes.outputs.mcp }}
    steps:
      - name: Install git
        run: apk add --no-cache git
      - name: Checkout
        run: |
          git init
          git remote add origin "${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git"
          git fetch --depth=2 origin "${GITHUB_SHA}"
          git checkout FETCH_HEAD
      - name: Detect changed paths
        id: changes
        run: |
          CHANGED=$(git diff --name-only HEAD~1 HEAD 2>/dev/null || echo "")
          echo "Changed files:"
          echo "$CHANGED"
          # Agent: core libs, agent code, agent Dockerfile
          if echo "$CHANGED" | grep -qE '^(compliance-core/|compliance-agent/|compliance-graph/|compliance-dast/|Dockerfile\.agent|Cargo\.(toml|lock))'; then
            echo "agent=true" >> "$GITHUB_OUTPUT"
          else
            echo "agent=false" >> "$GITHUB_OUTPUT"
          fi
          # Dashboard: core libs, dashboard code, dashboard Dockerfile, assets
          if echo "$CHANGED" | grep -qE '^(compliance-core/|compliance-dashboard/|Dockerfile\.dashboard|Dioxus\.toml|assets/|bin/|Cargo\.(toml|lock))'; then
            echo "dashboard=true" >> "$GITHUB_OUTPUT"
          else
            echo "dashboard=false" >> "$GITHUB_OUTPUT"
          fi
          # Docs: docs folder, docs Dockerfile
          if echo "$CHANGED" | grep -qE '^(docs/|Dockerfile\.docs)'; then
            echo "docs=true" >> "$GITHUB_OUTPUT"
          else
            echo "docs=false" >> "$GITHUB_OUTPUT"
          fi
          # MCP: core libs, mcp code, mcp Dockerfile
          if echo "$CHANGED" | grep -qE '^(compliance-core/|compliance-mcp/|Dockerfile\.mcp|Cargo\.(toml|lock))'; then
            echo "mcp=true" >> "$GITHUB_OUTPUT"
          else
            echo "mcp=false" >> "$GITHUB_OUTPUT"
          fi

  deploy-agent:
    name: Deploy Agent
    runs-on: docker
    needs: [detect-changes]
    if: needs.detect-changes.outputs.agent == 'true'
    container:
      image: alpine:latest
    steps:
      - name: Trigger Coolify deploy
        run: |
          apk add --no-cache curl
          curl -sf "${{ secrets.COOLIFY_WEBHOOK_AGENT }}" \
            -H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"

  deploy-dashboard:
    name: Deploy Dashboard
    runs-on: docker
    needs: [detect-changes]
    if: needs.detect-changes.outputs.dashboard == 'true'
    container:
      image: alpine:latest
    steps:
      - name: Trigger Coolify deploy
        run: |
          apk add --no-cache curl
          curl -sf "${{ secrets.COOLIFY_WEBHOOK_DASHBOARD }}" \
            -H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"

  deploy-docs:
    name: Deploy Docs
    runs-on: docker
    needs: [detect-changes]
    if: needs.detect-changes.outputs.docs == 'true'
    container:
      image: alpine:latest
    steps:
      - name: Trigger Coolify deploy
        run: |
          apk add --no-cache curl
          curl -sf "${{ secrets.COOLIFY_WEBHOOK_DOCS }}" \
            -H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"

  deploy-mcp:
    name: Deploy MCP
    runs-on: docker
    needs: [detect-changes]
    if: needs.detect-changes.outputs.mcp == 'true'
    container:
      image: alpine:latest
    steps:
      - name: Trigger Coolify deploy
        run: |
          apk add --no-cache curl
          curl -sf "${{ secrets.COOLIFY_WEBHOOK_MCP }}" \
            -H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"

Cargo.lock (generated)

@@ -413,6 +413,17 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
[[package]]
name = "chacha20"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f8d983286843e49675a4b7a2d174efe136dc93a18d69130dd18198a6c167601"
dependencies = [
"cfg-if",
"cpufeatures 0.3.0",
"rand_core 0.10.0",
]
[[package]]
name = "charset"
version = "0.1.5"
@@ -555,6 +566,7 @@ dependencies = [
"git2",
"hex",
"hmac",
"jsonwebtoken",
"mongodb",
"octocrab",
"regex",
@@ -582,11 +594,18 @@ dependencies = [
"chrono",
"hex",
"mongodb",
"opentelemetry",
"opentelemetry-appender-tracing",
"opentelemetry-otlp",
"opentelemetry_sdk",
"secrecy",
"serde",
"serde_json",
"sha2",
"thiserror 2.0.18",
"tracing",
"tracing-opentelemetry",
"tracing-subscriber",
"uuid",
]
@@ -595,6 +614,8 @@ name = "compliance-dashboard"
version = "0.1.0"
dependencies = [
"axum",
"base64",
"bson",
"chrono",
"compliance-core",
"dioxus",
@@ -605,14 +626,20 @@ dependencies = [
"dotenvy",
"gloo-timers",
"mongodb",
"rand 0.9.2",
"reqwest",
"secrecy",
"serde",
"serde_json",
"sha2",
"thiserror 2.0.18",
"time",
"tokio",
"tower-http",
"tower-sessions",
"tracing",
"url",
"uuid",
"web-sys",
]
@@ -661,6 +688,27 @@ dependencies = [
"uuid",
]
[[package]]
name = "compliance-mcp"
version = "0.1.0"
dependencies = [
"axum",
"bson",
"chrono",
"compliance-core",
"dotenvy",
"mongodb",
"rmcp",
"schemars 1.2.1",
"serde",
"serde_json",
"thiserror 2.0.18",
"tokio",
"tower-http",
"tracing",
"tracing-subscriber",
]
[[package]]
name = "console_error_panic_hook"
version = "0.1.7"
@@ -792,7 +840,12 @@ version = "0.18.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ddef33a339a91ea89fb53151bd0a4689cfce27055c291dfa69945475d22c747"
dependencies = [
"base64",
"hmac",
"percent-encoding",
"rand 0.8.5",
"sha2",
"subtle",
"time",
"version_check",
]
@@ -850,6 +903,15 @@ dependencies = [
"libc",
]
[[package]]
name = "cpufeatures"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b2a41393f66f16b0823bb79094d54ac5fbd34ab292ddafb9a0456ac9f87d201"
dependencies = [
"libc",
]
[[package]]
name = "crc32fast"
version = "1.5.0"
@@ -953,8 +1015,18 @@ version = "0.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9cdf337090841a411e2a7f3deb9187445851f91b309c0c0a29e05f74a00a48c0"
dependencies = [
"darling_core",
"darling_macro",
"darling_core 0.21.3",
"darling_macro 0.21.3",
]
[[package]]
name = "darling"
version = "0.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "25ae13da2f202d56bd7f91c25fba009e7717a1e4a1cc98a76d844b65ae912e9d"
dependencies = [
"darling_core 0.23.0",
"darling_macro 0.23.0",
]
[[package]]
@@ -971,13 +1043,37 @@ dependencies = [
"syn",
]
[[package]]
name = "darling_core"
version = "0.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9865a50f7c335f53564bb694ef660825eb8610e0a53d3e11bf1b0d3df31e03b0"
dependencies = [
"ident_case",
"proc-macro2",
"quote",
"strsim",
"syn",
]
[[package]]
name = "darling_macro"
version = "0.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d38308df82d1080de0afee5d069fa14b0326a88c14f15c5ccda35b4a6c414c81"
dependencies = [
"darling_core",
"darling_core 0.21.3",
"quote",
"syn",
]
[[package]]
name = "darling_macro"
version = "0.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3984ec7bd6cfa798e62b4a642426a5be0e68f9401cfc2a01e3fa9ea2fcdb8d"
dependencies = [
"darling_core 0.23.0",
"quote",
"syn",
]
@@ -1808,7 +1904,7 @@ version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f43e744e4ea338060faee68ed933e46e722fb7f3617e722a5772d7e856d8b3ce"
dependencies = [
"darling",
"darling 0.21.3",
"proc-macro2",
"quote",
"syn",
@@ -2085,6 +2181,7 @@ dependencies = [
"cfg-if",
"libc",
"r-efi",
"rand_core 0.10.0",
"wasip2",
"wasip3",
]
@@ -2104,6 +2201,12 @@ dependencies = [
"url",
]
[[package]]
name = "glob"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0cc23270f6e1808e30a928bdc84dea0b9b4136a8bc82338574f23baf47bbd280"
[[package]]
name = "gloo-net"
version = "0.6.0"
@@ -3519,6 +3622,96 @@ dependencies = [
"vcpkg",
]
[[package]]
name = "opentelemetry"
version = "0.29.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9e87237e2775f74896f9ad219d26a2081751187eb7c9f5c58dde20a23b95d16c"
dependencies = [
"futures-core",
"futures-sink",
"js-sys",
"pin-project-lite",
"thiserror 2.0.18",
"tracing",
]
[[package]]
name = "opentelemetry-appender-tracing"
version = "0.29.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e716f864eb23007bdd9dc4aec381e188a1cee28eecf22066772b5fd822b9727d"
dependencies = [
"opentelemetry",
"tracing",
"tracing-core",
"tracing-subscriber",
]
[[package]]
name = "opentelemetry-http"
version = "0.29.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "46d7ab32b827b5b495bd90fa95a6cb65ccc293555dcc3199ae2937d2d237c8ed"
dependencies = [
"async-trait",
"bytes",
"http",
"opentelemetry",
"reqwest",
"tracing",
]
[[package]]
name = "opentelemetry-otlp"
version = "0.29.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d899720fe06916ccba71c01d04ecd77312734e2de3467fd30d9d580c8ce85656"
dependencies = [
"futures-core",
"http",
"opentelemetry",
"opentelemetry-http",
"opentelemetry-proto",
"opentelemetry_sdk",
"prost",
"reqwest",
"thiserror 2.0.18",
"tracing",
]
[[package]]
name = "opentelemetry-proto"
version = "0.29.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8c40da242381435e18570d5b9d50aca2a4f4f4d8e146231adb4e7768023309b3"
dependencies = [
"opentelemetry",
"opentelemetry_sdk",
"prost",
"tonic",
]
[[package]]
name = "opentelemetry_sdk"
version = "0.29.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "afdefb21d1d47394abc1ba6c57363ab141be19e27cc70d0e422b7f303e4d290b"
dependencies = [
"futures-channel",
"futures-executor",
"futures-util",
"glob",
"opentelemetry",
"percent-encoding",
"rand 0.9.2",
"serde_json",
"thiserror 2.0.18",
"tokio",
"tokio-stream",
"tracing",
]
[[package]]
name = "ownedbytes"
version = "0.7.0"
@@ -3551,6 +3744,12 @@ dependencies = [
"windows-link",
]
[[package]]
name = "pastey"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b867cad97c0791bbd3aaa6472142568c6c9e8f71937e98379f584cfb0cf35bec"
[[package]]
name = "pbkdf2"
version = "0.12.2"
@@ -3752,6 +3951,29 @@ dependencies = [
"version_check",
]
[[package]]
name = "prost"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2796faa41db3ec313a31f7624d9286acf277b52de526150b7e69f3debf891ee5"
dependencies = [
"bytes",
"prost-derive",
]
[[package]]
name = "prost-derive"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a56d757972c98b346a9b766e3f02746cde6dd1cd1d1d563472929fdd74bec4d"
dependencies = [
"anyhow",
"itertools",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "psl-types"
version = "2.0.11"
@@ -3865,6 +4087,17 @@ dependencies = [
"rand_core 0.9.5",
]
[[package]]
name = "rand"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc266eb313df6c5c09c1c7b1fbe2510961e5bcd3add930c1e31f7ed9da0feff8"
dependencies = [
"chacha20",
"getrandom 0.4.1",
"rand_core 0.10.0",
]
[[package]]
name = "rand_chacha"
version = "0.3.1"
@@ -3903,6 +4136,12 @@ dependencies = [
"getrandom 0.3.4",
]
[[package]]
name = "rand_core"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c8d0fd677905edcbeedbf2edb6494d676f0e98d54d5cf9bda0b061cb8fb8aba"
[[package]]
name = "rand_distr"
version = "0.4.3"
@@ -4007,6 +4246,7 @@ dependencies = [
"bytes",
"cookie",
"cookie_store",
"futures-channel",
"futures-core",
"futures-util",
"http",
@@ -4022,6 +4262,7 @@ dependencies = [
"pin-project-lite",
"quinn",
"rustls",
"rustls-native-certs",
"rustls-pki-types",
"serde",
"serde_json",
@@ -4061,6 +4302,50 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "rmcp"
version = "0.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cc4c9c94680f75470ee8083a0667988b5d7b5beb70b9f998a8e51de7c682ce60"
dependencies = [
"async-trait",
"base64",
"bytes",
"chrono",
"futures",
"http",
"http-body",
"http-body-util",
"pastey",
"pin-project-lite",
"rand 0.10.0",
"rmcp-macros",
"schemars 1.2.1",
"serde",
"serde_json",
"sse-stream",
"thiserror 2.0.18",
"tokio",
"tokio-stream",
"tokio-util",
"tower-service",
"tracing",
"uuid",
]
[[package]]
name = "rmcp-macros"
version = "0.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "90c23c8f26cae4da838fbc3eadfaecf2d549d97c04b558e7bd90526a9c28b42a"
dependencies = [
"darling 0.23.0",
"proc-macro2",
"quote",
"serde_json",
"syn",
]
[[package]]
name = "rust-stemmers"
version = "1.2.0"
@@ -4224,12 +4509,26 @@ version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2b42f36aa1cd011945615b92222f6bf73c599a102a300334cd7f8dbeec726cc"
dependencies = [
"chrono",
"dyn-clone",
"ref-cast",
"schemars_derive",
"serde",
"serde_json",
]
[[package]]
name = "schemars_derive"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7d115b50f4aaeea07e79c1912f645c7513d81715d0420f8bc77a18c6260b307f"
dependencies = [
"proc-macro2",
"quote",
"serde_derive_internals",
"syn",
]
[[package]]
name = "scopeguard"
version = "1.2.0"
@@ -4369,6 +4668,17 @@ dependencies = [
"syn",
]
[[package]]
name = "serde_derive_internals"
version = "0.29.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "18d26a20a969b9e3fdf2fc2d9f21eda6c40e2de84c9408bb5d3b05d499aae711"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "serde_json"
version = "1.0.149"
@@ -4453,7 +4763,7 @@ version = "3.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6d4e30573c8cb306ed6ab1dca8423eec9a463ea0e155f45399455e0368b27e0"
dependencies = [
"darling",
"darling 0.21.3",
"proc-macro2",
"quote",
"syn",
@@ -4475,7 +4785,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e3bf829a2d51ab4a5ddf1352d8470c140cadc8301b2ae1789db023f01cedd6ba"
dependencies = [
"cfg-if",
"cpufeatures",
"cpufeatures 0.2.17",
"digest",
]
@@ -4486,7 +4796,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a7507d819769d01a365ab707794a4084392c824f54a7a6a7862f8c3d0892b283"
dependencies = [
"cfg-if",
"cpufeatures",
"cpufeatures 0.2.17",
"digest",
]
@@ -4640,6 +4950,19 @@ version = "0.9.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6980e8d7511241f8acf4aebddbb1ff938df5eebe98691418c4468d0b72a96a67"
[[package]]
name = "sse-stream"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eb4dc4d33c68ec1f27d386b5610a351922656e1fdf5c05bbaad930cd1519479a"
dependencies = [
"bytes",
"futures-util",
"http-body",
"http-body-util",
"pin-project-lite",
]
[[package]]
name = "stable_deref_trait"
version = "1.2.1"
@@ -5211,6 +5534,27 @@ dependencies = [
"winnow",
]
[[package]]
name = "tonic"
version = "0.12.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "877c5b330756d856ffcc4553ab34a5684481ade925ecc54bcd1bf02b1d0d4d52"
dependencies = [
"async-trait",
"base64",
"bytes",
"http",
"http-body",
"http-body-util",
"percent-encoding",
"pin-project",
"prost",
"tokio-stream",
"tower-layer",
"tower-service",
"tracing",
]
[[package]]
name = "tower"
version = "0.5.3"
@@ -5228,6 +5572,22 @@ dependencies = [
"tracing",
]
[[package]]
name = "tower-cookies"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "151b5a3e3c45df17466454bb74e9ecedecc955269bdedbf4d150dfa393b55a36"
dependencies = [
"axum-core",
"cookie",
"futures-util",
"http",
"parking_lot",
"pin-project-lite",
"tower-layer",
"tower-service",
]
[[package]]
name = "tower-http"
version = "0.6.8"
@@ -5268,6 +5628,57 @@ version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8df9b6e13f2d32c91b9bd719c00d1958837bc7dec474d94952798cc8e69eeec3"
[[package]]
name = "tower-sessions"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "518dca34b74a17cadfcee06e616a09d2bd0c3984eff1769e1e76d58df978fc78"
dependencies = [
"async-trait",
"http",
"time",
"tokio",
"tower-cookies",
"tower-layer",
"tower-service",
"tower-sessions-core",
"tower-sessions-memory-store",
"tracing",
]
[[package]]
name = "tower-sessions-core"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "568531ec3dfcf3ffe493de1958ae5662a0284ac5d767476ecdb6a34ff8c6b06c"
dependencies = [
"async-trait",
"axum-core",
"base64",
"futures",
"http",
"parking_lot",
"rand 0.9.2",
"serde",
"serde_json",
"thiserror 2.0.18",
"time",
"tokio",
"tracing",
]
[[package]]
name = "tower-sessions-memory-store"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "713fabf882b6560a831e2bbed6204048b35bdd60e50bbb722902c74f8df33460"
dependencies = [
"async-trait",
"time",
"tokio",
"tower-sessions-core",
]
[[package]]
name = "tracing"
version = "0.1.44"
@@ -5322,6 +5733,24 @@ dependencies = [
"tracing-core",
]
[[package]]
name = "tracing-opentelemetry"
version = "0.30.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd8e764bd6f5813fd8bebc3117875190c5b0415be8f7f8059bffb6ecd979c444"
dependencies = [
"js-sys",
"once_cell",
"opentelemetry",
"opentelemetry_sdk",
"smallvec",
"tracing",
"tracing-core",
"tracing-log",
"tracing-subscriber",
"web-time",
]
[[package]]
name = "tracing-subscriber"
version = "0.3.22"


@@ -5,6 +5,7 @@ members = [
"compliance-dashboard",
"compliance-graph",
"compliance-dast",
"compliance-mcp",
]
resolver = "2"


@@ -5,7 +5,10 @@ COPY . .
RUN cargo build --release -p compliance-agent
FROM debian:bookworm-slim
-RUN apt-get update && apt-get install -y ca-certificates libssl3 git && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y ca-certificates libssl3 git curl && rm -rf /var/lib/apt/lists/*
# Install syft for SBOM generation
RUN curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
COPY --from=builder /app/target/release/compliance-agent /usr/local/bin/compliance-agent


@@ -2,8 +2,11 @@ FROM rust:1.89-bookworm AS builder
RUN cargo install dioxus-cli --version 0.7.3
ARG DOCS_URL=/docs
WORKDIR /app
COPY . .
ENV DOCS_URL=${DOCS_URL}
RUN dx build --release --package compliance-dashboard
FROM debian:bookworm-slim
@@ -13,6 +16,7 @@ WORKDIR /app
COPY --from=builder /app/target/dx/compliance-dashboard/release/web/compliance-dashboard /app/compliance-dashboard
COPY --from=builder /app/target/dx/compliance-dashboard/release/web/public /app/public
ENV IP=0.0.0.0
EXPOSE 8080
ENTRYPOINT ["./compliance-dashboard"]

Dockerfile.docs (new file)

@@ -0,0 +1,14 @@
FROM node:22-alpine AS builder
WORKDIR /app
COPY docs/package.json docs/package-lock.json ./
RUN npm ci
COPY docs/ .
RUN npm run build
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY docs/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/.vitepress/dist /usr/share/nginx/html
EXPOSE 80

Dockerfile.mcp (new file)

@@ -0,0 +1,16 @@
FROM rust:1.89-bookworm AS builder
WORKDIR /app
COPY . .
RUN cargo build --release -p compliance-mcp
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates libssl3 && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/compliance-mcp /usr/local/bin/compliance-mcp
EXPOSE 8090
ENV MCP_PORT=8090
ENTRYPOINT ["compliance-mcp"]


@@ -300,6 +300,84 @@ tr:hover {
color: var(--text-secondary);
}
/* Sidebar User Section */
.sidebar-user {
display: flex;
align-items: center;
gap: 10px;
padding: 12px 14px;
margin: 8px;
border-top: 1px solid var(--border);
padding-top: 16px;
}
.sidebar-user-collapsed {
flex-direction: column;
gap: 8px;
padding: 12px 4px;
margin: 8px 4px;
}
.user-avatar {
width: 34px;
height: 34px;
border-radius: 10px;
background: linear-gradient(135deg, rgba(56, 189, 248, 0.2), rgba(56, 189, 248, 0.08));
border: 1px solid rgba(56, 189, 248, 0.15);
display: flex;
align-items: center;
justify-content: center;
flex-shrink: 0;
}
.avatar-initials {
font-size: 13px;
font-weight: 700;
color: var(--accent);
line-height: 1;
}
.avatar-img {
width: 100%;
height: 100%;
border-radius: 10px;
object-fit: cover;
}
.user-name {
flex: 1;
font-size: 13px;
font-weight: 500;
color: var(--text-primary);
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
min-width: 0;
}
.logout-btn {
display: flex;
align-items: center;
justify-content: center;
width: 32px;
height: 32px;
border-radius: 8px;
color: var(--text-secondary);
text-decoration: none;
flex-shrink: 0;
transition: all 0.15s;
}
.logout-btn:hover {
background: rgba(239, 68, 68, 0.12);
color: #fca5a5;
}
.logout-btn-collapsed {
width: 34px;
height: 34px;
}
@media (max-width: 768px) {
.sidebar {
transform: translateX(-100%);
@@ -313,3 +391,216 @@ tr:hover {
padding: 16px;
}
}
/* ── Utility classes ────────────────────────────────────── */
.mb-3 { margin-bottom: 12px; }
.mb-4 { margin-bottom: 16px; }
.text-secondary { color: var(--text-secondary); }
.btn-sm {
padding: 4px 10px;
font-size: 12px;
}
.btn-danger {
background: var(--danger);
color: #fff;
}
.btn-danger:hover {
background: #dc2626;
}
.btn-secondary {
background: var(--bg-secondary);
color: var(--text-primary);
border: 1px solid var(--border);
}
.btn-secondary:hover {
background: var(--bg-primary);
}
/* ── Modal ──────────────────────────────────────────────── */
.modal-overlay {
position: fixed;
inset: 0;
background: rgba(0, 0, 0, 0.6);
backdrop-filter: blur(4px);
display: flex;
align-items: center;
justify-content: center;
z-index: 1000;
}
.modal-dialog {
background: var(--bg-secondary);
border: 1px solid var(--border);
border-radius: 12px;
padding: 24px;
max-width: 440px;
width: 90%;
}
.modal-dialog h3 {
margin-bottom: 12px;
}
.modal-dialog p {
margin-bottom: 8px;
font-size: 14px;
color: var(--text-secondary);
}
.modal-warning {
color: var(--warning) !important;
font-size: 13px !important;
}
.modal-actions {
display: flex;
gap: 8px;
justify-content: flex-end;
margin-top: 16px;
}
/* ── MCP Servers ────────────────────────────────────────── */
.mcp-server-card {
padding: 20px;
}
.mcp-server-header {
display: flex;
justify-content: space-between;
align-items: flex-start;
margin-bottom: 12px;
}
.mcp-server-title {
display: flex;
align-items: center;
gap: 10px;
}
.mcp-server-title h3 {
font-size: 16px;
font-weight: 600;
margin: 0;
}
.mcp-server-actions {
display: flex;
gap: 6px;
}
.mcp-status {
display: inline-flex;
align-items: center;
padding: 2px 10px;
border-radius: 20px;
font-size: 11px;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.5px;
}
.mcp-status-running {
background: rgba(34, 197, 94, 0.15);
color: var(--success);
}
.mcp-status-stopped {
background: rgba(148, 163, 184, 0.15);
color: var(--text-secondary);
}
.mcp-status-error {
background: rgba(239, 68, 68, 0.15);
color: var(--danger);
}
.mcp-config-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
gap: 12px;
margin-bottom: 16px;
}
.mcp-config-item {
display: flex;
flex-direction: column;
gap: 4px;
}
.mcp-config-label {
font-size: 11px;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.5px;
color: var(--text-secondary);
}
.mcp-config-value {
font-size: 13px;
color: var(--text-primary);
word-break: break-all;
}
.mcp-form-grid {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 0 16px;
}
.mcp-tools-section {
margin-bottom: 16px;
}
.mcp-tools-list {
display: flex;
flex-wrap: wrap;
gap: 6px;
margin-top: 6px;
}
.mcp-tool-badge {
display: inline-block;
padding: 3px 10px;
background: rgba(56, 189, 248, 0.1);
border: 1px solid rgba(56, 189, 248, 0.2);
border-radius: 6px;
font-size: 12px;
font-family: 'JetBrains Mono', monospace;
color: var(--accent);
}
.mcp-token-section {
margin-bottom: 12px;
}
.mcp-token-row {
display: flex;
align-items: center;
gap: 8px;
margin-top: 6px;
}
.mcp-token-value {
flex: 1;
padding: 6px 10px;
background: var(--bg-primary);
border: 1px solid var(--border);
border-radius: 6px;
font-size: 12px;
font-family: 'JetBrains Mono', monospace;
color: var(--text-secondary);
word-break: break-all;
}
.mcp-meta {
padding-top: 12px;
border-top: 1px solid var(--border);
font-size: 12px;
}

View File

@@ -2,10 +2,9 @@
#[allow(clippy::expect_used)]
fn main() {
dioxus_logger::init(tracing::Level::DEBUG).expect("Failed to init logger");
#[cfg(feature = "web")]
{
dioxus_logger::init(tracing::Level::DEBUG).expect("Failed to init logger");
dioxus::web::launch::launch_cfg(
compliance_dashboard::App,
dioxus::web::Config::new().hydrate(true),
@@ -14,6 +13,9 @@ fn main() {
#[cfg(feature = "server")]
{
dotenvy::dotenv().ok();
let _telemetry_guard = compliance_core::telemetry::init_telemetry("compliance-dashboard");
compliance_dashboard::infrastructure::server_start(compliance_dashboard::App)
.map_err(|e| {
tracing::error!("Unable to start server: {e}");

View File

@@ -7,7 +7,7 @@ edition = "2021"
workspace = true
[dependencies]
compliance-core = { workspace = true, features = ["mongodb"] }
compliance-core = { workspace = true, features = ["mongodb", "telemetry"] }
compliance-graph = { path = "../compliance-graph" }
compliance-dast = { path = "../compliance-dast" }
serde = { workspace = true }
@@ -35,3 +35,4 @@ walkdir = "2"
base64 = "0.22"
urlencoding = "2"
futures-util = "0.3"
jsonwebtoken = "9"

View File

@@ -20,6 +20,7 @@ impl ComplianceAgent {
config.litellm_url.clone(),
config.litellm_api_key.clone(),
config.litellm_model.clone(),
config.litellm_embed_model.clone(),
));
Self {
config,

View File

@@ -0,0 +1,113 @@
use std::sync::Arc;
use axum::{
extract::Request,
middleware::Next,
response::{IntoResponse, Response},
};
use jsonwebtoken::{decode, decode_header, jwk::JwkSet, DecodingKey, Validation};
use reqwest::StatusCode;
use serde::Deserialize;
use tokio::sync::RwLock;
/// Cached JWKS from Keycloak for token validation.
#[derive(Clone)]
pub struct JwksState {
pub jwks: Arc<RwLock<Option<JwkSet>>>,
pub jwks_url: String,
}
#[derive(Debug, Deserialize)]
struct Claims {
#[allow(dead_code)]
sub: String,
}
const PUBLIC_ENDPOINTS: &[&str] = &["/api/v1/health"];
/// Middleware that validates Bearer JWT tokens against Keycloak's JWKS.
///
/// Skips validation for health check endpoints.
/// If `JwksState` is not present as an extension (keycloak not configured),
/// all requests pass through.
pub async fn require_jwt_auth(request: Request, next: Next) -> Response {
let path = request.uri().path();
if PUBLIC_ENDPOINTS.contains(&path) {
return next.run(request).await;
}
let jwks_state = match request.extensions().get::<JwksState>() {
Some(s) => s.clone(),
None => return next.run(request).await,
};
let auth_header = match request.headers().get("authorization") {
Some(h) => h,
None => return (StatusCode::UNAUTHORIZED, "Missing authorization header").into_response(),
};
let token = match auth_header.to_str() {
Ok(s) if s.starts_with("Bearer ") => &s[7..],
_ => return (StatusCode::UNAUTHORIZED, "Invalid authorization header").into_response(),
};
match validate_token(token, &jwks_state).await {
Ok(()) => next.run(request).await,
Err(e) => {
tracing::warn!("JWT validation failed: {e}");
(StatusCode::UNAUTHORIZED, "Invalid token").into_response()
}
}
}
async fn validate_token(token: &str, state: &JwksState) -> Result<(), String> {
let header = decode_header(token).map_err(|e| format!("failed to decode JWT header: {e}"))?;
let kid = header
.kid
.ok_or_else(|| "JWT missing kid header".to_string())?;
let jwks = fetch_or_get_jwks(state).await?;
let jwk = jwks
.keys
.iter()
.find(|k| k.common.key_id.as_deref() == Some(&kid))
.ok_or_else(|| "no matching key found in JWKS".to_string())?;
let decoding_key =
DecodingKey::from_jwk(jwk).map_err(|e| format!("failed to create decoding key: {e}"))?;
let mut validation = Validation::new(header.alg);
validation.validate_exp = true;
validation.validate_aud = false;
decode::<Claims>(token, &decoding_key, &validation)
.map_err(|e| format!("token validation failed: {e}"))?;
Ok(())
}
async fn fetch_or_get_jwks(state: &JwksState) -> Result<JwkSet, String> {
{
let cached = state.jwks.read().await;
if let Some(ref jwks) = *cached {
return Ok(jwks.clone());
}
}
let resp = reqwest::get(&state.jwks_url)
.await
.map_err(|e| format!("failed to fetch JWKS: {e}"))?;
let jwks: JwkSet = resp
.json()
.await
.map_err(|e| format!("failed to parse JWKS: {e}"))?;
let mut cached = state.jwks.write().await;
*cached = Some(jwks.clone());
Ok(jwks)
}
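The header parsing in `require_jwt_auth` boils down to stripping the `Bearer ` scheme prefix. A standalone sketch of that step (not the middleware code verbatim; `strip_prefix` replaces the manual `starts_with` + `[7..]` slice):

```rust
/// Extract the token from an `Authorization: Bearer <token>` header value.
/// Returns None for any other scheme; matching is case-sensitive, as in the
/// middleware above.
fn extract_bearer(header_value: &str) -> Option<&str> {
    header_value.strip_prefix("Bearer ")
}

fn main() {
    assert_eq!(extract_bearer("Bearer abc.def.ghi"), Some("abc.def.ghi"));
    assert_eq!(extract_bearer("Basic dXNlcg=="), None);
    assert_eq!(extract_bearer("bearer lowercase"), None);
    println!("ok");
}
```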

View File

@@ -0,0 +1,238 @@
use std::sync::Arc;
use axum::extract::{Extension, Path};
use axum::http::StatusCode;
use axum::Json;
use mongodb::bson::doc;
use compliance_core::models::chat::{ChatRequest, ChatResponse, SourceReference};
use compliance_core::models::embedding::EmbeddingBuildRun;
use compliance_graph::graph::embedding_store::EmbeddingStore;
use crate::agent::ComplianceAgent;
use crate::rag::pipeline::RagPipeline;
use super::ApiResponse;
type AgentExt = Extension<Arc<ComplianceAgent>>;
/// POST /api/v1/chat/:repo_id — Send a chat message with RAG context
pub async fn chat(
Extension(agent): AgentExt,
Path(repo_id): Path<String>,
Json(req): Json<ChatRequest>,
) -> Result<Json<ApiResponse<ChatResponse>>, StatusCode> {
let pipeline = RagPipeline::new(agent.llm.clone(), agent.db.inner());
// Step 1: Embed the user's message
let query_vectors = agent
.llm
.embed(vec![req.message.clone()])
.await
.map_err(|e| {
tracing::error!("Failed to embed query: {e}");
StatusCode::INTERNAL_SERVER_ERROR
})?;
let query_embedding = query_vectors.into_iter().next().ok_or_else(|| {
tracing::error!("Empty embedding response");
StatusCode::INTERNAL_SERVER_ERROR
})?;
// Step 2: Vector search — retrieve top 8 chunks
let search_results = pipeline
.store()
.vector_search(&repo_id, query_embedding, 8, 0.5)
.await
.map_err(|e| {
tracing::error!("Vector search failed: {e}");
StatusCode::INTERNAL_SERVER_ERROR
})?;
// Step 3: Build system prompt with code context
let mut context_parts = Vec::new();
let mut sources = Vec::new();
for (embedding, score) in &search_results {
context_parts.push(format!(
"--- {} ({}, {}:L{}-L{}) ---\n{}",
embedding.qualified_name,
embedding.kind,
embedding.file_path,
embedding.start_line,
embedding.end_line,
embedding.content,
));
// Truncate snippet for the response
let snippet: String = embedding
.content
.lines()
.take(10)
.collect::<Vec<_>>()
.join("\n");
sources.push(SourceReference {
file_path: embedding.file_path.clone(),
qualified_name: embedding.qualified_name.clone(),
start_line: embedding.start_line,
end_line: embedding.end_line,
language: embedding.language.clone(),
snippet,
score: *score,
});
}
let code_context = if context_parts.is_empty() {
"No relevant code context found.".to_string()
} else {
context_parts.join("\n\n")
};
let system_prompt = format!(
"You are an expert code assistant for a software repository. \
Answer the user's question based on the code context below. \
Reference specific files and functions when relevant. \
If the context doesn't contain enough information, say so.\n\n\
## Code Context\n\n{code_context}"
);
// Step 4: Build messages array with history
let mut messages: Vec<(String, String)> = Vec::new();
messages.push(("system".to_string(), system_prompt));
for msg in &req.history {
messages.push((msg.role.clone(), msg.content.clone()));
}
messages.push(("user".to_string(), req.message));
// Step 5: Call LLM
let response_text = agent
.llm
.chat_with_messages(messages, Some(0.3))
.await
.map_err(|e| {
tracing::error!("LLM chat failed: {e}");
StatusCode::INTERNAL_SERVER_ERROR
})?;
Ok(Json(ApiResponse {
data: ChatResponse {
message: response_text,
sources,
},
total: None,
page: None,
}))
}
/// POST /api/v1/chat/:repo_id/build-embeddings — Trigger embedding build
pub async fn build_embeddings(
Extension(agent): AgentExt,
Path(repo_id): Path<String>,
) -> Result<Json<serde_json::Value>, StatusCode> {
let agent_clone = (*agent).clone();
tokio::spawn(async move {
let repo = match agent_clone
.db
.repositories()
.find_one(doc! { "_id": mongodb::bson::oid::ObjectId::parse_str(&repo_id).ok() })
.await
{
Ok(Some(r)) => r,
_ => {
tracing::error!("Repository {repo_id} not found for embedding build");
return;
}
};
// Get latest graph build
let build = match agent_clone
.db
.graph_builds()
.find_one(doc! { "repo_id": &repo_id })
.sort(doc! { "started_at": -1 })
.await
{
Ok(Some(b)) => b,
_ => {
tracing::error!("[{repo_id}] No graph build found — build graph first");
return;
}
};
let graph_build_id = build
.id
.map(|id| id.to_hex())
.unwrap_or_else(|| "unknown".to_string());
// Get nodes
let nodes: Vec<compliance_core::models::graph::CodeNode> = match agent_clone
.db
.graph_nodes()
.find(doc! { "repo_id": &repo_id })
.await
{
Ok(cursor) => {
use futures_util::StreamExt;
let mut items = Vec::new();
let mut cursor = cursor;
while let Some(Ok(item)) = cursor.next().await {
items.push(item);
}
items
}
Err(e) => {
tracing::error!("[{repo_id}] Failed to fetch nodes: {e}");
return;
}
};
let git_ops = crate::pipeline::git::GitOps::new(&agent_clone.config.git_clone_base_path);
let repo_path = match git_ops.clone_or_fetch(&repo.git_url, &repo.name) {
Ok(p) => p,
Err(e) => {
tracing::error!("Failed to clone repo for embedding build: {e}");
return;
}
};
let pipeline = RagPipeline::new(agent_clone.llm.clone(), agent_clone.db.inner());
match pipeline
.build_embeddings(&repo_id, &repo_path, &graph_build_id, &nodes)
.await
{
Ok(run) => {
tracing::info!(
"[{repo_id}] Embedding build complete: {}/{} chunks",
run.embedded_chunks,
run.total_chunks
);
}
Err(e) => {
tracing::error!("[{repo_id}] Embedding build failed: {e}");
}
}
});
Ok(Json(
serde_json::json!({ "status": "embedding_build_triggered" }),
))
}
/// GET /api/v1/chat/:repo_id/status — Get latest embedding build status
pub async fn embedding_status(
Extension(agent): AgentExt,
Path(repo_id): Path<String>,
) -> Result<Json<ApiResponse<Option<EmbeddingBuildRun>>>, StatusCode> {
let store = EmbeddingStore::new(agent.db.inner());
let build = store.get_latest_build(&repo_id).await.map_err(|e| {
tracing::error!("Failed to get embedding status: {e}");
StatusCode::INTERNAL_SERVER_ERROR
})?;
Ok(Json(ApiResponse {
data: build,
total: None,
page: None,
}))
}
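Step 3 of the chat handler renders each retrieved chunk into a delimited block the LLM can cite by file and line range. The format in isolation (a sketch with the embedding fields passed as plain parameters rather than the real `CodeEmbedding` type):

```rust
/// Render one retrieved chunk as the delimited context block used in the
/// chat handler's system prompt.
fn context_block(
    qualified_name: &str,
    kind: &str,
    file_path: &str,
    start_line: u32,
    end_line: u32,
    content: &str,
) -> String {
    format!("--- {qualified_name} ({kind}, {file_path}:L{start_line}-L{end_line}) ---\n{content}")
}

fn main() {
    let block = context_block("crate::api::chat", "function", "src/api/chat.rs", 10, 42, "fn chat() {}");
    assert!(block.starts_with("--- crate::api::chat (function, src/api/chat.rs:L10-L42) ---"));
    assert!(block.ends_with("fn chat() {}"));
    println!("ok");
}
```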

View File

@@ -103,8 +103,7 @@ pub async fn trigger_scan(
Extension(agent): AgentExt,
Path(id): Path<String>,
) -> Result<Json<serde_json::Value>, StatusCode> {
let oid =
mongodb::bson::oid::ObjectId::parse_str(&id).map_err(|_| StatusCode::BAD_REQUEST)?;
let oid = mongodb::bson::oid::ObjectId::parse_str(&id).map_err(|_| StatusCode::BAD_REQUEST)?;
let target = agent
.db
@@ -207,8 +206,7 @@ pub async fn get_finding(
Extension(agent): AgentExt,
Path(id): Path<String>,
) -> Result<Json<ApiResponse<DastFinding>>, StatusCode> {
let oid =
mongodb::bson::oid::ObjectId::parse_str(&id).map_err(|_| StatusCode::BAD_REQUEST)?;
let oid = mongodb::bson::oid::ObjectId::parse_str(&id).map_err(|_| StatusCode::BAD_REQUEST)?;
let finding = agent
.db

View File

@@ -52,7 +52,7 @@ pub async fn get_graph(
// so there is only one set of nodes/edges per repo.
let filter = doc! { "repo_id": &repo_id };
let nodes: Vec<CodeNode> = match db.graph_nodes().find(filter.clone()).await {
let all_nodes: Vec<CodeNode> = match db.graph_nodes().find(filter.clone()).await {
Ok(cursor) => collect_cursor_async(cursor).await,
Err(_) => Vec::new(),
};
@@ -60,6 +60,17 @@ pub async fn get_graph(
Ok(cursor) => collect_cursor_async(cursor).await,
Err(_) => Vec::new(),
};
// Remove disconnected nodes (no edges) to keep the graph clean
let connected: std::collections::HashSet<&str> = edges
.iter()
.flat_map(|e| [e.source.as_str(), e.target.as_str()])
.collect();
let nodes = all_nodes
.into_iter()
.filter(|n| connected.contains(n.qualified_name.as_str()))
.collect();
(nodes, edges)
} else {
(Vec::new(), Vec::new())
@@ -235,12 +246,7 @@ pub async fn get_file_content(
// Cap at 10,000 lines
let truncated: String = content.lines().take(10_000).collect::<Vec<_>>().join("\n");
let language = params
.path
.rsplit('.')
.next()
.unwrap_or("")
.to_string();
let language = params.path.rsplit('.').next().unwrap_or("").to_string();
Ok(Json(ApiResponse {
data: FileContent {

View File

@@ -1,3 +1,4 @@
pub mod chat;
pub mod dast;
pub mod graph;
@@ -5,7 +6,8 @@ use std::sync::Arc;
#[allow(unused_imports)]
use axum::extract::{Extension, Path, Query};
use axum::http::StatusCode;
use axum::http::{header, StatusCode};
use axum::response::IntoResponse;
use axum::Json;
use mongodb::bson::doc;
use serde::{Deserialize, Serialize};
@@ -89,6 +91,72 @@ pub struct UpdateStatusRequest {
pub status: String,
}
#[derive(Deserialize)]
pub struct SbomFilter {
#[serde(default)]
pub repo_id: Option<String>,
#[serde(default)]
pub package_manager: Option<String>,
#[serde(default)]
pub q: Option<String>,
#[serde(default)]
pub has_vulns: Option<bool>,
#[serde(default)]
pub license: Option<String>,
#[serde(default = "default_page")]
pub page: u64,
#[serde(default = "default_limit")]
pub limit: i64,
}
#[derive(Deserialize)]
pub struct SbomExportParams {
pub repo_id: String,
#[serde(default = "default_export_format")]
pub format: String,
}
fn default_export_format() -> String {
"cyclonedx".to_string()
}
#[derive(Deserialize)]
pub struct SbomDiffParams {
pub repo_a: String,
pub repo_b: String,
}
#[derive(Serialize)]
pub struct LicenseSummary {
pub license: String,
pub count: u64,
pub is_copyleft: bool,
pub packages: Vec<String>,
}
#[derive(Serialize)]
pub struct SbomDiffResult {
pub only_in_a: Vec<SbomDiffEntry>,
pub only_in_b: Vec<SbomDiffEntry>,
pub version_changed: Vec<SbomVersionDiff>,
pub common_count: u64,
}
#[derive(Serialize)]
pub struct SbomDiffEntry {
pub name: String,
pub version: String,
pub package_manager: String,
}
#[derive(Serialize)]
pub struct SbomVersionDiff {
pub name: String,
pub package_manager: String,
pub version_a: String,
pub version_b: String,
}
type AgentExt = Extension<Arc<ComplianceAgent>>;
type ApiResult<T> = Result<Json<ApiResponse<T>>, StatusCode>;
@@ -235,6 +303,52 @@ pub async fn trigger_scan(
Ok(Json(serde_json::json!({ "status": "scan_triggered" })))
}
pub async fn delete_repository(
Extension(agent): AgentExt,
Path(id): Path<String>,
) -> Result<Json<serde_json::Value>, StatusCode> {
let oid = mongodb::bson::oid::ObjectId::parse_str(&id).map_err(|_| StatusCode::BAD_REQUEST)?;
let db = &agent.db;
// Delete the repository
let result = db
.repositories()
.delete_one(doc! { "_id": oid })
.await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
if result.deleted_count == 0 {
return Err(StatusCode::NOT_FOUND);
}
// Cascade delete all related data
let _ = db.findings().delete_many(doc! { "repo_id": &id }).await;
let _ = db.sbom_entries().delete_many(doc! { "repo_id": &id }).await;
let _ = db.scan_runs().delete_many(doc! { "repo_id": &id }).await;
let _ = db.cve_alerts().delete_many(doc! { "repo_id": &id }).await;
let _ = db
.tracker_issues()
.delete_many(doc! { "repo_id": &id })
.await;
let _ = db.graph_nodes().delete_many(doc! { "repo_id": &id }).await;
let _ = db.graph_edges().delete_many(doc! { "repo_id": &id }).await;
let _ = db.graph_builds().delete_many(doc! { "repo_id": &id }).await;
let _ = db
.impact_analyses()
.delete_many(doc! { "repo_id": &id })
.await;
let _ = db
.code_embeddings()
.delete_many(doc! { "repo_id": &id })
.await;
let _ = db
.embedding_builds()
.delete_many(doc! { "repo_id": &id })
.await;
Ok(Json(serde_json::json!({ "status": "deleted" })))
}
pub async fn list_findings(
Extension(agent): AgentExt,
Query(filter): Query<FindingsFilter>,
@@ -322,21 +436,46 @@ pub async fn update_finding_status(
pub async fn list_sbom(
Extension(agent): AgentExt,
Query(params): Query<PaginationParams>,
Query(filter): Query<SbomFilter>,
) -> ApiResult<Vec<SbomEntry>> {
let db = &agent.db;
let skip = (params.page.saturating_sub(1)) * params.limit as u64;
let mut query = doc! {};
if let Some(repo_id) = &filter.repo_id {
query.insert("repo_id", repo_id);
}
if let Some(pm) = &filter.package_manager {
query.insert("package_manager", pm);
}
if let Some(q) = &filter.q {
if !q.is_empty() {
query.insert("name", doc! { "$regex": q, "$options": "i" });
}
}
if let Some(has_vulns) = filter.has_vulns {
if has_vulns {
query.insert("known_vulnerabilities", doc! { "$exists": true, "$ne": [] });
} else {
query.insert("known_vulnerabilities", doc! { "$size": 0 });
}
}
if let Some(license) = &filter.license {
query.insert("license", license);
}
let skip = (filter.page.saturating_sub(1)) * filter.limit as u64;
let total = db
.sbom_entries()
.count_documents(doc! {})
.count_documents(query.clone())
.await
.unwrap_or(0);
let entries = match db
.sbom_entries()
.find(doc! {})
.find(query)
.sort(doc! { "name": 1 })
.skip(skip)
.limit(params.limit)
.limit(filter.limit)
.await
{
Ok(cursor) => collect_cursor_async(cursor).await,
@@ -346,7 +485,272 @@ pub async fn list_sbom(
Ok(Json(ApiResponse {
data: entries,
total: Some(total),
page: Some(params.page),
page: Some(filter.page),
}))
}
pub async fn export_sbom(
Extension(agent): AgentExt,
Query(params): Query<SbomExportParams>,
) -> Result<impl IntoResponse, StatusCode> {
let db = &agent.db;
let entries: Vec<SbomEntry> = match db
.sbom_entries()
.find(doc! { "repo_id": &params.repo_id })
.await
{
Ok(cursor) => collect_cursor_async(cursor).await,
Err(_) => Vec::new(),
};
let body = if params.format == "spdx" {
// SPDX 2.3 format
let packages: Vec<serde_json::Value> = entries
.iter()
.enumerate()
.map(|(i, e)| {
serde_json::json!({
"SPDXID": format!("SPDXRef-Package-{i}"),
"name": e.name,
"versionInfo": e.version,
"downloadLocation": "NOASSERTION",
"licenseConcluded": e.license.as_deref().unwrap_or("NOASSERTION"),
"externalRefs": e.purl.as_ref().map(|p| vec![serde_json::json!({
"referenceCategory": "PACKAGE-MANAGER",
"referenceType": "purl",
"referenceLocator": p,
})]).unwrap_or_default(),
})
})
.collect();
serde_json::json!({
"spdxVersion": "SPDX-2.3",
"dataLicense": "CC0-1.0",
"SPDXID": "SPDXRef-DOCUMENT",
"name": format!("sbom-{}", params.repo_id),
"documentNamespace": format!("https://compliance-scanner/sbom/{}", params.repo_id),
"packages": packages,
})
} else {
// CycloneDX 1.5 format
let components: Vec<serde_json::Value> = entries
.iter()
.map(|e| {
let mut comp = serde_json::json!({
"type": "library",
"name": e.name,
"version": e.version,
"group": e.package_manager,
});
if let Some(purl) = &e.purl {
comp["purl"] = serde_json::Value::String(purl.clone());
}
if let Some(license) = &e.license {
comp["licenses"] = serde_json::json!([{ "license": { "id": license } }]);
}
if !e.known_vulnerabilities.is_empty() {
comp["vulnerabilities"] = serde_json::json!(
e.known_vulnerabilities.iter().map(|v| serde_json::json!({
"id": v.id,
"source": { "name": v.source },
"ratings": v.severity.as_ref().map(|s| vec![serde_json::json!({"severity": s})]).unwrap_or_default(),
})).collect::<Vec<_>>()
);
}
comp
})
.collect();
serde_json::json!({
"bomFormat": "CycloneDX",
"specVersion": "1.5",
"version": 1,
"metadata": {
"component": {
"type": "application",
"name": format!("repo-{}", params.repo_id),
}
},
"components": components,
})
};
let json_str =
serde_json::to_string_pretty(&body).map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
let filename = if params.format == "spdx" {
format!("sbom-{}-spdx.json", params.repo_id)
} else {
format!("sbom-{}-cyclonedx.json", params.repo_id)
};
let disposition = format!("attachment; filename=\"{filename}\"");
Ok((
[
(
header::CONTENT_TYPE,
header::HeaderValue::from_static("application/json"),
),
(
header::CONTENT_DISPOSITION,
header::HeaderValue::from_str(&disposition)
.unwrap_or_else(|_| header::HeaderValue::from_static("attachment")),
),
],
json_str,
))
}
const COPYLEFT_LICENSES: &[&str] = &[
"GPL-2.0",
"GPL-2.0-only",
"GPL-2.0-or-later",
"GPL-3.0",
"GPL-3.0-only",
"GPL-3.0-or-later",
"AGPL-3.0",
"AGPL-3.0-only",
"AGPL-3.0-or-later",
"LGPL-2.1",
"LGPL-2.1-only",
"LGPL-2.1-or-later",
"LGPL-3.0",
"LGPL-3.0-only",
"LGPL-3.0-or-later",
"MPL-2.0",
];
pub async fn license_summary(
Extension(agent): AgentExt,
Query(params): Query<SbomFilter>,
) -> ApiResult<Vec<LicenseSummary>> {
let db = &agent.db;
let mut query = doc! {};
if let Some(repo_id) = &params.repo_id {
query.insert("repo_id", repo_id);
}
let entries: Vec<SbomEntry> = match db.sbom_entries().find(query).await {
Ok(cursor) => collect_cursor_async(cursor).await,
Err(_) => Vec::new(),
};
let mut license_map: std::collections::HashMap<String, Vec<String>> =
std::collections::HashMap::new();
for entry in &entries {
let lic = entry.license.as_deref().unwrap_or("Unknown").to_string();
license_map.entry(lic).or_default().push(entry.name.clone());
}
let mut summaries: Vec<LicenseSummary> = license_map
.into_iter()
.map(|(license, packages)| {
let is_copyleft = COPYLEFT_LICENSES
.iter()
.any(|c| license.to_uppercase().contains(&c.to_uppercase()));
LicenseSummary {
license,
count: packages.len() as u64,
is_copyleft,
packages,
}
})
.collect();
summaries.sort_by(|a, b| b.count.cmp(&a.count));
Ok(Json(ApiResponse {
data: summaries,
total: None,
page: None,
}))
}
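The copyleft check above is a case-insensitive substring match against the SPDX list, so variants like `GPL-3.0-or-later` match via the base `GPL-3.0` entry (it is deliberately permissive rather than an exact SPDX comparison). The predicate in isolation, with a shortened list for the sketch:

```rust
// Shortened from the handler's full COPYLEFT_LICENSES list for illustration.
const COPYLEFT_LICENSES: &[&str] = &["GPL-2.0", "GPL-3.0", "AGPL-3.0", "LGPL-2.1", "LGPL-3.0", "MPL-2.0"];

/// Case-insensitive substring match, as in `license_summary`.
fn is_copyleft(license: &str) -> bool {
    COPYLEFT_LICENSES
        .iter()
        .any(|c| license.to_uppercase().contains(&c.to_uppercase()))
}

fn main() {
    assert!(is_copyleft("GPL-3.0-or-later"));
    assert!(is_copyleft("mpl-2.0"));
    assert!(!is_copyleft("MIT"));
    assert!(!is_copyleft("Apache-2.0"));
    println!("ok");
}
```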
pub async fn sbom_diff(
Extension(agent): AgentExt,
Query(params): Query<SbomDiffParams>,
) -> ApiResult<SbomDiffResult> {
let db = &agent.db;
let entries_a: Vec<SbomEntry> = match db
.sbom_entries()
.find(doc! { "repo_id": &params.repo_a })
.await
{
Ok(cursor) => collect_cursor_async(cursor).await,
Err(_) => Vec::new(),
};
let entries_b: Vec<SbomEntry> = match db
.sbom_entries()
.find(doc! { "repo_id": &params.repo_b })
.await
{
Ok(cursor) => collect_cursor_async(cursor).await,
Err(_) => Vec::new(),
};
// Build maps by (name, package_manager) -> version
let map_a: std::collections::HashMap<(String, String), String> = entries_a
.iter()
.map(|e| {
(
(e.name.clone(), e.package_manager.clone()),
e.version.clone(),
)
})
.collect();
let map_b: std::collections::HashMap<(String, String), String> = entries_b
.iter()
.map(|e| {
(
(e.name.clone(), e.package_manager.clone()),
e.version.clone(),
)
})
.collect();
let mut only_in_a = Vec::new();
let mut version_changed = Vec::new();
let mut common_count: u64 = 0;
for (key, ver_a) in &map_a {
match map_b.get(key) {
None => only_in_a.push(SbomDiffEntry {
name: key.0.clone(),
version: ver_a.clone(),
package_manager: key.1.clone(),
}),
Some(ver_b) if ver_a != ver_b => {
version_changed.push(SbomVersionDiff {
name: key.0.clone(),
package_manager: key.1.clone(),
version_a: ver_a.clone(),
version_b: ver_b.clone(),
});
}
Some(_) => common_count += 1,
}
}
let only_in_b: Vec<SbomDiffEntry> = map_b
.iter()
.filter(|(key, _)| !map_a.contains_key(key))
.map(|(key, ver)| SbomDiffEntry {
name: key.0.clone(),
version: ver.clone(),
package_manager: key.1.clone(),
})
.collect();
Ok(Json(ApiResponse {
data: SbomDiffResult {
only_in_a,
only_in_b,
version_changed,
common_count,
},
total: None,
page: None,
}))
}
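`sbom_diff` keys both sides by `(name, package_manager)` and partitions entries into removed, added, version-changed, and unchanged. The core comparison as a pure function, with a `(Key, version)` map standing in for the real `SbomEntry` type:

```rust
use std::collections::HashMap;

type Key = (String, String); // (name, package_manager)

/// Partition two keyed version maps, as in `sbom_diff`:
/// returns (only_in_a, only_in_b, version_changed, common_count).
fn diff(
    a: &HashMap<Key, String>,
    b: &HashMap<Key, String>,
) -> (Vec<Key>, Vec<Key>, Vec<(Key, String, String)>, u64) {
    let mut only_a = Vec::new();
    let mut changed = Vec::new();
    let mut common = 0u64;
    for (k, va) in a {
        match b.get(k) {
            None => only_a.push(k.clone()),
            Some(vb) if va != vb => changed.push((k.clone(), va.clone(), vb.clone())),
            Some(_) => common += 1,
        }
    }
    let only_b = b.keys().filter(|k| !a.contains_key(*k)).cloned().collect();
    (only_a, only_b, changed, common)
}

fn main() {
    let key = |n: &str, pm: &str| (n.to_string(), pm.to_string());
    let a = HashMap::from([(key("serde", "cargo"), "1.0.1".to_string()), (key("left", "cargo"), "0.1".to_string())]);
    let b = HashMap::from([(key("serde", "cargo"), "1.0.2".to_string()), (key("right", "cargo"), "0.2".to_string())]);
    let (only_a, only_b, changed, common) = diff(&a, &b);
    assert_eq!(only_a, vec![key("left", "cargo")]);
    assert_eq!(only_b, vec![key("right", "cargo")]);
    assert_eq!(changed.len(), 1);
    assert_eq!(common, 0);
    println!("ok");
}
```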

View File

@@ -1,3 +1,4 @@
pub mod auth_middleware;
pub mod handlers;
pub mod routes;
pub mod server;

View File

@@ -1,4 +1,4 @@
use axum::routing::{get, patch, post};
use axum::routing::{delete, get, patch, post};
use axum::Router;
use crate::api::handlers;
@@ -13,6 +13,10 @@ pub fn build_router() -> Router {
"/api/v1/repositories/{id}/scan",
post(handlers::trigger_scan),
)
.route(
"/api/v1/repositories/{id}",
delete(handlers::delete_repository),
)
.route("/api/v1/findings", get(handlers::list_findings))
.route("/api/v1/findings/{id}", get(handlers::get_finding))
.route(
@@ -20,13 +24,13 @@ pub fn build_router() -> Router {
patch(handlers::update_finding_status),
)
.route("/api/v1/sbom", get(handlers::list_sbom))
.route("/api/v1/sbom/export", get(handlers::export_sbom))
.route("/api/v1/sbom/licenses", get(handlers::license_summary))
.route("/api/v1/sbom/diff", get(handlers::sbom_diff))
.route("/api/v1/issues", get(handlers::list_issues))
.route("/api/v1/scan-runs", get(handlers::list_scan_runs))
// Graph API endpoints
.route(
"/api/v1/graph/{repo_id}",
get(handlers::graph::get_graph),
)
.route("/api/v1/graph/{repo_id}", get(handlers::graph::get_graph))
.route(
"/api/v1/graph/{repo_id}/nodes",
get(handlers::graph::get_nodes),
@@ -52,14 +56,8 @@ pub fn build_router() -> Router {
post(handlers::graph::trigger_build),
)
// DAST API endpoints
.route(
"/api/v1/dast/targets",
get(handlers::dast::list_targets),
)
.route(
"/api/v1/dast/targets",
post(handlers::dast::add_target),
)
.route("/api/v1/dast/targets", get(handlers::dast::list_targets))
.route("/api/v1/dast/targets", post(handlers::dast::add_target))
.route(
"/api/v1/dast/targets/{id}/scan",
post(handlers::dast::trigger_scan),
@@ -68,12 +66,19 @@ pub fn build_router() -> Router {
"/api/v1/dast/scan-runs",
get(handlers::dast::list_scan_runs),
)
.route(
"/api/v1/dast/findings",
get(handlers::dast::list_findings),
)
.route("/api/v1/dast/findings", get(handlers::dast::list_findings))
.route(
"/api/v1/dast/findings/{id}",
get(handlers::dast::get_finding),
)
// Chat / RAG API endpoints
.route("/api/v1/chat/{repo_id}", post(handlers::chat::chat))
.route(
"/api/v1/chat/{repo_id}/build-embeddings",
post(handlers::chat::build_embeddings),
)
.route(
"/api/v1/chat/{repo_id}/status",
get(handlers::chat::embedding_status),
)
}

View File

@@ -1,19 +1,37 @@
use std::sync::Arc;
use axum::Extension;
use axum::{middleware, Extension};
use tokio::sync::RwLock;
use tower_http::cors::CorsLayer;
use tower_http::trace::TraceLayer;
use crate::agent::ComplianceAgent;
use crate::api::auth_middleware::{require_jwt_auth, JwksState};
use crate::api::routes;
use crate::error::AgentError;
pub async fn start_api_server(agent: ComplianceAgent, port: u16) -> Result<(), AgentError> {
let app = routes::build_router()
.layer(Extension(Arc::new(agent)))
let mut app = routes::build_router()
.layer(Extension(Arc::new(agent.clone())))
.layer(CorsLayer::permissive())
.layer(TraceLayer::new_for_http());
if let (Some(kc_url), Some(kc_realm)) =
(&agent.config.keycloak_url, &agent.config.keycloak_realm)
{
let jwks_url = format!("{kc_url}/realms/{kc_realm}/protocol/openid-connect/certs");
let jwks_state = JwksState {
jwks: Arc::new(RwLock::new(None)),
jwks_url,
};
tracing::info!("Keycloak JWT auth enabled for realm '{kc_realm}'");
app = app
.layer(Extension(jwks_state))
.layer(middleware::from_fn(require_jwt_auth));
} else {
tracing::warn!("Keycloak not configured - API endpoints are unprotected");
}
let addr = format!("0.0.0.0:{port}");
let listener = tokio::net::TcpListener::bind(&addr)
.await

View File

@@ -24,6 +24,8 @@ pub fn load_config() -> Result<AgentConfig, AgentError> {
.unwrap_or_else(|| "http://localhost:4000".to_string()),
litellm_api_key: SecretString::from(env_var_opt("LITELLM_API_KEY").unwrap_or_default()),
litellm_model: env_var_opt("LITELLM_MODEL").unwrap_or_else(|| "gpt-4o".to_string()),
litellm_embed_model: env_var_opt("LITELLM_EMBED_MODEL")
.unwrap_or_else(|| "text-embedding-3-small".to_string()),
github_token: env_secret_opt("GITHUB_TOKEN"),
github_webhook_secret: env_secret_opt("GITHUB_WEBHOOK_SECRET"),
gitlab_url: env_var_opt("GITLAB_URL"),
@@ -43,5 +45,7 @@ pub fn load_config() -> Result<AgentConfig, AgentError> {
.unwrap_or_else(|| "0 0 0 * * *".to_string()),
git_clone_base_path: env_var_opt("GIT_CLONE_BASE_PATH")
.unwrap_or_else(|| "/tmp/compliance-scanner/repos".to_string()),
keycloak_url: env_var_opt("KEYCLOAK_URL"),
keycloak_realm: env_var_opt("KEYCLOAK_REALM"),
})
}

View File

@@ -127,11 +127,7 @@ impl Database {
// dast_targets: index on repo_id
self.dast_targets()
.create_index(
IndexModel::builder()
.keys(doc! { "repo_id": 1 })
.build(),
)
.create_index(IndexModel::builder().keys(doc! { "repo_id": 1 }).build())
.await?;
// dast_scan_runs: compound (target_id, started_at DESC)
@@ -152,6 +148,24 @@ impl Database {
)
.await?;
// code_embeddings: compound (repo_id, graph_build_id)
self.code_embeddings()
.create_index(
IndexModel::builder()
.keys(doc! { "repo_id": 1, "graph_build_id": 1 })
.build(),
)
.await?;
// embedding_builds: compound (repo_id, started_at DESC)
self.embedding_builds()
.create_index(
IndexModel::builder()
.keys(doc! { "repo_id": 1, "started_at": -1 })
.build(),
)
.await?;
tracing::info!("Database indexes ensured");
Ok(())
}
@@ -210,6 +224,17 @@ impl Database {
self.inner.collection("dast_findings")
}
// Embedding collections
pub fn code_embeddings(&self) -> Collection<compliance_core::models::embedding::CodeEmbedding> {
self.inner.collection("code_embeddings")
}
pub fn embedding_builds(
&self,
) -> Collection<compliance_core::models::embedding::EmbeddingBuildRun> {
self.inner.collection("embedding_builds")
}
#[allow(dead_code)]
pub fn raw_collection(&self, name: &str) -> Collection<mongodb::bson::Document> {
self.inner.collection(name)

View File

@@ -8,6 +8,7 @@ pub struct LlmClient {
base_url: String,
api_key: SecretString,
model: String,
embed_model: String,
http: reqwest::Client,
}
@@ -42,16 +43,46 @@ struct ChatResponseMessage {
content: String,
}
/// Request body for the embeddings API
#[derive(Serialize)]
struct EmbeddingRequest {
model: String,
input: Vec<String>,
}
/// Response from the embeddings API
#[derive(Deserialize)]
struct EmbeddingResponse {
data: Vec<EmbeddingData>,
}
/// A single embedding result
#[derive(Deserialize)]
struct EmbeddingData {
embedding: Vec<f64>,
index: usize,
}
impl LlmClient {
-    pub fn new(base_url: String, api_key: SecretString, model: String) -> Self {
+    pub fn new(
+        base_url: String,
+        api_key: SecretString,
+        model: String,
+        embed_model: String,
+    ) -> Self {
Self {
base_url,
api_key,
model,
embed_model,
http: reqwest::Client::new(),
}
}
pub fn embed_model(&self) -> &str {
&self.embed_model
}
pub async fn chat(
&self,
system_prompt: &str,
@@ -169,4 +200,49 @@ impl LlmClient {
.map(|c| c.message.content.clone())
.ok_or_else(|| AgentError::Other("Empty response from LiteLLM".to_string()))
}
/// Generate embeddings for a batch of texts
pub async fn embed(&self, texts: Vec<String>) -> Result<Vec<Vec<f64>>, AgentError> {
let url = format!("{}/v1/embeddings", self.base_url.trim_end_matches('/'));
let request_body = EmbeddingRequest {
model: self.embed_model.clone(),
input: texts,
};
let mut req = self
.http
.post(&url)
.header("content-type", "application/json")
.json(&request_body);
let key = self.api_key.expose_secret();
if !key.is_empty() {
req = req.header("Authorization", format!("Bearer {key}"));
}
let resp = req
.send()
.await
.map_err(|e| AgentError::Other(format!("Embedding request failed: {e}")))?;
if !resp.status().is_success() {
let status = resp.status();
let body = resp.text().await.unwrap_or_default();
return Err(AgentError::Other(format!(
"Embedding API returned {status}: {body}"
)));
}
let body: EmbeddingResponse = resp
.json()
.await
.map_err(|e| AgentError::Other(format!("Failed to parse embedding response: {e}")))?;
// Sort by index to maintain input order
let mut data = body.data;
data.sort_by_key(|d| d.index);
Ok(data.into_iter().map(|d| d.embedding).collect())
}
}
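The `embed` method ends by sorting results on the `index` field before stripping it away, because the embeddings API may return items out of input order. The order-restoring step can be sketched in isolation (the struct mirrors the `EmbeddingData` in the diff, reduced to the two fields involved):

```rust
/// One embedding result, tagged with its original input position.
#[derive(Debug, PartialEq)]
struct EmbeddingData {
    embedding: Vec<f64>,
    index: usize,
}

/// Sort by `index` so the output vectors line up with the input texts.
fn restore_order(mut data: Vec<EmbeddingData>) -> Vec<Vec<f64>> {
    data.sort_by_key(|d| d.index);
    data.into_iter().map(|d| d.embedding).collect()
}

fn main() {
    // Results arriving out of order...
    let data = vec![
        EmbeddingData { embedding: vec![0.2], index: 1 },
        EmbeddingData { embedding: vec![0.1], index: 0 },
    ];
    // ...come back aligned with the original inputs.
    assert_eq!(restore_order(data), vec![vec![0.1], vec![0.2]]);
    println!("order restored");
}
```

Without this step, zipping the response against the input texts would silently attach vectors to the wrong chunks.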

View File

@@ -1,5 +1,3 @@
-use tracing_subscriber::EnvFilter;
mod agent;
mod api;
mod config;
@@ -7,6 +5,7 @@ mod database;
mod error;
mod llm;
mod pipeline;
mod rag;
mod scheduler;
#[allow(dead_code)]
mod trackers;
@@ -14,14 +13,10 @@ mod webhooks;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
-    tracing_subscriber::fmt()
-        .with_env_filter(
-            EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info")),
-        )
-        .init();
dotenvy::dotenv().ok();
+    let _telemetry_guard = compliance_core::telemetry::init_telemetry("compliance-agent");
tracing::info!("Loading configuration...");
let config = config::load_config()?;

View File

@@ -185,7 +185,9 @@ impl PipelineOrchestrator {
// Stage 4.5: Graph Building
tracing::info!("[{repo_id}] Stage 4.5: Graph Building");
self.update_phase(scan_run_id, "graph_building").await;
-        let graph_context = match self.build_code_graph(&repo_path, &repo_id, &all_findings).await
+        let graph_context = match self
+            .build_code_graph(&repo_path, &repo_id, &all_findings)
+            .await
{
Ok(ctx) => Some(ctx),
Err(e) => {
@@ -296,9 +298,10 @@ impl PipelineOrchestrator {
let graph_build_id = uuid::Uuid::new_v4().to_string();
let engine = compliance_graph::GraphEngine::new(50_000);
-        let (mut code_graph, build_run) = engine
-            .build_graph(repo_path, repo_id, &graph_build_id)
-            .map_err(|e| AgentError::Other(format!("Graph build error: {e}")))?;
+        let (mut code_graph, build_run) =
+            engine
+                .build_graph(repo_path, repo_id, &graph_build_id)
+                .map_err(|e| AgentError::Other(format!("Graph build error: {e}")))?;
// Apply community detection
compliance_graph::graph::community::apply_communities(&mut code_graph);
@@ -348,15 +351,11 @@ impl PipelineOrchestrator {
use futures_util::TryStreamExt;
let filter = mongodb::bson::doc! { "repo_id": repo_id };
-        let targets: Vec<compliance_core::models::DastTarget> = match self
-            .db
-            .dast_targets()
-            .find(filter)
-            .await
-        {
-            Ok(cursor) => cursor.try_collect().await.unwrap_or_default(),
-            Err(_) => return,
-        };
+        let targets: Vec<compliance_core::models::DastTarget> =
+            match self.db.dast_targets().find(filter).await {
+                Ok(cursor) => cursor.try_collect().await.unwrap_or_default(),
+                Err(_) => return,
+            };
if targets.is_empty() {
tracing::info!("[{repo_id}] No DAST targets configured, skipping");
@@ -379,10 +378,7 @@ impl PipelineOrchestrator {
tracing::error!("Failed to store DAST finding: {e}");
}
}
-                tracing::info!(
-                    "DAST scan complete: {} findings",
-                    findings.len()
-                );
+                tracing::info!("DAST scan complete: {} findings", findings.len());
}
Err(e) => {
tracing::error!("DAST scan failed: {e}");

View File

@@ -0,0 +1 @@
pub mod pipeline;

View File

@@ -0,0 +1,164 @@
use std::path::Path;
use std::sync::Arc;
use chrono::Utc;
use compliance_core::models::embedding::{CodeEmbedding, EmbeddingBuildRun, EmbeddingBuildStatus};
use compliance_core::models::graph::CodeNode;
use compliance_graph::graph::chunking::extract_chunks;
use compliance_graph::graph::embedding_store::EmbeddingStore;
use tracing::{error, info};
use crate::error::AgentError;
use crate::llm::LlmClient;
/// RAG pipeline for building embeddings and performing retrieval
pub struct RagPipeline {
llm: Arc<LlmClient>,
embedding_store: EmbeddingStore,
}
impl RagPipeline {
pub fn new(llm: Arc<LlmClient>, db: &mongodb::Database) -> Self {
Self {
llm,
embedding_store: EmbeddingStore::new(db),
}
}
pub fn store(&self) -> &EmbeddingStore {
&self.embedding_store
}
/// Build embeddings for all code nodes in a repository
pub async fn build_embeddings(
&self,
repo_id: &str,
repo_path: &Path,
graph_build_id: &str,
nodes: &[CodeNode],
) -> Result<EmbeddingBuildRun, AgentError> {
let embed_model = self.llm.embed_model().to_string();
let mut build =
EmbeddingBuildRun::new(repo_id.to_string(), graph_build_id.to_string(), embed_model);
// Step 1: Extract chunks
let chunks = extract_chunks(repo_path, nodes, 2048);
build.total_chunks = chunks.len() as u32;
info!(
"[{repo_id}] Extracted {} chunks for embedding",
chunks.len()
);
// Store the initial build record
self.embedding_store
.store_build(&build)
.await
.map_err(|e| AgentError::Other(format!("Failed to store build: {e}")))?;
if chunks.is_empty() {
build.status = EmbeddingBuildStatus::Completed;
build.completed_at = Some(Utc::now());
self.embedding_store
.update_build(
repo_id,
graph_build_id,
EmbeddingBuildStatus::Completed,
0,
None,
)
.await
.map_err(|e| AgentError::Other(format!("Failed to update build: {e}")))?;
return Ok(build);
}
// Step 2: Delete old embeddings for this repo
self.embedding_store
.delete_repo_embeddings(repo_id)
.await
.map_err(|e| AgentError::Other(format!("Failed to delete old embeddings: {e}")))?;
// Step 3: Batch embed (small batches to stay within model limits)
let batch_size = 20;
let mut all_embeddings = Vec::new();
let mut embedded_count = 0u32;
for batch_start in (0..chunks.len()).step_by(batch_size) {
let batch_end = (batch_start + batch_size).min(chunks.len());
let batch_chunks = &chunks[batch_start..batch_end];
// Prepare texts: context_header + content
let texts: Vec<String> = batch_chunks
.iter()
.map(|c| format!("{}\n{}", c.context_header, c.content))
.collect();
match self.llm.embed(texts).await {
Ok(vectors) => {
for (chunk, embedding) in batch_chunks.iter().zip(vectors) {
all_embeddings.push(CodeEmbedding {
id: None,
repo_id: repo_id.to_string(),
graph_build_id: graph_build_id.to_string(),
qualified_name: chunk.qualified_name.clone(),
kind: chunk.kind.clone(),
file_path: chunk.file_path.clone(),
start_line: chunk.start_line,
end_line: chunk.end_line,
language: chunk.language.clone(),
content: chunk.content.clone(),
context_header: chunk.context_header.clone(),
embedding,
token_estimate: chunk.token_estimate,
created_at: Utc::now(),
});
}
embedded_count += batch_chunks.len() as u32;
}
Err(e) => {
error!("[{repo_id}] Embedding batch failed: {e}");
build.status = EmbeddingBuildStatus::Failed;
build.error_message = Some(e.to_string());
build.completed_at = Some(Utc::now());
let _ = self
.embedding_store
.update_build(
repo_id,
graph_build_id,
EmbeddingBuildStatus::Failed,
embedded_count,
Some(e.to_string()),
)
.await;
return Ok(build);
}
}
}
// Step 4: Store all embeddings
self.embedding_store
.store_embeddings(&all_embeddings)
.await
.map_err(|e| AgentError::Other(format!("Failed to store embeddings: {e}")))?;
// Step 5: Update build status
build.status = EmbeddingBuildStatus::Completed;
build.embedded_chunks = embedded_count;
build.completed_at = Some(Utc::now());
self.embedding_store
.update_build(
repo_id,
graph_build_id,
EmbeddingBuildStatus::Completed,
embedded_count,
None,
)
.await
.map_err(|e| AgentError::Other(format!("Failed to update build: {e}")))?;
info!(
"[{repo_id}] Embedding build complete: {embedded_count}/{} chunks",
build.total_chunks
);
Ok(build)
}
}
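Step 3 of `build_embeddings` walks the chunk list with `step_by(batch_size)` and clamps each window's end with `min(chunks.len())`. The index arithmetic can be checked on its own (batch size 20 as in the pipeline; `batch_ranges` is a helper invented for this sketch):

```rust
/// Stand-in for the batching loop in `build_embeddings`: produce
/// `(start, end)` slice bounds in windows of `batch_size`, never
/// overrunning the end of the list.
fn batch_ranges(len: usize, batch_size: usize) -> Vec<(usize, usize)> {
    (0..len)
        .step_by(batch_size)
        .map(|start| (start, (start + batch_size).min(len)))
        .collect()
}

fn main() {
    // 45 chunks with the pipeline's batch size of 20: two full
    // batches plus a final partial batch of 5.
    let ranges = batch_ranges(45, 20);
    assert_eq!(ranges, vec![(0, 20), (20, 40), (40, 45)]);
    println!("{ranges:?}");
}
```

An empty chunk list yields no ranges at all, which is why the pipeline can short-circuit the empty case before this loop.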

View File

@@ -9,6 +9,15 @@ workspace = true
[features]
default = ["mongodb"]
mongodb = ["dep:mongodb"]
telemetry = [
"dep:opentelemetry",
"dep:opentelemetry_sdk",
"dep:opentelemetry-otlp",
"dep:opentelemetry-appender-tracing",
"dep:tracing-opentelemetry",
"dep:tracing-subscriber",
"dep:tracing",
]
[dependencies]
serde = { workspace = true }
@@ -21,3 +30,10 @@ uuid = { workspace = true }
secrecy = { workspace = true }
bson = { version = "2", features = ["chrono-0_4"] }
mongodb = { workspace = true, optional = true }
opentelemetry = { version = "0.29", optional = true }
opentelemetry_sdk = { version = "0.29", features = ["rt-tokio"], optional = true }
opentelemetry-otlp = { version = "0.29", features = ["http", "reqwest-rustls"], optional = true }
opentelemetry-appender-tracing = { version = "0.29", optional = true }
tracing-opentelemetry = { version = "0.30", optional = true }
tracing-subscriber = { workspace = true, optional = true }
tracing = { workspace = true, optional = true }

View File

@@ -8,6 +8,7 @@ pub struct AgentConfig {
pub litellm_url: String,
pub litellm_api_key: SecretString,
pub litellm_model: String,
pub litellm_embed_model: String,
pub github_token: Option<SecretString>,
pub github_webhook_secret: Option<SecretString>,
pub gitlab_url: Option<String>,
@@ -23,6 +24,8 @@ pub struct AgentConfig {
pub scan_schedule: String,
pub cve_monitor_schedule: String,
pub git_clone_base_path: String,
pub keycloak_url: Option<String>,
pub keycloak_realm: Option<String>,
}
#[derive(Clone, Debug, Serialize, Deserialize)]

View File

@@ -1,6 +1,8 @@
pub mod config;
pub mod error;
pub mod models;
#[cfg(feature = "telemetry")]
pub mod telemetry;
pub mod traits;
pub use config::{AgentConfig, DashboardConfig};

View File

@@ -0,0 +1,14 @@
use serde::{Deserialize, Serialize};
/// Authentication state returned by the `check_auth` server function.
///
/// When no valid session exists, `authenticated` is `false` and all
/// other fields are empty strings.
#[derive(Debug, Clone, Serialize, Deserialize, Default, PartialEq)]
pub struct AuthInfo {
pub authenticated: bool,
pub sub: String,
pub email: String,
pub name: String,
pub avatar_url: String,
}

View File

@@ -0,0 +1,35 @@
use serde::{Deserialize, Serialize};
/// A message in the chat history
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChatMessage {
pub role: String,
pub content: String,
}
/// Request body for the chat endpoint
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChatRequest {
pub message: String,
#[serde(default)]
pub history: Vec<ChatMessage>,
}
/// A source reference from the RAG retrieval
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SourceReference {
pub file_path: String,
pub qualified_name: String,
pub start_line: u32,
pub end_line: u32,
pub language: String,
pub snippet: String,
pub score: f64,
}
/// Response from the chat endpoint
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChatResponse {
pub message: String,
pub sources: Vec<SourceReference>,
}

View File

@@ -244,6 +244,7 @@ pub struct DastFinding {
}
impl DastFinding {
#[allow(clippy::too_many_arguments)]
pub fn new(
scan_run_id: String,
target_id: String,

View File

@@ -0,0 +1,100 @@
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
/// Status of an embedding build operation
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum EmbeddingBuildStatus {
Running,
Completed,
Failed,
}
/// A code embedding stored in MongoDB Atlas Vector Search
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CodeEmbedding {
#[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
pub id: Option<bson::oid::ObjectId>,
pub repo_id: String,
pub graph_build_id: String,
pub qualified_name: String,
pub kind: String,
pub file_path: String,
pub start_line: u32,
pub end_line: u32,
pub language: String,
pub content: String,
pub context_header: String,
pub embedding: Vec<f64>,
pub token_estimate: u32,
#[serde(with = "bson::serde_helpers::chrono_datetime_as_bson_datetime")]
pub created_at: DateTime<Utc>,
}
/// Tracks an embedding build operation for a repository
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EmbeddingBuildRun {
#[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
pub id: Option<bson::oid::ObjectId>,
pub repo_id: String,
pub graph_build_id: String,
pub status: EmbeddingBuildStatus,
pub total_chunks: u32,
pub embedded_chunks: u32,
pub embedding_model: String,
pub error_message: Option<String>,
#[serde(with = "bson::serde_helpers::chrono_datetime_as_bson_datetime")]
pub started_at: DateTime<Utc>,
#[serde(
default,
skip_serializing_if = "Option::is_none",
with = "opt_chrono_as_bson"
)]
pub completed_at: Option<DateTime<Utc>>,
}
impl EmbeddingBuildRun {
pub fn new(repo_id: String, graph_build_id: String, embedding_model: String) -> Self {
Self {
id: None,
repo_id,
graph_build_id,
status: EmbeddingBuildStatus::Running,
total_chunks: 0,
embedded_chunks: 0,
embedding_model,
error_message: None,
started_at: Utc::now(),
completed_at: None,
}
}
}
/// Serde helper for Option<DateTime<Utc>> as BSON DateTime
mod opt_chrono_as_bson {
use chrono::{DateTime, Utc};
use serde::{Deserialize, Deserializer, Serialize, Serializer};
#[derive(Serialize, Deserialize)]
struct BsonDt(
#[serde(with = "bson::serde_helpers::chrono_datetime_as_bson_datetime")] DateTime<Utc>,
);
pub fn serialize<S>(value: &Option<DateTime<Utc>>, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match value {
Some(dt) => BsonDt(*dt).serialize(serializer),
None => serializer.serialize_none(),
}
}
pub fn deserialize<'de, D>(deserializer: D) -> Result<Option<DateTime<Utc>>, D::Error>
where
D: Deserializer<'de>,
{
let opt: Option<BsonDt> = Option::deserialize(deserializer)?;
Ok(opt.map(|d| d.0))
}
}

View File

@@ -0,0 +1,67 @@
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
/// Transport mode for MCP server
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum McpTransport {
Stdio,
Http,
}
impl std::fmt::Display for McpTransport {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Stdio => write!(f, "stdio"),
Self::Http => write!(f, "http"),
}
}
}
/// Status of a running MCP server
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum McpServerStatus {
Running,
Stopped,
Error,
}
impl std::fmt::Display for McpServerStatus {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Running => write!(f, "running"),
Self::Stopped => write!(f, "stopped"),
Self::Error => write!(f, "error"),
}
}
}
/// Configuration for a registered MCP server instance
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct McpServerConfig {
#[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
pub id: Option<bson::oid::ObjectId>,
/// Display name for this MCP server
pub name: String,
/// Endpoint URL (e.g. https://mcp.example.com/mcp)
pub endpoint_url: String,
/// Transport type
pub transport: McpTransport,
/// Port number (for HTTP transport)
pub port: Option<u16>,
/// Current status
pub status: McpServerStatus,
/// Bearer access token for authentication
pub access_token: String,
/// Which tools are enabled on this server
pub tools_enabled: Vec<String>,
/// Optional description / notes
pub description: Option<String>,
/// MongoDB URI this server connects to
pub mongodb_uri: Option<String>,
/// Database name
pub mongodb_database: Option<String>,
pub created_at: DateTime<Utc>,
pub updated_at: DateTime<Utc>,
}

View File

@@ -1,23 +1,30 @@
pub mod auth;
pub mod chat;
pub mod cve;
pub mod dast;
pub mod embedding;
pub mod finding;
pub mod graph;
pub mod issue;
pub mod mcp;
pub mod repository;
pub mod sbom;
pub mod scan;
pub use auth::AuthInfo;
pub use chat::{ChatMessage, ChatRequest, ChatResponse, SourceReference};
pub use cve::{CveAlert, CveSource};
pub use dast::{
DastAuthConfig, DastEvidence, DastFinding, DastScanPhase, DastScanRun, DastScanStatus,
DastTarget, DastTargetType, DastVulnType,
};
pub use embedding::{CodeEmbedding, EmbeddingBuildRun, EmbeddingBuildStatus};
pub use finding::{Finding, FindingStatus, Severity};
pub use graph::{
-    CodeEdge, CodeEdgeKind, CodeNode, CodeNodeKind, GraphBuildRun, GraphBuildStatus,
-    ImpactAnalysis,
+    CodeEdge, CodeEdgeKind, CodeNode, CodeNodeKind, GraphBuildRun, GraphBuildStatus, ImpactAnalysis,
};
pub use issue::{IssueStatus, TrackerIssue, TrackerType};
pub use mcp::{McpServerConfig, McpServerStatus, McpTransport};
pub use repository::{ScanTrigger, TrackedRepository};
pub use sbom::{SbomEntry, VulnRef};
pub use scan::{ScanPhase, ScanRun, ScanRunStatus, ScanType};

View File

@@ -31,9 +31,15 @@ pub struct TrackedRepository {
pub last_scanned_commit: Option<String>,
#[serde(default, deserialize_with = "deserialize_findings_count")]
pub findings_count: u32,
-    #[serde(default = "chrono::Utc::now", deserialize_with = "deserialize_datetime")]
+    #[serde(
+        default = "chrono::Utc::now",
+        deserialize_with = "deserialize_datetime"
+    )]
pub created_at: DateTime<Utc>,
-    #[serde(default = "chrono::Utc::now", deserialize_with = "deserialize_datetime")]
+    #[serde(
+        default = "chrono::Utc::now",
+        deserialize_with = "deserialize_datetime"
+    )]
pub updated_at: DateTime<Utc>,
}
@@ -51,9 +57,7 @@ where
let bson = bson::Bson::deserialize(deserializer)?;
match bson {
bson::Bson::DateTime(dt) => Ok(dt.into()),
-        bson::Bson::String(s) => s
-            .parse::<DateTime<Utc>>()
-            .map_err(serde::de::Error::custom),
+        bson::Bson::String(s) => s.parse::<DateTime<Utc>>().map_err(serde::de::Error::custom),
other => Err(serde::de::Error::custom(format!(
"expected DateTime or string, got: {other:?}"
))),

View File

@@ -0,0 +1,152 @@
//! OpenTelemetry initialization for traces and logs.
//!
//! Exports traces and logs via OTLP/HTTP when `OTEL_EXPORTER_OTLP_ENDPOINT`
//! is set. Always includes a `tracing_subscriber::fmt` layer for console output.
//!
//! Compatible with SigNoz, Grafana Tempo/Loki, Jaeger, and any OTLP-compatible
//! collector.
//!
//! # Environment Variables
//!
//! | Variable | Description | Default |
//! |---|---|---|
//! | `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint (e.g. `https://otel.example.com`) | *(disabled)* |
//! | `OTEL_SERVICE_NAME` | Service name for resource | `service_name` param |
//! | `RUST_LOG` / standard `EnvFilter` | Log level filter | `info` |
use opentelemetry::global;
use opentelemetry::trace::TracerProvider as _;
use opentelemetry::KeyValue;
use opentelemetry_appender_tracing::layer::OpenTelemetryTracingBridge;
use opentelemetry_otlp::{LogExporter, SpanExporter, WithExportConfig};
use opentelemetry_sdk::{logs::SdkLoggerProvider, trace::SdkTracerProvider, Resource};
use tracing_opentelemetry::OpenTelemetryLayer;
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt, EnvFilter, Layer as _};
/// Guard that shuts down OTel providers on drop.
///
/// Must be held for the lifetime of the application. When dropped,
/// flushes and shuts down the tracer and logger providers.
pub struct TelemetryGuard {
tracer_provider: Option<SdkTracerProvider>,
logger_provider: Option<SdkLoggerProvider>,
}
impl Drop for TelemetryGuard {
fn drop(&mut self) {
if let Some(tp) = self.tracer_provider.take() {
if let Err(e) = tp.shutdown() {
eprintln!("Failed to shutdown tracer provider: {e}");
}
}
if let Some(lp) = self.logger_provider.take() {
if let Err(e) = lp.shutdown() {
eprintln!("Failed to shutdown logger provider: {e}");
}
}
}
}
fn build_resource(service_name: &str) -> Resource {
let name = std::env::var("OTEL_SERVICE_NAME").unwrap_or_else(|_| service_name.to_string());
Resource::builder()
.with_service_name(name)
.with_attributes([KeyValue::new("service.version", env!("CARGO_PKG_VERSION"))])
.build()
}
/// Initialize telemetry (tracing + logging).
///
/// If `OTEL_EXPORTER_OTLP_ENDPOINT` is set, traces and logs are exported
/// via OTLP/HTTP. Console fmt output is always enabled.
///
/// Returns a [`TelemetryGuard`] that must be held alive for the application
/// lifetime. Dropping it triggers a graceful shutdown of OTel providers.
///
/// # Panics
///
/// Panics if the tracing subscriber cannot be initialized (e.g. called twice).
pub fn init_telemetry(service_name: &str) -> TelemetryGuard {
let otel_endpoint = std::env::var("OTEL_EXPORTER_OTLP_ENDPOINT").ok();
let env_filter = EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info"));
let fmt_layer = tracing_subscriber::fmt::layer();
match otel_endpoint {
Some(ref endpoint) => {
let resource = build_resource(service_name);
let traces_endpoint = format!("{endpoint}/v1/traces");
let logs_endpoint = format!("{endpoint}/v1/logs");
// Traces
#[allow(clippy::expect_used)]
let span_exporter = SpanExporter::builder()
.with_http()
.with_endpoint(&traces_endpoint)
.build()
.expect("failed to create OTLP span exporter");
let tracer_provider = SdkTracerProvider::builder()
.with_batch_exporter(span_exporter)
.with_resource(resource.clone())
.build();
global::set_tracer_provider(tracer_provider.clone());
let tracer = tracer_provider.tracer(service_name.to_string());
let otel_trace_layer = OpenTelemetryLayer::new(tracer);
// Logs
#[allow(clippy::expect_used)]
let log_exporter = LogExporter::builder()
.with_http()
.with_endpoint(&logs_endpoint)
.build()
.expect("failed to create OTLP log exporter");
let logger_provider = SdkLoggerProvider::builder()
.with_batch_exporter(log_exporter)
.with_resource(resource)
.build();
let otel_log_layer = OpenTelemetryTracingBridge::new(&logger_provider);
// Filter to prevent telemetry-induced-telemetry loops
let otel_filter = EnvFilter::new("info")
.add_directive("hyper=off".parse().unwrap_or_default())
.add_directive("h2=off".parse().unwrap_or_default())
.add_directive("reqwest=off".parse().unwrap_or_default());
tracing_subscriber::registry()
.with(env_filter)
.with(fmt_layer)
.with(otel_trace_layer)
.with(otel_log_layer.with_filter(otel_filter))
.init();
tracing::info!(
endpoint = endpoint.as_str(),
service = service_name,
"OpenTelemetry OTLP/HTTP export enabled"
);
TelemetryGuard {
tracer_provider: Some(tracer_provider),
logger_provider: Some(logger_provider),
}
}
None => {
tracing_subscriber::registry()
.with(env_filter)
.with(fmt_layer)
.init();
tracing::info!("OpenTelemetry disabled (set OTEL_EXPORTER_OTLP_ENDPOINT to enable)");
TelemetryGuard {
tracer_provider: None,
logger_provider: None,
}
}
}
}
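`TelemetryGuard` is an RAII guard: holding it keeps the providers alive, and dropping it runs the shutdown exactly once. The skeleton of that pattern, with an atomic flag standing in for the provider shutdown (the flag and `Guard` type are inventions for this sketch):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Stand-in for "the providers were flushed and shut down".
static SHUT_DOWN: AtomicBool = AtomicBool::new(false);

/// Minimal analogue of `TelemetryGuard`: cleanup lives in `Drop`,
/// so it runs when the guard leaves scope, including on early return.
struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        SHUT_DOWN.store(true, Ordering::SeqCst);
    }
}

fn main() {
    {
        let _guard = Guard;
        // While the guard is held, nothing has been shut down.
        assert!(!SHUT_DOWN.load(Ordering::SeqCst));
    } // guard dropped here
    assert!(SHUT_DOWN.load(Ordering::SeqCst));
    println!("shutdown ran on drop");
}
```

This is why `main` binds the guard to `let _telemetry_guard = ...` rather than `let _ = ...`: the latter would drop it immediately and shut telemetry down at startup.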

View File

@@ -18,6 +18,7 @@ server = [
"dioxus/router",
"dioxus/fullstack",
"compliance-core/mongodb",
"compliance-core/telemetry",
"dep:axum",
"dep:mongodb",
"dep:reqwest",
@@ -27,6 +28,14 @@ server = [
"dep:dioxus-cli-config",
"dep:dioxus-fullstack",
"dep:tokio",
"dep:tower-sessions",
"dep:time",
"dep:rand",
"dep:url",
"dep:sha2",
"dep:base64",
"dep:uuid",
"dep:bson",
]
[dependencies]
@@ -54,3 +63,11 @@ dotenvy = { version = "0.15", optional = true }
tokio = { workspace = true, optional = true }
dioxus-cli-config = { version = "=0.7.3", optional = true }
dioxus-fullstack = { version = "=0.7.3", optional = true }
tower-sessions = { version = "0.15", default-features = false, features = ["axum-core", "memory-store", "signed"], optional = true }
time = { version = "0.3", default-features = false, optional = true }
rand = { version = "0.9", optional = true }
url = { version = "2", optional = true }
sha2 = { workspace = true, optional = true }
base64 = { version = "0.22", optional = true }
uuid = { workspace = true, optional = true }
bson = { version = "2", features = ["chrono-0_4"], optional = true }

View File

@@ -169,20 +169,20 @@
enabled: true,
solver: "forceAtlas2Based",
forceAtlas2Based: {
-            gravitationalConstant: -60,
-            centralGravity: 0.012,
-            springLength: 80,
-            springConstant: 0.06,
-            damping: 0.4,
-            avoidOverlap: 0.5,
+            gravitationalConstant: -80,
+            centralGravity: 0.005,
+            springLength: 120,
+            springConstant: 0.04,
+            damping: 0.5,
+            avoidOverlap: 0.6,
},
stabilization: {
enabled: true,
-            iterations: 1000,
+            iterations: 1500,
updateInterval: 25,
},
-          maxVelocity: 40,
-          minVelocity: 0.1,
+          maxVelocity: 50,
+          minVelocity: 0.75,
},
interaction: {
hover: true,
@@ -252,7 +252,24 @@
overlay.style.display = "none";
}, 900);
}
-      network.setOptions({ physics: { enabled: false } });
+      // Keep physics running so nodes float and respond to dragging,
+      // but reduce forces for a calm, settled feel
+      network.setOptions({
+        physics: {
+          enabled: true,
+          solver: "forceAtlas2Based",
+          forceAtlas2Based: {
+            gravitationalConstant: -40,
+            centralGravity: 0.003,
+            springLength: 120,
+            springConstant: 0.03,
+            damping: 0.7,
+            avoidOverlap: 0.6,
+          },
+          maxVelocity: 20,
+          minVelocity: 0.75,
+        },
+      });
});
console.log(

View File

@@ -603,6 +603,76 @@ tbody tr:last-child td {
background: var(--accent-muted);
}
.btn-ghost-danger:hover {
color: var(--danger);
border-color: var(--danger);
background: var(--danger-bg);
}
.btn-danger {
background: var(--danger);
color: #fff;
border: 1px solid var(--danger);
}
.btn-danger:hover {
background: #e0334f;
box-shadow: 0 0 12px rgba(255, 59, 92, 0.3);
}
/* ── Modal ── */
.modal-overlay {
position: fixed;
inset: 0;
z-index: 1000;
background: rgba(0, 0, 0, 0.6);
backdrop-filter: blur(4px);
display: flex;
align-items: center;
justify-content: center;
}
.modal-dialog {
background: var(--bg-card-solid);
border: 1px solid var(--border-bright);
border-radius: var(--radius-lg);
padding: 24px 28px;
max-width: 460px;
width: 90%;
box-shadow: 0 16px 48px rgba(0, 0, 0, 0.5);
}
.modal-dialog h3 {
font-family: var(--font-display);
font-size: 18px;
font-weight: 600;
margin-bottom: 12px;
}
.modal-dialog p {
font-size: 14px;
color: var(--text-secondary);
line-height: 1.5;
margin-bottom: 8px;
}
.modal-warning {
color: var(--danger) !important;
font-size: 13px !important;
background: var(--danger-bg);
border-radius: var(--radius-sm);
padding: 10px 12px;
margin-top: 4px;
}
.modal-actions {
display: flex;
gap: 10px;
justify-content: flex-end;
margin-top: 20px;
}
.btn-secondary {
background: transparent;
color: var(--accent);
@@ -1710,3 +1780,660 @@ tbody tr:last-child td {
white-space: nowrap;
margin-left: auto;
}
/* ── AI Chat ── */
.chat-embedding-banner {
display: flex;
align-items: center;
justify-content: space-between;
padding: 12px 20px;
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius);
margin-bottom: 16px;
font-size: 13px;
color: var(--text-secondary);
}
.chat-embedding-building {
border-color: var(--border-accent);
background: rgba(0, 200, 255, 0.04);
}
.chat-embedding-status {
display: flex;
align-items: center;
gap: 10px;
flex: 1;
}
.chat-spinner {
width: 16px;
height: 16px;
border: 2px solid var(--border-bright);
border-top-color: var(--accent);
border-radius: 50%;
animation: spin 0.8s linear infinite;
flex-shrink: 0;
}
@keyframes spin {
to { transform: rotate(360deg); }
}
.chat-progress-bar {
width: 120px;
height: 6px;
background: var(--bg-secondary);
border-radius: 3px;
overflow: hidden;
flex-shrink: 0;
}
.chat-progress-fill {
height: 100%;
background: var(--accent);
border-radius: 3px;
transition: width 0.5s var(--ease-out);
min-width: 2%;
}
.chat-embedding-banner .btn-sm {
padding: 6px 14px;
font-size: 12px;
background: var(--accent-muted);
color: var(--accent);
border: 1px solid var(--border-accent);
border-radius: var(--radius-sm);
cursor: pointer;
transition: all 0.2s var(--ease-out);
}
.chat-embedding-banner .btn-sm:hover:not(:disabled) {
background: var(--accent);
color: var(--bg-primary);
}
.chat-embedding-banner .btn-sm:disabled {
opacity: 0.5;
cursor: not-allowed;
}
.chat-container {
display: flex;
flex-direction: column;
height: calc(100vh - 240px);
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius);
overflow: hidden;
}
.chat-messages {
flex: 1;
overflow-y: auto;
padding: 20px;
display: flex;
flex-direction: column;
gap: 16px;
}
.chat-empty {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
height: 100%;
color: var(--text-tertiary);
text-align: center;
}
.chat-empty h3 {
font-family: var(--font-display);
font-size: 18px;
font-weight: 600;
color: var(--text-secondary);
margin-bottom: 8px;
}
.chat-empty p {
font-size: 13px;
max-width: 400px;
}
.chat-message {
max-width: 80%;
padding: 12px 16px;
border-radius: var(--radius);
font-size: 14px;
line-height: 1.6;
}
.chat-message-user {
align-self: flex-end;
background: var(--accent-muted);
border: 1px solid var(--border-accent);
color: var(--text-primary);
}
.chat-message-assistant {
align-self: flex-start;
background: var(--bg-elevated);
border: 1px solid var(--border);
color: var(--text-primary);
}
.chat-message-role {
font-family: var(--font-display);
font-size: 11px;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.5px;
color: var(--text-tertiary);
margin-bottom: 6px;
}
.chat-message-content {
white-space: pre-wrap;
word-break: break-word;
}
.chat-typing {
color: var(--text-tertiary);
font-style: italic;
}
.chat-sources {
margin-top: 12px;
border-top: 1px solid var(--border);
padding-top: 10px;
}
.chat-sources-label {
font-size: 11px;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.5px;
color: var(--text-tertiary);
display: block;
margin-bottom: 8px;
}
.chat-source-card {
background: var(--bg-secondary);
border: 1px solid var(--border);
border-radius: var(--radius-sm);
padding: 10px 12px;
margin-bottom: 6px;
}
.chat-source-header {
display: flex;
align-items: center;
justify-content: space-between;
margin-bottom: 6px;
}
.chat-source-name {
font-family: var(--font-mono);
font-size: 12px;
font-weight: 500;
color: var(--accent);
}
.chat-source-location {
font-family: var(--font-mono);
font-size: 11px;
color: var(--text-tertiary);
}
.chat-source-snippet {
margin: 0;
padding: 8px;
background: var(--bg-primary);
border-radius: 4px;
overflow-x: auto;
max-height: 120px;
}
.chat-source-snippet code {
font-family: var(--font-mono);
font-size: 11px;
color: var(--text-secondary);
white-space: pre;
}
.chat-input-area {
display: flex;
gap: 10px;
padding: 16px 20px;
border-top: 1px solid var(--border);
background: var(--bg-secondary);
}
.chat-input {
flex: 1;
background: var(--bg-primary);
border: 1px solid var(--border);
border-radius: var(--radius-sm);
color: var(--text-primary);
font-family: var(--font-body);
font-size: 14px;
padding: 10px 14px;
resize: none;
min-height: 42px;
max-height: 120px;
outline: none;
transition: border-color 0.2s var(--ease-out);
}
.chat-input:focus {
border-color: var(--accent);
}
.chat-input:disabled {
opacity: 0.5;
cursor: not-allowed;
}
.chat-send-btn {
padding: 10px 20px;
background: var(--accent);
color: var(--bg-primary);
border: none;
border-radius: var(--radius-sm);
font-family: var(--font-display);
font-weight: 600;
font-size: 13px;
cursor: pointer;
transition: all 0.2s var(--ease-out);
align-self: flex-end;
}
.chat-send-btn:hover:not(:disabled) {
background: var(--accent-hover);
box-shadow: var(--accent-glow);
}
.chat-send-btn:disabled {
opacity: 0.5;
cursor: not-allowed;
}
/* ── SBOM Enhancements ── */
.sbom-tab-bar {
display: flex;
gap: 4px;
margin-bottom: 20px;
border-bottom: 1px solid var(--border);
padding-bottom: 0;
}
.sbom-tab {
padding: 10px 20px;
background: none;
border: none;
border-bottom: 2px solid transparent;
color: var(--text-secondary);
font-family: var(--font-display);
font-size: 14px;
font-weight: 500;
cursor: pointer;
transition: all 0.2s var(--ease-out);
}
.sbom-tab:hover {
color: var(--text-primary);
}
.sbom-tab.active {
color: var(--accent);
border-bottom-color: var(--accent);
}
.sbom-filter-bar {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin-bottom: 16px;
align-items: center;
}
.sbom-filter-select {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-sm);
color: var(--text-primary);
font-family: var(--font-body);
font-size: 13px;
padding: 8px 12px;
outline: none;
transition: border-color 0.2s var(--ease-out);
min-width: 140px;
}
.sbom-filter-select:focus {
border-color: var(--accent);
}
.sbom-filter-input {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-sm);
color: var(--text-primary);
font-family: var(--font-body);
font-size: 13px;
padding: 8px 14px;
outline: none;
min-width: 200px;
transition: border-color 0.2s var(--ease-out);
}
.sbom-filter-input:focus {
border-color: var(--accent);
}
.sbom-filter-input::placeholder {
color: var(--text-tertiary);
}
.sbom-result-count {
font-size: 13px;
color: var(--text-secondary);
margin-bottom: 12px;
}
/* Export */
.sbom-export-wrapper {
position: relative;
margin-left: auto;
}
.sbom-export-btn {
font-size: 13px;
}
.sbom-export-dropdown {
position: absolute;
top: 100%;
right: 0;
z-index: 50;
background: var(--bg-elevated);
border: 1px solid var(--border-bright);
border-radius: var(--radius);
padding: 12px;
display: flex;
flex-direction: column;
gap: 8px;
min-width: 200px;
box-shadow: 0 8px 24px rgba(0, 0, 0, 0.4);
margin-top: 4px;
}
.sbom-export-hint {
font-size: 11px;
color: var(--text-tertiary);
}
.sbom-export-result {
margin-bottom: 16px;
}
.sbom-export-result-header {
display: flex;
align-items: center;
justify-content: space-between;
margin-bottom: 12px;
}
/* Vulnerability drill-down */
.sbom-vuln-toggle {
cursor: pointer;
user-select: none;
}
.sbom-vuln-detail-row td {
padding: 0 !important;
background: var(--bg-secondary);
}
.sbom-vuln-detail {
padding: 12px 16px;
display: flex;
flex-wrap: wrap;
gap: 10px;
}
.sbom-vuln-card {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-sm);
padding: 10px 14px;
min-width: 240px;
flex: 1;
max-width: 400px;
}
.sbom-vuln-card-header {
display: flex;
align-items: center;
gap: 8px;
margin-bottom: 6px;
flex-wrap: wrap;
}
.sbom-vuln-id {
font-family: var(--font-mono);
font-size: 13px;
font-weight: 600;
color: var(--text-primary);
}
.sbom-vuln-source {
font-size: 11px;
color: var(--text-tertiary);
text-transform: uppercase;
letter-spacing: 0.04em;
}
.sbom-vuln-link {
font-size: 12px;
color: var(--accent);
text-decoration: none;
transition: color 0.15s;
}
.sbom-vuln-link:hover {
color: var(--accent-hover);
text-decoration: underline;
}
/* License compliance */
.sbom-license-badge {
font-size: 12px;
padding: 2px 8px;
border-radius: var(--radius-sm);
font-weight: 500;
white-space: nowrap;
}
.license-permissive {
background: var(--success-bg);
color: var(--success);
border: 1px solid rgba(0, 230, 118, 0.2);
}
.license-weak-copyleft {
background: var(--warning-bg);
color: var(--warning);
border: 1px solid rgba(255, 176, 32, 0.2);
}
.license-copyleft {
background: var(--danger-bg);
color: var(--danger);
border: 1px solid rgba(255, 59, 92, 0.2);
}
.license-copyleft-warning {
background: var(--danger-bg);
border: 1px solid rgba(255, 59, 92, 0.3);
border-radius: var(--radius);
padding: 16px 20px;
margin-bottom: 16px;
}
.license-copyleft-warning strong {
color: var(--danger);
font-size: 15px;
display: block;
margin-bottom: 6px;
}
.license-copyleft-warning p {
color: var(--text-secondary);
font-size: 13px;
margin-bottom: 10px;
}
.license-copyleft-item {
padding: 6px 0;
font-size: 13px;
color: var(--text-secondary);
}
.license-pkg-list {
font-family: var(--font-mono);
font-size: 12px;
color: var(--text-tertiary);
}
.license-bar-chart {
display: flex;
flex-direction: column;
gap: 8px;
}
.license-bar-row {
display: flex;
align-items: center;
gap: 12px;
}
.license-bar-label {
font-size: 13px;
color: var(--text-secondary);
min-width: 120px;
text-align: right;
flex-shrink: 0;
}
.license-bar-track {
flex: 1;
height: 20px;
background: var(--bg-secondary);
border-radius: var(--radius-sm);
overflow: hidden;
}
.license-bar {
height: 100%;
border-radius: var(--radius-sm);
transition: width 0.3s var(--ease-out);
}
.license-bar.license-permissive {
background: var(--success);
border: none;
}
.license-bar.license-copyleft {
background: var(--danger);
border: none;
}
.license-bar-count {
font-family: var(--font-mono);
font-size: 12px;
color: var(--text-tertiary);
min-width: 32px;
}
/* SBOM Diff */
.sbom-diff-controls {
display: flex;
gap: 16px;
flex-wrap: wrap;
}
.sbom-diff-select-group {
display: flex;
flex-direction: column;
gap: 6px;
flex: 1;
min-width: 200px;
}
.sbom-diff-select-group label {
font-size: 12px;
font-weight: 600;
color: var(--text-secondary);
text-transform: uppercase;
letter-spacing: 0.04em;
}
.sbom-diff-summary {
display: flex;
gap: 12px;
margin: 16px 0;
flex-wrap: wrap;
}
.sbom-diff-stat {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius);
padding: 12px 20px;
display: flex;
flex-direction: column;
align-items: center;
gap: 4px;
flex: 1;
min-width: 100px;
text-align: center;
font-size: 12px;
color: var(--text-secondary);
}
.sbom-diff-stat-num {
font-family: var(--font-display);
font-size: 24px;
font-weight: 700;
color: var(--text-primary);
}
.sbom-diff-added .sbom-diff-stat-num {
color: var(--success);
}
.sbom-diff-removed .sbom-diff-stat-num {
color: var(--danger);
}
.sbom-diff-changed .sbom-diff-stat-num {
color: var(--warning);
}
.sbom-diff-row-added {
border-left: 3px solid var(--success);
}
.sbom-diff-row-removed {
border-left: 3px solid var(--danger);
}
.sbom-diff-row-changed {
border-left: 3px solid var(--warning);
}


@@ -26,6 +26,10 @@ pub enum Route {
GraphExplorerPage { repo_id: String },
#[route("/graph/:repo_id/impact/:finding_id")]
ImpactAnalysisPage { repo_id: String, finding_id: String },
#[route("/chat")]
ChatIndexPage {},
#[route("/chat/:repo_id")]
ChatPage { repo_id: String },
#[route("/dast")]
DastOverviewPage {},
#[route("/dast/targets")]
@@ -34,6 +38,8 @@ pub enum Route {
DastFindingsPage {},
#[route("/dast/findings/:id")]
DastFindingDetailPage { id: String },
#[route("/mcp-servers")]
McpServersPage {},
#[route("/settings")]
SettingsPage {},
}


@@ -3,17 +3,41 @@ use dioxus::prelude::*;
use crate::app::Route;
use crate::components::sidebar::Sidebar;
use crate::components::toast::{ToastContainer, Toasts};
use crate::infrastructure::auth_check::check_auth;
#[component]
pub fn AppShell() -> Element {
use_context_provider(Toasts::new);
rsx! {
div { class: "app-shell",
Sidebar {}
main { class: "main-content",
Outlet::<Route> {}
let auth = use_server_future(check_auth)?;
match auth() {
Some(Ok(info)) if info.authenticated => {
use_context_provider(|| Signal::new(info.clone()));
rsx! {
div { class: "app-shell",
Sidebar {}
main { class: "main-content",
Outlet::<Route> {}
}
ToastContainer {}
}
}
}
Some(Ok(_)) | Some(Err(_)) => {
// Not authenticated — redirect to Keycloak login
rsx! {
document::Script {
dangerous_inner_html: "window.location.href = '/auth';"
}
}
}
None => {
rsx! {
div { class: "flex items-center justify-center h-screen bg-gray-950",
p { class: "text-gray-400", "Loading..." }
}
}
ToastContainer {}
}
}
}


@@ -47,17 +47,19 @@ fn insert_path(
let name = parts[0].to_string();
let is_leaf = parts.len() == 1;
let entry = children.entry(name.clone()).or_insert_with(|| FileTreeNode {
name: name.clone(),
path: if is_leaf {
full_path.to_string()
} else {
String::new()
},
is_dir: !is_leaf,
node_count: 0,
children: Vec::new(),
});
let entry = children
.entry(name.clone())
.or_insert_with(|| FileTreeNode {
name: name.clone(),
path: if is_leaf {
full_path.to_string()
} else {
String::new()
},
is_dir: !is_leaf,
node_count: 0,
children: Vec::new(),
});
if is_leaf {
entry.node_count = node_count;


@@ -1,3 +1,4 @@
use compliance_core::models::auth::AuthInfo;
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
@@ -46,11 +47,21 @@ pub fn Sidebar() -> Element {
route: Route::GraphIndexPage {},
icon: rsx! { Icon { icon: BsDiagram3, width: 18, height: 18 } },
},
NavItem {
label: "AI Chat",
route: Route::ChatIndexPage {},
icon: rsx! { Icon { icon: BsChatDots, width: 18, height: 18 } },
},
NavItem {
label: "DAST",
route: Route::DastOverviewPage {},
icon: rsx! { Icon { icon: BsBug, width: 18, height: 18 } },
},
NavItem {
label: "MCP Servers",
route: Route::McpServersPage {},
icon: rsx! { Icon { icon: BsPlug, width: 18, height: 18 } },
},
NavItem {
label: "Settings",
route: Route::SettingsPage {},
@@ -58,7 +69,13 @@ pub fn Sidebar() -> Element {
},
];
let sidebar_class = if collapsed() { "sidebar collapsed" } else { "sidebar" };
let docs_url = option_env!("DOCS_URL").unwrap_or("/docs");
let sidebar_class = if collapsed() {
"sidebar collapsed"
} else {
"sidebar"
};
rsx! {
nav { class: "{sidebar_class}",
@@ -76,6 +93,7 @@ pub fn Sidebar() -> Element {
(Route::GraphIndexPage {}, Route::GraphIndexPage {}) => true,
(Route::GraphExplorerPage { .. }, Route::GraphIndexPage {}) => true,
(Route::ImpactAnalysisPage { .. }, Route::GraphIndexPage {}) => true,
(Route::ChatPage { .. }, Route::ChatIndexPage {}) => true,
(Route::DastTargetsPage {}, Route::DastOverviewPage {}) => true,
(Route::DastFindingsPage {}, Route::DastOverviewPage {}) => true,
(Route::DastFindingDetailPage { .. }, Route::DastOverviewPage {}) => true,
@@ -95,6 +113,15 @@ pub fn Sidebar() -> Element {
}
}
}
a {
href: "{docs_url}",
target: "_blank",
class: "nav-item",
Icon { icon: BsBook, width: 18, height: 18 }
if !collapsed() {
span { "Docs" }
}
}
button {
class: "sidebar-toggle",
onclick: move |_| collapsed.set(!collapsed()),
@@ -104,8 +131,31 @@ pub fn Sidebar() -> Element {
Icon { icon: BsChevronLeft, width: 14, height: 14 }
}
}
if !collapsed() {
div { class: "sidebar-footer", "v0.1.0" }
{
let auth_info = use_context::<Signal<AuthInfo>>();
let info = auth_info();
let initials = info.name.chars().next().unwrap_or('U').to_uppercase().to_string();
let user_class = if collapsed() { "sidebar-user sidebar-user-collapsed" } else { "sidebar-user" };
rsx! {
div { class: "{user_class}",
div { class: "user-avatar",
if info.avatar_url.is_empty() {
span { class: "avatar-initials", "{initials}" }
} else {
img { src: "{info.avatar_url}", alt: "avatar", class: "avatar-img" }
}
}
if !collapsed() {
span { class: "user-name", "{info.name}" }
}
a {
href: "/logout",
class: if collapsed() { "logout-btn logout-btn-collapsed" } else { "logout-btn" },
title: "Sign out",
Icon { icon: BsBoxArrowRight, width: 16, height: 16 }
}
}
}
}
}
}


@@ -20,6 +20,12 @@ pub struct Toasts {
next_id: Signal<usize>,
}
impl Default for Toasts {
fn default() -> Self {
Self::new()
}
}
impl Toasts {
pub fn new() -> Self {
Self {
@@ -39,11 +45,11 @@ impl Toasts {
#[cfg(feature = "web")]
{
let mut items = self.items;
spawn(async move {
gloo_timers::future::TimeoutFuture::new(4_000).await;
items.write().retain(|t| t.id != id);
});
let mut items = self.items;
spawn(async move {
gloo_timers::future::TimeoutFuture::new(4_000).await;
items.write().retain(|t| t.id != id);
});
}
}


@@ -0,0 +1,234 @@
use std::{
collections::HashMap,
sync::{Arc, RwLock},
};
use axum::{
extract::Query,
response::{IntoResponse, Redirect},
Extension,
};
use rand::Rng;
use tower_sessions::Session;
use url::Url;
use super::{
error::DashboardError,
server_state::ServerState,
user_state::{User, UserStateInner},
};
pub const LOGGED_IN_USER_SESS_KEY: &str = "logged-in-user";
#[derive(Debug, Clone)]
pub(crate) struct PendingOAuthEntry {
pub(crate) redirect_url: Option<String>,
pub(crate) code_verifier: String,
}
#[derive(Debug, Clone, Default)]
pub struct PendingOAuthStore(Arc<RwLock<HashMap<String, PendingOAuthEntry>>>);
impl PendingOAuthStore {
pub(crate) fn insert(&self, state: String, entry: PendingOAuthEntry) {
#[allow(clippy::expect_used)]
self.0
.write()
.expect("pending oauth store lock poisoned")
.insert(state, entry);
}
pub(crate) fn take(&self, state: &str) -> Option<PendingOAuthEntry> {
#[allow(clippy::expect_used)]
self.0
.write()
.expect("pending oauth store lock poisoned")
.remove(state)
}
}
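The point of `take` above is its one-shot semantics: the entry is removed on read, so a replayed OAuth callback carrying the same `state` value finds nothing and is rejected. A minimal stdlib-only sketch of that behavior (the real `PendingOAuthEntry` also carries `redirect_url`; the `Store` name and string payload here are illustrative stand-ins):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Minimal stand-in for PendingOAuthStore: a shared map keyed by the
// CSRF state, holding the PKCE verifier.
#[derive(Clone, Default)]
struct Store(Arc<RwLock<HashMap<String, String>>>);

impl Store {
    fn insert(&self, state: String, verifier: String) {
        self.0.write().expect("lock poisoned").insert(state, verifier);
    }

    // `remove` rather than `get`: each state value is single-use.
    fn take(&self, state: &str) -> Option<String> {
        self.0.write().expect("lock poisoned").remove(state)
    }
}

fn main() {
    let store = Store::default();
    store.insert("abc123".into(), "verifier-xyz".into());
    // First use succeeds, second use (a replay) fails.
    assert_eq!(store.take("abc123").as_deref(), Some("verifier-xyz"));
    assert_eq!(store.take("abc123"), None);
}
```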
pub(crate) fn generate_state() -> String {
let bytes: [u8; 32] = rand::rng().random();
bytes.iter().fold(String::with_capacity(64), |mut acc, b| {
use std::fmt::Write;
let _ = write!(acc, "{b:02x}");
acc
})
}
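The fold above hex-encodes 32 random bytes into a 64-character state string. The same encoding applied to a fixed byte array, for illustration:

```rust
use std::fmt::Write;

fn main() {
    // Fixed input instead of rand::rng().random(), so the output is
    // deterministic; each byte becomes two lowercase hex digits.
    let bytes: [u8; 4] = [0x00, 0xab, 0x0f, 0xff];
    let hex = bytes.iter().fold(String::with_capacity(8), |mut acc, b| {
        let _ = write!(acc, "{:02x}", b);
        acc
    });
    assert_eq!(hex, "00ab0fff");
}
```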
pub(crate) fn generate_code_verifier() -> String {
use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine};
let bytes: [u8; 32] = rand::rng().random();
URL_SAFE_NO_PAD.encode(bytes)
}
pub(crate) fn derive_code_challenge(verifier: &str) -> String {
use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine};
use sha2::{Digest, Sha256};
let digest = Sha256::digest(verifier.as_bytes());
URL_SAFE_NO_PAD.encode(digest)
}
#[axum::debug_handler]
pub async fn auth_login(
Extension(state): Extension<ServerState>,
Extension(pending): Extension<PendingOAuthStore>,
Query(params): Query<HashMap<String, String>>,
) -> Result<impl IntoResponse, DashboardError> {
let kc = state
.keycloak
.ok_or(DashboardError::Other("Keycloak not configured".into()))?;
let csrf_state = generate_state();
let code_verifier = generate_code_verifier();
let code_challenge = derive_code_challenge(&code_verifier);
let redirect_url = params.get("redirect_url").cloned();
pending.insert(
csrf_state.clone(),
PendingOAuthEntry {
redirect_url,
code_verifier,
},
);
let mut url = Url::parse(&kc.auth_endpoint())
.map_err(|e| DashboardError::Other(format!("invalid auth endpoint URL: {e}")))?;
url.query_pairs_mut()
.append_pair("client_id", &kc.client_id)
.append_pair("redirect_uri", &kc.redirect_uri)
.append_pair("response_type", "code")
.append_pair("scope", "openid profile email")
.append_pair("state", &csrf_state)
.append_pair("code_challenge", &code_challenge)
.append_pair("code_challenge_method", "S256");
Ok(Redirect::temporary(url.as_str()))
}
#[derive(serde::Deserialize)]
struct TokenResponse {
access_token: String,
refresh_token: Option<String>,
}
#[derive(serde::Deserialize)]
struct UserinfoResponse {
sub: String,
email: Option<String>,
preferred_username: Option<String>,
name: Option<String>,
picture: Option<String>,
}
#[axum::debug_handler]
pub async fn auth_callback(
session: Session,
Extension(state): Extension<ServerState>,
Extension(pending): Extension<PendingOAuthStore>,
Query(params): Query<HashMap<String, String>>,
) -> Result<impl IntoResponse, DashboardError> {
let kc = state
.keycloak
.ok_or(DashboardError::Other("Keycloak not configured".into()))?;
let returned_state = params
.get("state")
.ok_or_else(|| DashboardError::Other("missing state parameter".into()))?;
let entry = pending
.take(returned_state)
.ok_or_else(|| DashboardError::Other("unknown or expired oauth state".into()))?;
let code = params
.get("code")
.ok_or_else(|| DashboardError::Other("missing code parameter".into()))?;
let client = reqwest::Client::new();
let token_resp = client
.post(kc.token_endpoint())
.form(&[
("grant_type", "authorization_code"),
("client_id", kc.client_id.as_str()),
("redirect_uri", kc.redirect_uri.as_str()),
("code", code),
("code_verifier", &entry.code_verifier),
])
.send()
.await
.map_err(|e| DashboardError::Other(format!("token request failed: {e}")))?;
if !token_resp.status().is_success() {
let body = token_resp.text().await.unwrap_or_default();
return Err(DashboardError::Other(format!(
"token exchange failed: {body}"
)));
}
let tokens: TokenResponse = token_resp
.json()
.await
.map_err(|e| DashboardError::Other(format!("token parse failed: {e}")))?;
let userinfo: UserinfoResponse = client
.get(kc.userinfo_endpoint())
.bearer_auth(&tokens.access_token)
.send()
.await
.map_err(|e| DashboardError::Other(format!("userinfo request failed: {e}")))?
.json()
.await
.map_err(|e| DashboardError::Other(format!("userinfo parse failed: {e}")))?;
let display_name = userinfo
.name
.or(userinfo.preferred_username)
.unwrap_or_default();
let user_state = UserStateInner {
sub: userinfo.sub,
access_token: tokens.access_token,
refresh_token: tokens.refresh_token.unwrap_or_default(),
user: User {
email: userinfo.email.unwrap_or_default(),
name: display_name,
avatar_url: userinfo.picture.unwrap_or_default(),
},
};
session
.insert(LOGGED_IN_USER_SESS_KEY, user_state)
.await
.map_err(|e| DashboardError::Other(format!("session insert failed: {e}")))?;
let target = entry
.redirect_url
.filter(|u| !u.is_empty())
.unwrap_or_else(|| "/".into());
Ok(Redirect::temporary(&target))
}
#[axum::debug_handler]
pub async fn logout(
session: Session,
Extension(state): Extension<ServerState>,
) -> Result<impl IntoResponse, DashboardError> {
let kc = state
.keycloak
.ok_or(DashboardError::Other("Keycloak not configured".into()))?;
session
.flush()
.await
.map_err(|e| DashboardError::Other(format!("session flush failed: {e}")))?;
let mut url = Url::parse(&kc.logout_endpoint())
.map_err(|e| DashboardError::Other(format!("invalid logout endpoint URL: {e}")))?;
url.query_pairs_mut()
.append_pair("client_id", &kc.client_id)
.append_pair("post_logout_redirect_uri", &kc.app_url);
Ok(Redirect::temporary(url.as_str()))
}


@@ -0,0 +1,44 @@
use compliance_core::models::auth::AuthInfo;
use dioxus::prelude::*;
/// Check the current user's authentication state.
///
/// Reads the tower-sessions session on the server and returns an
/// [`AuthInfo`] describing the logged-in user. When no valid session
/// exists, `authenticated` is `false` and all other fields are empty.
#[server(endpoint = "check-auth")]
pub async fn check_auth() -> Result<AuthInfo, ServerFnError> {
use super::auth::LOGGED_IN_USER_SESS_KEY;
use super::server_state::ServerState;
use super::user_state::UserStateInner;
use dioxus_fullstack::FullstackContext;
let state: ServerState = FullstackContext::extract().await?;
// When Keycloak is not configured, treat as always authenticated
if state.keycloak.is_none() {
return Ok(AuthInfo {
authenticated: true,
name: "Local User".into(),
..Default::default()
});
}
let session: tower_sessions::Session = FullstackContext::extract().await?;
let user_state: Option<UserStateInner> = session
.get(LOGGED_IN_USER_SESS_KEY)
.await
.map_err(|e| ServerFnError::new(format!("session read failed: {e}")))?;
match user_state {
Some(u) => Ok(AuthInfo {
authenticated: true,
sub: u.sub,
email: u.user.email,
name: u.user.name,
avatar_url: u.user.avatar_url,
}),
None => Ok(AuthInfo::default()),
}
}


@@ -0,0 +1,45 @@
use axum::{
extract::Request,
middleware::Next,
response::{IntoResponse, Response},
Extension,
};
use reqwest::StatusCode;
use tower_sessions::Session;
use super::auth::LOGGED_IN_USER_SESS_KEY;
use super::server_state::ServerState;
use super::user_state::UserStateInner;
const PUBLIC_API_ENDPOINTS: &[&str] = &["/api/check-auth"];
/// Axum middleware that enforces authentication on `/api/` server
/// function endpoints. Skips auth entirely when Keycloak is not configured.
pub async fn require_auth(
Extension(state): Extension<ServerState>,
session: Session,
request: Request,
next: Next,
) -> Response {
// Skip auth when Keycloak is not configured
if state.keycloak.is_none() {
return next.run(request).await;
}
let path = request.uri().path();
if path.starts_with("/api/") && !PUBLIC_API_ENDPOINTS.contains(&path) {
let is_authed = session
.get::<UserStateInner>(LOGGED_IN_USER_SESS_KEY)
.await
.ok()
.flatten()
.is_some();
if !is_authed {
return (StatusCode::UNAUTHORIZED, "Authentication required").into_response();
}
}
next.run(request).await
}
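The gating condition in `require_auth` can be isolated as a pure function: only `/api/` paths that are not explicitly public need a session check. A sketch (the `needs_auth` helper is hypothetical; the middleware above inlines this logic):

```rust
const PUBLIC_API_ENDPOINTS: &[&str] = &["/api/check-auth"];

// Mirrors the path check in require_auth: server-function endpoints are
// protected, except those on the public allowlist; non-API routes
// (assets, pages) pass through untouched.
fn needs_auth(path: &str) -> bool {
    path.starts_with("/api/") && !PUBLIC_API_ENDPOINTS.contains(&path)
}

fn main() {
    assert!(needs_auth("/api/fetch-sbom"));
    assert!(!needs_auth("/api/check-auth"));
    assert!(!needs_auth("/assets/app.css"));
}
```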


@@ -0,0 +1,126 @@
use dioxus::prelude::*;
use serde::{Deserialize, Serialize};
// ── Response types ──
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct ChatApiResponse {
pub data: ChatResponseData,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct ChatResponseData {
pub message: String,
#[serde(default)]
pub sources: Vec<SourceRef>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct SourceRef {
pub file_path: String,
pub qualified_name: String,
pub start_line: u32,
pub end_line: u32,
pub language: String,
pub snippet: String,
pub score: f64,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct EmbeddingStatusResponse {
pub data: Option<EmbeddingBuildData>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct EmbeddingBuildData {
pub repo_id: String,
pub status: String,
pub total_chunks: u32,
pub embedded_chunks: u32,
pub embedding_model: String,
pub error_message: Option<String>,
#[serde(default)]
pub started_at: Option<serde_json::Value>,
#[serde(default)]
pub completed_at: Option<serde_json::Value>,
}
// ── Chat message history type ──
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChatHistoryMessage {
pub role: String,
pub content: String,
}
// ── Server functions ──
#[server]
pub async fn send_chat_message(
repo_id: String,
message: String,
history: Vec<ChatHistoryMessage>,
) -> Result<ChatApiResponse, ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!("{}/api/v1/chat/{repo_id}", state.agent_api_url);
let client = reqwest::Client::builder()
.timeout(std::time::Duration::from_secs(120))
.build()
.map_err(|e| ServerFnError::new(e.to_string()))?;
let resp = client
.post(&url)
.json(&serde_json::json!({
"message": message,
"history": history,
}))
.send()
.await
.map_err(|e| ServerFnError::new(format!("Request failed: {e}")))?;
let text = resp
.text()
.await
.map_err(|e| ServerFnError::new(format!("Failed to read response: {e}")))?;
let body: ChatApiResponse = serde_json::from_str(&text)
.map_err(|e| ServerFnError::new(format!("Failed to parse response: {e} — body: {text}")))?;
Ok(body)
}
#[server]
pub async fn trigger_embedding_build(repo_id: String) -> Result<(), ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!(
"{}/api/v1/chat/{repo_id}/build-embeddings",
state.agent_api_url
);
let client = reqwest::Client::new();
client
.post(&url)
.send()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(())
}
#[server]
pub async fn fetch_embedding_status(
repo_id: String,
) -> Result<EmbeddingStatusResponse, ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!("{}/api/v1/chat/{repo_id}/status", state.agent_api_url);
let resp = reqwest::get(&url)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let body: EmbeddingStatusResponse = resp
.json()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(body)
}


@@ -87,10 +87,7 @@ pub async fn fetch_dast_finding_detail(
}
#[server]
pub async fn add_dast_target(
name: String,
base_url: String,
) -> Result<(), ServerFnError> {
pub async fn add_dast_target(name: String, base_url: String) -> Result<(), ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!("{}/api/v1/dast/targets", state.agent_api_url);


@@ -42,4 +42,8 @@ impl Database {
pub fn tracker_issues(&self) -> Collection<TrackerIssue> {
self.inner.collection("tracker_issues")
}
pub fn mcp_servers(&self) -> Collection<McpServerConfig> {
self.inner.collection("mcp_servers")
}
}


@@ -24,3 +24,14 @@ impl From<DashboardError> for ServerFnError {
ServerFnError::new(err.to_string())
}
}
#[cfg(feature = "server")]
impl axum::response::IntoResponse for DashboardError {
fn into_response(self) -> axum::response::Response {
(
axum::http::StatusCode::INTERNAL_SERVER_ERROR,
self.to_string(),
)
.into_response()
}
}


@@ -121,10 +121,7 @@ pub async fn fetch_file_content(
}
#[server]
pub async fn search_nodes(
repo_id: String,
query: String,
) -> Result<SearchResponse, ServerFnError> {
pub async fn search_nodes(repo_id: String, query: String) -> Result<SearchResponse, ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!(


@@ -0,0 +1,54 @@
/// Keycloak OpenID Connect settings.
#[derive(Debug)]
pub struct KeycloakConfig {
pub url: String,
pub realm: String,
pub client_id: String,
pub redirect_uri: String,
pub app_url: String,
}
impl KeycloakConfig {
pub fn from_env() -> Option<Self> {
let url = std::env::var("KEYCLOAK_URL").ok()?;
let realm = std::env::var("KEYCLOAK_REALM").ok()?;
let client_id = std::env::var("KEYCLOAK_CLIENT_ID").ok()?;
let redirect_uri = std::env::var("REDIRECT_URI").ok()?;
let app_url = std::env::var("APP_URL").ok()?;
Some(Self {
url,
realm,
client_id,
redirect_uri,
app_url,
})
}
pub fn auth_endpoint(&self) -> String {
format!(
"{}/realms/{}/protocol/openid-connect/auth",
self.url, self.realm
)
}
pub fn token_endpoint(&self) -> String {
format!(
"{}/realms/{}/protocol/openid-connect/token",
self.url, self.realm
)
}
pub fn userinfo_endpoint(&self) -> String {
format!(
"{}/realms/{}/protocol/openid-connect/userinfo",
self.url, self.realm
)
}
pub fn logout_endpoint(&self) -> String {
format!(
"{}/realms/{}/protocol/openid-connect/logout",
self.url, self.realm
)
}
}
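All four endpoint helpers above join `url`, `realm`, and a standard OIDC path segment. A stdlib-only sketch of the pattern, with hypothetical config values (note the helpers assume `KEYCLOAK_URL` has no trailing slash, or the result would contain `//realms/`):

```rust
// Minimal stand-in for KeycloakConfig; field values are hypothetical.
struct Kc {
    url: String,
    realm: String,
}

impl Kc {
    // Same join pattern as auth_endpoint()/token_endpoint() above,
    // parameterized over the final path segment.
    fn endpoint(&self, leaf: &str) -> String {
        format!(
            "{}/realms/{}/protocol/openid-connect/{}",
            self.url, self.realm, leaf
        )
    }
}

fn main() {
    let kc = Kc {
        url: "https://kc.example.com".into(),
        realm: "compliance".into(),
    };
    assert_eq!(
        kc.endpoint("token"),
        "https://kc.example.com/realms/compliance/protocol/openid-connect/token"
    );
}
```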


@@ -0,0 +1,160 @@
use dioxus::prelude::*;
use serde::{Deserialize, Serialize};
use compliance_core::models::McpServerConfig;
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct McpServersResponse {
pub data: Vec<McpServerConfig>,
}
#[server]
pub async fn fetch_mcp_servers() -> Result<McpServersResponse, ServerFnError> {
use mongodb::bson::doc;
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let mut cursor = state
.db
.mcp_servers()
.find(doc! {})
.sort(doc! { "created_at": -1 })
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let mut data = Vec::new();
while cursor
.advance()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?
{
let server = cursor
.deserialize_current()
.map_err(|e| ServerFnError::new(e.to_string()))?;
data.push(server);
}
Ok(McpServersResponse { data })
}
#[server]
pub async fn add_mcp_server(
name: String,
endpoint_url: String,
transport: String,
port: String,
description: String,
mongodb_uri: String,
mongodb_database: String,
) -> Result<(), ServerFnError> {
use chrono::Utc;
use compliance_core::models::{McpServerStatus, McpTransport};
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let transport = match transport.as_str() {
"http" => McpTransport::Http,
_ => McpTransport::Stdio,
};
let port_num: Option<u16> = port.parse().ok();
// Generate a random access token
let token = format!("mcp_{}", uuid::Uuid::new_v4().to_string().replace('-', ""));
let all_tools = vec![
"list_findings".to_string(),
"get_finding".to_string(),
"findings_summary".to_string(),
"list_sbom_packages".to_string(),
"sbom_vuln_report".to_string(),
"list_dast_findings".to_string(),
"dast_scan_summary".to_string(),
];
let now = Utc::now();
let server = McpServerConfig {
id: None,
name,
endpoint_url,
transport,
port: port_num,
status: McpServerStatus::Stopped,
access_token: token,
tools_enabled: all_tools,
description: if description.is_empty() {
None
} else {
Some(description)
},
mongodb_uri: if mongodb_uri.is_empty() {
None
} else {
Some(mongodb_uri)
},
mongodb_database: if mongodb_database.is_empty() {
None
} else {
Some(mongodb_database)
},
created_at: now,
updated_at: now,
};
state
.db
.mcp_servers()
.insert_one(server)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(())
}
#[server]
pub async fn delete_mcp_server(server_id: String) -> Result<(), ServerFnError> {
use mongodb::bson::doc;
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let oid = bson::oid::ObjectId::parse_str(&server_id)
.map_err(|e| ServerFnError::new(e.to_string()))?;
state
.db
.mcp_servers()
.delete_one(doc! { "_id": oid })
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(())
}
#[server]
pub async fn regenerate_mcp_token(server_id: String) -> Result<String, ServerFnError> {
use chrono::Utc;
use mongodb::bson::doc;
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let oid = bson::oid::ObjectId::parse_str(&server_id)
.map_err(|e| ServerFnError::new(e.to_string()))?;
let new_token = format!("mcp_{}", uuid::Uuid::new_v4().to_string().replace('-', ""));
state
.db
.mcp_servers()
.update_one(
doc! { "_id": oid },
doc! { "$set": { "access_token": &new_token, "updated_at": Utc::now().to_rfc3339() } },
)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(new_token)
}
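Both token sites above build tokens as `mcp_` plus a UUIDv4 with its hyphens stripped, giving a fixed-width opaque string. A sketch of the resulting shape, using a fixed hypothetical UUID string in place of `uuid::Uuid::new_v4()`:

```rust
fn main() {
    // A canonical UUID is 36 chars: 32 hex digits plus 4 hyphens.
    let uuid = "550e8400-e29b-41d4-a716-446655440000";
    let token = format!("mcp_{}", uuid.replace('-', ""));
    // "mcp_" prefix + 32 hex chars = 36 chars total.
    assert!(token.starts_with("mcp_"));
    assert_eq!(token.len(), 36);
}
```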


@@ -1,9 +1,12 @@
// Server function modules (compiled for both web and server;
// the #[server] macro generates client stubs for the web target)
pub mod auth_check;
pub mod chat;
pub mod dast;
pub mod findings;
pub mod graph;
pub mod issues;
pub mod mcp;
pub mod repositories;
pub mod sbom;
pub mod scans;
@@ -11,15 +14,27 @@ pub mod stats;
// Server-only modules
#[cfg(feature = "server")]
mod auth;
#[cfg(feature = "server")]
mod auth_middleware;
#[cfg(feature = "server")]
pub mod config;
#[cfg(feature = "server")]
pub mod database;
#[cfg(feature = "server")]
pub mod error;
#[cfg(feature = "server")]
pub mod server;
pub mod keycloak_config;
#[cfg(feature = "server")]
mod server;
#[cfg(feature = "server")]
pub mod server_state;
#[cfg(feature = "server")]
mod user_state;
#[cfg(feature = "server")]
pub use auth::{auth_callback, auth_login, logout, PendingOAuthStore};
#[cfg(feature = "server")]
pub use auth_middleware::require_auth;
#[cfg(feature = "server")]
pub use server::server_start;


@@ -61,6 +61,29 @@ pub async fn add_repository(
Ok(())
}
#[server]
pub async fn delete_repository(repo_id: String) -> Result<(), ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!("{}/api/v1/repositories/{repo_id}", state.agent_api_url);
let client = reqwest::Client::new();
let resp = client
.delete(&url)
.send()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
if !resp.status().is_success() {
let body = resp.text().await.unwrap_or_default();
return Err(ServerFnError::new(format!(
"Failed to delete repository: {body}"
)));
}
Ok(())
}
#[server]
pub async fn trigger_repo_scan(repo_id: String) -> Result<(), ServerFnError> {
let state: super::server_state::ServerState =


@@ -1,27 +1,202 @@
use dioxus::prelude::*;
use serde::{Deserialize, Serialize};
use compliance_core::models::SbomEntry;
// ── Local types (no bson dependency, WASM-safe) ──
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct VulnRefData {
pub id: String,
pub source: String,
pub severity: Option<String>,
pub url: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct SbomEntryData {
#[serde(rename = "_id", default)]
pub id: Option<serde_json::Value>,
pub repo_id: String,
pub name: String,
pub version: String,
pub package_manager: String,
pub license: Option<String>,
pub purl: Option<String>,
#[serde(default)]
pub known_vulnerabilities: Vec<VulnRefData>,
#[serde(default)]
pub created_at: Option<serde_json::Value>,
#[serde(default)]
pub updated_at: Option<serde_json::Value>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct SbomListResponse {
pub data: Vec<SbomEntryData>,
pub total: Option<u64>,
pub page: Option<u64>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct LicenseSummaryData {
pub license: String,
pub count: u64,
pub is_copyleft: bool,
pub packages: Vec<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct LicenseSummaryResponse {
pub data: Vec<LicenseSummaryData>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct SbomDiffEntryData {
pub name: String,
pub version: String,
pub package_manager: String,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct SbomVersionDiffData {
pub name: String,
pub package_manager: String,
pub version_a: String,
pub version_b: String,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct SbomDiffResultData {
pub only_in_a: Vec<SbomDiffEntryData>,
pub only_in_b: Vec<SbomDiffEntryData>,
pub version_changed: Vec<SbomVersionDiffData>,
pub common_count: u64,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct SbomDiffResponse {
pub data: SbomDiffResultData,
}
// ── Server functions ──
#[server]
pub async fn fetch_sbom_filtered(
repo_id: Option<String>,
package_manager: Option<String>,
q: Option<String>,
has_vulns: Option<bool>,
license: Option<String>,
page: u64,
) -> Result<SbomListResponse, ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let mut params = vec![format!("page={page}"), "limit=50".to_string()];
if let Some(r) = &repo_id {
if !r.is_empty() {
params.push(format!("repo_id={r}"));
}
}
if let Some(pm) = &package_manager {
if !pm.is_empty() {
params.push(format!("package_manager={pm}"));
}
}
if let Some(q) = &q {
if !q.is_empty() {
params.push(format!("q={}", q.replace(' ', "%20")));
}
}
if let Some(hv) = has_vulns {
params.push(format!("has_vulns={hv}"));
}
if let Some(l) = &license {
if !l.is_empty() {
params.push(format!("license={}", l.replace(' ', "%20")));
}
}
let url = format!("{}/api/v1/sbom?{}", state.agent_api_url, params.join("&"));
let resp = reqwest::get(&url)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let text = resp
.text()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let body: SbomListResponse = serde_json::from_str(&text)
.map_err(|e| ServerFnError::new(format!("Parse error: {e} — body: {text}")))?;
Ok(body)
}
#[server]
pub async fn fetch_sbom_export(repo_id: String, format: String) -> Result<String, ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!(
"{}/api/v1/sbom/export?repo_id={}&format={}",
state.agent_api_url, repo_id, format
);
let resp = reqwest::get(&url)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let text = resp
.text()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(text)
}
#[server]
pub async fn fetch_license_summary(
repo_id: Option<String>,
) -> Result<LicenseSummaryResponse, ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let mut url = format!("{}/api/v1/sbom/licenses", state.agent_api_url);
if let Some(r) = &repo_id {
if !r.is_empty() {
url = format!("{url}?repo_id={r}");
}
}
let resp = reqwest::get(&url)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let text = resp
.text()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let body: LicenseSummaryResponse = serde_json::from_str(&text)
.map_err(|e| ServerFnError::new(format!("Parse error: {e} — body: {text}")))?;
Ok(body)
}
#[server]
pub async fn fetch_sbom_diff(
repo_a: String,
repo_b: String,
) -> Result<SbomDiffResponse, ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!(
"{}/api/v1/sbom/diff?repo_a={}&repo_b={}",
state.agent_api_url, repo_a, repo_b
);
let resp = reqwest::get(&url)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let text = resp
.text()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let body: SbomDiffResponse = serde_json::from_str(&text)
.map_err(|e| ServerFnError::new(format!("Parse error: {e} — body: {text}")))?;
Ok(body)
}
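Note that `fetch_sbom_filtered` above escapes only spaces (`q.replace(' ', "%20")`), so other URL-reserved characters in a search term or license name would pass through unescaped. A minimal stdlib-only sketch of a fuller query-value encoder (a hypothetical helper, not something the codebase currently contains):

```rust
/// Percent-encode a query-string value, keeping only RFC 3986 unreserved
/// characters literal. Everything else becomes %XX.
fn encode_query_value(raw: &str) -> String {
    let mut out = String::with_capacity(raw.len());
    for b in raw.bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                out.push(b as char)
            }
            _ => out.push_str(&format!("%{b:02X}")),
        }
    }
    out
}

fn main() {
    // Spaces and reserved characters are both escaped, unlike a bare replace.
    assert_eq!(encode_query_value("left pad"), "left%20pad");
    assert_eq!(encode_query_value("a&b=c"), "a%26b%3Dc");
}
```

In practice a crate such as `percent-encoding` (or reqwest's own `query()` builder) would do this; the sketch just shows the rule the space-only replace approximates.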

View File

@@ -1,9 +1,15 @@
use axum::routing::get;
use axum::{middleware, Extension};
use dioxus::prelude::*;
use time::Duration;
use tower_sessions::{cookie::Key, MemoryStore, SessionManagerLayer};
use super::config;
use super::database::Database;
use super::error::DashboardError;
use super::keycloak_config::KeycloakConfig;
use super::server_state::{ServerState, ServerStateInner};
use super::{auth_callback, auth_login, logout, require_auth, PendingOAuthStore};
pub fn server_start(app: fn() -> Element) -> Result<(), DashboardError> {
tokio::runtime::Runtime::new()
@@ -12,16 +18,35 @@ pub fn server_start(app: fn() -> Element) -> Result<(), DashboardError> {
dotenvy::dotenv().ok();
let config = config::load_config()?;
let keycloak: Option<&'static KeycloakConfig> =
KeycloakConfig::from_env().map(|kc| &*Box::leak(Box::new(kc)));
let db = Database::connect(&config.mongodb_uri, &config.mongodb_database).await?;
if let Some(kc) = keycloak {
tracing::info!("Keycloak configured for realm '{}'", kc.realm);
} else {
tracing::warn!("Keycloak not configured - dashboard is unprotected");
}
let server_state: ServerState = ServerStateInner {
agent_api_url: config.agent_api_url.clone(),
db,
config,
keycloak,
}
.into();
// Session layer
let key = Key::generate();
let store = MemoryStore::default();
let session = SessionManagerLayer::new(store)
.with_secure(false)
.with_same_site(tower_sessions::cookie::SameSite::Lax)
.with_expiry(tower_sessions::Expiry::OnInactivity(Duration::hours(24)))
.with_signed(key);
let port = dioxus_cli_config::server_port().unwrap_or(8080);
let addr = std::net::SocketAddr::from(([0, 0, 0, 0], port));
let listener = tokio::net::TcpListener::bind(addr)
.await
.map_err(|e| DashboardError::Other(format!("Failed to bind: {e}")))?;
@@ -29,8 +54,14 @@ pub fn server_start(app: fn() -> Element) -> Result<(), DashboardError> {
tracing::info!("Dashboard server listening on {addr}");
let router = axum::Router::new()
.route("/auth", get(auth_login))
.route("/auth/callback", get(auth_callback))
.route("/logout", get(logout))
.serve_dioxus_application(ServeConfig::new(), app)
.layer(Extension(PendingOAuthStore::default()))
.layer(middleware::from_fn(require_auth))
.layer(Extension(server_state))
.layer(session);
axum::serve(listener, router.into_make_service())
.await

View File

@@ -4,6 +4,7 @@ use std::sync::Arc;
use compliance_core::DashboardConfig;
use super::database::Database;
use super::keycloak_config::KeycloakConfig;
#[derive(Clone)]
pub struct ServerState(Arc<ServerStateInner>);
@@ -19,6 +20,7 @@ pub struct ServerStateInner {
pub db: Database,
pub config: DashboardConfig,
pub agent_api_url: String,
pub keycloak: Option<&'static KeycloakConfig>,
}
impl From<ServerStateInner> for ServerState {

View File

@@ -0,0 +1,18 @@
use serde::{Deserialize, Serialize};
/// Per-session user data stored in the tower-sessions session store.
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct UserStateInner {
pub sub: String,
pub access_token: String,
pub refresh_token: String,
pub user: User,
}
/// Basic user profile stored alongside the session.
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct User {
pub email: String,
pub name: String,
pub avatar_url: String,
}

View File

@@ -0,0 +1,288 @@
use dioxus::prelude::*;
use crate::components::page_header::PageHeader;
use crate::infrastructure::chat::{
fetch_embedding_status, send_chat_message, trigger_embedding_build, ChatHistoryMessage,
SourceRef,
};
/// A UI-level chat message
#[derive(Clone, Debug)]
struct UiChatMessage {
role: String,
content: String,
sources: Vec<SourceRef>,
}
#[component]
pub fn ChatPage(repo_id: String) -> Element {
let mut messages: Signal<Vec<UiChatMessage>> = use_signal(Vec::new);
let mut input_text = use_signal(String::new);
let mut loading = use_signal(|| false);
let mut building = use_signal(|| false);
let repo_id_for_status = repo_id.clone();
let mut embedding_status = use_resource(move || {
let rid = repo_id_for_status.clone();
async move { fetch_embedding_status(rid).await.ok() }
});
let has_embeddings = {
let status = embedding_status.read();
match &*status {
Some(Some(resp)) => resp
.data
.as_ref()
.map(|d| d.status == "completed")
.unwrap_or(false),
_ => false,
}
};
let is_running = {
let status = embedding_status.read();
match &*status {
Some(Some(resp)) => resp
.data
.as_ref()
.map(|d| d.status == "running")
.unwrap_or(false),
_ => false,
}
};
let embed_progress = {
let status = embedding_status.read();
match &*status {
Some(Some(resp)) => resp
.data
.as_ref()
.map(|d| {
if d.total_chunks > 0 {
(d.embedded_chunks as f64 / d.total_chunks as f64 * 100.0) as u32
} else {
0
}
})
.unwrap_or(0),
_ => 0,
}
};
let embedding_status_text = {
let status = embedding_status.read();
match &*status {
Some(Some(resp)) => match &resp.data {
Some(d) => match d.status.as_str() {
"completed" => format!(
"Embeddings ready: {}/{} chunks",
d.embedded_chunks, d.total_chunks
),
"running" => format!(
"Building embeddings: {}/{} chunks ({}%)",
d.embedded_chunks, d.total_chunks, embed_progress
),
"failed" => format!(
"Embedding build failed: {}",
d.error_message.as_deref().unwrap_or("unknown error")
),
s => format!("Status: {s}"),
},
None => "No embeddings built yet".to_string(),
},
Some(None) => "Failed to check embedding status".to_string(),
None => "Checking embedding status...".to_string(),
}
};
// Auto-poll embedding status every 3s while building/running
use_effect(move || {
if is_running || *building.read() {
spawn(async move {
#[cfg(feature = "web")]
gloo_timers::future::TimeoutFuture::new(3_000).await;
#[cfg(not(feature = "web"))]
tokio::time::sleep(std::time::Duration::from_secs(3)).await;
embedding_status.restart();
});
}
});
let repo_id_for_build = repo_id.clone();
let on_build = move |_| {
let rid = repo_id_for_build.clone();
building.set(true);
spawn(async move {
let _ = trigger_embedding_build(rid).await;
building.set(false);
embedding_status.restart();
});
};
let repo_id_for_send = repo_id.clone();
let mut do_send = move || {
let text = input_text.read().trim().to_string();
if text.is_empty() || *loading.read() {
return;
}
let rid = repo_id_for_send.clone();
let user_msg = text.clone();
// Add user message to UI
messages.write().push(UiChatMessage {
role: "user".to_string(),
content: user_msg.clone(),
sources: Vec::new(),
});
input_text.set(String::new());
loading.set(true);
spawn(async move {
// Build history from existing messages
let history: Vec<ChatHistoryMessage> = messages
.read()
.iter()
.filter(|m| m.role == "user" || m.role == "assistant")
.rev()
.skip(1) // skip the message we just added
.take(10) // limit history
.collect::<Vec<_>>()
.into_iter()
.rev()
.map(|m| ChatHistoryMessage {
role: m.role.clone(),
content: m.content.clone(),
})
.collect();
match send_chat_message(rid, user_msg, history).await {
Ok(resp) => {
messages.write().push(UiChatMessage {
role: "assistant".to_string(),
content: resp.data.message,
sources: resp.data.sources,
});
}
Err(e) => {
messages.write().push(UiChatMessage {
role: "assistant".to_string(),
content: format!("Error: {e}"),
sources: Vec::new(),
});
}
}
loading.set(false);
});
};
let mut do_send_click = do_send.clone();
rsx! {
PageHeader { title: "AI Chat" }
// Embedding status banner
div { class: if is_running || *building.read() { "chat-embedding-banner chat-embedding-building" } else { "chat-embedding-banner" },
div { class: "chat-embedding-status",
if is_running || *building.read() {
span { class: "chat-spinner" }
}
span { "{embedding_status_text}" }
}
if is_running || *building.read() {
div { class: "chat-progress-bar",
div {
class: "chat-progress-fill",
style: "width: {embed_progress}%;",
}
}
}
button {
class: "btn btn-sm",
disabled: *building.read() || is_running,
onclick: on_build,
if *building.read() || is_running { "Building..." } else { "Build Embeddings" }
}
}
div { class: "chat-container",
// Message list
div { class: "chat-messages",
if messages.read().is_empty() && !*loading.read() {
div { class: "chat-empty",
h3 { "Ask anything about your codebase" }
p { "Build embeddings first, then ask questions about functions, architecture, patterns, and more." }
}
}
for (i, msg) in messages.read().iter().enumerate() {
{
let class = if msg.role == "user" {
"chat-message chat-message-user"
} else {
"chat-message chat-message-assistant"
};
let content = msg.content.clone();
let sources = msg.sources.clone();
rsx! {
div { class: class, key: "{i}",
div { class: "chat-message-role",
if msg.role == "user" { "You" } else { "Assistant" }
}
div { class: "chat-message-content", "{content}" }
if !sources.is_empty() {
div { class: "chat-sources",
span { class: "chat-sources-label", "Sources:" }
for src in sources {
div { class: "chat-source-card",
div { class: "chat-source-header",
span { class: "chat-source-name",
"{src.qualified_name}"
}
span { class: "chat-source-location",
"{src.file_path}:{src.start_line}-{src.end_line}"
}
}
pre { class: "chat-source-snippet",
code { "{src.snippet}" }
}
}
}
}
}
}
}
}
}
if *loading.read() {
div { class: "chat-message chat-message-assistant",
div { class: "chat-message-role", "Assistant" }
div { class: "chat-message-content chat-typing", "Thinking..." }
}
}
}
// Input area
div { class: "chat-input-area",
textarea {
class: "chat-input",
placeholder: "Ask about your codebase...",
value: "{input_text}",
disabled: !has_embeddings,
oninput: move |e| input_text.set(e.value()),
onkeydown: move |e: Event<KeyboardData>| {
if e.key() == Key::Enter && !e.modifiers().shift() {
e.prevent_default();
do_send();
}
},
}
button {
class: "btn chat-send-btn",
disabled: *loading.read() || !has_embeddings,
onclick: move |_| do_send_click(),
"Send"
}
}
}
}
}
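The history construction inside `do_send` walks the message list backwards, skips the message just appended, caps the window at ten entries, then restores chronological order. That iterator chain can be exercised in isolation (a standalone sketch over plain tuples; the real code maps into `ChatHistoryMessage`):

```rust
/// Mirror of the history-window logic in `do_send`: keep at most the 10
/// messages preceding the one that was just pushed, oldest first.
fn history_window(messages: &[(&str, &str)]) -> Vec<String> {
    messages
        .iter()
        .filter(|(role, _)| *role == "user" || *role == "assistant")
        .rev()
        .skip(1) // skip the message we just added
        .take(10) // limit history
        .collect::<Vec<_>>()
        .into_iter()
        .rev()
        .map(|(_, content)| content.to_string())
        .collect()
}

fn main() {
    let msgs = [("user", "m1"), ("assistant", "m2"), ("user", "m3")];
    // "m3" was just appended, so the window is everything before it.
    assert_eq!(history_window(&msgs), vec!["m1", "m2"]);
}
```

The double `rev()` is what keeps the *most recent* ten messages rather than the oldest ten while still sending them to the model in chronological order.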

View File

@@ -0,0 +1,70 @@
use dioxus::prelude::*;
use crate::app::Route;
use crate::components::page_header::PageHeader;
use crate::infrastructure::repositories::fetch_repositories;
#[component]
pub fn ChatIndexPage() -> Element {
let repos = use_resource(|| async { fetch_repositories(1).await.ok() });
rsx! {
PageHeader {
title: "AI Chat",
description: "Ask questions about your codebase using RAG-augmented AI",
}
match &*repos.read() {
Some(Some(data)) => {
let repo_list = &data.data;
if repo_list.is_empty() {
rsx! {
div { class: "card",
p { "No repositories found. Add a repository first." }
}
}
} else {
rsx! {
div { class: "graph-index-grid",
for repo in repo_list {
{
let repo_id = repo.id.map(|id| id.to_hex()).unwrap_or_default();
let name = repo.name.clone();
let url = repo.git_url.clone();
let branch = repo.default_branch.clone();
rsx! {
Link {
to: Route::ChatPage { repo_id },
class: "graph-repo-card",
div { class: "graph-repo-card-header",
div { class: "graph-repo-card-icon", "\u{1F4AC}" }
h3 { class: "graph-repo-card-name", "{name}" }
}
if !url.is_empty() {
p { class: "graph-repo-card-url", "{url}" }
}
div { class: "graph-repo-card-meta",
span { class: "graph-repo-card-tag",
"\u{E0A0} {branch}"
}
span { class: "graph-repo-card-tag",
"AI Chat"
}
}
}
}
}
}
}
}
}
},
Some(None) => rsx! {
div { class: "card", p { "Failed to load repositories." } }
},
None => rsx! {
div { class: "loading", "Loading repositories..." }
},
}
}
}

View File

@@ -49,7 +49,7 @@ pub fn DastFindingsPage() -> Element {
}
td {
Link {
to: Route::DastFindingDetailPage { id },
"{finding.get(\"title\").and_then(|v| v.as_str()).unwrap_or(\"-\")}"
}
}

View File

@@ -14,7 +14,9 @@ pub fn FindingsPage() -> Element {
let mut repo_filter = use_signal(String::new);
let repos = use_resource(|| async {
crate::infrastructure::repositories::fetch_repositories(1)
.await
.ok()
});
let findings = use_resource(move || {

View File

@@ -27,13 +27,13 @@ pub fn GraphExplorerPage(repo_id: String) -> Element {
let mut inspector_open = use_signal(|| false);
// Search state
let mut search_query = use_signal(String::new);
let mut search_results = use_signal(Vec::<serde_json::Value>::new);
let mut file_filter = use_signal(String::new);
// Store serialized graph JSON in signals so use_effect can react to them
let mut nodes_json = use_signal(String::new);
let mut edges_json = use_signal(String::new);
let mut graph_ready = use_signal(|| false);
// When resource resolves, serialize the data into signals
@@ -404,7 +404,7 @@ pub fn GraphExplorerPage(repo_id: String) -> Element {
} else if node_count > 0 {
// Data exists but nodes array was empty (shouldn't happen)
div { class: "loading", "Loading graph visualization..." }
} else if (*graph_data.read()).is_none() {
div { class: "loading", "Loading graph data..." }
} else {
div { class: "graph-empty-state",

View File

@@ -0,0 +1,328 @@
use dioxus::prelude::*;
use crate::components::page_header::PageHeader;
use crate::components::toast::{ToastType, Toasts};
use crate::infrastructure::mcp::{
add_mcp_server, delete_mcp_server, fetch_mcp_servers, regenerate_mcp_token,
};
#[component]
pub fn McpServersPage() -> Element {
let mut servers = use_resource(|| async { fetch_mcp_servers().await.ok() });
let mut toasts = use_context::<Toasts>();
let mut show_form = use_signal(|| false);
let mut new_name = use_signal(String::new);
let mut new_endpoint = use_signal(String::new);
let mut new_transport = use_signal(|| "http".to_string());
let mut new_port = use_signal(|| "8090".to_string());
let mut new_description = use_signal(String::new);
let mut new_mongo_uri = use_signal(String::new);
let mut new_mongo_db = use_signal(String::new);
// Track which server's token is visible
let mut visible_token: Signal<Option<String>> = use_signal(|| None);
// Track which server is pending delete confirmation
let mut confirm_delete: Signal<Option<(String, String)>> = use_signal(|| None);
rsx! {
PageHeader {
title: "MCP Servers",
description: "Manage Model Context Protocol servers for LLM integrations",
}
div { class: "mb-4",
button {
class: "btn btn-primary",
onclick: move |_| show_form.set(!show_form()),
if show_form() { "Cancel" } else { "Register Server" }
}
}
if show_form() {
div { class: "card mb-4",
div { class: "card-header", "Register MCP Server" }
div { class: "mcp-form-grid",
div { class: "form-group",
label { "Name" }
input {
r#type: "text",
placeholder: "Production MCP",
value: "{new_name}",
oninput: move |e| new_name.set(e.value()),
}
}
div { class: "form-group",
label { "Endpoint URL" }
input {
r#type: "text",
placeholder: "https://mcp.example.com/mcp",
value: "{new_endpoint}",
oninput: move |e| new_endpoint.set(e.value()),
}
}
div { class: "form-group",
label { "Transport" }
select {
value: "{new_transport}",
oninput: move |e| new_transport.set(e.value()),
option { value: "http", "HTTP (Streamable)" }
option { value: "stdio", "Stdio" }
}
}
div { class: "form-group",
label { "Port" }
input {
r#type: "text",
placeholder: "8090",
value: "{new_port}",
oninput: move |e| new_port.set(e.value()),
}
}
div { class: "form-group",
label { "MongoDB URI" }
input {
r#type: "text",
placeholder: "mongodb://localhost:27017",
value: "{new_mongo_uri}",
oninput: move |e| new_mongo_uri.set(e.value()),
}
}
div { class: "form-group",
label { "Database Name" }
input {
r#type: "text",
placeholder: "compliance_scanner",
value: "{new_mongo_db}",
oninput: move |e| new_mongo_db.set(e.value()),
}
}
}
div { class: "form-group",
label { "Description" }
input {
r#type: "text",
placeholder: "Optional notes about this server",
value: "{new_description}",
oninput: move |e| new_description.set(e.value()),
}
}
button {
class: "btn btn-primary",
onclick: move |_| {
let name = new_name();
let endpoint = new_endpoint();
let transport = new_transport();
let port = new_port();
let desc = new_description();
let mongo_uri = new_mongo_uri();
let mongo_db = new_mongo_db();
spawn(async move {
match add_mcp_server(name, endpoint, transport, port, desc, mongo_uri, mongo_db).await {
Ok(_) => {
toasts.push(ToastType::Success, "MCP server registered");
servers.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
});
show_form.set(false);
new_name.set(String::new());
new_endpoint.set(String::new());
new_transport.set("http".to_string());
new_port.set("8090".to_string());
new_description.set(String::new());
new_mongo_uri.set(String::new());
new_mongo_db.set(String::new());
},
"Register"
}
}
}
// Delete confirmation modal
if let Some((ref del_id, ref del_name)) = *confirm_delete.read() {
div { class: "modal-overlay",
onclick: move |_| confirm_delete.set(None),
div { class: "modal-dialog",
onclick: move |e| e.stop_propagation(),
h3 { "Delete MCP Server" }
p { "Are you sure you want to remove " strong { "{del_name}" } "?" }
p { class: "text-secondary", "Connected LLM clients will lose access." }
div { class: "modal-actions",
button {
class: "btn btn-ghost",
onclick: move |_| confirm_delete.set(None),
"Cancel"
}
button {
class: "btn btn-danger",
onclick: {
let id = del_id.clone();
move |_| {
let id = id.clone();
spawn(async move {
match delete_mcp_server(id).await {
Ok(_) => {
toasts.push(ToastType::Success, "Server removed");
servers.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
});
confirm_delete.set(None);
}
},
"Delete"
}
}
}
}
}
match &*servers.read() {
Some(Some(resp)) => {
if resp.data.is_empty() {
rsx! {
div { class: "card",
p { class: "text-secondary", "No MCP servers registered. Add one to get started." }
}
}
} else {
rsx! {
for server in resp.data.iter() {
{
let sid = server.id.map(|id| id.to_hex()).unwrap_or_default();
let name = server.name.clone();
let status_class = match server.status {
compliance_core::models::McpServerStatus::Running => "mcp-status-running",
compliance_core::models::McpServerStatus::Stopped => "mcp-status-stopped",
compliance_core::models::McpServerStatus::Error => "mcp-status-error",
};
let is_token_visible = visible_token().as_deref() == Some(sid.as_str());
let created_str = server.created_at.format("%Y-%m-%d %H:%M").to_string();
rsx! {
div { class: "card mcp-server-card mb-4",
div { class: "mcp-server-header",
div { class: "mcp-server-title",
h3 { "{server.name}" }
span { class: "mcp-status {status_class}",
"{server.status}"
}
}
div { class: "mcp-server-actions",
button {
class: "btn btn-sm btn-ghost",
title: "Delete server",
onclick: {
let id = sid.clone();
let name = name.clone();
move |_| {
confirm_delete.set(Some((id.clone(), name.clone())));
}
},
"Delete"
}
}
}
if let Some(ref desc) = server.description {
p { class: "text-secondary mb-3", "{desc}" }
}
div { class: "mcp-config-grid",
div { class: "mcp-config-item",
span { class: "mcp-config-label", "Endpoint" }
code { class: "mcp-config-value", "{server.endpoint_url}" }
}
div { class: "mcp-config-item",
span { class: "mcp-config-label", "Transport" }
span { class: "mcp-config-value", "{server.transport}" }
}
if let Some(port) = server.port {
div { class: "mcp-config-item",
span { class: "mcp-config-label", "Port" }
span { class: "mcp-config-value", "{port}" }
}
}
if let Some(ref db) = server.mongodb_database {
div { class: "mcp-config-item",
span { class: "mcp-config-label", "Database" }
span { class: "mcp-config-value", "{db}" }
}
}
}
div { class: "mcp-tools-section",
span { class: "mcp-config-label", "Enabled Tools" }
div { class: "mcp-tools-list",
for tool in server.tools_enabled.iter() {
span { class: "mcp-tool-badge", "{tool}" }
}
}
}
div { class: "mcp-token-section",
span { class: "mcp-config-label", "Access Token" }
div { class: "mcp-token-row",
code { class: "mcp-token-value",
if is_token_visible {
"{server.access_token}"
} else {
"mcp_••••••••••••••••••••••••••••"
}
}
button {
class: "btn btn-sm btn-ghost",
onclick: {
let id = sid.clone();
move |_| {
if visible_token().as_deref() == Some(id.as_str()) {
visible_token.set(None);
} else {
visible_token.set(Some(id.clone()));
}
}
},
if is_token_visible { "Hide" } else { "Reveal" }
}
button {
class: "btn btn-sm btn-ghost",
onclick: {
let id = sid.clone();
move |_| {
let id = id.clone();
spawn(async move {
match regenerate_mcp_token(id).await {
Ok(_) => {
toasts.push(ToastType::Success, "Token regenerated");
servers.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
});
}
},
"Regenerate"
}
}
}
div { class: "mcp-meta",
span { class: "text-secondary",
"Created {created_str}"
}
}
}
}
}
}
}
}
},
Some(None) => rsx! { div { class: "card", p { "Failed to load MCP servers." } } },
None => rsx! { div { class: "card", p { "Loading..." } } },
}
}
}
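The Reveal/Hide button above stores at most one visible token id in `visible_token`: clicking the server whose token is already shown hides it, clicking any other server shows that one instead. The toggle rule, extracted as a pure function (a sketch mirroring the onclick closure, not code from the repo):

```rust
/// Toggle semantics for the token Reveal/Hide button: at most one server's
/// token is visible at a time.
fn toggle_visible_token(visible: Option<String>, clicked: &str) -> Option<String> {
    if visible.as_deref() == Some(clicked) {
        None // clicking the currently revealed server hides it
    } else {
        Some(clicked.to_string()) // otherwise reveal the clicked server
    }
}

fn main() {
    assert_eq!(toggle_visible_token(None, "srv1"), Some("srv1".to_string()));
    assert_eq!(toggle_visible_token(Some("srv1".to_string()), "srv1"), None);
    assert_eq!(
        toggle_visible_token(Some("srv1".to_string()), "srv2"),
        Some("srv2".to_string())
    );
}
```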

View File

@@ -1,3 +1,5 @@
pub mod chat;
pub mod chat_index;
pub mod dast_finding_detail;
pub mod dast_findings;
pub mod dast_overview;
@@ -8,11 +10,14 @@ pub mod graph_explorer;
pub mod graph_index;
pub mod impact_analysis;
pub mod issues;
pub mod mcp_servers;
pub mod overview;
pub mod repositories;
pub mod sbom;
pub mod settings;
pub use chat::ChatPage;
pub use chat_index::ChatIndexPage;
pub use dast_finding_detail::DastFindingDetailPage;
pub use dast_findings::DastFindingsPage;
pub use dast_overview::DastOverviewPage;
@@ -23,6 +28,7 @@ pub use graph_explorer::GraphExplorerPage;
pub use graph_index::GraphIndexPage;
pub use impact_analysis::ImpactAnalysisPage;
pub use issues::IssuesPage;
pub use mcp_servers::McpServersPage;
pub use overview::OverviewPage;
pub use repositories::RepositoriesPage;
pub use sbom::SbomPage;

View File

@@ -13,6 +13,7 @@ pub fn RepositoriesPage() -> Element {
let mut git_url = use_signal(String::new);
let mut branch = use_signal(|| "main".to_string());
let mut toasts = use_context::<Toasts>();
let mut confirm_delete = use_signal(|| Option::<(String, String)>::None); // (id, name)
let mut repos = use_resource(move || {
let p = page();
@@ -91,6 +92,48 @@ pub fn RepositoriesPage() -> Element {
}
}
// ── Delete confirmation dialog ──
if let Some((del_id, del_name)) = confirm_delete() {
div { class: "modal-overlay",
div { class: "modal-dialog",
h3 { "Delete Repository" }
p {
"Are you sure you want to delete "
strong { "{del_name}" }
"?"
}
p { class: "modal-warning",
"This will permanently remove all associated findings, SBOM entries, scan runs, graph data, embeddings, and CVE alerts."
}
div { class: "modal-actions",
button {
class: "btn btn-secondary",
onclick: move |_| confirm_delete.set(None),
"Cancel"
}
button {
class: "btn btn-danger",
onclick: move |_| {
let id = del_id.clone();
let name = del_name.clone();
confirm_delete.set(None);
spawn(async move {
match crate::infrastructure::repositories::delete_repository(id).await {
Ok(_) => {
toasts.push(ToastType::Success, format!("{name} deleted"));
repos.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
});
},
"Delete"
}
}
}
}
}
match &*repos.read() {
Some(Some(resp)) => {
let total_pages = resp.total.unwrap_or(0).div_ceil(20).max(1);
@@ -112,7 +155,9 @@ pub fn RepositoriesPage() -> Element {
for repo in &resp.data {
{
let repo_id = repo.id.as_ref().map(|id| id.to_hex()).unwrap_or_default();
let repo_id_scan = repo_id.clone();
let repo_id_del = repo_id.clone();
let repo_name_del = repo.name.clone();
rsx! {
tr {
td { "{repo.name}" }
@@ -149,7 +194,7 @@ pub fn RepositoriesPage() -> Element {
button {
class: "btn btn-ghost",
onclick: move |_| {
let id = repo_id_scan.clone();
spawn(async move {
match crate::infrastructure::repositories::trigger_repo_scan(id).await {
Ok(_) => toasts.push(ToastType::Success, "Scan triggered"),
@@ -159,6 +204,13 @@ pub fn RepositoriesPage() -> Element {
},
"Scan"
}
button {
class: "btn btn-ghost btn-ghost-danger",
onclick: move |_| {
confirm_delete.set(Some((repo_id_del.clone(), repo_name_del.clone())));
},
"Delete"
}
}
}
}
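The page count in the repositories table comes from `resp.total.unwrap_or(0).div_ceil(20).max(1)`: ceiling division by the page size, clamped so an empty collection still renders one page. The same rule in isolation:

```rust
/// Page-count rule used by the repositories table: ceiling division by the
/// page size, with a floor of one page even when the collection is empty.
fn total_pages(total: u64, per_page: u64) -> u64 {
    total.div_ceil(per_page).max(1)
}

fn main() {
    assert_eq!(total_pages(0, 20), 1); // empty list still shows page 1
    assert_eq!(total_pages(20, 20), 1); // exact multiple: no extra page
    assert_eq!(total_pages(41, 20), 3); // partial last page rounds up
}
```

`u64::div_ceil` (stable since Rust 1.73) avoids the classic `(total + per_page - 1) / per_page` idiom and its overflow edge case.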

View File

@@ -2,60 +2,335 @@ use dioxus::prelude::*;
use crate::components::page_header::PageHeader;
use crate::components::pagination::Pagination;
use crate::infrastructure::sbom::*;
#[component]
pub fn SbomPage() -> Element {
// ── Filter signals ──
let mut page = use_signal(|| 1u64);
let mut repo_filter = use_signal(String::new);
let mut pm_filter = use_signal(String::new);
let mut search_q = use_signal(String::new);
let mut vuln_toggle = use_signal(|| Option::<bool>::None);
let mut license_filter = use_signal(String::new);
// ── Active tab: "packages" | "licenses" | "diff" ──
let mut active_tab = use_signal(|| "packages".to_string());
// ── Vuln drill-down: track expanded row by (name, version) ──
let mut expanded_row = use_signal(|| Option::<String>::None);
// ── Export state ──
let mut show_export = use_signal(|| false);
let mut export_format = use_signal(|| "cyclonedx".to_string());
let mut export_result = use_signal(|| Option::<String>::None);
// ── Diff state ──
let mut diff_repo_a = use_signal(String::new);
let mut diff_repo_b = use_signal(String::new);
// ── Repos for dropdowns ──
let repos = use_resource(|| async {
crate::infrastructure::repositories::fetch_repositories(1)
.await
.ok()
});
// ── SBOM list (filtered) ──
let sbom = use_resource(move || {
let p = page();
let repo = repo_filter();
let pm = pm_filter();
let q = search_q();
let hv = vuln_toggle();
let lic = license_filter();
async move {
fetch_sbom_filtered(
if repo.is_empty() { None } else { Some(repo) },
if pm.is_empty() { None } else { Some(pm) },
if q.is_empty() { None } else { Some(q) },
hv,
if lic.is_empty() { None } else { Some(lic) },
p,
)
.await
.ok()
}
});
// ── License summary ──
let license_data = use_resource(move || {
let repo = repo_filter();
async move {
fetch_license_summary(if repo.is_empty() { None } else { Some(repo) })
.await
.ok()
}
});
// ── Diff data ──
let diff_data = use_resource(move || {
let a = diff_repo_a();
let b = diff_repo_b();
async move {
if a.is_empty() || b.is_empty() {
return None;
}
fetch_sbom_diff(a, b).await.ok()
}
});
rsx! {
PageHeader {
title: "SBOM",
description: "Software Bill of Materials dependency inventory, license compliance, and vulnerability analysis",
}
// ── Tab bar ──
div { class: "sbom-tab-bar",
button {
class: if active_tab() == "packages" { "sbom-tab active" } else { "sbom-tab" },
onclick: move |_| active_tab.set("packages".to_string()),
"Packages"
}
button {
class: if active_tab() == "licenses" { "sbom-tab active" } else { "sbom-tab" },
onclick: move |_| active_tab.set("licenses".to_string()),
"License Compliance"
}
button {
class: if active_tab() == "diff" { "sbom-tab active" } else { "sbom-tab" },
onclick: move |_| active_tab.set("diff".to_string()),
"Compare"
}
}
// ═══════════════ PACKAGES TAB ═══════════════
if active_tab() == "packages" {
// ── Filter bar ──
div { class: "sbom-filter-bar",
select {
class: "sbom-filter-select",
onchange: move |e| { repo_filter.set(e.value()); page.set(1); },
option { value: "", "All Repositories" }
{
match &*repos.read() {
Some(Some(resp)) => rsx! {
for repo in &resp.data {
{
let id = repo.id.as_ref().map(|id| id.to_hex()).unwrap_or_default();
let name = repo.name.clone();
rsx! { option { value: "{id}", "{name}" } }
}
}
},
_ => rsx! {},
}
}
}
select {
class: "sbom-filter-select",
onchange: move |e| { pm_filter.set(e.value()); page.set(1); },
option { value: "", "All Managers" }
option { value: "npm", "npm" }
option { value: "cargo", "Cargo" }
option { value: "pip", "pip" }
option { value: "go", "Go" }
option { value: "maven", "Maven" }
option { value: "nuget", "NuGet" }
option { value: "composer", "Composer" }
option { value: "gem", "RubyGems" }
}
input {
class: "sbom-filter-input",
r#type: "text",
placeholder: "Search packages...",
oninput: move |e| { search_q.set(e.value()); page.set(1); },
}
select {
class: "sbom-filter-select",
onchange: move |e| {
let val = e.value();
vuln_toggle.set(match val.as_str() {
"true" => Some(true),
"false" => Some(false),
_ => None,
});
page.set(1);
},
option { value: "", "All Packages" }
option { value: "true", "With Vulnerabilities" }
option { value: "false", "No Vulnerabilities" }
}
select {
class: "sbom-filter-select",
onchange: move |e| { license_filter.set(e.value()); page.set(1); },
option { value: "", "All Licenses" }
option { value: "MIT", "MIT" }
option { value: "Apache-2.0", "Apache 2.0" }
option { value: "BSD-3-Clause", "BSD 3-Clause" }
option { value: "ISC", "ISC" }
option { value: "GPL-3.0", "GPL 3.0" }
option { value: "GPL-2.0", "GPL 2.0" }
option { value: "LGPL-2.1", "LGPL 2.1" }
option { value: "MPL-2.0", "MPL 2.0" }
}
// ── Export button ──
div { class: "sbom-export-wrapper",
button {
class: "btn btn-secondary sbom-export-btn",
onclick: move |_| show_export.toggle(),
"Export"
}
if show_export() {
div { class: "sbom-export-dropdown",
select {
class: "sbom-filter-select",
value: "{export_format}",
onchange: move |e| export_format.set(e.value()),
option { value: "cyclonedx", "CycloneDX 1.5" }
option { value: "spdx", "SPDX 2.3" }
}
button {
class: "btn btn-primary",
disabled: repo_filter().is_empty(),
onclick: move |_| {
let repo = repo_filter();
let fmt = export_format();
spawn(async move {
match fetch_sbom_export(repo, fmt).await {
Ok(json) => export_result.set(Some(json)),
Err(e) => tracing::error!("Export failed: {e}"),
}
});
},
"Download"
}
if repo_filter().is_empty() {
span { class: "sbom-export-hint", "Select a repo first" }
}
}
}
}
}
// ── Export result display ──
if let Some(json) = export_result() {
div { class: "card sbom-export-result",
div { class: "sbom-export-result-header",
strong { "Exported SBOM" }
button {
class: "btn btn-secondary",
onclick: move |_| export_result.set(None),
"Close"
}
}
pre {
style: "max-height: 400px; overflow: auto; font-size: 12px;",
"{json}"
}
}
}
// ── SBOM table ──
match &*sbom.read() {
Some(Some(resp)) => {
let total_pages = resp.total.unwrap_or(0).div_ceil(50).max(1);
rsx! {
if let Some(total) = resp.total {
div { class: "sbom-result-count",
"{total} package(s) found"
}
}
div { class: "card",
div { class: "table-wrapper",
table {
thead {
tr {
th { "Package" }
th { "Version" }
th { "Manager" }
th { "License" }
th { "Vulnerabilities" }
}
}
tbody {
for entry in &resp.data {
{
let row_key = format!("{}@{}", entry.name, entry.version);
let is_expanded = expanded_row() == Some(row_key.clone());
let has_vulns = !entry.known_vulnerabilities.is_empty();
let license_class = license_css_class(entry.license.as_deref());
let row_key_click = row_key.clone();
rsx! {
tr {
td {
style: "font-weight: 500;",
"{entry.name}"
}
td {
style: "font-family: var(--font-mono, monospace); font-size: 13px;",
"{entry.version}"
}
td { "{entry.package_manager}" }
td {
span { class: "sbom-license-badge {license_class}",
"{entry.license.as_deref().unwrap_or(\"-\")}"
}
}
td {
if has_vulns {
span {
class: "badge badge-high sbom-vuln-toggle",
onclick: move |_| {
let key = row_key_click.clone();
if expanded_row() == Some(key.clone()) {
expanded_row.set(None);
} else {
expanded_row.set(Some(key));
}
},
"{entry.known_vulnerabilities.len()} vuln(s) ▾"
}
} else {
span {
style: "color: var(--success);",
"None"
}
}
}
}
// ── Vulnerability drill-down row ──
if is_expanded && has_vulns {
tr { class: "sbom-vuln-detail-row",
td { colspan: "5",
div { class: "sbom-vuln-detail",
for vuln in &entry.known_vulnerabilities {
div { class: "sbom-vuln-card",
div { class: "sbom-vuln-card-header",
span { class: "sbom-vuln-id", "{vuln.id}" }
span { class: "sbom-vuln-source", "{vuln.source}" }
if let Some(sev) = &vuln.severity {
span {
class: "badge badge-{sev}",
"{sev}"
}
}
}
if let Some(url) = &vuln.url {
a {
href: "{url}",
target: "_blank",
class: "sbom-vuln-link",
"View Advisory →"
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
Pagination {
current_page: page(),
total_pages: total_pages,
on_page_change: move |p| page.set(p),
}
}
},
Some(None) => rsx! {
div { class: "card", p { "Failed to load SBOM." } }
},
None => rsx! {
div { class: "loading", "Loading SBOM..." }
},
}
}
// ═══════════════ LICENSE COMPLIANCE TAB ═══════════════
if active_tab() == "licenses" {
match &*license_data.read() {
Some(Some(resp)) => {
let total_pkgs: u64 = resp.data.iter().map(|l| l.count).sum();
let has_copyleft = resp.data.iter().any(|l| l.is_copyleft);
let copyleft_items: Vec<_> = resp.data.iter().filter(|l| l.is_copyleft).collect();
rsx! {
if has_copyleft {
div { class: "license-copyleft-warning",
strong { "⚠ Copyleft Licenses Detected" }
p { "The following copyleft-licensed packages may impose distribution requirements on your software." }
for item in &copyleft_items {
div { class: "license-copyleft-item",
span { class: "sbom-license-badge license-copyleft", "{item.license}" }
span { " — {item.count} package(s): " }
span { class: "license-pkg-list",
"{item.packages.join(\", \")}"
}
}
}
}
}
div { class: "card",
h3 { style: "margin-bottom: 16px;", "License Distribution" }
if total_pkgs > 0 {
div { class: "license-bar-chart",
for item in &resp.data {
{
let pct = (item.count as f64 / total_pkgs as f64 * 100.0).max(2.0);
let bar_class = if item.is_copyleft { "license-bar license-copyleft" } else { "license-bar license-permissive" };
rsx! {
div { class: "license-bar-row",
span { class: "license-bar-label", "{item.license}" }
div { class: "license-bar-track",
div {
class: "{bar_class}",
style: "width: {pct}%;",
}
}
span { class: "license-bar-count", "{item.count}" }
}
}
}
}
}
} else {
p { "No license data available." }
}
}
div { class: "card",
h3 { style: "margin-bottom: 16px;", "All Licenses" }
div { class: "table-wrapper",
table {
thead {
tr {
th { "License" }
th { "Type" }
th { "Packages" }
th { "Count" }
}
}
tbody {
for item in &resp.data {
tr {
td {
span {
class: "sbom-license-badge {license_type_class(item.is_copyleft)}",
"{item.license}"
}
}
td {
if item.is_copyleft {
span { class: "badge badge-high", "Copyleft" }
} else {
span { class: "badge badge-info", "Permissive" }
}
}
td {
style: "max-width: 400px; overflow: hidden; text-overflow: ellipsis; white-space: nowrap;",
"{item.packages.join(\", \")}"
}
td { "{item.count}" }
}
}
}
}
}
}
}
},
Some(None) => rsx! {
div { class: "card", p { "Failed to load license summary." } }
},
None => rsx! {
div { class: "loading", "Loading license data..." }
},
}
}
// ═══════════════ DIFF TAB ═══════════════
if active_tab() == "diff" {
div { class: "card",
h3 { style: "margin-bottom: 16px;", "Compare SBOMs Between Repositories" }
div { class: "sbom-diff-controls",
div { class: "sbom-diff-select-group",
label { "Repository A" }
select {
class: "sbom-filter-select",
onchange: move |e| diff_repo_a.set(e.value()),
option { value: "", "Select repository..." }
{
match &*repos.read() {
Some(Some(resp)) => rsx! {
for repo in &resp.data {
{
let id = repo.id.as_ref().map(|id| id.to_hex()).unwrap_or_default();
let name = repo.name.clone();
rsx! { option { value: "{id}", "{name}" } }
}
}
},
_ => rsx! {},
}
}
}
}
div { class: "sbom-diff-select-group",
label { "Repository B" }
select {
class: "sbom-filter-select",
onchange: move |e| diff_repo_b.set(e.value()),
option { value: "", "Select repository..." }
{
match &*repos.read() {
Some(Some(resp)) => rsx! {
for repo in &resp.data {
{
let id = repo.id.as_ref().map(|id| id.to_hex()).unwrap_or_default();
let name = repo.name.clone();
rsx! { option { value: "{id}", "{name}" } }
}
}
},
_ => rsx! {},
}
}
}
}
}
}
if !diff_repo_a().is_empty() && !diff_repo_b().is_empty() {
match &*diff_data.read() {
Some(Some(resp)) => {
let d = &resp.data;
rsx! {
div { class: "sbom-diff-summary",
div { class: "sbom-diff-stat sbom-diff-added",
span { class: "sbom-diff-stat-num", "{d.only_in_a.len()}" }
span { "Only in A" }
}
div { class: "sbom-diff-stat sbom-diff-removed",
span { class: "sbom-diff-stat-num", "{d.only_in_b.len()}" }
span { "Only in B" }
}
div { class: "sbom-diff-stat sbom-diff-changed",
span { class: "sbom-diff-stat-num", "{d.version_changed.len()}" }
span { "Version Diffs" }
}
div { class: "sbom-diff-stat",
span { class: "sbom-diff-stat-num", "{d.common_count}" }
span { "Common" }
}
}
if !d.only_in_a.is_empty() {
div { class: "card",
h4 { style: "margin-bottom: 12px; color: var(--success);", "Only in Repository A" }
div { class: "table-wrapper",
table {
thead {
tr {
th { "Package" }
th { "Version" }
th { "Manager" }
}
}
tbody {
for e in &d.only_in_a {
tr { class: "sbom-diff-row-added",
td { "{e.name}" }
td { "{e.version}" }
td { "{e.package_manager}" }
}
}
}
}
}
}
}
if !d.only_in_b.is_empty() {
div { class: "card",
h4 { style: "margin-bottom: 12px; color: var(--danger);", "Only in Repository B" }
div { class: "table-wrapper",
table {
thead {
tr {
th { "Package" }
th { "Version" }
th { "Manager" }
}
}
tbody {
for e in &d.only_in_b {
tr { class: "sbom-diff-row-removed",
td { "{e.name}" }
td { "{e.version}" }
td { "{e.package_manager}" }
}
}
}
}
}
}
}
if !d.version_changed.is_empty() {
div { class: "card",
h4 { style: "margin-bottom: 12px; color: var(--warning);", "Version Differences" }
div { class: "table-wrapper",
table {
thead {
tr {
th { "Package" }
th { "Manager" }
th { "Version A" }
th { "Version B" }
}
}
tbody {
for e in &d.version_changed {
tr { class: "sbom-diff-row-changed",
td { "{e.name}" }
td { "{e.package_manager}" }
td { "{e.version_a}" }
td { "{e.version_b}" }
}
}
}
}
}
}
}
if d.only_in_a.is_empty() && d.only_in_b.is_empty() && d.version_changed.is_empty() {
div { class: "card",
p { "Both repositories have identical SBOM entries." }
}
}
}
},
Some(None) => rsx! {
div { class: "card", p { "Failed to load diff." } }
},
None => rsx! {
div { class: "loading", "Computing diff..." }
},
}
}
}
}
}
fn license_css_class(license: Option<&str>) -> &'static str {
match license {
Some(l) => {
let upper = l.to_uppercase();
// Check weak-copyleft families first: "LGPL" contains "GPL" as a substring
if upper.contains("LGPL") || upper.contains("MPL") {
"license-weak-copyleft"
} else if upper.contains("GPL") || upper.contains("AGPL") {
"license-copyleft"
} else {
"license-permissive"
}
}
None => "",
}
}
fn license_type_class(is_copyleft: bool) -> &'static str {
if is_copyleft {
"license-copyleft"
} else {
"license-permissive"
}
}
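A minimal, runnable sketch of the substring-based bucketing above. The checks are order-sensitive: `"LGPL-2.1"` contains `"GPL"` as a substring, so the weak-copyleft families must be matched before the strong-copyleft ones, as reproduced here.

```rust
// License bucketing by substring match; weak-copyleft checked first
// because "LGPL" would otherwise be swallowed by the "GPL" check.
fn license_css_class(license: Option<&str>) -> &'static str {
    match license {
        Some(l) => {
            let upper = l.to_uppercase();
            if upper.contains("LGPL") || upper.contains("MPL") {
                "license-weak-copyleft"
            } else if upper.contains("GPL") {
                // also covers AGPL, since "AGPL" contains "GPL"
                "license-copyleft"
            } else {
                "license-permissive"
            }
        }
        None => "",
    }
}

fn main() {
    assert_eq!(license_css_class(Some("MIT")), "license-permissive");
    assert_eq!(license_css_class(Some("LGPL-2.1")), "license-weak-copyleft");
    assert_eq!(license_css_class(Some("AGPL-3.0")), "license-copyleft");
    assert_eq!(license_css_class(None), "");
}
```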

View File

@@ -234,10 +234,7 @@ impl ApiFuzzerAgent {
.ok()?;
let headers = response.headers();
let acao = headers
.get("access-control-allow-origin")?
.to_str()
.ok()?;
let acao = headers.get("access-control-allow-origin")?.to_str().ok()?;
if acao == "*" || acao == "https://evil.com" {
let acac = headers
@@ -265,12 +262,9 @@ impl ApiFuzzerAgent {
request_body: None,
response_status: response.status().as_u16(),
response_headers: Some(
[(
"Access-Control-Allow-Origin".to_string(),
acao.to_string(),
)]
.into_iter()
.collect(),
[("Access-Control-Allow-Origin".to_string(), acao.to_string())]
.into_iter()
.collect(),
),
response_snippet: None,
screenshot_path: None,

View File

@@ -132,7 +132,10 @@ impl DastAgent for AuthBypassAgent {
String::new(),
target_id.clone(),
DastVulnType::AuthBypass,
format!("HTTP method tampering: {} accepted on {}", method, endpoint.url),
format!(
"HTTP method tampering: {} accepted on {}",
method, endpoint.url
),
format!(
"Endpoint {} accepts {} requests which may bypass access controls.",
endpoint.url, method

View File

@@ -20,10 +20,7 @@ impl SsrfAgent {
("http://[::1]", "localhost IPv6"),
("http://0.0.0.0", "zero address"),
("http://169.254.169.254/latest/meta-data/", "AWS metadata"),
(
"http://metadata.google.internal/",
"GCP metadata",
),
("http://metadata.google.internal/", "GCP metadata"),
("http://127.0.0.1:22", "SSH port probe"),
("http://127.0.0.1:3306", "MySQL port probe"),
("http://localhost/admin", "localhost admin"),
@@ -91,10 +88,7 @@ impl DastAgent for SsrfAgent {
.post(&endpoint.url)
.form(&[(param.name.as_str(), payload)])
} else {
let test_url = format!(
"{}?{}={}",
endpoint.url, param.name, payload
);
let test_url = format!("{}?{}={}", endpoint.url, param.name, payload);
self.http.get(&test_url)
};
@@ -133,10 +127,7 @@ impl DastAgent for SsrfAgent {
String::new(),
target_id.clone(),
DastVulnType::Ssrf,
format!(
"SSRF ({technique}) via parameter '{}'",
param.name
),
format!("SSRF ({technique}) via parameter '{}'", param.name),
format!(
"Server-side request forgery detected in parameter '{}' at {}. \
The application made a request to an internal resource ({}).",

View File

@@ -17,26 +17,11 @@ impl XssAgent {
fn payloads(&self) -> Vec<(&str, &str)> {
vec![
("<script>alert(1)</script>", "basic script injection"),
(
"<img src=x onerror=alert(1)>",
"event handler injection",
),
(
"<svg/onload=alert(1)>",
"svg event handler",
),
(
"javascript:alert(1)",
"javascript protocol",
),
(
"'\"><script>alert(1)</script>",
"attribute breakout",
),
(
"<body onload=alert(1)>",
"body event handler",
),
("<img src=x onerror=alert(1)>", "event handler injection"),
("<svg/onload=alert(1)>", "svg event handler"),
("javascript:alert(1)", "javascript protocol"),
("'\"><script>alert(1)</script>", "attribute breakout"),
("<body onload=alert(1)>", "body event handler"),
]
}
}
@@ -65,10 +50,7 @@ impl DastAgent for XssAgent {
for param in &endpoint.parameters {
for (payload, technique) in self.payloads() {
let test_url = if endpoint.method == "GET" {
format!(
"{}?{}={}",
endpoint.url, param.name, payload
)
format!("{}?{}={}", endpoint.url, param.name, payload)
} else {
endpoint.url.clone()
};

View File

@@ -28,8 +28,8 @@ impl WebCrawler {
base_url: &str,
excluded_paths: &[String],
) -> Result<Vec<DiscoveredEndpoint>, CoreError> {
let base = Url::parse(base_url)
.map_err(|e| CoreError::Dast(format!("Invalid base URL: {e}")))?;
let base =
Url::parse(base_url).map_err(|e| CoreError::Dast(format!("Invalid base URL: {e}")))?;
let mut visited: HashSet<String> = HashSet::new();
let mut endpoints: Vec<DiscoveredEndpoint> = Vec::new();
@@ -95,12 +95,15 @@ impl WebCrawler {
let document = Html::parse_document(&body);
// Extract links
let link_selector =
Selector::parse("a[href]").unwrap_or_else(|_| Selector::parse("a").expect("valid selector"));
let link_selector = match Selector::parse("a[href]") {
Ok(s) => s,
Err(_) => continue,
};
for element in document.select(&link_selector) {
if let Some(href) = element.value().attr("href") {
if let Some(absolute_url) = self.resolve_url(&base, &url, href) {
if self.is_same_origin(&base, &absolute_url) && !visited.contains(&absolute_url)
if self.is_same_origin(&base, &absolute_url)
&& !visited.contains(&absolute_url)
{
queue.push((absolute_url, depth + 1));
}
@@ -109,18 +112,18 @@ impl WebCrawler {
}
// Extract forms
let form_selector = Selector::parse("form")
.unwrap_or_else(|_| Selector::parse("form").expect("valid selector"));
let input_selector = Selector::parse("input, select, textarea")
.unwrap_or_else(|_| Selector::parse("input").expect("valid selector"));
let form_selector = match Selector::parse("form") {
Ok(s) => s,
Err(_) => continue,
};
let input_selector = match Selector::parse("input, select, textarea") {
Ok(s) => s,
Err(_) => continue,
};
for form in document.select(&form_selector) {
let action = form.value().attr("action").unwrap_or("");
let method = form
.value()
.attr("method")
.unwrap_or("GET")
.to_uppercase();
let method = form.value().attr("method").unwrap_or("GET").to_uppercase();
let form_url = self
.resolve_url(&base, &url, action)
@@ -128,20 +131,12 @@ impl WebCrawler {
let mut params = Vec::new();
for input in form.select(&input_selector) {
let name = input
.value()
.attr("name")
.unwrap_or("")
.to_string();
let name = input.value().attr("name").unwrap_or("").to_string();
if name.is_empty() {
continue;
}
let input_type = input
.value()
.attr("type")
.unwrap_or("text")
.to_string();
let input_type = input.value().attr("type").unwrap_or("text").to_string();
let location = if method == "GET" {
"query".to_string()

View File

@@ -149,11 +149,8 @@ impl DastOrchestrator {
let t2 = target.clone();
let c2 = context.clone();
let h2 = http.clone();
let xss_handle = tokio::spawn(async move {
crate::agents::xss::XssAgent::new(h2)
.run(&t2, &c2)
.await
});
let xss_handle =
tokio::spawn(async move { crate::agents::xss::XssAgent::new(h2).run(&t2, &c2).await });
let t3 = target.clone();
let c3 = context.clone();
@@ -167,11 +164,10 @@ impl DastOrchestrator {
let t4 = target.clone();
let c4 = context.clone();
let h4 = http.clone();
let ssrf_handle = tokio::spawn(async move {
crate::agents::ssrf::SsrfAgent::new(h4)
.run(&t4, &c4)
.await
});
let ssrf_handle =
tokio::spawn(
async move { crate::agents::ssrf::SsrfAgent::new(h4).run(&t4, &c4).await },
);
let t5 = target.clone();
let c5 = context.clone();
@@ -182,8 +178,13 @@ impl DastOrchestrator {
.await
});
let handles: Vec<tokio::task::JoinHandle<Result<Vec<DastFinding>, CoreError>>> =
vec![sqli_handle, xss_handle, auth_handle, ssrf_handle, api_handle];
let handles: Vec<tokio::task::JoinHandle<Result<Vec<DastFinding>, CoreError>>> = vec![
sqli_handle,
xss_handle,
auth_handle,
ssrf_handle,
api_handle,
];
let mut all_findings = Vec::new();
for handle in handles {

View File

@@ -81,10 +81,9 @@ impl ReconAgent {
];
for header in &missing_security {
if !headers.contains_key(*header) {
result.interesting_headers.insert(
format!("missing:{header}"),
"Not present".to_string(),
);
result
.interesting_headers
.insert(format!("missing:{header}"), "Not present".to_string());
}
}
@@ -122,10 +121,10 @@ impl ReconAgent {
let body_lower = body.to_lowercase();
for (tech, pattern) in &patterns {
if body_lower.contains(&pattern.to_lowercase()) {
if !result.technologies.contains(&tech.to_string()) {
result.technologies.push(tech.to_string());
}
if body_lower.contains(&pattern.to_lowercase())
&& !result.technologies.contains(&tech.to_string())
{
result.technologies.push(tech.to_string());
}
}
}

View File

@@ -0,0 +1,96 @@
use std::path::Path;
use compliance_core::models::graph::CodeNode;
/// A chunk of code extracted from a source file, ready for embedding
#[derive(Debug, Clone)]
pub struct CodeChunk {
pub qualified_name: String,
pub kind: String,
pub file_path: String,
pub start_line: u32,
pub end_line: u32,
pub language: String,
pub content: String,
pub context_header: String,
pub token_estimate: u32,
}
/// Extract embeddable code chunks from parsed CodeNodes.
///
/// For each node, reads the corresponding source lines from disk,
/// builds a context header, and estimates tokens.
pub fn extract_chunks(
repo_path: &Path,
nodes: &[CodeNode],
max_chunk_tokens: u32,
) -> Vec<CodeChunk> {
let mut chunks = Vec::new();
for node in nodes {
let file = repo_path.join(&node.file_path);
let source = match std::fs::read_to_string(&file) {
Ok(s) => s,
Err(_) => continue,
};
let lines: Vec<&str> = source.lines().collect();
let start = node.start_line.saturating_sub(1) as usize;
let end = (node.end_line as usize).min(lines.len());
if start >= end {
continue;
}
let content: String = lines[start..end].join("\n");
// Skip tiny chunks
if content.len() < 50 {
continue;
}
// Estimate tokens (~4 chars per token)
let mut token_estimate = (content.len() / 4) as u32;
// Truncate if too large
let final_content = if token_estimate > max_chunk_tokens {
let max_chars = (max_chunk_tokens as usize) * 4;
token_estimate = max_chunk_tokens;
content.chars().take(max_chars).collect()
} else {
content
};
// Build context header: file path + containing scope hint
let context_header = build_context_header(
&node.file_path,
&node.qualified_name,
&node.kind.to_string(),
);
chunks.push(CodeChunk {
qualified_name: node.qualified_name.clone(),
kind: node.kind.to_string(),
file_path: node.file_path.clone(),
start_line: node.start_line,
end_line: node.end_line,
language: node.language.clone(),
content: final_content,
context_header,
token_estimate,
});
}
chunks
}
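The token accounting inside `extract_chunks` can be isolated into a small sketch (`size_chunk` is a hypothetical helper name; the ~4-chars-per-token ratio and the truncation to `max_chunk_tokens * 4` characters are the same heuristic the function applies):

```rust
// Estimate tokens from byte length (~4 chars/token), then truncate
// oversized content to the character budget implied by the token cap.
fn size_chunk(content: &str, max_chunk_tokens: u32) -> (String, u32) {
    let mut token_estimate = (content.len() / 4) as u32;
    let final_content = if token_estimate > max_chunk_tokens {
        token_estimate = max_chunk_tokens;
        content.chars().take(max_chunk_tokens as usize * 4).collect()
    } else {
        content.to_string()
    };
    (final_content, token_estimate)
}

fn main() {
    // 39 bytes → 9 estimated tokens, kept whole under a 100-token cap
    let small = "fn add(a: i32, b: i32) -> i32 { a + b }";
    let (kept, tokens) = size_chunk(small, 100);
    assert_eq!(tokens, 9);
    assert_eq!(kept, small);

    // 1000 bytes → 250 estimated tokens, truncated to 50 × 4 = 200 chars
    let big = "x".repeat(1000);
    let (kept, tokens) = size_chunk(&big, 50);
    assert_eq!(tokens, 50);
    assert_eq!(kept.len(), 200);
}
```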
fn build_context_header(file_path: &str, qualified_name: &str, kind: &str) -> String {
// Extract containing scope from the qualified name
// e.g. "src/main.rs::MyStruct::my_method" → parent is "src/main.rs::MyStruct"
let parts: Vec<&str> = qualified_name.split("::").collect();
if parts.len() >= 2 {
let parent = parts[..parts.len() - 1].join("::");
format!("// {file_path} | {kind} in {parent}")
} else {
format!("// {file_path} | {kind}")
}
}
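A runnable sketch of `build_context_header`, using the example from the comment above: everything before the last `::` segment becomes the containing-scope hint, so the file-path prefix stays part of the parent.

```rust
// Header = file path, node kind, and (if nested) the joined parent scope.
fn build_context_header(file_path: &str, qualified_name: &str, kind: &str) -> String {
    let parts: Vec<&str> = qualified_name.split("::").collect();
    if parts.len() >= 2 {
        let parent = parts[..parts.len() - 1].join("::");
        format!("// {file_path} | {kind} in {parent}")
    } else {
        format!("// {file_path} | {kind}")
    }
}

fn main() {
    assert_eq!(
        build_context_header("src/main.rs", "src/main.rs::MyStruct::my_method", "function"),
        "// src/main.rs | function in src/main.rs::MyStruct"
    );
    // Top-level names get no scope suffix
    assert_eq!(
        build_context_header("src/lib.rs", "top_level_fn", "function"),
        "// src/lib.rs | function"
    );
}
```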

View File

@@ -109,8 +109,8 @@ pub fn detect_communities(code_graph: &CodeGraph) -> u32 {
let mut comm_remap: HashMap<u32, u32> = HashMap::new();
let mut next_id: u32 = 0;
for &c in community.values() {
if !comm_remap.contains_key(&c) {
comm_remap.insert(c, next_id);
if let std::collections::hash_map::Entry::Vacant(e) = comm_remap.entry(c) {
e.insert(next_id);
next_id += 1;
}
}
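The renumbering pattern in this hunk can be sketched in isolation: arbitrary community labels are remapped to dense ids `0..n`, inserting only on first sight via the `Entry` API (which avoids the `contains_key` + `insert` double lookup the old code performed).

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Remap arbitrary labels to dense ids in first-seen order.
fn remap(labels: &[u32]) -> HashMap<u32, u32> {
    let mut comm_remap: HashMap<u32, u32> = HashMap::new();
    let mut next_id: u32 = 0;
    for &c in labels {
        if let Entry::Vacant(e) = comm_remap.entry(c) {
            e.insert(next_id);
            next_id += 1;
        }
    }
    comm_remap
}

fn main() {
    let m = remap(&[42, 7, 42, 99]);
    assert_eq!(m[&42], 0); // first label seen → id 0
    assert_eq!(m[&7], 1);
    assert_eq!(m[&99], 2);
    assert_eq!(m.len(), 3); // duplicates collapse
}
```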
@@ -137,8 +137,7 @@ pub fn detect_communities(code_graph: &CodeGraph) -> u32 {
/// Apply community assignments back to code nodes
pub fn apply_communities(code_graph: &mut CodeGraph) -> u32 {
let count = detect_communities_with_assignment(code_graph);
count
detect_communities_with_assignment(code_graph)
}
/// Detect communities and write assignments into the nodes
@@ -235,8 +234,8 @@ fn detect_communities_with_assignment(code_graph: &mut CodeGraph) -> u32 {
let mut comm_remap: HashMap<u32, u32> = HashMap::new();
let mut next_id: u32 = 0;
for &c in community.values() {
if !comm_remap.contains_key(&c) {
comm_remap.insert(c, next_id);
if let std::collections::hash_map::Entry::Vacant(e) = comm_remap.entry(c) {
e.insert(next_id);
next_id += 1;
}
}

View File

@@ -0,0 +1,236 @@
use compliance_core::error::CoreError;
use compliance_core::models::embedding::{CodeEmbedding, EmbeddingBuildRun, EmbeddingBuildStatus};
use futures_util::TryStreamExt;
use mongodb::bson::doc;
use mongodb::{Collection, Database, IndexModel};
use tracing::info;
/// MongoDB persistence layer for code embeddings and vector search
pub struct EmbeddingStore {
embeddings: Collection<CodeEmbedding>,
builds: Collection<EmbeddingBuildRun>,
}
impl EmbeddingStore {
pub fn new(db: &Database) -> Self {
Self {
embeddings: db.collection("code_embeddings"),
builds: db.collection("embedding_builds"),
}
}
/// Create standard indexes. NOTE: The Atlas Vector Search index must be
/// created via the Atlas UI or CLI with the following definition:
/// ```json
/// {
/// "fields": [
/// { "type": "vector", "path": "embedding", "numDimensions": 1536, "similarity": "cosine" },
/// { "type": "filter", "path": "repo_id" }
/// ]
/// }
/// ```
pub async fn ensure_indexes(&self) -> Result<(), CoreError> {
self.embeddings
.create_index(
IndexModel::builder()
.keys(doc! { "repo_id": 1, "graph_build_id": 1 })
.build(),
)
.await?;
self.builds
.create_index(
IndexModel::builder()
.keys(doc! { "repo_id": 1, "started_at": -1 })
.build(),
)
.await?;
Ok(())
}
/// Delete all embeddings for a repository
pub async fn delete_repo_embeddings(&self, repo_id: &str) -> Result<u64, CoreError> {
let result = self
.embeddings
.delete_many(doc! { "repo_id": repo_id })
.await?;
info!(
"Deleted {} embeddings for repo {repo_id}",
result.deleted_count
);
Ok(result.deleted_count)
}
/// Store embeddings in batches of 500
pub async fn store_embeddings(&self, embeddings: &[CodeEmbedding]) -> Result<u64, CoreError> {
let mut total_inserted = 0u64;
for batch in embeddings.chunks(500) {
let result = self.embeddings.insert_many(batch).await?;
total_inserted += result.inserted_ids.len() as u64;
}
info!("Stored {total_inserted} embeddings");
Ok(total_inserted)
}
/// Store a new build run
pub async fn store_build(&self, build: &EmbeddingBuildRun) -> Result<(), CoreError> {
self.builds.insert_one(build).await?;
Ok(())
}
/// Update an existing build run
pub async fn update_build(
&self,
repo_id: &str,
graph_build_id: &str,
status: EmbeddingBuildStatus,
embedded_chunks: u32,
error_message: Option<String>,
) -> Result<(), CoreError> {
let mut update = doc! {
"$set": {
"status": mongodb::bson::to_bson(&status).unwrap_or_default(),
"embedded_chunks": embedded_chunks as i64,
}
};
if status == EmbeddingBuildStatus::Completed || status == EmbeddingBuildStatus::Failed {
if let Ok(set_doc) = update.get_document_mut("$set") {
set_doc.insert("completed_at", mongodb::bson::DateTime::now());
}
}
if let Some(msg) = error_message {
if let Ok(set_doc) = update.get_document_mut("$set") {
set_doc.insert("error_message", msg);
}
}
self.builds
.update_one(
doc! { "repo_id": repo_id, "graph_build_id": graph_build_id },
update,
)
.await?;
Ok(())
}
/// Get the latest embedding build for a repository
pub async fn get_latest_build(
&self,
repo_id: &str,
) -> Result<Option<EmbeddingBuildRun>, CoreError> {
Ok(self
.builds
.find_one(doc! { "repo_id": repo_id })
.sort(doc! { "started_at": -1 })
.await?)
}
/// Perform vector search. Tries Atlas $vectorSearch first, falls back to
/// brute-force cosine similarity for local MongoDB instances.
pub async fn vector_search(
&self,
repo_id: &str,
query_embedding: Vec<f64>,
limit: u32,
min_score: f64,
) -> Result<Vec<(CodeEmbedding, f64)>, CoreError> {
match self
.atlas_vector_search(repo_id, &query_embedding, limit, min_score)
.await
{
Ok(results) => Ok(results),
Err(e) => {
info!(
"Atlas $vectorSearch unavailable ({e}), falling back to brute-force cosine similarity"
);
self.bruteforce_vector_search(repo_id, &query_embedding, limit, min_score)
.await
}
}
}
/// Atlas $vectorSearch aggregation stage (requires Atlas Vector Search index)
async fn atlas_vector_search(
&self,
repo_id: &str,
query_embedding: &[f64],
limit: u32,
min_score: f64,
) -> Result<Vec<(CodeEmbedding, f64)>, CoreError> {
use mongodb::bson::{Bson, Document};
let pipeline = vec![
doc! {
"$vectorSearch": {
"index": "embedding_vector_index",
"path": "embedding",
"queryVector": query_embedding.iter().map(|&v| Bson::Double(v)).collect::<Vec<_>>(),
"numCandidates": (limit * 10) as i64,
"limit": limit as i64,
"filter": { "repo_id": repo_id },
}
},
doc! {
"$addFields": {
"search_score": { "$meta": "vectorSearchScore" }
}
},
doc! {
"$match": {
"search_score": { "$gte": min_score }
}
},
];
let mut cursor = self.embeddings.aggregate(pipeline).await?;
let mut results = Vec::new();
while let Some(doc) = cursor.try_next().await? {
let score = doc.get_f64("search_score").unwrap_or(0.0);
let mut clean_doc: Document = doc;
clean_doc.remove("search_score");
if let Ok(embedding) = mongodb::bson::from_document::<CodeEmbedding>(clean_doc) {
results.push((embedding, score));
}
}
Ok(results)
}
/// Brute-force cosine similarity fallback for local MongoDB without Atlas
async fn bruteforce_vector_search(
&self,
repo_id: &str,
query_embedding: &[f64],
limit: u32,
min_score: f64,
) -> Result<Vec<(CodeEmbedding, f64)>, CoreError> {
let mut cursor = self.embeddings.find(doc! { "repo_id": repo_id }).await?;
let query_norm = dot(query_embedding, query_embedding).sqrt();
let mut scored: Vec<(CodeEmbedding, f64)> = Vec::new();
while let Some(emb) = cursor.try_next().await? {
let doc_norm = dot(&emb.embedding, &emb.embedding).sqrt();
let score = if query_norm > 0.0 && doc_norm > 0.0 {
dot(query_embedding, &emb.embedding) / (query_norm * doc_norm)
} else {
0.0
};
if score >= min_score {
scored.push((emb, score));
}
}
scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));
scored.truncate(limit as usize);
Ok(scored)
}
}
fn dot(a: &[f64], b: &[f64]) -> f64 {
a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

View File

@@ -133,10 +133,10 @@ impl GraphEngine {
}
/// Try to resolve an edge target to a known node
fn resolve_edge_target<'a>(
fn resolve_edge_target(
&self,
target: &str,
node_map: &'a HashMap<String, NodeIndex>,
node_map: &HashMap<String, NodeIndex>,
) -> Option<NodeIndex> {
// Direct match
if let Some(idx) = node_map.get(target) {

View File

@@ -26,8 +26,11 @@ impl<'a> ImpactAnalyzer<'a> {
file_path: &str,
line_number: Option<u32>,
) -> ImpactAnalysis {
let mut analysis =
ImpactAnalysis::new(repo_id.to_string(), finding_id.to_string(), graph_build_id.to_string());
let mut analysis = ImpactAnalysis::new(
repo_id.to_string(),
finding_id.to_string(),
graph_build_id.to_string(),
);
// Find the node containing the finding
let target_node = self.find_node_at_location(file_path, line_number);
@@ -97,7 +100,11 @@ impl<'a> ImpactAnalyzer<'a> {
}
/// Find the graph node at a given file/line location
fn find_node_at_location(&self, file_path: &str, line_number: Option<u32>) -> Option<NodeIndex> {
fn find_node_at_location(
&self,
file_path: &str,
line_number: Option<u32>,
) -> Option<NodeIndex> {
let mut best: Option<(NodeIndex, u32)> = None; // (index, line_span)
for node in &self.code_graph.nodes {
@@ -166,12 +173,7 @@ impl<'a> ImpactAnalyzer<'a> {
}
/// Find a path from source to target (BFS, limited depth)
fn find_path(
&self,
from: NodeIndex,
to: NodeIndex,
max_depth: usize,
) -> Option<Vec<String>> {
fn find_path(&self, from: NodeIndex, to: NodeIndex, max_depth: usize) -> Option<Vec<String>> {
let mut visited = HashSet::new();
let mut queue: VecDeque<(NodeIndex, Vec<NodeIndex>)> = VecDeque::new();
queue.push_back((from, vec![from]));
@@ -209,7 +211,10 @@ impl<'a> ImpactAnalyzer<'a> {
None
}
fn get_node_by_index(&self, idx: NodeIndex) -> Option<&compliance_core::models::graph::CodeNode> {
fn get_node_by_index(
&self,
idx: NodeIndex,
) -> Option<&compliance_core::models::graph::CodeNode> {
let target_gi = idx.index() as u32;
self.code_graph
.nodes

View File

@@ -1,4 +1,6 @@
pub mod chunking;
pub mod community;
pub mod embedding_store;
pub mod engine;
pub mod impact;
pub mod persistence;

View File

@@ -211,8 +211,6 @@ impl GraphStore {
repo_id: &str,
graph_build_id: &str,
) -> Result<Vec<CommunityInfo>, CoreError> {
let filter = doc! {
"repo_id": repo_id,
"graph_build_id": graph_build_id,

View File

@@ -1,3 +1,6 @@
#![allow(clippy::only_used_in_recursion)]
#![allow(clippy::too_many_arguments)]
pub mod graph;
pub mod parsers;
pub mod search;

View File

@@ -7,6 +7,12 @@ use tree_sitter::{Node, Parser};
pub struct JavaScriptParser;
impl Default for JavaScriptParser {
fn default() -> Self {
Self::new()
}
}
impl JavaScriptParser {
pub fn new() -> Self {
Self
@@ -51,7 +57,13 @@ impl JavaScriptParser {
if let Some(body) = node.child_by_field_name("body") {
self.extract_calls(
body, source, file_path, repo_id, graph_build_id, &qualified, output,
body,
source,
file_path,
repo_id,
graph_build_id,
&qualified,
output,
);
}
}
@@ -97,7 +109,12 @@ impl JavaScriptParser {
if let Some(body) = node.child_by_field_name("body") {
self.walk_children(
body, source, file_path, repo_id, graph_build_id, Some(&qualified),
body,
source,
file_path,
repo_id,
graph_build_id,
Some(&qualified),
output,
);
}
@@ -130,7 +147,13 @@ impl JavaScriptParser {
if let Some(body) = node.child_by_field_name("body") {
self.extract_calls(
-body, source, file_path, repo_id, graph_build_id, &qualified, output,
+body,
+source,
+file_path,
+repo_id,
+graph_build_id,
+&qualified,
+output,
);
}
}
@@ -138,7 +161,13 @@ impl JavaScriptParser {
// Arrow functions assigned to variables: const foo = () => {}
"lexical_declaration" | "variable_declaration" => {
self.extract_arrow_functions(
-node, source, file_path, repo_id, graph_build_id, parent_qualified, output,
+node,
+source,
+file_path,
+repo_id,
+graph_build_id,
+parent_qualified,
+output,
);
}
"import_statement" => {
@@ -183,7 +212,13 @@ impl JavaScriptParser {
let mut cursor = node.walk();
for child in node.children(&mut cursor) {
self.walk_tree(
-child, source, file_path, repo_id, graph_build_id, parent_qualified, output,
+child,
+source,
+file_path,
+repo_id,
+graph_build_id,
+parent_qualified,
+output,
);
}
}
@@ -217,7 +252,13 @@ impl JavaScriptParser {
let mut cursor = node.walk();
for child in node.children(&mut cursor) {
self.extract_calls(
-child, source, file_path, repo_id, graph_build_id, caller_qualified, output,
+child,
+source,
+file_path,
+repo_id,
+graph_build_id,
+caller_qualified,
+output,
);
}
}
@@ -263,7 +304,12 @@ impl JavaScriptParser {
if let Some(body) = value_n.child_by_field_name("body") {
self.extract_calls(
-body, source, file_path, repo_id, graph_build_id, &qualified,
+body,
+source,
+file_path,
+repo_id,
+graph_build_id,
+&qualified,
output,
);
}


@@ -7,6 +7,12 @@ use tree_sitter::{Node, Parser};
pub struct PythonParser;
impl Default for PythonParser {
fn default() -> Self {
Self::new()
}
}
impl PythonParser {
pub fn new() -> Self {
Self
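Each parser in the diff gains the same pattern: a `Default` impl that delegates to `new()`, which silences clippy's `new_without_default` lint while keeping a single construction path. A minimal standalone version (derives added here only so the demo can compare values; the real structs carry none):

```rust
// Unit struct with an explicit constructor, as in the diff.
#[derive(Debug, PartialEq)]
pub struct PythonParser;

// Default delegates to new() rather than constructing Self directly,
// so any future setup logic in new() is not silently bypassed.
impl Default for PythonParser {
    fn default() -> Self {
        Self::new()
    }
}

impl PythonParser {
    pub fn new() -> Self {
        Self
    }
}

fn main() {
    assert_eq!(PythonParser::new(), PythonParser::default());
    println!("ok");
}
```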


@@ -57,10 +57,7 @@ impl ParserRegistry {
repo_id: &str,
graph_build_id: &str,
) -> Result<Option<ParseOutput>, CoreError> {
-let ext = file_path
-.extension()
-.and_then(|e| e.to_str())
-.unwrap_or("");
+let ext = file_path.extension().and_then(|e| e.to_str()).unwrap_or("");
let parser_idx = match self.extension_map.get(ext) {
Some(idx) => *idx,
@@ -89,7 +86,15 @@ impl ParserRegistry {
let mut combined = ParseOutput::default();
let mut node_count: u32 = 0;
-self.walk_directory(dir, dir, repo_id, graph_build_id, max_nodes, &mut node_count, &mut combined)?;
+self.walk_directory(
+dir,
+dir,
+repo_id,
+graph_build_id,
+max_nodes,
+&mut node_count,
+&mut combined,
+)?;
info!(
nodes = combined.nodes.len(),
@@ -162,8 +167,7 @@ impl ParserRegistry {
Err(_) => continue, // Skip binary/unreadable files
};
-if let Some(output) = self.parse_file(rel_path, &source, repo_id, graph_build_id)?
-{
+if let Some(output) = self.parse_file(rel_path, &source, repo_id, graph_build_id)? {
*node_count += output.nodes.len() as u32;
combined.nodes.extend(output.nodes);
combined.edges.extend(output.edges);


@@ -7,6 +7,12 @@ use tree_sitter::{Node, Parser};
pub struct RustParser;
impl Default for RustParser {
fn default() -> Self {
Self::new()
}
}
impl RustParser {
pub fn new() -> Self {
Self
@@ -196,9 +202,7 @@ impl RustParser {
id: None,
repo_id: repo_id.to_string(),
graph_build_id: graph_build_id.to_string(),
-source: parent_qualified
-.unwrap_or(file_path)
-.to_string(),
+source: parent_qualified.unwrap_or(file_path).to_string(),
target: path,
kind: CodeEdgeKind::Imports,
file_path: file_path.to_string(),
@@ -354,10 +358,7 @@ impl RustParser {
fn extract_use_path(&self, use_text: &str) -> Option<String> {
// "use foo::bar::baz;" -> "foo::bar::baz"
-let trimmed = use_text
-.strip_prefix("use ")?
-.trim_end_matches(';')
-.trim();
+let trimmed = use_text.strip_prefix("use ")?.trim_end_matches(';').trim();
Some(trimmed.to_string())
}
}
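The `extract_use_path` helper being compacted here is easy to exercise in isolation. A self-contained mirror of it, showing the happy path and the `None` case when the `use ` prefix is absent:

```rust
// "use foo::bar::baz;" -> "foo::bar::baz"; returns None for non-use lines.
// strip_prefix yields None (via ?) when the prefix is missing;
// trim_end_matches drops any trailing semicolons before the final trim.
fn extract_use_path(use_text: &str) -> Option<String> {
    let trimmed = use_text.strip_prefix("use ")?.trim_end_matches(';').trim();
    Some(trimmed.to_string())
}

fn main() {
    assert_eq!(
        extract_use_path("use foo::bar::baz;").as_deref(),
        Some("foo::bar::baz")
    );
    assert_eq!(extract_use_path("mod foo;"), None);
    println!("ok");
}
```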


@@ -7,6 +7,12 @@ use tree_sitter::{Node, Parser};
pub struct TypeScriptParser;
impl Default for TypeScriptParser {
fn default() -> Self {
Self::new()
}
}
impl TypeScriptParser {
pub fn new() -> Self {
Self
@@ -49,7 +55,13 @@ impl TypeScriptParser {
if let Some(body) = node.child_by_field_name("body") {
self.extract_calls(
-body, source, file_path, repo_id, graph_build_id, &qualified, output,
+body,
+source,
+file_path,
+repo_id,
+graph_build_id,
+&qualified,
+output,
);
}
}
@@ -80,12 +92,23 @@ impl TypeScriptParser {
// Heritage clause (extends/implements)
self.extract_heritage(
-&node, source, file_path, repo_id, graph_build_id, &qualified, output,
+&node,
+source,
+file_path,
+repo_id,
+graph_build_id,
+&qualified,
+output,
);
if let Some(body) = node.child_by_field_name("body") {
self.walk_children(
-body, source, file_path, repo_id, graph_build_id, Some(&qualified),
+body,
+source,
+file_path,
+repo_id,
+graph_build_id,
+Some(&qualified),
output,
);
}
@@ -143,14 +166,26 @@ impl TypeScriptParser {
if let Some(body) = node.child_by_field_name("body") {
self.extract_calls(
-body, source, file_path, repo_id, graph_build_id, &qualified, output,
+body,
+source,
+file_path,
+repo_id,
+graph_build_id,
+&qualified,
+output,
);
}
}
}
"lexical_declaration" | "variable_declaration" => {
self.extract_arrow_functions(
-node, source, file_path, repo_id, graph_build_id, parent_qualified, output,
+node,
+source,
+file_path,
+repo_id,
+graph_build_id,
+parent_qualified,
+output,
);
}
"import_statement" => {
@@ -172,7 +207,13 @@ impl TypeScriptParser {
}
self.walk_children(
-node, source, file_path, repo_id, graph_build_id, parent_qualified, output,
+node,
+source,
+file_path,
+repo_id,
+graph_build_id,
+parent_qualified,
+output,
);
}
@@ -189,7 +230,13 @@ impl TypeScriptParser {
let mut cursor = node.walk();
for child in node.children(&mut cursor) {
self.walk_tree(
-child, source, file_path, repo_id, graph_build_id, parent_qualified, output,
+child,
+source,
+file_path,
+repo_id,
+graph_build_id,
+parent_qualified,
+output,
);
}
}
@@ -223,7 +270,13 @@ impl TypeScriptParser {
let mut cursor = node.walk();
for child in node.children(&mut cursor) {
self.extract_calls(
-child, source, file_path, repo_id, graph_build_id, caller_qualified, output,
+child,
+source,
+file_path,
+repo_id,
+graph_build_id,
+caller_qualified,
+output,
);
}
}
@@ -269,7 +322,12 @@ impl TypeScriptParser {
if let Some(body) = value_n.child_by_field_name("body") {
self.extract_calls(
-body, source, file_path, repo_id, graph_build_id, &qualified,
+body,
+source,
+file_path,
+repo_id,
+graph_build_id,
+&qualified,
output,
);
}


@@ -89,8 +89,10 @@ impl SymbolIndex {
.map_err(|e| CoreError::Graph(format!("Failed to create reader: {e}")))?;
let searcher = reader.searcher();
-let query_parser =
-QueryParser::for_index(&self.index, vec![self.name_field, self.qualified_name_field]);
+let query_parser = QueryParser::for_index(
+&self.index,
+vec![self.name_field, self.qualified_name_field],
+);
let query = query_parser
.parse_query(query_str)

compliance-mcp/Cargo.toml Normal file

@@ -0,0 +1,21 @@
[package]
name = "compliance-mcp"
version = "0.1.0"
edition = "2021"
[dependencies]
compliance-core = { workspace = true, features = ["mongodb"] }
rmcp = { version = "0.16", features = ["server", "macros", "transport-io", "transport-streamable-http-server"] }
tokio = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
mongodb = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
dotenvy = "0.15"
thiserror = { workspace = true }
chrono = { workspace = true }
bson = { version = "2", features = ["chrono-0_4"] }
schemars = "1.0"
axum = "0.8"
tower-http = { version = "0.6", features = ["cors"] }


@@ -0,0 +1,34 @@
use mongodb::{Client, Collection};
use compliance_core::models::*;
#[derive(Clone, Debug)]
pub struct Database {
inner: mongodb::Database,
}
impl Database {
pub async fn connect(uri: &str, db_name: &str) -> Result<Self, mongodb::error::Error> {
let client = Client::with_uri_str(uri).await?;
let db = client.database(db_name);
db.run_command(mongodb::bson::doc! { "ping": 1 }).await?;
tracing::info!("MCP server connected to MongoDB '{db_name}'");
Ok(Self { inner: db })
}
pub fn findings(&self) -> Collection<Finding> {
self.inner.collection("findings")
}
pub fn sbom_entries(&self) -> Collection<SbomEntry> {
self.inner.collection("sbom_entries")
}
pub fn dast_findings(&self) -> Collection<DastFinding> {
self.inner.collection("dast_findings")
}
pub fn dast_scan_runs(&self) -> Collection<DastScanRun> {
self.inner.collection("dast_scan_runs")
}
}


@@ -0,0 +1,58 @@
mod database;
mod server;
mod tools;
use std::sync::Arc;
use database::Database;
use rmcp::transport::{
streamable_http_server::session::local::LocalSessionManager, StreamableHttpServerConfig,
StreamableHttpService,
};
use server::ComplianceMcpServer;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let _ = dotenvy::dotenv();
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::from_default_env()
.add_directive("compliance_mcp=info".parse()?),
)
.init();
let mongo_uri =
std::env::var("MONGODB_URI").unwrap_or_else(|_| "mongodb://localhost:27017".to_string());
let db_name =
std::env::var("MONGODB_DATABASE").unwrap_or_else(|_| "compliance_scanner".to_string());
let db = Database::connect(&mongo_uri, &db_name).await?;
// If MCP_PORT is set, run as Streamable HTTP server; otherwise use stdio.
if let Ok(port_str) = std::env::var("MCP_PORT") {
let port: u16 = port_str.parse()?;
tracing::info!("Starting MCP server on HTTP port {port}");
let db_clone = db.clone();
let service = StreamableHttpService::new(
move || Ok(ComplianceMcpServer::new(db_clone.clone())),
Arc::new(LocalSessionManager::default()),
StreamableHttpServerConfig::default(),
);
let router = axum::Router::new().nest_service("/mcp", service);
let listener = tokio::net::TcpListener::bind(("0.0.0.0", port)).await?;
tracing::info!("MCP HTTP server listening on 0.0.0.0:{port}");
axum::serve(listener, router).await?;
} else {
tracing::info!("Starting MCP server on stdio");
let server = ComplianceMcpServer::new(db);
let transport = rmcp::transport::stdio();
use rmcp::ServiceExt;
let handle = server.serve(transport).await?;
handle.waiting().await?;
}
Ok(())
}


@@ -0,0 +1,109 @@
use rmcp::{
handler::server::wrapper::Parameters, model::*, tool, tool_handler, tool_router, ServerHandler,
};
use crate::database::Database;
use crate::tools::{dast, findings, sbom};
pub struct ComplianceMcpServer {
db: Database,
#[allow(dead_code)]
tool_router: rmcp::handler::server::router::tool::ToolRouter<Self>,
}
#[tool_router]
impl ComplianceMcpServer {
pub fn new(db: Database) -> Self {
Self {
db,
tool_router: Self::tool_router(),
}
}
// ── Findings ──────────────────────────────────────────
#[tool(
description = "List security findings with optional filters for repo, severity, status, and scan type"
)]
async fn list_findings(
&self,
Parameters(params): Parameters<findings::ListFindingsParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
findings::list_findings(&self.db, params).await
}
#[tool(description = "Get a single finding by its ID")]
async fn get_finding(
&self,
Parameters(params): Parameters<findings::GetFindingParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
findings::get_finding(&self.db, params).await
}
#[tool(description = "Get a summary of findings counts grouped by severity and status")]
async fn findings_summary(
&self,
Parameters(params): Parameters<findings::FindingsSummaryParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
findings::findings_summary(&self.db, params).await
}
// ── SBOM ──────────────────────────────────────────────
#[tool(
description = "List SBOM packages with optional filters for repo, vulnerabilities, package manager, and license"
)]
async fn list_sbom_packages(
&self,
Parameters(params): Parameters<sbom::ListSbomPackagesParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
sbom::list_sbom_packages(&self.db, params).await
}
#[tool(
description = "Generate a vulnerability report for a repository showing all packages with known CVEs"
)]
async fn sbom_vuln_report(
&self,
Parameters(params): Parameters<sbom::SbomVulnReportParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
sbom::sbom_vuln_report(&self.db, params).await
}
// ── DAST ──────────────────────────────────────────────
#[tool(
description = "List DAST findings with optional filters for target, scan run, severity, exploitability, and vulnerability type"
)]
async fn list_dast_findings(
&self,
Parameters(params): Parameters<dast::ListDastFindingsParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
dast::list_dast_findings(&self.db, params).await
}
#[tool(description = "Get a summary of recent DAST scan runs and finding counts")]
async fn dast_scan_summary(
&self,
Parameters(params): Parameters<dast::DastScanSummaryParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
dast::dast_scan_summary(&self.db, params).await
}
}
#[tool_handler]
impl ServerHandler for ComplianceMcpServer {
fn get_info(&self) -> ServerInfo {
ServerInfo {
protocol_version: ProtocolVersion::V_2024_11_05,
capabilities: ServerCapabilities::builder()
.enable_tools()
.build(),
server_info: Implementation::from_build_env(),
instructions: Some(
"Compliance Scanner MCP server. Query security findings, SBOM data, and DAST results."
.to_string(),
),
}
}
}


@@ -0,0 +1,154 @@
use mongodb::bson::doc;
use rmcp::{model::*, ErrorData as McpError};
use schemars::JsonSchema;
use serde::Deserialize;
use crate::database::Database;
const MAX_LIMIT: i64 = 200;
const DEFAULT_LIMIT: i64 = 50;
fn cap_limit(limit: Option<i64>) -> i64 {
limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT)
}
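Each tool module defines this same `cap_limit` helper: a missing `limit` falls back to 50, and any supplied value is clamped into [1, 200], so a tool call can neither request zero rows nor unbounded output. Its edge behavior, runnable standalone:

```rust
// Same constants and helper as in the diff's tool modules.
const MAX_LIMIT: i64 = 200;
const DEFAULT_LIMIT: i64 = 50;

fn cap_limit(limit: Option<i64>) -> i64 {
    limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT)
}

fn main() {
    assert_eq!(cap_limit(None), 50); // default
    assert_eq!(cap_limit(Some(-5)), 1); // clamped up
    assert_eq!(cap_limit(Some(1_000)), 200); // clamped down
    println!("ok");
}
```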
#[derive(Debug, Deserialize, JsonSchema)]
pub struct ListDastFindingsParams {
/// Filter by DAST target ID
pub target_id: Option<String>,
/// Filter by scan run ID
pub scan_run_id: Option<String>,
/// Filter by severity: info, low, medium, high, critical
pub severity: Option<String>,
/// Only show confirmed exploitable findings
pub exploitable: Option<bool>,
/// Filter by vulnerability type (e.g. sql_injection, xss, ssrf)
pub vuln_type: Option<String>,
/// Maximum number of results (default 50, max 200)
pub limit: Option<i64>,
}
pub async fn list_dast_findings(
db: &Database,
params: ListDastFindingsParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref target_id) = params.target_id {
filter.insert("target_id", target_id);
}
if let Some(ref scan_run_id) = params.scan_run_id {
filter.insert("scan_run_id", scan_run_id);
}
if let Some(ref severity) = params.severity {
filter.insert("severity", severity);
}
if let Some(exploitable) = params.exploitable {
filter.insert("exploitable", exploitable);
}
if let Some(ref vuln_type) = params.vuln_type {
filter.insert("vuln_type", vuln_type);
}
let limit = cap_limit(params.limit);
let mut cursor = db
.dast_findings()
.find(filter)
.sort(doc! { "created_at": -1 })
.limit(limit)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut results = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let finding = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
results.push(finding);
}
let json = serde_json::to_string_pretty(&results)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct DastScanSummaryParams {
/// Filter by DAST target ID
pub target_id: Option<String>,
}
pub async fn dast_scan_summary(
db: &Database,
params: DastScanSummaryParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref target_id) = params.target_id {
filter.insert("target_id", target_id);
}
// Get recent scan runs
let mut cursor = db
.dast_scan_runs()
.find(filter.clone())
.sort(doc! { "started_at": -1 })
.limit(10)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut scan_runs = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let run = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
scan_runs.push(serde_json::json!({
"id": run.id.map(|id| id.to_hex()),
"target_id": run.target_id,
"status": run.status,
"findings_count": run.findings_count,
"exploitable_count": run.exploitable_count,
"endpoints_discovered": run.endpoints_discovered,
"started_at": run.started_at.to_rfc3339(),
"completed_at": run.completed_at.map(|t| t.to_rfc3339()),
}));
}
// Count findings by severity
let mut findings_filter = doc! {};
if let Some(ref target_id) = params.target_id {
findings_filter.insert("target_id", target_id);
}
let total_findings = db
.dast_findings()
.count_documents(findings_filter.clone())
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut exploitable_filter = findings_filter.clone();
exploitable_filter.insert("exploitable", true);
let exploitable_count = db
.dast_findings()
.count_documents(exploitable_filter)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let summary = serde_json::json!({
"total_findings": total_findings,
"exploitable_findings": exploitable_count,
"recent_scan_runs": scan_runs,
});
let json = serde_json::to_string_pretty(&summary)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}


@@ -0,0 +1,163 @@
use mongodb::bson::doc;
use rmcp::{model::*, ErrorData as McpError};
use schemars::JsonSchema;
use serde::Deserialize;
use crate::database::Database;
const MAX_LIMIT: i64 = 200;
const DEFAULT_LIMIT: i64 = 50;
fn cap_limit(limit: Option<i64>) -> i64 {
limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT)
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct ListFindingsParams {
/// Filter by repository ID
pub repo_id: Option<String>,
/// Filter by severity: info, low, medium, high, critical
pub severity: Option<String>,
/// Filter by status: open, triaged, false_positive, resolved, ignored
pub status: Option<String>,
/// Filter by scan type: sast, sbom, cve, gdpr, oauth
pub scan_type: Option<String>,
/// Maximum number of results (default 50, max 200)
pub limit: Option<i64>,
}
pub async fn list_findings(
db: &Database,
params: ListFindingsParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref repo_id) = params.repo_id {
filter.insert("repo_id", repo_id);
}
if let Some(ref severity) = params.severity {
filter.insert("severity", severity);
}
if let Some(ref status) = params.status {
filter.insert("status", status);
}
if let Some(ref scan_type) = params.scan_type {
filter.insert("scan_type", scan_type);
}
let limit = cap_limit(params.limit);
let mut cursor = db
.findings()
.find(filter)
.sort(doc! { "created_at": -1 })
.limit(limit)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut results = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let finding = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
results.push(finding);
}
let json = serde_json::to_string_pretty(&results)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct GetFindingParams {
/// Finding ID (MongoDB ObjectId hex string)
pub id: String,
}
pub async fn get_finding(
db: &Database,
params: GetFindingParams,
) -> Result<CallToolResult, McpError> {
let oid = bson::oid::ObjectId::parse_str(&params.id)
.map_err(|e| McpError::invalid_params(format!("invalid ObjectId: {e}"), None))?;
let finding = db
.findings()
.find_one(doc! { "_id": oid })
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?
.ok_or_else(|| McpError::invalid_params("finding not found", None))?;
let json = serde_json::to_string_pretty(&finding)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct FindingsSummaryParams {
/// Filter by repository ID
pub repo_id: Option<String>,
}
#[derive(serde::Serialize)]
struct SeverityCount {
severity: String,
count: u64,
}
pub async fn findings_summary(
db: &Database,
params: FindingsSummaryParams,
) -> Result<CallToolResult, McpError> {
let mut base_filter = doc! {};
if let Some(ref repo_id) = params.repo_id {
base_filter.insert("repo_id", repo_id);
}
let severities = ["critical", "high", "medium", "low", "info"];
let mut counts = Vec::new();
for sev in &severities {
let mut filter = base_filter.clone();
filter.insert("severity", sev);
let count = db
.findings()
.count_documents(filter)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
counts.push(SeverityCount {
severity: sev.to_string(),
count,
});
}
let total: u64 = counts.iter().map(|c| c.count).sum();
let mut status_counts = Vec::new();
for status in &["open", "triaged", "false_positive", "resolved", "ignored"] {
let mut filter = base_filter.clone();
filter.insert("status", status);
let count = db
.findings()
.count_documents(filter)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
status_counts.push(serde_json::json!({ "status": status, "count": count }));
}
let summary = serde_json::json!({
"total": total,
"by_severity": counts,
"by_status": status_counts,
});
let json = serde_json::to_string_pretty(&summary)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
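`findings_summary` above issues one `count_documents` query per severity and per status. The shape of the per-severity tally it builds can be mimicked locally (an in-memory severity list standing in for the MongoDB collection; `summarize` is a name invented for this sketch):

```rust
use std::collections::BTreeMap;

// Count findings per severity over an in-memory list, including zero counts,
// mirroring the fixed severity order the summary iterates over.
fn summarize(findings: &[&str]) -> BTreeMap<String, u64> {
    let severities = ["critical", "high", "medium", "low", "info"];
    let mut counts = BTreeMap::new();
    for sev in severities {
        let n = findings.iter().filter(|s| **s == sev).count() as u64;
        counts.insert(sev.to_string(), n);
    }
    counts
}

fn main() {
    let c = summarize(&["high", "low", "high", "info"]);
    assert_eq!(c["high"], 2);
    assert_eq!(c["critical"], 0); // absent severities still appear with count 0
    println!("ok");
}
```

The per-severity loop in the real handler trades N round trips for simplicity; an aggregation pipeline with `$group` would do it in one query if that ever matters.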


@@ -0,0 +1,3 @@
pub mod dast;
pub mod findings;
pub mod sbom;


@@ -0,0 +1,129 @@
use mongodb::bson::doc;
use rmcp::{model::*, ErrorData as McpError};
use schemars::JsonSchema;
use serde::Deserialize;
use crate::database::Database;
const MAX_LIMIT: i64 = 200;
const DEFAULT_LIMIT: i64 = 50;
fn cap_limit(limit: Option<i64>) -> i64 {
limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT)
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct ListSbomPackagesParams {
/// Filter by repository ID
pub repo_id: Option<String>,
/// Only show packages with known vulnerabilities
pub has_vulns: Option<bool>,
/// Filter by package manager (e.g. npm, cargo, pip)
pub package_manager: Option<String>,
/// Filter by license (e.g. MIT, Apache-2.0)
pub license: Option<String>,
/// Maximum number of results (default 50, max 200)
pub limit: Option<i64>,
}
pub async fn list_sbom_packages(
db: &Database,
params: ListSbomPackagesParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref repo_id) = params.repo_id {
filter.insert("repo_id", repo_id);
}
if let Some(ref pm) = params.package_manager {
filter.insert("package_manager", pm);
}
if let Some(ref license) = params.license {
filter.insert("license", license);
}
if params.has_vulns == Some(true) {
filter.insert("known_vulnerabilities.0", doc! { "$exists": true });
}
let limit = cap_limit(params.limit);
let mut cursor = db
.sbom_entries()
.find(filter)
.sort(doc! { "name": 1 })
.limit(limit)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut results = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let entry = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
results.push(entry);
}
let json = serde_json::to_string_pretty(&results)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct SbomVulnReportParams {
/// Repository ID to generate vulnerability report for
pub repo_id: String,
}
pub async fn sbom_vuln_report(
db: &Database,
params: SbomVulnReportParams,
) -> Result<CallToolResult, McpError> {
let filter = doc! {
"repo_id": &params.repo_id,
"known_vulnerabilities.0": { "$exists": true },
};
let mut cursor = db
.sbom_entries()
.find(filter)
.sort(doc! { "name": 1 })
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut vulnerable_packages = Vec::new();
let mut total_vulns = 0u64;
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let entry = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
total_vulns += entry.known_vulnerabilities.len() as u64;
vulnerable_packages.push(serde_json::json!({
"name": entry.name,
"version": entry.version,
"package_manager": entry.package_manager,
"license": entry.license,
"vulnerabilities": entry.known_vulnerabilities,
}));
}
let report = serde_json::json!({
"repo_id": params.repo_id,
"vulnerable_packages_count": vulnerable_packages.len(),
"total_vulnerabilities": total_vulns,
"packages": vulnerable_packages,
});
let json = serde_json::to_string_pretty(&report)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}

Some files were not shown because too many files have changed in this diff.