12 Commits

Author SHA1 Message Date
0065c7c4b2 feat: UI improvements with icons, back navigation, and overview cards (#7)
All checks were successful
2026-03-09 17:09:40 +00:00
46bf9de549 feat: findings refinement, new scanners, and deployment tooling (#6)
Some checks failed
2026-03-09 12:53:12 +00:00
32e5fc21e7 feat: add MCP server for exposing compliance data to LLMs (#5)
Some checks failed
New `compliance-mcp` crate providing a Model Context Protocol server
with 7 tools: list/get/summarize findings, list SBOM packages, SBOM
vulnerability report, list DAST findings, and DAST scan summary.
Supports stdio (local dev) and Streamable HTTP (deployment via MCP_PORT).
Includes Dockerfile, CI clippy check, and Coolify deploy job.
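The stdio-vs-HTTP choice described above can be sketched as follows. `select_transport` is a hypothetical helper, not the crate's actual API: in the real binary the port would come from the `MCP_PORT` environment variable and the transports would be provided by rmcp.

```rust
/// Pick the MCP transport: Streamable HTTP when a port is configured,
/// stdio otherwise. `mcp_port` stands in for the MCP_PORT env var.
fn select_transport(mcp_port: Option<&str>) -> String {
    match mcp_port.and_then(|p| p.parse::<u16>().ok()) {
        // A valid port means we are deployed behind a proxy: serve HTTP.
        Some(port) => format!("streamable-http on 0.0.0.0:{port}"),
        // No (or invalid) port: fall back to stdio for local development.
        None => "stdio".to_string(),
    }
}

fn main() {
    // In the real binary this would come from std::env::var("MCP_PORT").
    assert_eq!(select_transport(Some("8090")), "streamable-http on 0.0.0.0:8090");
    assert_eq!(select_transport(None), "stdio");
    println!("transport selection ok");
}
```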

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Co-authored-by: Sharang Parnerkar <parnerkarsharang@gmail.com>
Reviewed-on: #5
2026-03-09 08:21:04 +00:00
Sharang Parnerkar
d13cef94cb Add Coolify deploy jobs with path-based change detection
All checks were successful
Deploys agent, dashboard, and docs independently based on which
files changed. Only triggers on main after tests pass.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 19:22:56 +01:00
Sharang Parnerkar
3a01a28591 Redesign sidebar user section to fix overlap issue
All checks were successful
Restructured layout: avatar, truncated username, and logout icon
in a single row. Collapsed state stacks vertically. Logout button
uses a subtle icon-only style with red hover. Proper text ellipsis
prevents name overflow.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 18:24:58 +01:00
Sharang Parnerkar
d490359591 Add polished login landing page with feature highlights
All checks were successful
Dark-themed login page with shield logo, feature grid, gradient
sign-in button, subtle grid background, and glow effect.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 17:51:41 +01:00
Sharang Parnerkar
b95ce44fb9 Bind dashboard to 0.0.0.0 for container accessibility
All checks were successful
Dioxus defaults to 127.0.0.1, which is unreachable from outside the
container. Hardcode the 0.0.0.0 binding so reverse proxies can reach it.
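A minimal sketch of the distinction; `bind_addr` is an illustrative helper, not the dashboard's actual code.

```rust
use std::net::{Ipv4Addr, SocketAddr};

/// Inside a container, 127.0.0.1 is only reachable from within the container
/// itself; binding 0.0.0.0 (the unspecified address) lets the reverse proxy
/// on the container network reach the service.
fn bind_addr(port: u16) -> SocketAddr {
    SocketAddr::from((Ipv4Addr::UNSPECIFIED, port))
}

fn main() {
    assert_eq!(bind_addr(8080).to_string(), "0.0.0.0:8080");
    println!("binding {}", bind_addr(8080));
}
```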

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 17:39:25 +01:00
Sharang Parnerkar
175d303dc4 Set IP=0.0.0.0 in dashboard Dockerfile for container networking
All checks were successful
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 17:29:32 +01:00
Sharang Parnerkar
5a4af292fc Fix OTLP HTTP exporter: use reqwest-rustls for HTTPS support
All checks were successful
The reqwest-client feature doesn't include TLS support, causing a
NoHttpClient error when connecting to HTTPS endpoints.
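A sketch of the Cargo.toml change this implies. The feature names `http-proto` and `reqwest-rustls` match opentelemetry-otlp's published features, but the exact version and feature set should be checked against the workspace's Cargo.lock:

```toml
# Sketch only — verify feature names against the opentelemetry-otlp
# version actually pinned in Cargo.lock.
[dependencies]
opentelemetry-otlp = { version = "*", default-features = false, features = [
    "http-proto",     # OTLP over HTTP/protobuf instead of gRPC/tonic
    "reqwest-rustls", # reqwest client with bundled rustls for HTTPS
] }
```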

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 17:07:41 +01:00
Sharang Parnerkar
04c8084943 Switch OTLP exporter from gRPC/tonic to HTTP/reqwest
Some checks failed
gRPC requires special reverse-proxy configuration for HTTP/2. OTLP over HTTP
works behind standard HTTPS proxies such as Traefik or Caddy on port 4318.
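The corresponding collector endpoint settings can be sketched as environment variables. The names below follow the standard OpenTelemetry SDK conventions; this deployment may configure the exporter differently, and the hostname is a placeholder:

```
# Standard OTel SDK variable names (assumption); otel.example.com is a placeholder.
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel.example.com:4318
```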

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 17:01:02 +01:00
Sharang Parnerkar
d67a51db18 Add nginx config for VitePress SPA routing
All checks were successful
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 15:09:42 +01:00
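The SPA-routing fix in the commit above typically amounts to an nginx fallback rule. This is a hypothetical sketch; the real rules live in docs/nginx.conf and may differ:

```nginx
# Sketch of an SPA fallback, not the repository's actual config.
server {
    listen 80;
    root   /usr/share/nginx/html;
    index  index.html;

    location / {
        # Serve the file if it exists, otherwise fall back to index.html
        # so client-side routes still resolve on a hard refresh.
        try_files $uri $uri/ /index.html;
    }
}
```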
7e12d1433a docs: added vite-press docs (#4)
All checks were successful
Co-authored-by: Sharang Parnerkar <parnerkarsharang@gmail.com>
Reviewed-on: #4
2026-03-08 13:59:50 +00:00
101 changed files with 8910 additions and 286 deletions


@@ -38,6 +38,9 @@ GIT_CLONE_BASE_PATH=/tmp/compliance-scanner/repos
DASHBOARD_PORT=8080
AGENT_API_URL=http://localhost:3001
# MCP Server
MCP_ENDPOINT_URL=http://localhost:8090
# Keycloak (required for authentication)
KEYCLOAK_URL=http://localhost:8080
KEYCLOAK_REALM=compliance


@@ -70,6 +70,8 @@ jobs:
run: cargo clippy -p compliance-dashboard --features server --no-default-features -- -D warnings
- name: Clippy (dashboard web)
run: cargo clippy -p compliance-dashboard --features web --no-default-features -- -D warnings
- name: Clippy (mcp)
run: cargo clippy -p compliance-mcp -- -D warnings
- name: Show sccache stats
run: sccache --show-stats
if: always()
@@ -124,3 +126,119 @@ jobs:
- name: Show sccache stats
run: sccache --show-stats
if: always()
# ---------------------------------------------------------------------------
# Stage 3: Deploy (only on main, after tests pass)
# Each service only deploys when its relevant files changed.
# ---------------------------------------------------------------------------
detect-changes:
name: Detect Changes
runs-on: docker
if: github.ref == 'refs/heads/main'
needs: [test]
container:
image: alpine:latest
outputs:
agent: ${{ steps.changes.outputs.agent }}
dashboard: ${{ steps.changes.outputs.dashboard }}
docs: ${{ steps.changes.outputs.docs }}
mcp: ${{ steps.changes.outputs.mcp }}
steps:
- name: Install git
run: apk add --no-cache git
- name: Checkout
run: |
git init
git remote add origin "${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git"
git fetch --depth=2 origin "${GITHUB_SHA}"
git checkout FETCH_HEAD
- name: Detect changed paths
id: changes
run: |
CHANGED=$(git diff --name-only HEAD~1 HEAD 2>/dev/null || echo "")
echo "Changed files:"
echo "$CHANGED"
# Agent: core libs, agent code, agent Dockerfile
if echo "$CHANGED" | grep -qE '^(compliance-core/|compliance-agent/|compliance-graph/|compliance-dast/|Dockerfile\.agent|Cargo\.(toml|lock))'; then
echo "agent=true" >> "$GITHUB_OUTPUT"
else
echo "agent=false" >> "$GITHUB_OUTPUT"
fi
# Dashboard: core libs, dashboard code, dashboard Dockerfile, assets
if echo "$CHANGED" | grep -qE '^(compliance-core/|compliance-dashboard/|Dockerfile\.dashboard|Dioxus\.toml|assets/|bin/|Cargo\.(toml|lock))'; then
echo "dashboard=true" >> "$GITHUB_OUTPUT"
else
echo "dashboard=false" >> "$GITHUB_OUTPUT"
fi
# Docs: docs folder, docs Dockerfile
if echo "$CHANGED" | grep -qE '^(docs/|Dockerfile\.docs)'; then
echo "docs=true" >> "$GITHUB_OUTPUT"
else
echo "docs=false" >> "$GITHUB_OUTPUT"
fi
# MCP: core libs, mcp code, mcp Dockerfile
if echo "$CHANGED" | grep -qE '^(compliance-core/|compliance-mcp/|Dockerfile\.mcp|Cargo\.(toml|lock))'; then
echo "mcp=true" >> "$GITHUB_OUTPUT"
else
echo "mcp=false" >> "$GITHUB_OUTPUT"
fi
deploy-agent:
name: Deploy Agent
runs-on: docker
needs: [detect-changes]
if: needs.detect-changes.outputs.agent == 'true'
container:
image: alpine:latest
steps:
- name: Trigger Coolify deploy
run: |
apk add --no-cache curl
curl -sf "${{ secrets.COOLIFY_WEBHOOK_AGENT }}" \
-H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"
deploy-dashboard:
name: Deploy Dashboard
runs-on: docker
needs: [detect-changes]
if: needs.detect-changes.outputs.dashboard == 'true'
container:
image: alpine:latest
steps:
- name: Trigger Coolify deploy
run: |
apk add --no-cache curl
curl -sf "${{ secrets.COOLIFY_WEBHOOK_DASHBOARD }}" \
-H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"
deploy-docs:
name: Deploy Docs
runs-on: docker
needs: [detect-changes]
if: needs.detect-changes.outputs.docs == 'true'
container:
image: alpine:latest
steps:
- name: Trigger Coolify deploy
run: |
apk add --no-cache curl
curl -sf "${{ secrets.COOLIFY_WEBHOOK_DOCS }}" \
-H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"
deploy-mcp:
name: Deploy MCP
runs-on: docker
needs: [detect-changes]
if: needs.detect-changes.outputs.mcp == 'true'
container:
image: alpine:latest
steps:
- name: Trigger Coolify deploy
run: |
apk add --no-cache curl
curl -sf "${{ secrets.COOLIFY_WEBHOOK_MCP }}" \
-H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"

Cargo.lock (generated)

@@ -167,7 +167,7 @@ dependencies = [
"sync_wrapper",
"tokio",
"tokio-tungstenite 0.28.0",
- "tower 0.5.3",
+ "tower",
"tower-layer",
"tower-service",
"tracing",
@@ -413,6 +413,17 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
[[package]]
name = "chacha20"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f8d983286843e49675a4b7a2d174efe136dc93a18d69130dd18198a6c167601"
dependencies = [
"cfg-if",
"cpufeatures 0.3.0",
"rand_core 0.10.0",
]
[[package]]
name = "charset"
version = "0.1.5"
@@ -604,6 +615,7 @@ version = "0.1.0"
dependencies = [
"axum",
"base64",
"bson",
"chrono",
"compliance-core",
"dioxus",
@@ -627,6 +639,7 @@ dependencies = [
"tower-sessions",
"tracing",
"url",
"uuid",
"web-sys",
]
@@ -675,6 +688,27 @@ dependencies = [
"uuid",
]
[[package]]
name = "compliance-mcp"
version = "0.1.0"
dependencies = [
"axum",
"bson",
"chrono",
"compliance-core",
"dotenvy",
"mongodb",
"rmcp",
"schemars 1.2.1",
"serde",
"serde_json",
"thiserror 2.0.18",
"tokio",
"tower-http",
"tracing",
"tracing-subscriber",
]
[[package]]
name = "console_error_panic_hook"
version = "0.1.7"
@@ -869,6 +903,15 @@ dependencies = [
"libc",
]
[[package]]
name = "cpufeatures"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b2a41393f66f16b0823bb79094d54ac5fbd34ab292ddafb9a0456ac9f87d201"
dependencies = [
"libc",
]
[[package]]
name = "crc32fast"
version = "1.5.0"
@@ -972,8 +1015,18 @@ version = "0.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9cdf337090841a411e2a7f3deb9187445851f91b309c0c0a29e05f74a00a48c0"
dependencies = [
- "darling_core",
- "darling_macro",
+ "darling_core 0.21.3",
+ "darling_macro 0.21.3",
]
[[package]]
name = "darling"
version = "0.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "25ae13da2f202d56bd7f91c25fba009e7717a1e4a1cc98a76d844b65ae912e9d"
dependencies = [
"darling_core 0.23.0",
"darling_macro 0.23.0",
]
[[package]]
@@ -990,13 +1043,37 @@ dependencies = [
"syn",
]
[[package]]
name = "darling_core"
version = "0.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9865a50f7c335f53564bb694ef660825eb8610e0a53d3e11bf1b0d3df31e03b0"
dependencies = [
"ident_case",
"proc-macro2",
"quote",
"strsim",
"syn",
]
[[package]]
name = "darling_macro"
version = "0.21.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d38308df82d1080de0afee5d069fa14b0326a88c14f15c5ccda35b4a6c414c81"
dependencies = [
- "darling_core",
+ "darling_core 0.21.3",
"quote",
"syn",
]
[[package]]
name = "darling_macro"
version = "0.23.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3984ec7bd6cfa798e62b4a642426a5be0e68f9401cfc2a01e3fa9ea2fcdb8d"
dependencies = [
"darling_core 0.23.0",
"quote",
"syn",
]
@@ -1335,7 +1412,7 @@ dependencies = [
"tokio-stream",
"tokio-tungstenite 0.27.0",
"tokio-util",
- "tower 0.5.3",
+ "tower",
"tower-http",
"tower-layer",
"tracing",
@@ -1626,7 +1703,7 @@ dependencies = [
"tokio",
"tokio-tungstenite 0.27.0",
"tokio-util",
- "tower 0.5.3",
+ "tower",
"tower-http",
"tracing",
"tracing-futures",
@@ -1827,7 +1904,7 @@ version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f43e744e4ea338060faee68ed933e46e722fb7f3617e722a5772d7e856d8b3ce"
dependencies = [
- "darling",
+ "darling 0.21.3",
"proc-macro2",
"quote",
"syn",
@@ -2104,6 +2181,7 @@ dependencies = [
"cfg-if",
"libc",
"r-efi",
"rand_core 0.10.0",
"wasip2",
"wasip3",
]
@@ -3497,7 +3575,7 @@ dependencies = [
"serde_urlencoded",
"snafu",
"tokio",
- "tower 0.5.3",
+ "tower",
"tower-http",
"tracing",
"url",
@@ -3599,8 +3677,6 @@ dependencies = [
"prost",
"reqwest",
"thiserror 2.0.18",
"tokio",
"tonic",
"tracing",
]
@@ -3668,6 +3744,12 @@ dependencies = [
"windows-link",
]
[[package]]
name = "pastey"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b867cad97c0791bbd3aaa6472142568c6c9e8f71937e98379f584cfb0cf35bec"
[[package]]
name = "pbkdf2"
version = "0.12.2"
@@ -4005,6 +4087,17 @@ dependencies = [
"rand_core 0.9.5",
]
[[package]]
name = "rand"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc266eb313df6c5c09c1c7b1fbe2510961e5bcd3add930c1e31f7ed9da0feff8"
dependencies = [
"chacha20",
"getrandom 0.4.1",
"rand_core 0.10.0",
]
[[package]]
name = "rand_chacha"
version = "0.3.1"
@@ -4043,6 +4136,12 @@ dependencies = [
"getrandom 0.3.4",
]
[[package]]
name = "rand_core"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c8d0fd677905edcbeedbf2edb6494d676f0e98d54d5cf9bda0b061cb8fb8aba"
[[package]]
name = "rand_distr"
version = "0.4.3"
@@ -4163,6 +4262,7 @@ dependencies = [
"pin-project-lite",
"quinn",
"rustls",
"rustls-native-certs",
"rustls-pki-types",
"serde",
"serde_json",
@@ -4171,7 +4271,7 @@ dependencies = [
"tokio",
"tokio-rustls",
"tokio-util",
- "tower 0.5.3",
+ "tower",
"tower-http",
"tower-service",
"url",
@@ -4202,6 +4302,50 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "rmcp"
version = "0.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cc4c9c94680f75470ee8083a0667988b5d7b5beb70b9f998a8e51de7c682ce60"
dependencies = [
"async-trait",
"base64",
"bytes",
"chrono",
"futures",
"http",
"http-body",
"http-body-util",
"pastey",
"pin-project-lite",
"rand 0.10.0",
"rmcp-macros",
"schemars 1.2.1",
"serde",
"serde_json",
"sse-stream",
"thiserror 2.0.18",
"tokio",
"tokio-stream",
"tokio-util",
"tower-service",
"tracing",
"uuid",
]
[[package]]
name = "rmcp-macros"
version = "0.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "90c23c8f26cae4da838fbc3eadfaecf2d549d97c04b558e7bd90526a9c28b42a"
dependencies = [
"darling 0.23.0",
"proc-macro2",
"quote",
"serde_json",
"syn",
]
[[package]]
name = "rust-stemmers"
version = "1.2.0"
@@ -4365,12 +4509,26 @@ version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2b42f36aa1cd011945615b92222f6bf73c599a102a300334cd7f8dbeec726cc"
dependencies = [
"chrono",
"dyn-clone",
"ref-cast",
"schemars_derive",
"serde",
"serde_json",
]
[[package]]
name = "schemars_derive"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7d115b50f4aaeea07e79c1912f645c7513d81715d0420f8bc77a18c6260b307f"
dependencies = [
"proc-macro2",
"quote",
"serde_derive_internals",
"syn",
]
[[package]]
name = "scopeguard"
version = "1.2.0"
@@ -4510,6 +4668,17 @@ dependencies = [
"syn",
]
[[package]]
name = "serde_derive_internals"
version = "0.29.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "18d26a20a969b9e3fdf2fc2d9f21eda6c40e2de84c9408bb5d3b05d499aae711"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "serde_json"
version = "1.0.149"
@@ -4594,7 +4763,7 @@ version = "3.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6d4e30573c8cb306ed6ab1dca8423eec9a463ea0e155f45399455e0368b27e0"
dependencies = [
- "darling",
+ "darling 0.21.3",
"proc-macro2",
"quote",
"syn",
@@ -4616,7 +4785,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e3bf829a2d51ab4a5ddf1352d8470c140cadc8301b2ae1789db023f01cedd6ba"
dependencies = [
"cfg-if",
- "cpufeatures",
+ "cpufeatures 0.2.17",
"digest",
]
@@ -4627,7 +4796,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a7507d819769d01a365ab707794a4084392c824f54a7a6a7862f8c3d0892b283"
dependencies = [
"cfg-if",
- "cpufeatures",
+ "cpufeatures 0.2.17",
"digest",
]
@@ -4781,6 +4950,19 @@ version = "0.9.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6980e8d7511241f8acf4aebddbb1ff938df5eebe98691418c4468d0b72a96a67"
[[package]]
name = "sse-stream"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eb4dc4d33c68ec1f27d386b5610a351922656e1fdf5c05bbaad930cd1519479a"
dependencies = [
"bytes",
"futures-util",
"http-body",
"http-body-util",
"pin-project-lite",
]
[[package]]
name = "stable_deref_trait"
version = "1.2.1"
@@ -5364,35 +5546,10 @@ dependencies = [
"http",
"http-body",
"http-body-util",
"hyper",
"hyper-timeout",
"hyper-util",
"percent-encoding",
"pin-project",
"prost",
"tokio",
"tokio-stream",
"tower 0.4.13",
"tower-layer",
"tower-service",
"tracing",
]
[[package]]
name = "tower"
version = "0.4.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b8fa9be0de6cf49e536ce1851f987bd21a43b771b09473c3549a6c853db37c1c"
dependencies = [
"futures-core",
"futures-util",
"indexmap 1.9.3",
"pin-project",
"pin-project-lite",
"rand 0.8.5",
"slab",
"tokio",
"tokio-util",
"tower-layer",
"tower-service",
"tracing",
@@ -5453,7 +5610,7 @@ dependencies = [
"pin-project-lite",
"tokio",
"tokio-util",
- "tower 0.5.3",
+ "tower",
"tower-layer",
"tower-service",
"tracing",


@@ -5,6 +5,7 @@ members = [
"compliance-dashboard",
"compliance-graph",
"compliance-dast",
"compliance-mcp",
]
resolver = "2"


@@ -5,11 +5,21 @@ COPY . .
RUN cargo build --release -p compliance-agent
FROM debian:bookworm-slim
-RUN apt-get update && apt-get install -y ca-certificates libssl3 git curl && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y ca-certificates libssl3 git curl python3 python3-pip && rm -rf /var/lib/apt/lists/*
# Install syft for SBOM generation
RUN curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
# Install gitleaks for secret detection
RUN curl -sSfL https://github.com/gitleaks/gitleaks/releases/download/v8.21.2/gitleaks_8.21.2_linux_x64.tar.gz \
| tar -xz -C /usr/local/bin gitleaks
# Install semgrep for static analysis
RUN pip3 install --break-system-packages semgrep
# Install ruff for Python linting
RUN pip3 install --break-system-packages ruff
COPY --from=builder /app/target/release/compliance-agent /usr/local/bin/compliance-agent
EXPOSE 3001 3002


@@ -2,8 +2,11 @@ FROM rust:1.89-bookworm AS builder
RUN cargo install dioxus-cli --version 0.7.3
ARG DOCS_URL=/docs
WORKDIR /app
COPY . .
ENV DOCS_URL=${DOCS_URL}
RUN dx build --release --package compliance-dashboard
FROM debian:bookworm-slim
@@ -13,6 +16,7 @@ WORKDIR /app
COPY --from=builder /app/target/dx/compliance-dashboard/release/web/compliance-dashboard /app/compliance-dashboard
COPY --from=builder /app/target/dx/compliance-dashboard/release/web/public /app/public
ENV IP=0.0.0.0
EXPOSE 8080
ENTRYPOINT ["./compliance-dashboard"]

Dockerfile.docs (new file)

@@ -0,0 +1,14 @@
FROM node:22-alpine AS builder
WORKDIR /app
COPY docs/package.json docs/package-lock.json ./
RUN npm ci
COPY docs/ .
RUN npm run build
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY docs/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/.vitepress/dist /usr/share/nginx/html
EXPOSE 80

Dockerfile.mcp (new file)

@@ -0,0 +1,16 @@
FROM rust:1.89-bookworm AS builder
WORKDIR /app
COPY . .
RUN cargo build --release -p compliance-mcp
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates libssl3 && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/compliance-mcp /usr/local/bin/compliance-mcp
EXPOSE 8090
ENV MCP_PORT=8090
ENTRYPOINT ["compliance-mcp"]


@@ -300,6 +300,84 @@ tr:hover {
color: var(--text-secondary);
}
/* Sidebar User Section */
.sidebar-user {
display: flex;
align-items: center;
gap: 10px;
padding: 12px 14px;
margin: 8px;
border-top: 1px solid var(--border);
padding-top: 16px;
}
.sidebar-user-collapsed {
flex-direction: column;
gap: 8px;
padding: 12px 4px;
margin: 8px 4px;
}
.user-avatar {
width: 34px;
height: 34px;
border-radius: 10px;
background: linear-gradient(135deg, rgba(56, 189, 248, 0.2), rgba(56, 189, 248, 0.08));
border: 1px solid rgba(56, 189, 248, 0.15);
display: flex;
align-items: center;
justify-content: center;
flex-shrink: 0;
}
.avatar-initials {
font-size: 13px;
font-weight: 700;
color: var(--accent);
line-height: 1;
}
.avatar-img {
width: 100%;
height: 100%;
border-radius: 10px;
object-fit: cover;
}
.user-name {
flex: 1;
font-size: 13px;
font-weight: 500;
color: var(--text-primary);
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
min-width: 0;
}
.logout-btn {
display: flex;
align-items: center;
justify-content: center;
width: 32px;
height: 32px;
border-radius: 8px;
color: var(--text-secondary);
text-decoration: none;
flex-shrink: 0;
transition: all 0.15s;
}
.logout-btn:hover {
background: rgba(239, 68, 68, 0.12);
color: #fca5a5;
}
.logout-btn-collapsed {
width: 34px;
height: 34px;
}
@media (max-width: 768px) {
.sidebar {
transform: translateX(-100%);
@@ -313,3 +391,216 @@ tr:hover {
padding: 16px;
}
}
/* ── Utility classes ────────────────────────────────────── */
.mb-3 { margin-bottom: 12px; }
.mb-4 { margin-bottom: 16px; }
.text-secondary { color: var(--text-secondary); }
.btn-sm {
padding: 4px 10px;
font-size: 12px;
}
.btn-danger {
background: var(--danger);
color: #fff;
}
.btn-danger:hover {
background: #dc2626;
}
.btn-secondary {
background: var(--bg-secondary);
color: var(--text-primary);
border: 1px solid var(--border);
}
.btn-secondary:hover {
background: var(--bg-primary);
}
/* ── Modal ──────────────────────────────────────────────── */
.modal-overlay {
position: fixed;
inset: 0;
background: rgba(0, 0, 0, 0.6);
backdrop-filter: blur(4px);
display: flex;
align-items: center;
justify-content: center;
z-index: 1000;
}
.modal-dialog {
background: var(--bg-secondary);
border: 1px solid var(--border);
border-radius: 12px;
padding: 24px;
max-width: 440px;
width: 90%;
}
.modal-dialog h3 {
margin-bottom: 12px;
}
.modal-dialog p {
margin-bottom: 8px;
font-size: 14px;
color: var(--text-secondary);
}
.modal-warning {
color: var(--warning) !important;
font-size: 13px !important;
}
.modal-actions {
display: flex;
gap: 8px;
justify-content: flex-end;
margin-top: 16px;
}
/* ── MCP Servers ────────────────────────────────────────── */
.mcp-server-card {
padding: 20px;
}
.mcp-server-header {
display: flex;
justify-content: space-between;
align-items: flex-start;
margin-bottom: 12px;
}
.mcp-server-title {
display: flex;
align-items: center;
gap: 10px;
}
.mcp-server-title h3 {
font-size: 16px;
font-weight: 600;
margin: 0;
}
.mcp-server-actions {
display: flex;
gap: 6px;
}
.mcp-status {
display: inline-flex;
align-items: center;
padding: 2px 10px;
border-radius: 20px;
font-size: 11px;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.5px;
}
.mcp-status-running {
background: rgba(34, 197, 94, 0.15);
color: var(--success);
}
.mcp-status-stopped {
background: rgba(148, 163, 184, 0.15);
color: var(--text-secondary);
}
.mcp-status-error {
background: rgba(239, 68, 68, 0.15);
color: var(--danger);
}
.mcp-config-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
gap: 12px;
margin-bottom: 16px;
}
.mcp-config-item {
display: flex;
flex-direction: column;
gap: 4px;
}
.mcp-config-label {
font-size: 11px;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.5px;
color: var(--text-secondary);
}
.mcp-config-value {
font-size: 13px;
color: var(--text-primary);
word-break: break-all;
}
.mcp-form-grid {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 0 16px;
}
.mcp-tools-section {
margin-bottom: 16px;
}
.mcp-tools-list {
display: flex;
flex-wrap: wrap;
gap: 6px;
margin-top: 6px;
}
.mcp-tool-badge {
display: inline-block;
padding: 3px 10px;
background: rgba(56, 189, 248, 0.1);
border: 1px solid rgba(56, 189, 248, 0.2);
border-radius: 6px;
font-size: 12px;
font-family: 'JetBrains Mono', monospace;
color: var(--accent);
}
.mcp-token-section {
margin-bottom: 12px;
}
.mcp-token-row {
display: flex;
align-items: center;
gap: 8px;
margin-top: 6px;
}
.mcp-token-value {
flex: 1;
padding: 6px 10px;
background: var(--bg-primary);
border: 1px solid var(--border);
border-radius: 6px;
font-size: 12px;
font-family: 'JetBrains Mono', monospace;
color: var(--text-secondary);
word-break: break-all;
}
.mcp-meta {
padding-top: 12px;
border-top: 1px solid var(--border);
font-size: 12px;
}


@@ -187,7 +187,13 @@ pub async fn build_embeddings(
         }
     };
-    let git_ops = crate::pipeline::git::GitOps::new(&agent_clone.config.git_clone_base_path);
+    let creds = crate::pipeline::git::RepoCredentials {
+        ssh_key_path: Some(agent_clone.config.ssh_key_path.clone()),
+        auth_token: repo.auth_token.clone(),
+        auth_username: repo.auth_username.clone(),
+    };
+    let git_ops =
+        crate::pipeline::git::GitOps::new(&agent_clone.config.git_clone_base_path, creds);
     let repo_path = match git_ops.clone_or_fetch(&repo.git_url, &repo.name) {
         Ok(p) => p,
         Err(e) => {


@@ -291,7 +291,13 @@ pub async fn trigger_build(
         }
     };
-    let git_ops = crate::pipeline::git::GitOps::new(&agent_clone.config.git_clone_base_path);
+    let creds = crate::pipeline::git::RepoCredentials {
+        ssh_key_path: Some(agent_clone.config.ssh_key_path.clone()),
+        auth_token: repo.auth_token.clone(),
+        auth_username: repo.auth_username.clone(),
+    };
+    let git_ops =
+        crate::pipeline::git::GitOps::new(&agent_clone.config.git_clone_base_path, creds);
     let repo_path = match git_ops.clone_or_fetch(&repo.git_url, &repo.name) {
         Ok(p) => p,
         Err(e) => {


@@ -41,6 +41,12 @@ pub struct FindingsFilter {
pub scan_type: Option<String>,
#[serde(default)]
pub status: Option<String>,
#[serde(default)]
pub q: Option<String>,
#[serde(default)]
pub sort_by: Option<String>,
#[serde(default)]
pub sort_order: Option<String>,
#[serde(default = "default_page")]
pub page: u64,
#[serde(default = "default_limit")]
@@ -76,6 +82,8 @@ pub struct AddRepositoryRequest {
pub git_url: String,
#[serde(default = "default_branch")]
pub default_branch: String,
pub auth_token: Option<String>,
pub auth_username: Option<String>,
pub tracker_type: Option<TrackerType>,
pub tracker_owner: Option<String>,
pub tracker_repo: Option<String>,
@@ -91,6 +99,17 @@ pub struct UpdateStatusRequest {
pub status: String,
}
#[derive(Deserialize)]
pub struct BulkUpdateStatusRequest {
pub ids: Vec<String>,
pub status: String,
}
#[derive(Deserialize)]
pub struct UpdateFeedbackRequest {
pub feedback: String,
}
#[derive(Deserialize)]
pub struct SbomFilter {
#[serde(default)]
@@ -267,9 +286,25 @@ pub async fn list_repositories(
pub async fn add_repository(
Extension(agent): AgentExt,
Json(req): Json<AddRepositoryRequest>,
-) -> Result<Json<ApiResponse<TrackedRepository>>, StatusCode> {
+) -> Result<Json<ApiResponse<TrackedRepository>>, (StatusCode, String)> {
// Validate repository access before saving
let creds = crate::pipeline::git::RepoCredentials {
ssh_key_path: Some(agent.config.ssh_key_path.clone()),
auth_token: req.auth_token.clone(),
auth_username: req.auth_username.clone(),
};
if let Err(e) = crate::pipeline::git::GitOps::test_access(&req.git_url, &creds) {
return Err((
StatusCode::BAD_REQUEST,
format!("Cannot access repository: {e}"),
));
}
let mut repo = TrackedRepository::new(req.name, req.git_url);
repo.default_branch = req.default_branch;
repo.auth_token = req.auth_token;
repo.auth_username = req.auth_username;
repo.tracker_type = req.tracker_type;
repo.tracker_owner = req.tracker_owner;
repo.tracker_repo = req.tracker_repo;
@@ -280,7 +315,12 @@ pub async fn add_repository(
.repositories()
.insert_one(&repo)
.await
-        .map_err(|_| StatusCode::CONFLICT)?;
+        .map_err(|_| {
+            (
+                StatusCode::CONFLICT,
+                "Repository already exists".to_string(),
+            )
+        })?;
Ok(Json(ApiResponse {
data: repo,
@@ -289,6 +329,14 @@ pub async fn add_repository(
}))
}
pub async fn get_ssh_public_key(
Extension(agent): AgentExt,
) -> Result<Json<serde_json::Value>, StatusCode> {
let public_path = format!("{}.pub", agent.config.ssh_key_path);
let public_key = std::fs::read_to_string(&public_path).map_err(|_| StatusCode::NOT_FOUND)?;
Ok(Json(serde_json::json!({ "public_key": public_key.trim() })))
}
pub async fn trigger_scan(
Extension(agent): AgentExt,
Path(id): Path<String>,
@@ -367,6 +415,29 @@ pub async fn list_findings(
if let Some(status) = &filter.status {
query.insert("status", status);
}
// Text search across title, description, file_path, rule_id
if let Some(q) = &filter.q {
if !q.is_empty() {
let regex = doc! { "$regex": q, "$options": "i" };
query.insert(
"$or",
mongodb::bson::bson!([
{ "title": regex.clone() },
{ "description": regex.clone() },
{ "file_path": regex.clone() },
{ "rule_id": regex },
]),
);
}
}
// Dynamic sort
let sort_field = filter.sort_by.as_deref().unwrap_or("created_at");
let sort_dir: i32 = match filter.sort_order.as_deref() {
Some("asc") => 1,
_ => -1,
};
let sort_doc = doc! { sort_field: sort_dir };
let skip = (filter.page.saturating_sub(1)) * filter.limit as u64;
let total = db
@@ -378,7 +449,7 @@ pub async fn list_findings(
let findings = match db
.findings()
.find(query)
-        .sort(doc! { "created_at": -1 })
+        .sort(sort_doc)
.skip(skip)
.limit(filter.limit)
.await
@@ -434,6 +505,55 @@ pub async fn update_finding_status(
Ok(Json(serde_json::json!({ "status": "updated" })))
}
pub async fn bulk_update_finding_status(
Extension(agent): AgentExt,
Json(req): Json<BulkUpdateStatusRequest>,
) -> Result<Json<serde_json::Value>, StatusCode> {
let oids: Vec<mongodb::bson::oid::ObjectId> = req
.ids
.iter()
.filter_map(|id| mongodb::bson::oid::ObjectId::parse_str(id).ok())
.collect();
if oids.is_empty() {
return Err(StatusCode::BAD_REQUEST);
}
let result = agent
.db
.findings()
.update_many(
doc! { "_id": { "$in": oids } },
doc! { "$set": { "status": &req.status, "updated_at": mongodb::bson::DateTime::now() } },
)
.await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
Ok(Json(
serde_json::json!({ "status": "updated", "modified_count": result.modified_count }),
))
}
pub async fn update_finding_feedback(
Extension(agent): AgentExt,
Path(id): Path<String>,
Json(req): Json<UpdateFeedbackRequest>,
) -> Result<Json<serde_json::Value>, StatusCode> {
let oid = mongodb::bson::oid::ObjectId::parse_str(&id).map_err(|_| StatusCode::BAD_REQUEST)?;
agent
.db
.findings()
.update_one(
doc! { "_id": oid },
doc! { "$set": { "developer_feedback": &req.feedback, "updated_at": mongodb::bson::DateTime::now() } },
)
.await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
Ok(Json(serde_json::json!({ "status": "updated" })))
}
pub async fn list_sbom(
Extension(agent): AgentExt,
Query(filter): Query<SbomFilter>,

View File

@@ -7,6 +7,10 @@ pub fn build_router() -> Router {
Router::new()
.route("/api/v1/health", get(handlers::health))
.route("/api/v1/stats/overview", get(handlers::stats_overview))
.route(
"/api/v1/settings/ssh-public-key",
get(handlers::get_ssh_public_key),
)
.route("/api/v1/repositories", get(handlers::list_repositories))
.route("/api/v1/repositories", post(handlers::add_repository))
.route(
@@ -23,6 +27,14 @@ pub fn build_router() -> Router {
"/api/v1/findings/{id}/status",
patch(handlers::update_finding_status),
)
.route(
"/api/v1/findings/bulk-status",
patch(handlers::bulk_update_finding_status),
)
.route(
"/api/v1/findings/{id}/feedback",
patch(handlers::update_finding_feedback),
)
.route("/api/v1/sbom", get(handlers::list_sbom))
.route("/api/v1/sbom/export", get(handlers::export_sbom))
.route("/api/v1/sbom/licenses", get(handlers::license_summary))

View File

@@ -45,6 +45,8 @@ pub fn load_config() -> Result<AgentConfig, AgentError> {
.unwrap_or_else(|| "0 0 0 * * *".to_string()),
git_clone_base_path: env_var_opt("GIT_CLONE_BASE_PATH")
.unwrap_or_else(|| "/tmp/compliance-scanner/repos".to_string()),
ssh_key_path: env_var_opt("SSH_KEY_PATH")
.unwrap_or_else(|| "/data/compliance-scanner/ssh/id_ed25519".to_string()),
keycloak_url: env_var_opt("KEYCLOAK_URL"),
keycloak_realm: env_var_opt("KEYCLOAK_REALM"),
})

View File

@@ -5,6 +5,7 @@ pub mod descriptions;
pub mod fixes;
#[allow(dead_code)]
pub mod pr_review;
pub mod review_prompts;
pub mod triage;
pub use client::LlmClient;

View File

@@ -0,0 +1,77 @@
// System prompts for multi-pass LLM code review.
// Each pass focuses on a different aspect to avoid overloading a single prompt.
pub const LOGIC_REVIEW_PROMPT: &str = r#"You are a senior software engineer reviewing code changes. Focus ONLY on logic and correctness issues.
Look for:
- Off-by-one errors, wrong comparisons, missing edge cases
- Incorrect control flow (unreachable code, missing returns, wrong loop conditions)
- Race conditions or concurrency bugs
- Resource leaks (unclosed handles, missing cleanup)
- Wrong variable used (copy-paste errors)
- Incorrect error handling (swallowed errors, wrong error type)
Ignore: style, naming, formatting, documentation, minor improvements.
For each issue found, respond with a JSON array:
[{"title": "...", "description": "...", "severity": "high|medium|low", "file": "...", "line": N, "suggestion": "..."}]
If no issues found, respond with: []"#;
pub const SECURITY_REVIEW_PROMPT: &str = r#"You are a security engineer reviewing code changes. Focus ONLY on security vulnerabilities.
Look for:
- Injection vulnerabilities (SQL, command, XSS, template injection)
- Authentication/authorization bypasses
- Sensitive data exposure (logging secrets, hardcoded credentials)
- Insecure cryptography (weak algorithms, predictable randomness)
- Path traversal, SSRF, open redirects
- Unsafe deserialization
- Missing input validation at trust boundaries
Ignore: code style, performance, general quality.
For each issue found, respond with a JSON array:
[{"title": "...", "description": "...", "severity": "critical|high|medium", "file": "...", "line": N, "cwe": "CWE-XXX", "suggestion": "..."}]
If no issues found, respond with: []"#;
pub const CONVENTION_REVIEW_PROMPT: &str = r#"You are a code reviewer checking adherence to project conventions. Focus ONLY on patterns that indicate likely bugs or maintenance problems.
Look for:
- Inconsistent error handling patterns within the same module
- Public API that doesn't follow the project's established patterns
- Missing or incorrect type annotations that could cause runtime issues
- Anti-patterns specific to the language (e.g. unwrap in Rust library code, any in TypeScript)
Do NOT report: minor style preferences, documentation gaps, formatting.
Only report issues with HIGH confidence that they deviate from the visible codebase conventions.
For each issue found, respond with a JSON array:
[{"title": "...", "description": "...", "severity": "medium|low", "file": "...", "line": N, "suggestion": "..."}]
If no issues found, respond with: []"#;
pub const COMPLEXITY_REVIEW_PROMPT: &str = r#"You are reviewing code changes for excessive complexity that could lead to bugs.
Look for:
- Functions over 50 lines that should be decomposed
- Deeply nested control flow (4+ levels)
- Complex boolean expressions that are hard to reason about
- Functions with 5+ parameters
- Code duplication within the changed files
Only report complexity issues that are HIGH risk for future bugs. Ignore acceptable complexity in configuration, CLI argument parsing, or generated code.
For each issue found, respond with a JSON array:
[{"title": "...", "description": "...", "severity": "medium|low", "file": "...", "line": N, "suggestion": "..."}]
If no issues found, respond with: []"#;
/// All review types with their prompts
pub const REVIEW_PASSES: &[(&str, &str)] = &[
("logic", LOGIC_REVIEW_PROMPT),
("security", SECURITY_REVIEW_PROMPT),
("convention", CONVENTION_REVIEW_PROMPT),
("complexity", COMPLEXITY_REVIEW_PROMPT),
];

View File

@@ -5,13 +5,22 @@ use compliance_core::models::{Finding, FindingStatus};
use crate::llm::LlmClient;
use crate::pipeline::orchestrator::GraphContext;
const TRIAGE_SYSTEM_PROMPT: &str = r#"You are a security finding triage expert. Analyze the following security finding with its code context and determine the appropriate action.
Actions:
- "confirm": The finding is a true positive at the reported severity. Keep as-is.
- "downgrade": The finding is real but over-reported. Lower severity recommended.
- "upgrade": The finding is under-reported. Higher severity recommended.
- "dismiss": The finding is a false positive. Should be removed.
Consider:
- Is the code in a test, example, or generated file? (lower confidence for test code)
- Does the surrounding code context confirm or refute the finding?
- Is the finding actionable by a developer?
- Would a real attacker be able to exploit this?
Respond in JSON format:
{"action": "confirm|downgrade|upgrade|dismiss", "confidence": 0-10, "rationale": "brief explanation", "remediation": "optional fix suggestion"}"#;
pub async fn triage_findings(
llm: &Arc<LlmClient>,
@@ -21,8 +30,10 @@ pub async fn triage_findings(
let mut passed = 0;
for finding in findings.iter_mut() {
let file_classification = classify_file_path(finding.file_path.as_deref());
let mut user_prompt = format!(
"Scanner: {}\nRule: {}\nSeverity: {}\nTitle: {}\nDescription: {}\nFile: {}\nLine: {}\nCode: {}\nFile classification: {}",
finding.scanner,
finding.rule_id.as_deref().unwrap_or("N/A"),
finding.severity,
@@ -31,8 +42,16 @@ pub async fn triage_findings(
finding.file_path.as_deref().unwrap_or("N/A"),
finding.line_number.map(|n| n.to_string()).unwrap_or_else(|| "N/A".to_string()),
finding.code_snippet.as_deref().unwrap_or("N/A"),
file_classification,
);
// Enrich with surrounding code context if possible
if let Some(context) = read_surrounding_context(finding) {
user_prompt.push_str(&format!(
"\n\n--- Surrounding Code (50 lines) ---\n{context}"
));
}
// Enrich with graph context if available
if let Some(ctx) = graph_context {
if let Some(impact) = ctx
@@ -69,32 +88,55 @@ pub async fn triage_findings(
.await
{
Ok(response) => {
// Strip markdown code fences if present (e.g. ```json ... ```)
let cleaned = response.trim();
let cleaned = if cleaned.starts_with("```") {
cleaned
.trim_start_matches("```json")
.trim_start_matches("```")
.trim_end_matches("```")
.trim()
} else {
cleaned
};
if let Ok(result) = serde_json::from_str::<TriageResult>(cleaned) {
// Apply file-path confidence adjustment
let adjusted_confidence =
adjust_confidence(result.confidence, &file_classification);
finding.confidence = Some(adjusted_confidence);
finding.triage_action = Some(result.action.clone());
finding.triage_rationale = Some(result.rationale);
if let Some(remediation) = result.remediation {
finding.remediation = Some(remediation);
}
match result.action.as_str() {
"dismiss" => {
finding.status = FindingStatus::FalsePositive;
}
"downgrade" => {
// Downgrade severity by one level
finding.severity = downgrade_severity(&finding.severity);
finding.status = FindingStatus::Triaged;
passed += 1;
}
"upgrade" => {
finding.severity = upgrade_severity(&finding.severity);
finding.status = FindingStatus::Triaged;
passed += 1;
}
_ => {
// "confirm" or unknown — keep as-is
if adjusted_confidence >= 3.0 {
finding.status = FindingStatus::Triaged;
passed += 1;
} else {
finding.status = FindingStatus::FalsePositive;
}
}
}
} else {
// Parse failure: keep the finding
finding.status = FindingStatus::Triaged;
passed += 1;
tracing::warn!(
@@ -117,12 +159,122 @@ pub async fn triage_findings(
passed
}
/// Read ~50 lines of surrounding code from the file at the finding's location
fn read_surrounding_context(finding: &Finding) -> Option<String> {
let file_path = finding.file_path.as_deref()?;
let line = finding.line_number? as usize;
// Try to read the file — this works because the repo is cloned locally
let content = std::fs::read_to_string(file_path).ok()?;
let lines: Vec<&str> = content.lines().collect();
let end = (line + 25).min(lines.len());
// Clamp start to end so a stale line number past EOF yields an empty
// window instead of a panicking slice
let start = line.saturating_sub(25).min(end);
Some(
lines[start..end]
.iter()
.enumerate()
.map(|(i, l)| format!("{:>4} | {}", start + i + 1, l))
.collect::<Vec<_>>()
.join("\n"),
)
}
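The window arithmetic above can be exercised in isolation. This is a hypothetical std-only sketch (names are illustrative, not from the crate) that computes the 0-based slice bounds, clamped at both ends so an out-of-range line number yields an empty window rather than a panic:

```rust
// Sketch of the ~50-line context window: 25 lines before and after the
// finding's 1-based line number, clamped to the file's bounds.
fn window(line: usize, total: usize) -> (usize, usize) {
    let end = (line + 25).min(total);
    let start = line.saturating_sub(25).min(end);
    (start, end) // 0-based, half-open slice bounds
}

fn main() {
    assert_eq!(window(30, 40), (5, 40));  // end clamped at EOF
    assert_eq!(window(10, 400), (0, 35)); // start clamped at file start
    assert_eq!(window(100, 40), (40, 40)); // stale line number: empty window
    println!("ok");
}
```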
/// Classify a file path to inform triage confidence adjustment
fn classify_file_path(path: Option<&str>) -> String {
let path = match path {
Some(p) => p.to_lowercase(),
None => return "unknown".to_string(),
};
if path.contains("/test/")
|| path.contains("/tests/")
|| path.contains("_test.")
|| path.contains(".test.")
|| path.contains(".spec.")
|| path.contains("/fixtures/")
|| path.contains("/testdata/")
{
return "test".to_string();
}
if path.contains("/example")
|| path.contains("/examples/")
|| path.contains("/demo/")
|| path.contains("/sample")
{
return "example".to_string();
}
if path.contains("/generated/")
|| path.contains("/gen/")
|| path.contains(".generated.")
|| path.contains(".pb.go")
|| path.contains("_generated.rs")
{
return "generated".to_string();
}
if path.contains("/vendor/")
|| path.contains("/node_modules/")
|| path.contains("/third_party/")
{
return "vendored".to_string();
}
"production".to_string()
}
/// Adjust confidence based on file classification
fn adjust_confidence(raw_confidence: f64, classification: &str) -> f64 {
let multiplier = match classification {
"test" => 0.5,
"example" => 0.6,
"generated" => 0.3,
"vendored" => 0.4,
_ => 1.0,
};
raw_confidence * multiplier
}
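To see the effect of the multiplier table above on the triage cutoff, a minimal std-only sketch (the function body mirrors `adjust_confidence`; the sample scores are illustrative):

```rust
// Mirrors the file-classification multipliers used during triage.
fn adjust_confidence(raw_confidence: f64, classification: &str) -> f64 {
    let multiplier = match classification {
        "test" => 0.5,
        "example" => 0.6,
        "generated" => 0.3,
        "vendored" => 0.4,
        _ => 1.0, // "production" and unknown classifications pass through
    };
    raw_confidence * multiplier
}

fn main() {
    // A confidence-8 finding in a test file is halved to 4.0, still above
    // the 3.0 keep-threshold; the same score in a generated file drops to
    // 2.4 and would be marked a false positive.
    assert_eq!(adjust_confidence(8.0, "test"), 4.0);
    assert!(adjust_confidence(8.0, "generated") < 3.0);
    assert_eq!(adjust_confidence(8.0, "production"), 8.0);
    println!("ok");
}
```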
fn downgrade_severity(
severity: &compliance_core::models::Severity,
) -> compliance_core::models::Severity {
use compliance_core::models::Severity;
match severity {
Severity::Critical => Severity::High,
Severity::High => Severity::Medium,
Severity::Medium => Severity::Low,
Severity::Low => Severity::Info,
Severity::Info => Severity::Info,
}
}
fn upgrade_severity(
severity: &compliance_core::models::Severity,
) -> compliance_core::models::Severity {
use compliance_core::models::Severity;
match severity {
Severity::Info => Severity::Low,
Severity::Low => Severity::Medium,
Severity::Medium => Severity::High,
Severity::High => Severity::Critical,
Severity::Critical => Severity::Critical,
}
}
#[derive(serde::Deserialize)]
struct TriageResult {
#[serde(default)]
#[allow(dead_code)]
true_positive: bool,
#[serde(default = "default_action")]
action: String,
#[serde(default)]
confidence: f64,
#[serde(default)]
rationale: String,
remediation: Option<String>,
}
fn default_action() -> String {
"confirm".to_string()
}

View File

@@ -7,6 +7,7 @@ mod llm;
mod pipeline;
mod rag;
mod scheduler;
mod ssh;
#[allow(dead_code)]
mod trackers;
mod webhooks;
@@ -20,6 +21,12 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
tracing::info!("Loading configuration...");
let config = config::load_config()?;
// Ensure SSH key pair exists for cloning private repos
match ssh::ensure_ssh_key(&config.ssh_key_path) {
Ok(pubkey) => tracing::info!("SSH public key: {}", pubkey.trim()),
Err(e) => tracing::warn!("SSH key generation skipped: {e}"),
}
tracing::info!("Connecting to MongoDB...");
let db = database::Database::connect(&config.mongodb_uri, &config.mongodb_database).await?;
db.ensure_indexes().await?;

View File

@@ -0,0 +1,186 @@
use std::path::Path;
use std::sync::Arc;
use compliance_core::models::{Finding, ScanType, Severity};
use compliance_core::traits::ScanOutput;
use crate::llm::review_prompts::REVIEW_PASSES;
use crate::llm::LlmClient;
use crate::pipeline::dedup;
use crate::pipeline::git::{DiffFile, GitOps};
pub struct CodeReviewScanner {
llm: Arc<LlmClient>,
}
impl CodeReviewScanner {
pub fn new(llm: Arc<LlmClient>) -> Self {
Self { llm }
}
/// Run multi-pass LLM code review on the diff between old and new commits.
pub async fn review_diff(
&self,
repo_path: &Path,
repo_id: &str,
old_sha: &str,
new_sha: &str,
) -> ScanOutput {
let diff_files = match GitOps::get_diff_content(repo_path, old_sha, new_sha) {
Ok(files) => files,
Err(e) => {
tracing::warn!("Failed to extract diff for code review: {e}");
return ScanOutput::default();
}
};
if diff_files.is_empty() {
return ScanOutput::default();
}
let mut all_findings = Vec::new();
// Chunk diff files into groups to avoid exceeding context limits
let chunks = chunk_diff_files(&diff_files, 8000);
for (pass_name, system_prompt) in REVIEW_PASSES {
for chunk in &chunks {
let user_prompt = format!(
"Review the following code changes:\n\n{}",
chunk
.iter()
.map(|f| format!("--- {} ---\n{}", f.path, f.hunks))
.collect::<Vec<_>>()
.join("\n\n")
);
match self.llm.chat(system_prompt, &user_prompt, Some(0.1)).await {
Ok(response) => {
let parsed = parse_review_response(&response, pass_name, repo_id, chunk);
all_findings.extend(parsed);
}
Err(e) => {
tracing::warn!("Code review pass '{pass_name}' failed: {e}");
}
}
}
}
ScanOutput {
findings: all_findings,
sbom_entries: Vec::new(),
}
}
}
/// Group diff files into chunks that fit within a token budget (rough char estimate)
fn chunk_diff_files(files: &[DiffFile], max_chars: usize) -> Vec<Vec<&DiffFile>> {
let mut chunks: Vec<Vec<&DiffFile>> = Vec::new();
let mut current_chunk: Vec<&DiffFile> = Vec::new();
let mut current_size = 0;
for file in files {
if current_size + file.hunks.len() > max_chars && !current_chunk.is_empty() {
chunks.push(std::mem::take(&mut current_chunk));
current_size = 0;
}
current_chunk.push(file);
current_size += file.hunks.len();
}
if !current_chunk.is_empty() {
chunks.push(current_chunk);
}
chunks
}
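The greedy budgeting above can be demonstrated on plain sizes. A hypothetical std-only sketch (operating on character counts instead of `DiffFile`s; the sizes are illustrative) of the same packing rule:

```rust
// Greedy chunking: append files in order, start a new chunk once adding the
// next one would exceed the budget. An oversized single file still gets its
// own chunk rather than being dropped.
fn chunk_by_budget(sizes: &[usize], max_chars: usize) -> Vec<Vec<usize>> {
    let mut chunks: Vec<Vec<usize>> = Vec::new();
    let mut current: Vec<usize> = Vec::new();
    let mut current_size = 0;
    for &len in sizes {
        if current_size + len > max_chars && !current.is_empty() {
            chunks.push(std::mem::take(&mut current));
            current_size = 0;
        }
        current.push(len);
        current_size += len;
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

fn main() {
    // 5000 + 4000 overflows an 8000-char budget, so a new chunk starts;
    // 4000 + 2000 fits together in the second chunk.
    assert_eq!(
        chunk_by_budget(&[5000, 4000, 2000], 8000),
        vec![vec![5000], vec![4000, 2000]]
    );
    assert_eq!(chunk_by_budget(&[9000], 8000), vec![vec![9000]]);
    println!("ok");
}
```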
fn parse_review_response(
response: &str,
pass_name: &str,
repo_id: &str,
chunk: &[&DiffFile],
) -> Vec<Finding> {
let cleaned = response.trim();
let cleaned = if cleaned.starts_with("```") {
cleaned
.trim_start_matches("```json")
.trim_start_matches("```")
.trim_end_matches("```")
.trim()
} else {
cleaned
};
let issues: Vec<ReviewIssue> = match serde_json::from_str(cleaned) {
Ok(v) => v,
Err(_) => {
if cleaned != "[]" {
tracing::debug!("Failed to parse {pass_name} review response: {cleaned}");
}
return Vec::new();
}
};
issues
.into_iter()
.filter(|issue| {
// Verify the file exists in the diff chunk
chunk.iter().any(|f| f.path == issue.file)
})
.map(|issue| {
let severity = match issue.severity.as_str() {
"critical" => Severity::Critical,
"high" => Severity::High,
"medium" => Severity::Medium,
"low" => Severity::Low,
_ => Severity::Info,
};
let fingerprint = dedup::compute_fingerprint(&[
repo_id,
"code-review",
pass_name,
&issue.file,
&issue.line.to_string(),
&issue.title,
]);
let description = if let Some(suggestion) = &issue.suggestion {
format!("{}\n\nSuggested fix: {}", issue.description, suggestion)
} else {
issue.description.clone()
};
let mut finding = Finding::new(
repo_id.to_string(),
fingerprint,
format!("code-review/{pass_name}"),
ScanType::CodeReview,
issue.title,
description,
severity,
);
finding.rule_id = Some(format!("review/{pass_name}"));
finding.file_path = Some(issue.file);
finding.line_number = Some(issue.line);
finding.cwe = issue.cwe;
finding.suggested_fix = issue.suggestion;
finding
})
.collect()
}
#[derive(serde::Deserialize)]
struct ReviewIssue {
title: String,
description: String,
severity: String,
file: String,
#[serde(default)]
line: u32,
#[serde(default)]
cwe: Option<String>,
#[serde(default)]
suggestion: Option<String>,
}

View File

@@ -64,6 +64,8 @@ impl CveScanner {
}
async fn query_osv_batch(&self, entries: &[SbomEntry]) -> Result<Vec<Vec<OsvVuln>>, CoreError> {
const OSV_BATCH_SIZE: usize = 500;
let queries: Vec<_> = entries
.iter()
.filter_map(|e| {
@@ -79,32 +81,34 @@ impl CveScanner {
return Ok(Vec::new());
}
let mut all_vulns: Vec<Vec<OsvVuln>> = Vec::with_capacity(queries.len());
for chunk in queries.chunks(OSV_BATCH_SIZE) {
let body = serde_json::json!({ "queries": chunk });
let resp = self
.http
.post("https://api.osv.dev/v1/querybatch")
.json(&body)
.send()
.await
.map_err(|e| CoreError::Http(format!("OSV.dev request failed: {e}")))?;
if !resp.status().is_success() {
let status = resp.status();
let body = resp.text().await.unwrap_or_default();
tracing::warn!("OSV.dev returned {status}: {body}");
// Push empty results for this chunk so indices stay aligned
all_vulns.extend(std::iter::repeat_with(Vec::new).take(chunk.len()));
continue;
}
let result: OsvBatchResponse = resp
.json()
.await
.map_err(|e| CoreError::Http(format!("Failed to parse OSV.dev response: {e}")))?;
let chunk_vulns = result.results.into_iter().map(|r| {
r.vulns
.unwrap_or_default()
.into_iter()
@@ -116,10 +120,12 @@ impl CveScanner {
}),
})
.collect()
});
all_vulns.extend(chunk_vulns);
}
Ok(all_vulns)
}
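The "indices stay aligned" comment above is the crux of the chunked rewrite: the caller matches result `i` back to SBOM entry `i` by position, so a failed chunk must contribute one empty slot per query. A hypothetical std-only sketch of that padding rule (names and types are illustrative, not the crate's):

```rust
// Each chunk is (query_count, per-query results or None on HTTP failure).
// Failed chunks are padded with empty vectors so positions stay aligned.
fn gather(chunks: &[(usize, Option<Vec<Vec<String>>>)]) -> Vec<Vec<String>> {
    let mut all = Vec::new();
    for (query_count, outcome) in chunks {
        match outcome {
            Some(results) => all.extend(results.clone()),
            // Simulated request failure: one empty slot per query
            None => all.extend(std::iter::repeat_with(Vec::new).take(*query_count)),
        }
    }
    all
}

fn main() {
    let chunks = vec![
        (2, Some(vec![vec!["CVE-2024-0001".to_string()], vec![]])),
        (1, None), // this chunk's request failed
    ];
    let all = gather(&chunks);
    assert_eq!(all.len(), 3); // one slot per original query
    assert!(all[2].is_empty()); // the failed chunk's query maps to "no vulns"
    println!("ok");
}
```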
async fn query_nvd(&self, cve_id: &str) -> Result<Option<f64>, CoreError> {

View File

@@ -1,17 +1,80 @@
use std::path::{Path, PathBuf};
use git2::{Cred, FetchOptions, RemoteCallbacks, Repository};
use crate::error::AgentError;
/// Credentials for accessing a private repository
#[derive(Debug, Clone, Default)]
pub struct RepoCredentials {
/// Path to the SSH private key (for SSH URLs)
pub ssh_key_path: Option<String>,
/// Auth token / password (for HTTPS URLs)
pub auth_token: Option<String>,
/// Username for HTTPS auth (defaults to "x-access-token")
pub auth_username: Option<String>,
}
impl RepoCredentials {
pub(crate) fn make_callbacks(&self) -> RemoteCallbacks<'_> {
let mut callbacks = RemoteCallbacks::new();
let ssh_key = self.ssh_key_path.clone();
let token = self.auth_token.clone();
let username = self.auth_username.clone();
callbacks.credentials(move |_url, username_from_url, allowed_types| {
// SSH key authentication
if allowed_types.contains(git2::CredentialType::SSH_KEY) {
if let Some(ref key_path) = ssh_key {
let key = Path::new(key_path);
if key.exists() {
let user = username_from_url.unwrap_or("git");
return Cred::ssh_key(user, None, key, None);
}
}
}
// HTTPS userpass authentication
if allowed_types.contains(git2::CredentialType::USER_PASS_PLAINTEXT) {
if let Some(ref tok) = token {
let user = username.as_deref().unwrap_or("x-access-token");
return Cred::userpass_plaintext(user, tok);
}
}
Cred::default()
});
callbacks
}
fn fetch_options(&self) -> FetchOptions<'_> {
let mut fetch_opts = FetchOptions::new();
if self.has_credentials() {
fetch_opts.remote_callbacks(self.make_callbacks());
}
fetch_opts
}
fn has_credentials(&self) -> bool {
self.ssh_key_path
.as_ref()
.map(|p| Path::new(p).exists())
.unwrap_or(false)
|| self.auth_token.is_some()
}
}
pub struct GitOps {
base_path: PathBuf,
credentials: RepoCredentials,
}
impl GitOps {
pub fn new(base_path: &str, credentials: RepoCredentials) -> Self {
Self {
base_path: PathBuf::from(base_path),
credentials,
}
}
@@ -22,17 +85,25 @@ impl GitOps {
self.fetch(&repo_path)?;
} else {
std::fs::create_dir_all(&repo_path)?;
self.clone_repo(git_url, &repo_path)?;
tracing::info!("Cloned {git_url} to {}", repo_path.display());
}
Ok(repo_path)
}
fn clone_repo(&self, git_url: &str, repo_path: &Path) -> Result<(), AgentError> {
let mut builder = git2::build::RepoBuilder::new();
let fetch_opts = self.credentials.fetch_options();
builder.fetch_options(fetch_opts);
builder.clone(git_url, repo_path)?;
Ok(())
}
fn fetch(&self, repo_path: &Path) -> Result<(), AgentError> {
let repo = Repository::open(repo_path)?;
let mut remote = repo.find_remote("origin")?;
let mut fetch_opts = self.credentials.fetch_options();
remote.fetch(&[] as &[&str], Some(&mut fetch_opts), None)?;
// Fast-forward to origin/HEAD
@@ -48,6 +119,15 @@ impl GitOps {
Ok(())
}
/// Test that we can access a remote repository (used during add validation)
pub fn test_access(git_url: &str, credentials: &RepoCredentials) -> Result<(), AgentError> {
let mut remote = git2::Remote::create_detached(git_url)?;
let callbacks = credentials.make_callbacks();
remote.connect_auth(git2::Direction::Fetch, Some(callbacks), None)?;
remote.disconnect()?;
Ok(())
}
pub fn get_head_sha(repo_path: &Path) -> Result<String, AgentError> {
let repo = Repository::open(repo_path)?;
let head = repo.head()?;
@@ -63,6 +143,62 @@ impl GitOps {
}
}
/// Extract structured diff content between two commits
pub fn get_diff_content(
repo_path: &Path,
old_sha: &str,
new_sha: &str,
) -> Result<Vec<DiffFile>, AgentError> {
let repo = Repository::open(repo_path)?;
let old_commit = repo.find_commit(git2::Oid::from_str(old_sha)?)?;
let new_commit = repo.find_commit(git2::Oid::from_str(new_sha)?)?;
let old_tree = old_commit.tree()?;
let new_tree = new_commit.tree()?;
let diff = repo.diff_tree_to_tree(Some(&old_tree), Some(&new_tree), None)?;
let mut diff_files: Vec<DiffFile> = Vec::new();
diff.print(git2::DiffFormat::Patch, |delta, _hunk, line| {
let file_path = delta
.new_file()
.path()
.map(|p| p.to_string_lossy().to_string())
.unwrap_or_default();
// Find or create the DiffFile entry
let idx = if let Some(pos) = diff_files.iter().position(|f| f.path == file_path) {
pos
} else {
diff_files.push(DiffFile {
path: file_path,
hunks: String::new(),
});
diff_files.len() - 1
};
let diff_file = &mut diff_files[idx];
let prefix = match line.origin() {
'+' => "+",
'-' => "-",
' ' => " ",
_ => "",
};
let content = std::str::from_utf8(line.content()).unwrap_or("");
diff_file.hunks.push_str(prefix);
diff_file.hunks.push_str(content);
true
})?;
// Filter out binary files and very large diffs
diff_files.retain(|f| !f.hunks.is_empty() && f.hunks.len() < 50_000);
Ok(diff_files)
}
#[allow(dead_code)]
pub fn get_changed_files(
repo_path: &Path,
@@ -94,3 +230,10 @@ impl GitOps {
Ok(files)
}
}
/// A file changed between two commits with its diff content
#[derive(Debug, Clone)]
pub struct DiffFile {
pub path: String,
pub hunks: String,
}

View File

@@ -0,0 +1,130 @@
use std::path::Path;
use compliance_core::models::{Finding, ScanType, Severity};
use compliance_core::traits::{ScanOutput, Scanner};
use compliance_core::CoreError;
use crate::pipeline::dedup;
pub struct GitleaksScanner;
impl Scanner for GitleaksScanner {
fn name(&self) -> &str {
"gitleaks"
}
fn scan_type(&self) -> ScanType {
ScanType::SecretDetection
}
async fn scan(&self, repo_path: &Path, repo_id: &str) -> Result<ScanOutput, CoreError> {
let output = tokio::process::Command::new("gitleaks")
.args([
"detect",
"--source",
".",
"--report-format",
"json",
"--report-path",
"/dev/stdout",
"--no-banner",
"--exit-code",
"0",
])
.current_dir(repo_path)
.output()
.await
.map_err(|e| CoreError::Scanner {
scanner: "gitleaks".to_string(),
source: Box::new(e),
})?;
if output.stdout.is_empty() {
return Ok(ScanOutput::default());
}
let results: Vec<GitleaksResult> =
serde_json::from_slice(&output.stdout).unwrap_or_default();
let findings = results
.into_iter()
.filter(|r| !is_allowlisted(&r.file))
.map(|r| {
let severity = match r.rule_id.as_str() {
s if s.contains("private-key") => Severity::Critical,
s if s.contains("token") || s.contains("password") || s.contains("secret") => {
Severity::High
}
s if s.contains("api-key") => Severity::High,
_ => Severity::Medium,
};
let fingerprint = dedup::compute_fingerprint(&[
repo_id,
&r.rule_id,
&r.file,
&r.start_line.to_string(),
]);
let title = format!("Secret detected: {}", r.description);
let description = format!(
"Potential secret ({}) found in {}:{}. Match: {}",
r.rule_id,
r.file,
r.start_line,
r.r#match.chars().take(80).collect::<String>(),
);
let mut finding = Finding::new(
repo_id.to_string(),
fingerprint,
"gitleaks".to_string(),
ScanType::SecretDetection,
title,
description,
severity,
);
finding.rule_id = Some(r.rule_id);
finding.file_path = Some(r.file);
finding.line_number = Some(r.start_line);
finding.code_snippet = Some(r.r#match);
finding
})
.collect();
Ok(ScanOutput {
findings,
sbom_entries: Vec::new(),
})
}
}
/// Skip files that commonly contain example/placeholder secrets
fn is_allowlisted(file_path: &str) -> bool {
let lower = file_path.to_lowercase();
lower.ends_with(".env.example")
|| lower.ends_with(".env.sample")
|| lower.ends_with(".env.template")
|| lower.contains("/test/")
|| lower.contains("/tests/")
|| lower.contains("/fixtures/")
|| lower.contains("/testdata/")
|| lower.contains("mock")
|| lower.ends_with("_test.go")
|| lower.ends_with(".test.ts")
|| lower.ends_with(".test.js")
|| lower.ends_with(".spec.ts")
|| lower.ends_with(".spec.js")
}
#[derive(serde::Deserialize)]
#[serde(rename_all = "PascalCase")]
struct GitleaksResult {
description: String,
#[serde(rename = "RuleID")]
rule_id: String,
file: String,
start_line: u32,
#[serde(rename = "Match")]
r#match: String,
}

View File

@@ -0,0 +1,364 @@
use std::path::Path;
use std::time::Duration;
use compliance_core::models::{Finding, ScanType, Severity};
use compliance_core::traits::{ScanOutput, Scanner};
use compliance_core::CoreError;
use tokio::process::Command;
use crate::pipeline::dedup;
/// Timeout for each individual lint command
const LINT_TIMEOUT: Duration = Duration::from_secs(120);
pub struct LintScanner;
impl Scanner for LintScanner {
fn name(&self) -> &str {
"lint"
}
fn scan_type(&self) -> ScanType {
ScanType::Lint
}
async fn scan(&self, repo_path: &Path, repo_id: &str) -> Result<ScanOutput, CoreError> {
let mut all_findings = Vec::new();
// Detect which languages are present and run appropriate linters
if has_rust_project(repo_path) {
match run_clippy(repo_path, repo_id).await {
Ok(findings) => all_findings.extend(findings),
Err(e) => tracing::warn!("Clippy failed: {e}"),
}
}
if has_js_project(repo_path) {
match run_eslint(repo_path, repo_id).await {
Ok(findings) => all_findings.extend(findings),
Err(e) => tracing::warn!("ESLint failed: {e}"),
}
}
if has_python_project(repo_path) {
match run_ruff(repo_path, repo_id).await {
Ok(findings) => all_findings.extend(findings),
Err(e) => tracing::warn!("Ruff failed: {e}"),
}
}
Ok(ScanOutput {
findings: all_findings,
sbom_entries: Vec::new(),
})
}
}
fn has_rust_project(repo_path: &Path) -> bool {
repo_path.join("Cargo.toml").exists()
}
fn has_js_project(repo_path: &Path) -> bool {
// Only run if eslint is actually installed in the project
repo_path.join("package.json").exists() && repo_path.join("node_modules/.bin/eslint").exists()
}
fn has_python_project(repo_path: &Path) -> bool {
repo_path.join("pyproject.toml").exists()
|| repo_path.join("setup.py").exists()
|| repo_path.join("requirements.txt").exists()
}
/// Run a command with a timeout, returning its output or an error
async fn run_with_timeout(
child: tokio::process::Child,
scanner_name: &str,
) -> Result<std::process::Output, CoreError> {
let result = tokio::time::timeout(LINT_TIMEOUT, child.wait_with_output()).await;
match result {
Ok(Ok(output)) => Ok(output),
Ok(Err(e)) => Err(CoreError::Scanner {
scanner: scanner_name.to_string(),
source: Box::new(e),
}),
Err(_) => {
// The timed-out future (and the Child it owns) is dropped here; the
// process is only killed if it was spawned with kill_on_drop(true)
Err(CoreError::Scanner {
scanner: scanner_name.to_string(),
source: Box::new(std::io::Error::new(
std::io::ErrorKind::TimedOut,
format!("{scanner_name} timed out after {}s", LINT_TIMEOUT.as_secs()),
)),
})
}
}
}
// ── Clippy ──────────────────────────────────────────────
async fn run_clippy(repo_path: &Path, repo_id: &str) -> Result<Vec<Finding>, CoreError> {
let child = Command::new("cargo")
.args([
"clippy",
"--message-format=json",
"--quiet",
"--",
"-W",
"clippy::all",
])
.current_dir(repo_path)
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped())
// Kill a hung linter when the timeout drops the child
.kill_on_drop(true)
.spawn()
.map_err(|e| CoreError::Scanner {
scanner: "clippy".to_string(),
source: Box::new(e),
})?;
let output = run_with_timeout(child, "clippy").await?;
let stdout = String::from_utf8_lossy(&output.stdout);
let mut findings = Vec::new();
for line in stdout.lines() {
let msg: serde_json::Value = match serde_json::from_str(line) {
Ok(v) => v,
Err(_) => continue,
};
if msg.get("reason").and_then(|v| v.as_str()) != Some("compiler-message") {
continue;
}
let message = match msg.get("message") {
Some(m) => m,
None => continue,
};
let level = message.get("level").and_then(|v| v.as_str()).unwrap_or("");
if level != "warning" && level != "error" {
continue;
}
let text = message
.get("message")
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string();
let code = message
.get("code")
.and_then(|v| v.get("code"))
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string();
if text.starts_with("aborting due to") || code.is_empty() {
continue;
}
let (file_path, line_number) = extract_primary_span(message);
let severity = if level == "error" {
Severity::High
} else {
Severity::Low
};
let fingerprint = dedup::compute_fingerprint(&[
repo_id,
"clippy",
&code,
&file_path,
&line_number.to_string(),
]);
let mut finding = Finding::new(
repo_id.to_string(),
fingerprint,
"clippy".to_string(),
ScanType::Lint,
format!("[clippy] {text}"),
text,
severity,
);
finding.rule_id = Some(code);
if !file_path.is_empty() {
finding.file_path = Some(file_path);
}
if line_number > 0 {
finding.line_number = Some(line_number);
}
findings.push(finding);
}
Ok(findings)
}
fn extract_primary_span(message: &serde_json::Value) -> (String, u32) {
let spans = match message.get("spans").and_then(|v| v.as_array()) {
Some(s) => s,
None => return (String::new(), 0),
};
for span in spans {
if span.get("is_primary").and_then(|v| v.as_bool()) == Some(true) {
let file = span
.get("file_name")
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string();
let line = span.get("line_start").and_then(|v| v.as_u64()).unwrap_or(0) as u32;
return (file, line);
}
}
(String::new(), 0)
}
// ── ESLint ──────────────────────────────────────────────
async fn run_eslint(repo_path: &Path, repo_id: &str) -> Result<Vec<Finding>, CoreError> {
// Use the project-local eslint binary directly, not npx, which can hang while downloading packages
let eslint_bin = repo_path.join("node_modules/.bin/eslint");
let child = Command::new(eslint_bin)
.args([".", "--format", "json", "--no-error-on-unmatched-pattern"])
.current_dir(repo_path)
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped())
.spawn()
.map_err(|e| CoreError::Scanner {
scanner: "eslint".to_string(),
source: Box::new(e),
})?;
let output = run_with_timeout(child, "eslint").await?;
if output.stdout.is_empty() {
return Ok(Vec::new());
}
let results: Vec<EslintFileResult> = serde_json::from_slice(&output.stdout).unwrap_or_default();
let mut findings = Vec::new();
for file_result in results {
for msg in file_result.messages {
let severity = match msg.severity {
2 => Severity::Medium,
_ => Severity::Low,
};
let rule_id = msg.rule_id.unwrap_or_default();
let fingerprint = dedup::compute_fingerprint(&[
repo_id,
"eslint",
&rule_id,
&file_result.file_path,
&msg.line.to_string(),
]);
let mut finding = Finding::new(
repo_id.to_string(),
fingerprint,
"eslint".to_string(),
ScanType::Lint,
format!("[eslint] {}", msg.message),
msg.message,
severity,
);
finding.rule_id = Some(rule_id);
finding.file_path = Some(file_result.file_path.clone());
finding.line_number = Some(msg.line);
findings.push(finding);
}
}
Ok(findings)
}
#[derive(serde::Deserialize)]
struct EslintFileResult {
#[serde(rename = "filePath")]
file_path: String,
messages: Vec<EslintMessage>,
}
#[derive(serde::Deserialize)]
struct EslintMessage {
#[serde(rename = "ruleId")]
rule_id: Option<String>,
severity: u8,
message: String,
line: u32,
}
// ── Ruff ────────────────────────────────────────────────
async fn run_ruff(repo_path: &Path, repo_id: &str) -> Result<Vec<Finding>, CoreError> {
let child = Command::new("ruff")
.args(["check", ".", "--output-format", "json", "--exit-zero"])
.current_dir(repo_path)
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped())
.spawn()
.map_err(|e| CoreError::Scanner {
scanner: "ruff".to_string(),
source: Box::new(e),
})?;
let output = run_with_timeout(child, "ruff").await?;
if output.stdout.is_empty() {
return Ok(Vec::new());
}
let results: Vec<RuffResult> = serde_json::from_slice(&output.stdout).unwrap_or_default();
let findings = results
.into_iter()
.map(|r| {
let severity = if r.code.starts_with('E') || r.code.starts_with('F') {
Severity::Medium
} else {
Severity::Low
};
let fingerprint = dedup::compute_fingerprint(&[
repo_id,
"ruff",
&r.code,
&r.filename,
&r.location.row.to_string(),
]);
let mut finding = Finding::new(
repo_id.to_string(),
fingerprint,
"ruff".to_string(),
ScanType::Lint,
format!("[ruff] {}: {}", r.code, r.message),
r.message,
severity,
);
finding.rule_id = Some(r.code);
finding.file_path = Some(r.filename);
finding.line_number = Some(r.location.row);
finding
})
.collect();
Ok(findings)
}
#[derive(serde::Deserialize)]
struct RuffResult {
code: String,
message: String,
filename: String,
location: RuffLocation,
}
#[derive(serde::Deserialize)]
struct RuffLocation {
row: u32,
}

View File

@@ -1,6 +1,9 @@
pub mod code_review;
pub mod cve;
pub mod dedup;
pub mod git;
pub mod gitleaks;
pub mod lint;
pub mod orchestrator;
pub mod patterns;
pub mod sbom;

View File

@@ -9,8 +9,11 @@ use compliance_core::AgentConfig;
use crate::database::Database;
use crate::error::AgentError;
use crate::llm::LlmClient;
use crate::pipeline::code_review::CodeReviewScanner;
use crate::pipeline::cve::CveScanner;
use crate::pipeline::git::GitOps;
use crate::pipeline::git::{GitOps, RepoCredentials};
use crate::pipeline::gitleaks::GitleaksScanner;
use crate::pipeline::lint::LintScanner;
use crate::pipeline::patterns::{GdprPatternScanner, OAuthPatternScanner};
use crate::pipeline::sbom::SbomScanner;
use crate::pipeline::semgrep::SemgrepScanner;
@@ -114,7 +117,12 @@ impl PipelineOrchestrator {
// Stage 0: Change detection
tracing::info!("[{repo_id}] Stage 0: Change detection");
let git_ops = GitOps::new(&self.config.git_clone_base_path);
let creds = RepoCredentials {
ssh_key_path: Some(self.config.ssh_key_path.clone()),
auth_token: repo.auth_token.clone(),
auth_username: repo.auth_username.clone(),
};
let git_ops = GitOps::new(&self.config.git_clone_base_path, creds);
let repo_path = git_ops.clone_or_fetch(&repo.git_url, &repo.name)?;
if !GitOps::has_new_commits(&repo_path, repo.last_scanned_commit.as_deref())? {
@@ -182,6 +190,35 @@ impl PipelineOrchestrator {
Err(e) => tracing::warn!("[{repo_id}] OAuth pattern scan failed: {e}"),
}
// Stage 4a: Secret Detection (Gitleaks)
tracing::info!("[{repo_id}] Stage 4a: Secret Detection");
self.update_phase(scan_run_id, "secret_detection").await;
let gitleaks = GitleaksScanner;
match gitleaks.scan(&repo_path, &repo_id).await {
Ok(output) => all_findings.extend(output.findings),
Err(e) => tracing::warn!("[{repo_id}] Gitleaks failed: {e}"),
}
// Stage 4b: Lint Scanning
tracing::info!("[{repo_id}] Stage 4b: Lint Scanning");
self.update_phase(scan_run_id, "lint_scanning").await;
let lint = LintScanner;
match lint.scan(&repo_path, &repo_id).await {
Ok(output) => all_findings.extend(output.findings),
Err(e) => tracing::warn!("[{repo_id}] Lint scanning failed: {e}"),
}
// Stage 4c: LLM Code Review (only on incremental scans)
if let Some(old_sha) = &repo.last_scanned_commit {
tracing::info!("[{repo_id}] Stage 4c: LLM Code Review");
self.update_phase(scan_run_id, "code_review").await;
let reviewer = CodeReviewScanner::new(self.llm.clone());
let review_output = reviewer
.review_diff(&repo_path, &repo_id, old_sha, &current_sha)
.await;
all_findings.extend(review_output.findings);
}
// Stage 4.5: Graph Building
tracing::info!("[{repo_id}] Stage 4.5: Graph Building");
self.update_phase(scan_run_id, "graph_building").await;
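Stages 4a and 4b follow the same failure-isolation pattern: a scanner's findings are merged on success, and a failure is only logged, so one broken tool never aborts the whole scan. A stdlib-only sketch of that pattern (`run_stage` and the `String` error type are illustrative, not from the diff):

```rust
// Illustrative helper mirroring the match arms in stages 4a/4b: merge a
// scanner's findings on success, log and continue on failure.
fn run_stage<T>(all: &mut Vec<T>, stage: &str, result: Result<Vec<T>, String>) {
    match result {
        Ok(items) => all.extend(items),
        Err(e) => eprintln!("[repo] {stage} failed: {e}"), // stand-in for tracing::warn!
    }
}

fn main() {
    let mut findings: Vec<&str> = Vec::new();
    run_stage(&mut findings, "gitleaks", Ok(vec!["hardcoded-key"]));
    run_stage(&mut findings, "lint", Err("eslint binary missing".into()));
    run_stage(&mut findings, "clippy", Ok(vec!["needless_return"]));
    // The failed stage contributed nothing; the scan still completed.
    assert_eq!(findings, vec!["hardcoded-key", "needless_return"]);
}
```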

View File

@@ -0,0 +1,53 @@
use std::path::Path;
use crate::error::AgentError;
/// Ensure the SSH key pair exists at the given path, generating it if missing.
/// Returns the public key contents.
pub fn ensure_ssh_key(key_path: &str) -> Result<String, AgentError> {
let private_path = Path::new(key_path);
let public_path = private_path.with_extension("pub");
if private_path.exists() && public_path.exists() {
return std::fs::read_to_string(&public_path)
.map_err(|e| AgentError::Config(format!("Failed to read SSH public key: {e}")));
}
// Create parent directory
if let Some(parent) = private_path.parent() {
std::fs::create_dir_all(parent)?;
}
// Generate ed25519 key pair using ssh-keygen
let output = std::process::Command::new("ssh-keygen")
.args([
"-t",
"ed25519",
"-f",
key_path,
"-N",
"", // no passphrase
"-C",
"compliance-scanner-agent",
])
.output()
.map_err(|e| AgentError::Config(format!("Failed to run ssh-keygen: {e}")))?;
if !output.status.success() {
let stderr = String::from_utf8_lossy(&output.stderr);
return Err(AgentError::Config(format!("ssh-keygen failed: {stderr}")));
}
// Set correct permissions
#[cfg(unix)]
{
use std::os::unix::fs::PermissionsExt;
std::fs::set_permissions(private_path, std::fs::Permissions::from_mode(0o600))?;
}
let public_key = std::fs::read_to_string(&public_path)
.map_err(|e| AgentError::Config(format!("Failed to read generated SSH public key: {e}")))?;
tracing::info!("Generated new SSH key pair at {key_path}");
Ok(public_key)
}
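The read-if-present, create-otherwise flow of `ensure_ssh_key` (including the 0o600 tightening) can be exercised without shelling out to `ssh-keygen`. In this sketch, `ensure_file` and the paths are hypothetical stand-ins, not the real API:

```rust
use std::fs;
use std::path::Path;

// Hypothetical stand-in for ensure_ssh_key: reuse the file if it exists,
// otherwise create it with restrictive permissions and return its contents.
fn ensure_file(path: &Path, default_contents: &str) -> std::io::Result<String> {
    if !path.exists() {
        if let Some(parent) = path.parent() {
            fs::create_dir_all(parent)?;
        }
        fs::write(path, default_contents)?;
        #[cfg(unix)]
        {
            use std::os::unix::fs::PermissionsExt;
            // Key material must be owner-only, like the real 0o600 chmod.
            fs::set_permissions(path, fs::Permissions::from_mode(0o600))?;
        }
    }
    fs::read_to_string(path)
}

fn main() -> std::io::Result<()> {
    let key = std::env::temp_dir().join("ensure-demo").join("id_ed25519");
    let _ = fs::remove_file(&key); // start clean for the demo
    let first = ensure_file(&key, "secret")?;
    let second = ensure_file(&key, "ignored")?; // existing key is reused, not overwritten
    assert_eq!(first, "secret");
    assert_eq!(second, "secret");
    Ok(())
}
```

The idempotency matters operationally: the agent can call this on every boot and only generate a key pair once.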

View File

@@ -32,7 +32,7 @@ bson = { version = "2", features = ["chrono-0_4"] }
mongodb = { workspace = true, optional = true }
opentelemetry = { version = "0.29", optional = true }
opentelemetry_sdk = { version = "0.29", features = ["rt-tokio"], optional = true }
opentelemetry-otlp = { version = "0.29", features = ["grpc-tonic"], optional = true }
opentelemetry-otlp = { version = "0.29", features = ["http", "reqwest-rustls"], optional = true }
opentelemetry-appender-tracing = { version = "0.29", optional = true }
tracing-opentelemetry = { version = "0.30", optional = true }
tracing-subscriber = { workspace = true, optional = true }

View File

@@ -24,6 +24,7 @@ pub struct AgentConfig {
pub scan_schedule: String,
pub cve_monitor_schedule: String,
pub git_clone_base_path: String,
pub ssh_key_path: String,
pub keycloak_url: Option<String>,
pub keycloak_realm: Option<String>,
}
@@ -34,4 +35,5 @@ pub struct DashboardConfig {
pub mongodb_database: String,
pub agent_api_url: String,
pub dashboard_port: u16,
pub mcp_endpoint_url: Option<String>,
}

View File

@@ -23,6 +23,7 @@ pub struct CveAlert {
pub summary: Option<String>,
pub llm_impact_summary: Option<String>,
pub references: Vec<String>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub created_at: DateTime<Utc>,
}

View File

@@ -58,7 +58,9 @@ pub struct DastTarget {
pub rate_limit: u32,
/// Whether destructive tests (DELETE, PUT) are allowed
pub allow_destructive: bool,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub created_at: DateTime<Utc>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub updated_at: DateTime<Utc>,
}
@@ -135,7 +137,9 @@ pub struct DastScanRun {
pub error_message: Option<String>,
/// Linked SAST scan run ID (if triggered as part of pipeline)
pub sast_scan_run_id: Option<String>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub started_at: DateTime<Utc>,
#[serde(default, with = "super::serde_helpers::opt_bson_datetime")]
pub completed_at: Option<DateTime<Utc>>,
}
@@ -240,6 +244,7 @@ pub struct DastFinding {
pub remediation: Option<String>,
/// Linked SAST finding ID (if correlated)
pub linked_sast_finding_id: Option<String>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub created_at: DateTime<Utc>,
}

View File

@@ -71,7 +71,14 @@ pub struct Finding {
pub status: FindingStatus,
pub tracker_issue_url: Option<String>,
pub scan_run_id: Option<String>,
/// LLM triage action and reasoning
pub triage_action: Option<String>,
pub triage_rationale: Option<String>,
/// Developer feedback on finding quality
pub developer_feedback: Option<String>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub created_at: DateTime<Utc>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub updated_at: DateTime<Utc>,
}
@@ -108,6 +115,9 @@ impl Finding {
status: FindingStatus::Open,
tracker_issue_url: None,
scan_run_id: None,
triage_action: None,
triage_rationale: None,
developer_feedback: None,
created_at: now,
updated_at: now,
}

View File

@@ -122,7 +122,9 @@ pub struct GraphBuildRun {
pub community_count: u32,
pub languages_parsed: Vec<String>,
pub error_message: Option<String>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub started_at: DateTime<Utc>,
#[serde(default, with = "super::serde_helpers::opt_bson_datetime")]
pub completed_at: Option<DateTime<Utc>>,
}
@@ -164,6 +166,7 @@ pub struct ImpactAnalysis {
pub direct_callers: Vec<String>,
/// Direct callees of the affected function
pub direct_callees: Vec<String>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub created_at: DateTime<Utc>,
}

View File

@@ -49,7 +49,9 @@ pub struct TrackerIssue {
pub external_url: String,
pub title: String,
pub status: IssueStatus,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub created_at: DateTime<Utc>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub updated_at: DateTime<Utc>,
}

View File

@@ -0,0 +1,69 @@
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
/// Transport mode for MCP server
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum McpTransport {
Stdio,
Http,
}
impl std::fmt::Display for McpTransport {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Stdio => write!(f, "stdio"),
Self::Http => write!(f, "http"),
}
}
}
/// Status of a running MCP server
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum McpServerStatus {
Running,
Stopped,
Error,
}
impl std::fmt::Display for McpServerStatus {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Running => write!(f, "running"),
Self::Stopped => write!(f, "stopped"),
Self::Error => write!(f, "error"),
}
}
}
/// Configuration for a registered MCP server instance
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct McpServerConfig {
#[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
pub id: Option<bson::oid::ObjectId>,
/// Display name for this MCP server
pub name: String,
/// Endpoint URL (e.g. https://mcp.example.com/mcp)
pub endpoint_url: String,
/// Transport type
pub transport: McpTransport,
/// Port number (for HTTP transport)
pub port: Option<u16>,
/// Current status
pub status: McpServerStatus,
/// Bearer access token for authentication
pub access_token: String,
/// Which tools are enabled on this server
pub tools_enabled: Vec<String>,
/// Optional description / notes
pub description: Option<String>,
/// MongoDB URI this server connects to
pub mongodb_uri: Option<String>,
/// Database name
pub mongodb_database: Option<String>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub created_at: DateTime<Utc>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub updated_at: DateTime<Utc>,
}

View File

@@ -6,9 +6,11 @@ pub mod embedding;
pub mod finding;
pub mod graph;
pub mod issue;
pub mod mcp;
pub mod repository;
pub mod sbom;
pub mod scan;
pub(crate) mod serde_helpers;
pub use auth::AuthInfo;
pub use chat::{ChatMessage, ChatRequest, ChatResponse, SourceReference};
@@ -23,6 +25,7 @@ pub use graph::{
CodeEdge, CodeEdgeKind, CodeNode, CodeNodeKind, GraphBuildRun, GraphBuildStatus, ImpactAnalysis,
};
pub use issue::{IssueStatus, TrackerIssue, TrackerType};
pub use mcp::{McpServerConfig, McpServerStatus, McpTransport};
pub use repository::{ScanTrigger, TrackedRepository};
pub use sbom::{SbomEntry, VulnRef};
pub use scan::{ScanPhase, ScanRun, ScanRunStatus, ScanType};

View File

@@ -28,17 +28,23 @@ pub struct TrackedRepository {
pub tracker_type: Option<TrackerType>,
pub tracker_owner: Option<String>,
pub tracker_repo: Option<String>,
/// Optional auth token for HTTPS private repos (PAT or password)
#[serde(default, skip_serializing_if = "Option::is_none")]
pub auth_token: Option<String>,
/// Optional username for HTTPS auth (defaults to "x-access-token" for PATs)
#[serde(default, skip_serializing_if = "Option::is_none")]
pub auth_username: Option<String>,
pub last_scanned_commit: Option<String>,
#[serde(default, deserialize_with = "deserialize_findings_count")]
pub findings_count: u32,
#[serde(
default = "chrono::Utc::now",
deserialize_with = "deserialize_datetime"
with = "super::serde_helpers::bson_datetime"
)]
pub created_at: DateTime<Utc>,
#[serde(
default = "chrono::Utc::now",
deserialize_with = "deserialize_datetime"
with = "super::serde_helpers::bson_datetime"
)]
pub updated_at: DateTime<Utc>,
}
@@ -47,23 +53,6 @@ fn default_branch() -> String {
"main".to_string()
}
/// Handles findings_count stored as either a plain integer or a BSON Int64,
/// which the driver may present as a map `{"low": N, "high": N, "unsigned": bool}`.
/// Handles datetime stored as either a BSON DateTime or an RFC 3339 string.
fn deserialize_datetime<'de, D>(deserializer: D) -> Result<DateTime<Utc>, D::Error>
where
D: Deserializer<'de>,
{
let bson = bson::Bson::deserialize(deserializer)?;
match bson {
bson::Bson::DateTime(dt) => Ok(dt.into()),
bson::Bson::String(s) => s.parse::<DateTime<Utc>>().map_err(serde::de::Error::custom),
other => Err(serde::de::Error::custom(format!(
"expected DateTime or string, got: {other:?}"
))),
}
}
fn deserialize_findings_count<'de, D>(deserializer: D) -> Result<u32, D::Error>
where
D: Deserializer<'de>,
@@ -87,6 +76,8 @@ impl TrackedRepository {
default_branch: "main".to_string(),
local_path: None,
scan_schedule: None,
auth_token: None,
auth_username: None,
webhook_enabled: false,
tracker_type: None,
tracker_owner: None,
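The new `auth_token`/`auth_username` fields imply embedding credentials into the HTTPS clone URL, with "x-access-token" as the documented default username for PATs. How `GitOps` actually consumes `RepoCredentials` is not shown in this diff; a hypothetical helper illustrating the convention:

```rust
// Hypothetical: inject HTTPS basic-auth credentials into a clone URL.
// "x-access-token" is the conventional username for token-based auth.
fn with_auth(url: &str, token: &str, username: Option<&str>) -> String {
    let user = username.unwrap_or("x-access-token");
    url.replacen("https://", &format!("https://{user}:{token}@"), 1)
}

fn main() {
    let url = with_auth("https://git.example.com/org/repo.git", "s3cret", None);
    assert_eq!(url, "https://x-access-token:s3cret@git.example.com/org/repo.git");
}
```

A real implementation should also take care never to log or persist the rewritten URL, since it now carries the secret.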

View File

@@ -20,7 +20,9 @@ pub struct SbomEntry {
pub license: Option<String>,
pub purl: Option<String>,
pub known_vulnerabilities: Vec<VulnRef>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub created_at: DateTime<Utc>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub updated_at: DateTime<Utc>,
}

View File

@@ -13,6 +13,9 @@ pub enum ScanType {
OAuth,
Graph,
Dast,
SecretDetection,
Lint,
CodeReview,
}
impl std::fmt::Display for ScanType {
@@ -25,6 +28,9 @@ impl std::fmt::Display for ScanType {
Self::OAuth => write!(f, "oauth"),
Self::Graph => write!(f, "graph"),
Self::Dast => write!(f, "dast"),
Self::SecretDetection => write!(f, "secret_detection"),
Self::Lint => write!(f, "lint"),
Self::CodeReview => write!(f, "code_review"),
}
}
}
@@ -45,6 +51,9 @@ pub enum ScanPhase {
SbomGeneration,
CveScanning,
PatternScanning,
SecretDetection,
LintScanning,
CodeReview,
GraphBuilding,
LlmTriage,
IssueCreation,
@@ -64,7 +73,9 @@ pub struct ScanRun {
pub phases_completed: Vec<ScanPhase>,
pub new_findings_count: u32,
pub error_message: Option<String>,
#[serde(with = "super::serde_helpers::bson_datetime")]
pub started_at: DateTime<Utc>,
#[serde(default, with = "super::serde_helpers::opt_bson_datetime")]
pub completed_at: Option<DateTime<Utc>>,
}

View File

@@ -0,0 +1,68 @@
use chrono::{DateTime, Utc};
use serde::{Deserialize, Deserializer, Serializer};
/// Serialize/deserialize `DateTime<Utc>` as BSON DateTime.
/// Handles both BSON DateTime objects and RFC 3339 strings on deserialization.
pub mod bson_datetime {
use super::*;
use serde::Serialize as _;
pub fn serialize<S>(dt: &DateTime<Utc>, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let bson_dt: bson::DateTime = (*dt).into();
bson_dt.serialize(serializer)
}
pub fn deserialize<'de, D>(deserializer: D) -> Result<DateTime<Utc>, D::Error>
where
D: Deserializer<'de>,
{
let bson_val = bson::Bson::deserialize(deserializer)?;
match bson_val {
bson::Bson::DateTime(dt) => Ok(dt.into()),
bson::Bson::String(s) => s.parse::<DateTime<Utc>>().map_err(serde::de::Error::custom),
other => Err(serde::de::Error::custom(format!(
"expected DateTime or string, got: {other:?}"
))),
}
}
}
/// Serialize/deserialize `Option<DateTime<Utc>>` as BSON DateTime.
pub mod opt_bson_datetime {
use super::*;
use serde::Serialize as _;
pub fn serialize<S>(dt: &Option<DateTime<Utc>>, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
match dt {
Some(dt) => {
let bson_dt: bson::DateTime = (*dt).into();
bson_dt.serialize(serializer)
}
None => serializer.serialize_none(),
}
}
pub fn deserialize<'de, D>(deserializer: D) -> Result<Option<DateTime<Utc>>, D::Error>
where
D: Deserializer<'de>,
{
let bson_val = Option::<bson::Bson>::deserialize(deserializer)?;
match bson_val {
Some(bson::Bson::DateTime(dt)) => Ok(Some(dt.into())),
Some(bson::Bson::String(s)) => s
.parse::<DateTime<Utc>>()
.map(Some)
.map_err(serde::de::Error::custom),
Some(bson::Bson::Null) | None => Ok(None),
Some(other) => Err(serde::de::Error::custom(format!(
"expected DateTime, string, or null, got: {other:?}"
))),
}
}
}

View File

@@ -1,6 +1,6 @@
//! OpenTelemetry initialization for traces and logs.
//!
//! Exports traces and logs via OTLP (gRPC) when `OTEL_EXPORTER_OTLP_ENDPOINT`
//! Exports traces and logs via OTLP/HTTP when `OTEL_EXPORTER_OTLP_ENDPOINT`
//! is set. Always includes a `tracing_subscriber::fmt` layer for console output.
//!
//! Compatible with SigNoz, Grafana Tempo/Loki, Jaeger, and any OTLP-compatible
@@ -10,7 +10,7 @@
//!
//! | Variable | Description | Default |
//! |---|---|---|
//! | `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint (e.g. `http://localhost:4317`) | *(disabled)* |
//! | `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint (e.g. `https://otel.example.com`) | *(disabled)* |
//! | `OTEL_SERVICE_NAME` | Service name for resource | `service_name` param |
//! | `RUST_LOG` / standard `EnvFilter` | Log level filter | `info` |
@@ -58,7 +58,7 @@ fn build_resource(service_name: &str) -> Resource {
/// Initialize telemetry (tracing + logging).
///
/// If `OTEL_EXPORTER_OTLP_ENDPOINT` is set, traces and logs are exported
/// via OTLP/gRPC. Console fmt output is always enabled.
/// via OTLP/HTTP. Console fmt output is always enabled.
///
/// Returns a [`TelemetryGuard`] that must be held alive for the application
/// lifetime. Dropping it triggers a graceful shutdown of OTel providers.
@@ -76,11 +76,14 @@ pub fn init_telemetry(service_name: &str) -> TelemetryGuard {
Some(ref endpoint) => {
let resource = build_resource(service_name);
let traces_endpoint = format!("{endpoint}/v1/traces");
let logs_endpoint = format!("{endpoint}/v1/logs");
// Traces
#[allow(clippy::expect_used)]
let span_exporter = SpanExporter::builder()
.with_tonic()
.with_endpoint(endpoint)
.with_http()
.with_endpoint(&traces_endpoint)
.build()
.expect("failed to create OTLP span exporter");
@@ -96,8 +99,8 @@ pub fn init_telemetry(service_name: &str) -> TelemetryGuard {
// Logs
#[allow(clippy::expect_used)]
let log_exporter = LogExporter::builder()
.with_tonic()
.with_endpoint(endpoint)
.with_http()
.with_endpoint(&logs_endpoint)
.build()
.expect("failed to create OTLP log exporter");
@@ -111,7 +114,6 @@ pub fn init_telemetry(service_name: &str) -> TelemetryGuard {
// Filter to prevent telemetry-induced telemetry loops
let otel_filter = EnvFilter::new("info")
.add_directive("hyper=off".parse().unwrap_or_default())
.add_directive("tonic=off".parse().unwrap_or_default())
.add_directive("h2=off".parse().unwrap_or_default())
.add_directive("reqwest=off".parse().unwrap_or_default());
@@ -125,7 +127,7 @@ pub fn init_telemetry(service_name: &str) -> TelemetryGuard {
tracing::info!(
endpoint = endpoint.as_str(),
service = service_name,
"OpenTelemetry OTLP export enabled"
"OpenTelemetry OTLP/HTTP export enabled"
);
TelemetryGuard {

View File

@@ -34,6 +34,8 @@ server = [
"dep:url",
"dep:sha2",
"dep:base64",
"dep:uuid",
"dep:bson",
]
[dependencies]
@@ -67,3 +69,5 @@ rand = { version = "0.9", optional = true }
url = { version = "2", optional = true }
sha2 = { workspace = true, optional = true }
base64 = { version = "0.22", optional = true }
uuid = { workspace = true, optional = true }
bson = { version = "2", features = ["chrono-0_4"], optional = true }

View File

@@ -323,6 +323,25 @@ code {
/* ── Page Header ── */
/* ── Back Navigation ── */
.back-nav {
margin-bottom: 12px;
}
.btn-back {
display: inline-flex;
align-items: center;
gap: 6px;
font-size: 13px;
padding: 6px 12px;
color: var(--text-secondary);
}
.btn-back:hover {
color: var(--text-primary);
}
.page-header {
margin-bottom: 28px;
padding-bottom: 20px;
@@ -479,7 +498,7 @@ th {
}
td {
padding: 11px 16px;
padding: 12px 16px;
border-bottom: 1px solid var(--border);
font-size: 13.5px;
color: var(--text-primary);
@@ -505,7 +524,8 @@ tbody tr:last-child td {
.badge {
display: inline-flex;
align-items: center;
padding: 3px 10px;
gap: 5px;
padding: 4px 10px;
border-radius: 6px;
font-family: var(--font-mono);
font-size: 11px;
@@ -609,6 +629,316 @@ tbody tr:last-child td {
background: var(--danger-bg);
}
.btn-scanning {
opacity: 0.7;
cursor: not-allowed;
display: inline-flex;
align-items: center;
gap: 6px;
}
.btn-icon {
display: inline-flex;
align-items: center;
justify-content: center;
padding: 6px 8px;
min-width: 32px;
}
/* ── Overview Cards Grid ── */
.overview-section {
margin-top: 28px;
}
.overview-section-header {
display: flex;
align-items: center;
justify-content: space-between;
margin-bottom: 16px;
}
.overview-section-header h3 {
font-family: var(--font-display);
font-size: 18px;
font-weight: 700;
color: var(--text-primary);
}
.overview-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(240px, 1fr));
gap: 14px;
}
.overview-card {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius);
padding: 16px;
transition: border-color 0.2s, box-shadow 0.2s;
text-decoration: none;
color: inherit;
display: flex;
align-items: center;
gap: 12px;
}
.overview-card:hover {
border-color: var(--accent);
box-shadow: 0 0 16px rgba(0, 200, 255, 0.06);
}
.overview-card-icon {
color: var(--accent);
flex-shrink: 0;
}
.overview-card-body {
min-width: 0;
}
.overview-card-title {
font-weight: 600;
font-size: 14px;
color: var(--text-primary);
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.overview-card-sub {
font-size: 12px;
color: var(--text-secondary);
margin-top: 2px;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.mcp-status-dot {
width: 8px;
height: 8px;
border-radius: 50%;
flex-shrink: 0;
}
.mcp-status-dot.running { background: var(--success); }
.mcp-status-dot.stopped { background: var(--text-tertiary); }
.mcp-status-dot.error { background: var(--danger); }
/* ── MCP Server Cards ── */
.mcp-cards-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(420px, 1fr));
gap: 16px;
}
.mcp-card {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 20px;
transition: border-color 0.2s;
}
.mcp-card:hover {
border-color: var(--border-bright);
}
.mcp-card-header {
display: flex;
align-items: center;
justify-content: space-between;
margin-bottom: 12px;
}
.mcp-card-title {
display: flex;
align-items: center;
gap: 10px;
}
.mcp-card-title h3 {
font-family: var(--font-display);
font-size: 16px;
font-weight: 600;
margin: 0;
color: var(--text-primary);
}
.mcp-card-status {
font-size: 11px;
font-weight: 500;
text-transform: uppercase;
letter-spacing: 0.05em;
padding: 2px 8px;
border-radius: 10px;
}
.mcp-card-status.running {
color: var(--success);
background: var(--success-bg);
}
.mcp-card-status.stopped {
color: var(--text-secondary);
background: var(--bg-secondary);
}
.mcp-card-status.error {
color: var(--danger);
background: var(--danger-bg);
}
.mcp-card-desc {
font-size: 13px;
color: var(--text-secondary);
margin: 0 0 16px;
line-height: 1.4;
}
.mcp-card-details {
display: flex;
flex-direction: column;
gap: 8px;
margin-bottom: 16px;
padding: 12px;
background: var(--bg-secondary);
border-radius: var(--radius-md);
}
.mcp-detail-row {
display: flex;
align-items: center;
gap: 8px;
font-size: 13px;
color: var(--text-secondary);
}
.mcp-detail-label {
font-size: 12px;
color: var(--text-secondary);
display: flex;
align-items: center;
gap: 6px;
min-width: 80px;
flex-shrink: 0;
}
.mcp-detail-value {
font-family: var(--font-mono);
font-size: 12px;
color: var(--text-primary);
word-break: break-all;
}
.mcp-card-tools {
margin-bottom: 16px;
}
.mcp-tools-list {
display: flex;
flex-wrap: wrap;
gap: 6px;
margin-top: 8px;
}
.mcp-tool-chip {
font-family: var(--font-mono);
font-size: 11px;
padding: 3px 10px;
background: var(--accent-muted);
color: var(--accent);
border-radius: 12px;
border: 1px solid var(--border-accent);
}
.mcp-card-token {
display: flex;
align-items: center;
justify-content: space-between;
gap: 12px;
padding: 10px 12px;
background: var(--bg-secondary);
border-radius: var(--radius-md);
margin-bottom: 12px;
}
.mcp-token-display {
display: flex;
align-items: center;
gap: 8px;
min-width: 0;
flex: 1;
color: var(--text-secondary);
}
.mcp-token-code {
font-family: var(--font-mono);
font-size: 12px;
color: var(--text-primary);
word-break: break-all;
}
.mcp-token-actions {
display: flex;
gap: 4px;
flex-shrink: 0;
}
.mcp-card-footer {
font-size: 11px;
color: var(--text-tertiary);
}
/* ── DAST Stat Cards ── */
.stat-card-item {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 20px;
text-align: center;
}
.stat-card-value {
font-family: var(--font-display);
font-size: 28px;
font-weight: 700;
color: var(--text-primary);
margin-bottom: 6px;
}
.stat-card-label {
font-size: 13px;
color: var(--text-secondary);
display: flex;
align-items: center;
justify-content: center;
gap: 6px;
}
/* ── Button active state ── */
.btn-active,
.btn.btn-active {
background: var(--accent-muted);
border-color: var(--accent);
color: var(--accent);
}
.spinner {
display: inline-block;
width: 14px;
height: 14px;
border: 2px solid var(--border-bright);
border-top-color: var(--accent);
border-radius: 50%;
animation: spin 0.8s linear infinite;
}
.btn-danger {
background: var(--danger);
color: #fff;

View File

@@ -38,6 +38,8 @@ pub enum Route {
DastFindingsPage {},
#[route("/dast/findings/:id")]
DastFindingDetailPage { id: String },
#[route("/mcp-servers")]
McpServersPage {},
#[route("/settings")]
SettingsPage {},
}

View File

@@ -24,12 +24,13 @@ pub fn AppShell() -> Element {
}
}
}
Some(Ok(_)) => {
rsx! { LoginPage {} }
}
Some(Err(e)) => {
tracing::error!("Auth check failed: {e}");
rsx! { LoginPage {} }
Some(Ok(_)) | Some(Err(_)) => {
// Not authenticated — redirect to Keycloak login
rsx! {
document::Script {
dangerous_inner_html: "window.location.href = '/auth';"
}
}
}
None => {
rsx! {
@@ -40,20 +41,3 @@ pub fn AppShell() -> Element {
}
}
}
#[component]
fn LoginPage() -> Element {
rsx! {
div { class: "flex items-center justify-center h-screen bg-gray-950",
div { class: "text-center",
h1 { class: "text-3xl font-bold text-white mb-4", "Compliance Scanner" }
p { class: "text-gray-400 mb-8", "Sign in to access the dashboard" }
a {
href: "/auth",
class: "px-6 py-3 bg-blue-600 text-white rounded-lg hover:bg-blue-500 transition-colors font-medium",
"Sign in with Keycloak"
}
}
}
}
}

View File

@@ -42,16 +42,6 @@ pub fn Sidebar() -> Element {
route: Route::IssuesPage {},
icon: rsx! { Icon { icon: BsListTask, width: 18, height: 18 } },
},
NavItem {
label: "Code Graph",
route: Route::GraphIndexPage {},
icon: rsx! { Icon { icon: BsDiagram3, width: 18, height: 18 } },
},
NavItem {
label: "AI Chat",
route: Route::ChatIndexPage {},
icon: rsx! { Icon { icon: BsChatDots, width: 18, height: 18 } },
},
NavItem {
label: "DAST",
route: Route::DastOverviewPage {},
@@ -64,6 +54,8 @@ pub fn Sidebar() -> Element {
},
];
let docs_url = option_env!("DOCS_URL").unwrap_or("/docs");
let sidebar_class = if collapsed() {
"sidebar collapsed"
} else {
@@ -83,10 +75,6 @@ pub fn Sidebar() -> Element {
{
let is_active = match (&current_route, &item.route) {
(Route::FindingDetailPage { .. }, Route::FindingsPage {}) => true,
(Route::GraphIndexPage {}, Route::GraphIndexPage {}) => true,
(Route::GraphExplorerPage { .. }, Route::GraphIndexPage {}) => true,
(Route::ImpactAnalysisPage { .. }, Route::GraphIndexPage {}) => true,
(Route::ChatPage { .. }, Route::ChatIndexPage {}) => true,
(Route::DastTargetsPage {}, Route::DastOverviewPage {}) => true,
(Route::DastFindingsPage {}, Route::DastOverviewPage {}) => true,
(Route::DastFindingDetailPage { .. }, Route::DastOverviewPage {}) => true,
@@ -106,6 +94,15 @@ pub fn Sidebar() -> Element {
}
}
}
a {
href: "{docs_url}",
target: "_blank",
class: "nav-item",
Icon { icon: BsBook, width: 18, height: 18 }
if !collapsed() {
span { "Docs" }
}
}
button {
class: "sidebar-toggle",
onclick: move |_| collapsed.set(!collapsed()),
@@ -119,8 +116,9 @@ pub fn Sidebar() -> Element {
let auth_info = use_context::<Signal<AuthInfo>>();
let info = auth_info();
let initials = info.name.chars().next().unwrap_or('U').to_uppercase().to_string();
let user_class = if collapsed() { "sidebar-user sidebar-user-collapsed" } else { "sidebar-user" };
rsx! {
div { class: "sidebar-user",
div { class: "{user_class}",
div { class: "user-avatar",
if info.avatar_url.is_empty() {
span { class: "avatar-initials", "{initials}" }
@@ -129,15 +127,13 @@ pub fn Sidebar() -> Element {
}
}
if !collapsed() {
div { class: "user-info",
span { class: "user-name", "{info.name}" }
a {
href: "/logout",
class: "logout-link",
Icon { icon: BsBoxArrowRight, width: 14, height: 14 }
" Logout"
}
}
span { class: "user-name", "{info.name}" }
}
a {
href: "/logout",
class: if collapsed() { "logout-btn logout-btn-collapsed" } else { "logout-btn" },
title: "Sign out",
Icon { icon: BsBoxArrowRight, width: 16, height: 16 }
}
}
}

View File

@@ -75,7 +75,9 @@ pub async fn auth_login(
Extension(pending): Extension<PendingOAuthStore>,
Query(params): Query<HashMap<String, String>>,
) -> Result<impl IntoResponse, DashboardError> {
let kc = state.keycloak;
let kc = state
.keycloak
.ok_or(DashboardError::Other("Keycloak not configured".into()))?;
let csrf_state = generate_state();
let code_verifier = generate_code_verifier();
let code_challenge = derive_code_challenge(&code_verifier);
@@ -126,7 +128,9 @@ pub async fn auth_callback(
Extension(pending): Extension<PendingOAuthStore>,
Query(params): Query<HashMap<String, String>>,
) -> Result<impl IntoResponse, DashboardError> {
let kc = state.keycloak;
let kc = state
.keycloak
.ok_or(DashboardError::Other("Keycloak not configured".into()))?;
let returned_state = params
.get("state")
@@ -210,7 +214,9 @@ pub async fn logout(
session: Session,
Extension(state): Extension<ServerState>,
) -> Result<impl IntoResponse, DashboardError> {
let kc = state.keycloak;
let kc = state
.keycloak
.ok_or(DashboardError::Other("Keycloak not configured".into()))?;
session
.flush()

View File

@@ -9,9 +9,21 @@ use dioxus::prelude::*;
#[server(endpoint = "check-auth")]
pub async fn check_auth() -> Result<AuthInfo, ServerFnError> {
use super::auth::LOGGED_IN_USER_SESS_KEY;
use super::server_state::ServerState;
use super::user_state::UserStateInner;
use dioxus_fullstack::FullstackContext;
let state: ServerState = FullstackContext::extract().await?;
// When Keycloak is not configured, treat as always authenticated
if state.keycloak.is_none() {
return Ok(AuthInfo {
authenticated: true,
name: "Local User".into(),
..Default::default()
});
}
let session: tower_sessions::Session = FullstackContext::extract().await?;
let user_state: Option<UserStateInner> = session

View File

@@ -2,18 +2,30 @@ use axum::{
extract::Request,
middleware::Next,
response::{IntoResponse, Response},
Extension,
};
use reqwest::StatusCode;
use tower_sessions::Session;
use super::auth::LOGGED_IN_USER_SESS_KEY;
use super::server_state::ServerState;
use super::user_state::UserStateInner;
const PUBLIC_API_ENDPOINTS: &[&str] = &["/api/check-auth"];
/// Axum middleware that enforces authentication on `/api/` server
/// function endpoints.
pub async fn require_auth(session: Session, request: Request, next: Next) -> Response {
/// function endpoints. Skips auth entirely when Keycloak is not configured.
pub async fn require_auth(
Extension(state): Extension<ServerState>,
session: Session,
request: Request,
next: Next,
) -> Response {
// Skip auth when Keycloak is not configured
if state.keycloak.is_none() {
return next.run(request).await;
}
let path = request.uri().path();
if path.starts_with("/api/") && !PUBLIC_API_ENDPOINTS.contains(&path) {
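The allowlist rule in `require_auth` can be exercised in isolation as a small predicate. This is a sketch mirroring the middleware's check, with the same single-entry allowlist; the function name `requires_auth_check` is illustrative, not part of the codebase:

```rust
/// Returns true when a request path must carry an authenticated session:
/// everything under `/api/` except an explicit public allowlist.
fn requires_auth_check(path: &str) -> bool {
    const PUBLIC_API_ENDPOINTS: &[&str] = &["/api/check-auth"];
    path.starts_with("/api/") && !PUBLIC_API_ENDPOINTS.contains(&path)
}

fn main() {
    assert!(requires_auth_check("/api/findings"));
    assert!(!requires_auth_check("/api/check-auth")); // allowlisted
    assert!(!requires_auth_check("/assets/app.css")); // non-API paths pass through
}
```

Note the exact-match `contains` on the allowlist: `/api/check-auth/extra` would still require auth, which is the safer default.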

View File

@@ -14,5 +14,8 @@ pub fn load_config() -> Result<DashboardConfig, DashboardError> {
.ok()
.and_then(|p| p.parse().ok())
.unwrap_or(8080),
mcp_endpoint_url: std::env::var("MCP_ENDPOINT_URL")
.ok()
.filter(|v| !v.is_empty()),
})
}
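The `MCP_ENDPOINT_URL` handling above treats "unset" and "set but empty" identically. That pattern can be factored into a helper and checked directly; the variable name `DEMO_ENDPOINT_URL` below is only for the demonstration:

```rust
/// Reads an env var, mapping both "unset" and "set but empty" to None,
/// matching the MCP_ENDPOINT_URL handling in load_config.
fn optional_env(name: &str) -> Option<String> {
    std::env::var(name).ok().filter(|v| !v.is_empty())
}

fn main() {
    std::env::remove_var("DEMO_ENDPOINT_URL");
    assert_eq!(optional_env("DEMO_ENDPOINT_URL"), None);

    std::env::set_var("DEMO_ENDPOINT_URL", "");
    assert_eq!(optional_env("DEMO_ENDPOINT_URL"), None); // empty counts as unset

    std::env::set_var("DEMO_ENDPOINT_URL", "http://localhost:8090");
    assert_eq!(
        optional_env("DEMO_ENDPOINT_URL").as_deref(),
        Some("http://localhost:8090")
    );
}
```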

View File

@@ -42,4 +42,8 @@ impl Database {
pub fn tracker_issues(&self) -> Collection<TrackerIssue> {
self.inner.collection("tracker_issues")
}
pub fn mcp_servers(&self) -> Collection<McpServerConfig> {
self.inner.collection("mcp_servers")
}
}

View File

@@ -10,32 +10,50 @@ pub struct FindingsListResponse {
pub page: Option<u64>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct FindingsQuery {
pub page: u64,
pub severity: String,
pub scan_type: String,
pub status: String,
pub repo_id: String,
pub q: String,
pub sort_by: String,
pub sort_order: String,
}
#[server]
pub async fn fetch_findings(
page: u64,
severity: String,
scan_type: String,
status: String,
repo_id: String,
) -> Result<FindingsListResponse, ServerFnError> {
pub async fn fetch_findings(query: FindingsQuery) -> Result<FindingsListResponse, ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let mut url = format!(
"{}/api/v1/findings?page={page}&limit=20",
state.agent_api_url
"{}/api/v1/findings?page={}&limit=20",
state.agent_api_url, query.page
);
if !severity.is_empty() {
url.push_str(&format!("&severity={severity}"));
if !query.severity.is_empty() {
url.push_str(&format!("&severity={}", query.severity));
}
if !scan_type.is_empty() {
url.push_str(&format!("&scan_type={scan_type}"));
if !query.scan_type.is_empty() {
url.push_str(&format!("&scan_type={}", query.scan_type));
}
if !status.is_empty() {
url.push_str(&format!("&status={status}"));
if !query.status.is_empty() {
url.push_str(&format!("&status={}", query.status));
}
if !repo_id.is_empty() {
url.push_str(&format!("&repo_id={repo_id}"));
if !query.repo_id.is_empty() {
url.push_str(&format!("&repo_id={}", query.repo_id));
}
if !query.q.is_empty() {
url.push_str(&format!(
"&q={}",
url::form_urlencoded::byte_serialize(query.q.as_bytes()).collect::<String>()
));
}
if !query.sort_by.is_empty() {
url.push_str(&format!("&sort_by={}", query.sort_by));
}
if !query.sort_order.is_empty() {
url.push_str(&format!("&sort_order={}", query.sort_order));
}
let resp = reqwest::get(&url)
@@ -82,3 +100,40 @@ pub async fn update_finding_status(id: String, status: String) -> Result<(), Ser
Ok(())
}
#[server]
pub async fn bulk_update_finding_status(
ids: Vec<String>,
status: String,
) -> Result<(), ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!("{}/api/v1/findings/bulk-status", state.agent_api_url);
let client = reqwest::Client::new();
client
.patch(&url)
.json(&serde_json::json!({ "ids": ids, "status": status }))
.send()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(())
}
#[server]
pub async fn update_finding_feedback(id: String, feedback: String) -> Result<(), ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!("{}/api/v1/findings/{id}/feedback", state.agent_api_url);
let client = reqwest::Client::new();
client
.patch(&url)
.json(&serde_json::json!({ "feedback": feedback }))
.send()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(())
}
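Only the free-text `q` parameter above needs escaping; the select-driven filters come from a fixed vocabulary. A simplified stand-in for `url::form_urlencoded::byte_serialize` (alphanumerics and `*-._` pass through, space becomes `+`, everything else becomes `%XX`) shows what the escaping does:

```rust
/// Minimal form-urlencoding sketch. Close to, but not a drop-in for,
/// url::form_urlencoded::byte_serialize.
fn form_encode(input: &str) -> String {
    let mut out = String::new();
    for b in input.bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'*' | b'-' | b'.' | b'_' => {
                out.push(b as char)
            }
            b' ' => out.push('+'),
            _ => out.push_str(&format!("%{b:02X}")),
        }
    }
    out
}

fn main() {
    assert_eq!(form_encode("sql injection"), "sql+injection");
    assert_eq!(form_encode("a&b=c"), "a%26b%3Dc"); // reserved chars are escaped
}
```

Without this, a query like `a&b=c` would be parsed by the agent API as extra query parameters rather than a literal search string.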

View File

@@ -1,5 +1,3 @@
use super::error::DashboardError;
/// Keycloak OpenID Connect settings.
#[derive(Debug)]
pub struct KeycloakConfig {
@@ -11,13 +9,18 @@ pub struct KeycloakConfig {
}
impl KeycloakConfig {
pub fn from_env() -> Result<Self, DashboardError> {
Ok(Self {
url: required_env("KEYCLOAK_URL")?,
realm: required_env("KEYCLOAK_REALM")?,
client_id: required_env("KEYCLOAK_CLIENT_ID")?,
redirect_uri: required_env("REDIRECT_URI")?,
app_url: required_env("APP_URL")?,
pub fn from_env() -> Option<Self> {
let url = std::env::var("KEYCLOAK_URL").ok()?;
let realm = std::env::var("KEYCLOAK_REALM").ok()?;
let client_id = std::env::var("KEYCLOAK_CLIENT_ID").ok()?;
let redirect_uri = std::env::var("REDIRECT_URI").ok()?;
let app_url = std::env::var("APP_URL").ok()?;
Some(Self {
url,
realm,
client_id,
redirect_uri,
app_url,
})
}
@@ -49,8 +52,3 @@ impl KeycloakConfig {
)
}
}
fn required_env(name: &str) -> Result<String, DashboardError> {
std::env::var(name)
.map_err(|_| DashboardError::Config(format!("{name} is required but not set")))
}
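The `?`-on-`Option` chaining in `from_env` short-circuits to `None` as soon as any variable is missing, so callers get the config "all or nothing". The same shape over a plain map (the `Cfg` type here is a hypothetical stand-in, not the real `KeycloakConfig`) behaves like:

```rust
use std::collections::HashMap;

struct Cfg {
    url: String,
    realm: String,
}

impl Cfg {
    /// Mirrors KeycloakConfig::from_env: any missing key makes the
    /// whole config None rather than a partially-filled struct.
    fn from_map(env: &HashMap<&str, &str>) -> Option<Self> {
        let url = env.get("KEYCLOAK_URL").map(|s| s.to_string())?;
        let realm = env.get("KEYCLOAK_REALM").map(|s| s.to_string())?;
        Some(Self { url, realm })
    }
}

fn main() {
    let mut env = HashMap::new();
    env.insert("KEYCLOAK_URL", "https://sso.example.com");
    assert!(Cfg::from_map(&env).is_none()); // realm missing: whole config absent

    env.insert("KEYCLOAK_REALM", "compliance");
    let cfg = Cfg::from_map(&env).unwrap();
    assert_eq!(cfg.url, "https://sso.example.com");
    assert_eq!(cfg.realm, "compliance");
}
```

This is why `server_start` can branch cleanly on `state.keycloak.is_none()`: a half-configured Keycloak can never exist.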

View File

@@ -0,0 +1,160 @@
use dioxus::prelude::*;
use serde::{Deserialize, Serialize};
use compliance_core::models::McpServerConfig;
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct McpServersResponse {
pub data: Vec<McpServerConfig>,
}
#[server]
pub async fn fetch_mcp_servers() -> Result<McpServersResponse, ServerFnError> {
use mongodb::bson::doc;
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let mut cursor = state
.db
.mcp_servers()
.find(doc! {})
.sort(doc! { "created_at": -1 })
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let mut data = Vec::new();
while cursor
.advance()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?
{
let server = cursor
.deserialize_current()
.map_err(|e| ServerFnError::new(e.to_string()))?;
data.push(server);
}
Ok(McpServersResponse { data })
}
#[server]
pub async fn add_mcp_server(
name: String,
endpoint_url: String,
transport: String,
port: String,
description: String,
mongodb_uri: String,
mongodb_database: String,
) -> Result<(), ServerFnError> {
use chrono::Utc;
use compliance_core::models::{McpServerStatus, McpTransport};
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let transport = match transport.as_str() {
"http" => McpTransport::Http,
_ => McpTransport::Stdio,
};
let port_num: Option<u16> = port.parse().ok();
// Generate a random access token
let token = format!("mcp_{}", uuid::Uuid::new_v4().to_string().replace('-', ""));
let all_tools = vec![
"list_findings".to_string(),
"get_finding".to_string(),
"findings_summary".to_string(),
"list_sbom_packages".to_string(),
"sbom_vuln_report".to_string(),
"list_dast_findings".to_string(),
"dast_scan_summary".to_string(),
];
let now = Utc::now();
let server = McpServerConfig {
id: None,
name,
endpoint_url,
transport,
port: port_num,
status: McpServerStatus::Stopped,
access_token: token,
tools_enabled: all_tools,
description: if description.is_empty() {
None
} else {
Some(description)
},
mongodb_uri: if mongodb_uri.is_empty() {
None
} else {
Some(mongodb_uri)
},
mongodb_database: if mongodb_database.is_empty() {
None
} else {
Some(mongodb_database)
},
created_at: now,
updated_at: now,
};
state
.db
.mcp_servers()
.insert_one(server)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(())
}
#[server]
pub async fn delete_mcp_server(server_id: String) -> Result<(), ServerFnError> {
use mongodb::bson::doc;
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let oid = bson::oid::ObjectId::parse_str(&server_id)
.map_err(|e| ServerFnError::new(e.to_string()))?;
state
.db
.mcp_servers()
.delete_one(doc! { "_id": oid })
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(())
}
#[server]
pub async fn regenerate_mcp_token(server_id: String) -> Result<String, ServerFnError> {
use chrono::Utc;
use mongodb::bson::doc;
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let oid = bson::oid::ObjectId::parse_str(&server_id)
.map_err(|e| ServerFnError::new(e.to_string()))?;
let new_token = format!("mcp_{}", uuid::Uuid::new_v4().to_string().replace('-', ""));
state
.db
.mcp_servers()
.update_one(
doc! { "_id": oid },
doc! { "$set": { "access_token": &new_token, "updated_at": Utc::now().to_rfc3339() } },
)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(new_token)
}

View File

@@ -6,6 +6,7 @@ pub mod dast;
pub mod findings;
pub mod graph;
pub mod issues;
pub mod mcp;
pub mod repositories;
pub mod sbom;
pub mod scans;

View File

@@ -34,19 +34,29 @@ pub async fn add_repository(
name: String,
git_url: String,
default_branch: String,
auth_token: Option<String>,
auth_username: Option<String>,
) -> Result<(), ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!("{}/api/v1/repositories", state.agent_api_url);
let mut body = serde_json::json!({
"name": name,
"git_url": git_url,
"default_branch": default_branch,
});
if let Some(token) = auth_token.filter(|t| !t.is_empty()) {
body["auth_token"] = serde_json::Value::String(token);
}
if let Some(username) = auth_username.filter(|u| !u.is_empty()) {
body["auth_username"] = serde_json::Value::String(username);
}
let client = reqwest::Client::new();
let resp = client
.post(&url)
.json(&serde_json::json!({
"name": name,
"git_url": git_url,
"default_branch": default_branch,
}))
.json(&body)
.send()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
@@ -61,6 +71,32 @@ pub async fn add_repository(
Ok(())
}
#[server]
pub async fn fetch_ssh_public_key() -> Result<String, ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!("{}/api/v1/settings/ssh-public-key", state.agent_api_url);
let resp = reqwest::get(&url)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
if !resp.status().is_success() {
return Err(ServerFnError::new("SSH key not available".to_string()));
}
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(body
.get("public_key")
.and_then(|v| v.as_str())
.unwrap_or("")
.to_string())
}
#[server]
pub async fn delete_repository(repo_id: String) -> Result<(), ServerFnError> {
let state: super::server_state::ServerState =
@@ -99,3 +135,32 @@ pub async fn trigger_repo_scan(repo_id: String) -> Result<(), ServerFnError> {
Ok(())
}
/// Check if a repository has any running scans
#[server]
pub async fn check_repo_scanning(repo_id: String) -> Result<bool, ServerFnError> {
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let url = format!("{}/api/v1/scan-runs?page=1&limit=1", state.agent_api_url);
let resp = reqwest::get(&url)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let body: serde_json::Value = resp
.json()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
// Check if the most recent scan for this repo is still running
if let Some(scans) = body.get("data").and_then(|d| d.as_array()) {
for scan in scans {
let scan_repo = scan.get("repo_id").and_then(|v| v.as_str()).unwrap_or("");
let status = scan.get("status").and_then(|v| v.as_str()).unwrap_or("");
if scan_repo == repo_id && status == "running" {
return Ok(true);
}
}
}
Ok(false)
}

View File

@@ -4,6 +4,9 @@ use dioxus::prelude::*;
use time::Duration;
use tower_sessions::{cookie::Key, MemoryStore, SessionManagerLayer};
use compliance_core::models::{McpServerConfig, McpServerStatus, McpTransport};
use mongodb::bson::doc;
use super::config;
use super::database::Database;
use super::error::DashboardError;
@@ -18,11 +21,18 @@ pub fn server_start(app: fn() -> Element) -> Result<(), DashboardError> {
dotenvy::dotenv().ok();
let config = config::load_config()?;
let keycloak: &'static KeycloakConfig =
Box::leak(Box::new(KeycloakConfig::from_env()?));
let keycloak: Option<&'static KeycloakConfig> =
KeycloakConfig::from_env().map(|kc| &*Box::leak(Box::new(kc)));
let db = Database::connect(&config.mongodb_uri, &config.mongodb_database).await?;
tracing::info!("Keycloak configured for realm '{}'", keycloak.realm);
// Seed default MCP server configs
seed_default_mcp_servers(&db, config.mcp_endpoint_url.as_deref()).await;
if let Some(kc) = keycloak {
tracing::info!("Keycloak configured for realm '{}'", kc.realm);
} else {
tracing::warn!("Keycloak not configured - dashboard is unprotected");
}
let server_state: ServerState = ServerStateInner {
agent_api_url: config.agent_api_url.clone(),
@@ -41,7 +51,8 @@ pub fn server_start(app: fn() -> Element) -> Result<(), DashboardError> {
.with_expiry(tower_sessions::Expiry::OnInactivity(Duration::hours(24)))
.with_signed(key);
let addr = dioxus_cli_config::fullstack_address_or_localhost();
let port = dioxus_cli_config::server_port().unwrap_or(8080);
let addr = std::net::SocketAddr::from(([0, 0, 0, 0], port));
let listener = tokio::net::TcpListener::bind(addr)
.await
.map_err(|e| DashboardError::Other(format!("Failed to bind: {e}")))?;
@@ -54,8 +65,8 @@ pub fn server_start(app: fn() -> Element) -> Result<(), DashboardError> {
.route("/logout", get(logout))
.serve_dioxus_application(ServeConfig::new(), app)
.layer(Extension(PendingOAuthStore::default()))
.layer(Extension(server_state))
.layer(middleware::from_fn(require_auth))
.layer(Extension(server_state))
.layer(session);
axum::serve(listener, router.into_make_service())
@@ -65,3 +76,66 @@ pub fn server_start(app: fn() -> Element) -> Result<(), DashboardError> {
Ok(())
})
}
/// Seed three default MCP server configs (Findings, SBOM, DAST) if they don't already exist.
async fn seed_default_mcp_servers(db: &Database, mcp_endpoint_url: Option<&str>) {
let endpoint = mcp_endpoint_url.unwrap_or("http://localhost:8090");
let defaults = [
(
"Findings MCP",
"Exposes security findings, triage data, and finding summaries to LLM agents",
vec!["list_findings", "get_finding", "findings_summary"],
),
(
"SBOM MCP",
"Exposes software bill of materials and vulnerability reports to LLM agents",
vec!["list_sbom_packages", "sbom_vuln_report"],
),
(
"DAST MCP",
"Exposes DAST scan findings and scan summaries to LLM agents",
vec!["list_dast_findings", "dast_scan_summary"],
),
];
let collection = db.mcp_servers();
for (name, description, tools) in defaults {
// Skip if already exists
let exists = collection
.find_one(doc! { "name": name })
.await
.ok()
.flatten()
.is_some();
if exists {
continue;
}
let now = chrono::Utc::now();
let token = format!("mcp_{}", uuid::Uuid::new_v4().to_string().replace('-', ""));
let server = McpServerConfig {
id: None,
name: name.to_string(),
endpoint_url: format!("{endpoint}/mcp"),
transport: McpTransport::Http,
port: Some(8090),
status: McpServerStatus::Stopped,
access_token: token,
tools_enabled: tools.into_iter().map(|s| s.to_string()).collect(),
description: Some(description.to_string()),
mongodb_uri: None,
mongodb_database: None,
created_at: now,
updated_at: now,
};
match collection.insert_one(server).await {
Ok(_) => tracing::info!("Seeded default MCP server: {name}"),
Err(e) => tracing::warn!("Failed to seed MCP server '{name}': {e}"),
}
}
}
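The seeding loop above is idempotent because each default is skipped when a document with the same name already exists. Modeled without MongoDB, that is insert-if-absent over a map keyed by name (the in-memory store and counter below are stand-ins for the collection):

```rust
use std::collections::HashMap;

/// In-memory stand-in for the mcp_servers collection: returns how many
/// defaults were newly inserted; existing names are never overwritten.
fn seed_defaults(store: &mut HashMap<String, Vec<String>>) -> usize {
    let defaults = [
        ("Findings MCP", vec!["list_findings", "get_finding", "findings_summary"]),
        ("SBOM MCP", vec!["list_sbom_packages", "sbom_vuln_report"]),
        ("DAST MCP", vec!["list_dast_findings", "dast_scan_summary"]),
    ];
    let mut inserted = 0;
    for (name, tools) in defaults {
        store.entry(name.to_string()).or_insert_with(|| {
            inserted += 1;
            tools.into_iter().map(String::from).collect()
        });
    }
    inserted
}

fn main() {
    let mut store = HashMap::new();
    assert_eq!(seed_defaults(&mut store), 3); // first run seeds all three
    assert_eq!(seed_defaults(&mut store), 0); // second run is a no-op
    assert_eq!(store["SBOM MCP"], vec!["list_sbom_packages", "sbom_vuln_report"]);
}
```

In the real code the existence probe and the insert are two separate Mongo calls, so concurrent startups could race; a unique index on `name` would make the guarantee hold server-side.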

View File

@@ -20,7 +20,7 @@ pub struct ServerStateInner {
pub db: Database,
pub config: DashboardConfig,
pub agent_api_url: String,
pub keycloak: &'static KeycloakConfig,
pub keycloak: Option<&'static KeycloakConfig>,
}
impl From<ServerStateInner> for ServerState {

View File

@@ -1,4 +1,6 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::components::page_header::PageHeader;
use crate::infrastructure::chat::{
@@ -179,6 +181,15 @@ pub fn ChatPage(repo_id: String) -> Element {
let mut do_send_click = do_send.clone();
rsx! {
div { class: "back-nav",
button {
class: "btn btn-ghost btn-back",
onclick: move |_| { navigator().go_back(); },
Icon { icon: BsArrowLeft, width: 16, height: 16 }
"Back"
}
}
PageHeader { title: "AI Chat" }
// Embedding status banner

View File

@@ -1,4 +1,6 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::components::page_header::PageHeader;
use crate::components::severity_badge::SeverityBadge;
@@ -12,6 +14,15 @@ pub fn DastFindingDetailPage(id: String) -> Element {
});
rsx! {
div { class: "back-nav",
button {
class: "btn btn-ghost btn-back",
onclick: move |_| { navigator().go_back(); },
Icon { icon: BsArrowLeft, width: 16, height: 16 }
"Back"
}
}
PageHeader {
title: "DAST Finding Detail",
description: "Full evidence and details for a dynamic security finding",

View File

@@ -1,4 +1,6 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::app::Route;
use crate::components::page_header::PageHeader;
@@ -10,6 +12,15 @@ pub fn DastFindingsPage() -> Element {
let findings = use_resource(|| async { fetch_dast_findings().await.ok() });
rsx! {
div { class: "back-nav",
button {
class: "btn btn-ghost btn-back",
onclick: move |_| { navigator().go_back(); },
Icon { icon: BsArrowLeft, width: 16, height: 16 }
"Back"
}
}
PageHeader {
title: "DAST Findings",
description: "Vulnerabilities discovered through dynamic application security testing",

View File

@@ -1,4 +1,6 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::app::Route;
use crate::components::page_header::PageHeader;
@@ -15,9 +17,9 @@ pub fn DastOverviewPage() -> Element {
description: "Dynamic Application Security Testing — scan running applications for vulnerabilities",
}
div { class: "grid grid-cols-3 gap-4 mb-6",
div { class: "stat-card",
div { class: "stat-value",
div { class: "stat-cards", style: "margin-bottom: 24px;",
div { class: "stat-card-item",
div { class: "stat-card-value",
match &*scan_runs.read() {
Some(Some(data)) => {
let count = data.total.unwrap_or(0);
@@ -26,10 +28,13 @@ pub fn DastOverviewPage() -> Element {
_ => rsx! { "" },
}
}
div { class: "stat-label", "Total Scans" }
div { class: "stat-card-label",
Icon { icon: BsPlayCircle, width: 14, height: 14 }
" Total Scans"
}
}
div { class: "stat-card",
div { class: "stat-value",
div { class: "stat-card-item",
div { class: "stat-card-value",
match &*findings.read() {
Some(Some(data)) => {
let count = data.total.unwrap_or(0);
@@ -38,29 +43,37 @@ pub fn DastOverviewPage() -> Element {
_ => rsx! { "" },
}
}
div { class: "stat-label", "DAST Findings" }
div { class: "stat-card-label",
Icon { icon: BsShieldExclamation, width: 14, height: 14 }
" DAST Findings"
}
}
div { class: "stat-card",
div { class: "stat-value", "" }
div { class: "stat-label", "Active Targets" }
div { class: "stat-card-item",
div { class: "stat-card-value", "" }
div { class: "stat-card-label",
Icon { icon: BsBullseye, width: 14, height: 14 }
" Active Targets"
}
}
}
div { class: "flex gap-4 mb-4",
div { style: "display: flex; gap: 12px; margin-bottom: 24px;",
Link {
to: Route::DastTargetsPage {},
class: "btn btn-primary",
"Manage Targets"
Icon { icon: BsBullseye, width: 14, height: 14 }
" Manage Targets"
}
Link {
to: Route::DastFindingsPage {},
class: "btn btn-secondary",
"View Findings"
Icon { icon: BsShieldExclamation, width: 14, height: 14 }
" View Findings"
}
}
div { class: "card",
h3 { "Recent Scan Runs" }
div { class: "card-header", "Recent Scan Runs" }
match &*scan_runs.read() {
Some(Some(data)) => {
let runs = &data.data;

View File

@@ -1,4 +1,6 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::components::page_header::PageHeader;
use crate::components::toast::{ToastType, Toasts};
@@ -14,6 +16,15 @@ pub fn DastTargetsPage() -> Element {
let mut new_url = use_signal(String::new);
rsx! {
div { class: "back-nav",
button {
class: "btn btn-ghost btn-back",
onclick: move |_| { navigator().go_back(); },
Icon { icon: BsArrowLeft, width: 16, height: 16 }
"Back"
}
}
PageHeader {
title: "DAST Targets",
description: "Configure target applications for dynamic security testing",

View File

@@ -1,4 +1,6 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::components::code_snippet::CodeSnippet;
use crate::components::page_header::PageHeader;
@@ -8,7 +10,7 @@ use crate::components::severity_badge::SeverityBadge;
pub fn FindingDetailPage(id: String) -> Element {
let finding_id = id.clone();
let finding = use_resource(move || {
let mut finding = use_resource(move || {
let fid = finding_id.clone();
async move {
crate::infrastructure::findings::fetch_finding_detail(fid)
@@ -22,7 +24,18 @@ pub fn FindingDetailPage(id: String) -> Element {
match snapshot {
Some(Some(f)) => {
let finding_id_for_status = id.clone();
let finding_id_for_feedback = id.clone();
let existing_feedback = f.developer_feedback.clone().unwrap_or_default();
rsx! {
div { class: "back-nav",
button {
class: "btn btn-ghost btn-back",
onclick: move |_| { navigator().go_back(); },
Icon { icon: BsArrowLeft, width: 16, height: 16 }
"Back"
}
}
PageHeader {
title: f.title.clone(),
description: format!("{} | {} | {}", f.scanner, f.scan_type, f.status),
@@ -39,6 +52,9 @@ pub fn FindingDetailPage(id: String) -> Element {
if let Some(score) = f.cvss_score {
span { class: "badge badge-medium", "CVSS: {score}" }
}
if let Some(confidence) = f.confidence {
span { class: "badge badge-info", "Confidence: {confidence:.1}" }
}
}
div { class: "card",
@@ -46,6 +62,19 @@ pub fn FindingDetailPage(id: String) -> Element {
p { "{f.description}" }
}
if let Some(rationale) = &f.triage_rationale {
div { class: "card",
div { class: "card-header", "Triage Rationale" }
div {
style: "display: flex; align-items: center; gap: 8px; margin-bottom: 8px;",
if let Some(action) = &f.triage_action {
span { class: "badge badge-info", "{action}" }
}
}
p { style: "color: var(--text-secondary); font-size: 14px;", "{rationale}" }
}
}
if let Some(code) = &f.code_snippet {
div { class: "card",
div { class: "card-header", "Code Evidence" }
@@ -90,23 +119,60 @@ pub fn FindingDetailPage(id: String) -> Element {
{
let status_str = status.to_string();
let id_clone = finding_id_for_status.clone();
let label = match status {
"open" => "Open",
"triaged" => "Triaged",
"resolved" => "Resolved",
"false_positive" => "False Positive",
"ignored" => "Ignored",
_ => status,
};
rsx! {
button {
class: "btn btn-ghost",
title: "{label}",
onclick: move |_| {
let s = status_str.clone();
let id = id_clone.clone();
spawn(async move {
let _ = crate::infrastructure::findings::update_finding_status(id, s).await;
finding.restart();
});
},
"{status}"
match status {
"open" => rsx! { Icon { icon: BsCircle, width: 14, height: 14 } },
"triaged" => rsx! { Icon { icon: BsEye, width: 14, height: 14 } },
"resolved" => rsx! { Icon { icon: BsCheckCircle, width: 14, height: 14 } },
"false_positive" => rsx! { Icon { icon: BsXCircle, width: 14, height: 14 } },
"ignored" => rsx! { Icon { icon: BsDashCircle, width: 14, height: 14 } },
_ => rsx! {},
}
" {label}"
}
}
}
}
}
}
div { class: "card",
div { class: "card-header", "Developer Feedback" }
p {
style: "font-size: 13px; color: var(--text-secondary); margin-bottom: 8px;",
"Share your assessment of this finding (e.g. false positive, actionable, needs context)"
}
textarea {
style: "width: 100%; min-height: 80px; background: var(--bg-primary); border: 1px solid var(--border); border-radius: 8px; padding: 10px 14px; color: var(--text-primary); font-size: 14px; resize: vertical;",
value: "{existing_feedback}",
oninput: move |e| {
let feedback = e.value();
let id = finding_id_for_feedback.clone();
spawn(async move {
let _ = crate::infrastructure::findings::update_finding_feedback(id, feedback).await;
});
},
}
}
}
}
Some(None) => rsx! {

View File

@@ -1,4 +1,6 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::app::Route;
use crate::components::page_header::PageHeader;
@@ -12,6 +14,10 @@ pub fn FindingsPage() -> Element {
let mut type_filter = use_signal(String::new);
let mut status_filter = use_signal(String::new);
let mut repo_filter = use_signal(String::new);
let mut search_query = use_signal(String::new);
let mut sort_by = use_signal(|| "created_at".to_string());
let mut sort_order = use_signal(|| "desc".to_string());
let mut selected_ids = use_signal(Vec::<String>::new);
let repos = use_resource(|| async {
crate::infrastructure::repositories::fetch_repositories(1)
@@ -19,19 +25,52 @@ pub fn FindingsPage() -> Element {
.ok()
});
let findings = use_resource(move || {
let p = page();
let sev = severity_filter();
let typ = type_filter();
let stat = status_filter();
let repo = repo_filter();
let mut findings = use_resource(move || {
let query = crate::infrastructure::findings::FindingsQuery {
page: page(),
severity: severity_filter(),
scan_type: type_filter(),
status: status_filter(),
repo_id: repo_filter(),
q: search_query(),
sort_by: sort_by(),
sort_order: sort_order(),
};
async move {
crate::infrastructure::findings::fetch_findings(p, sev, typ, stat, repo)
crate::infrastructure::findings::fetch_findings(query)
.await
.ok()
}
});
let toggle_sort = move |field: &'static str| {
move |_: MouseEvent| {
if sort_by() == field {
sort_order.set(if sort_order() == "asc" {
"desc".to_string()
} else {
"asc".to_string()
});
} else {
sort_by.set(field.to_string());
sort_order.set("desc".to_string());
}
page.set(1);
}
};
let sort_indicator = move |field: &str| -> String {
if sort_by() == field {
if sort_order() == "asc" {
" \u{25B2}".to_string()
} else {
" \u{25BC}".to_string()
}
} else {
String::new()
}
};
rsx! {
PageHeader {
title: "Findings",
@@ -39,6 +78,12 @@ pub fn FindingsPage() -> Element {
}
div { class: "filter-bar",
input {
r#type: "text",
placeholder: "Search findings...",
style: "min-width: 200px;",
oninput: move |e| { search_query.set(e.value()); page.set(1); },
}
select {
onchange: move |e| { repo_filter.set(e.value()); page.set(1); },
option { value: "", "All Repositories" }
@@ -76,6 +121,9 @@ pub fn FindingsPage() -> Element {
option { value: "cve", "CVE" }
option { value: "gdpr", "GDPR" }
option { value: "oauth", "OAuth" }
option { value: "secret_detection", "Secrets" }
option { value: "lint", "Lint" }
option { value: "code_review", "Code Review" }
}
select {
onchange: move |e| { status_filter.set(e.value()); page.set(1); },
@@ -88,29 +136,132 @@ pub fn FindingsPage() -> Element {
}
}
// Bulk action bar
if !selected_ids().is_empty() {
div {
class: "card",
style: "display: flex; align-items: center; gap: 12px; padding: 12px 16px; margin-bottom: 16px; background: rgba(56, 189, 248, 0.08); border-color: rgba(56, 189, 248, 0.2);",
span {
style: "font-size: 14px; color: var(--text-secondary);",
"{selected_ids().len()} selected"
}
for status in ["triaged", "resolved", "false_positive", "ignored"] {
{
let status_str = status.to_string();
let label = match status {
"false_positive" => "False Positive".to_string(),
other => {
// Capitalize the first letter; returning an owned String
// avoids leaking an allocation on every render
let mut s = other.to_string();
if let Some(c) = s.get_mut(0..1) { c.make_ascii_uppercase(); }
s
}
};
rsx! {
button {
class: "btn btn-sm btn-ghost",
title: "Mark {label}",
onclick: move |_| {
let ids = selected_ids();
let s = status_str.clone();
spawn(async move {
let _ = crate::infrastructure::findings::bulk_update_finding_status(ids, s).await;
findings.restart();
});
selected_ids.set(Vec::new());
},
match status {
"triaged" => rsx! { Icon { icon: BsEye, width: 14, height: 14 } },
"resolved" => rsx! { Icon { icon: BsCheckCircle, width: 14, height: 14 } },
"false_positive" => rsx! { Icon { icon: BsXCircle, width: 14, height: 14 } },
"ignored" => rsx! { Icon { icon: BsDashCircle, width: 14, height: 14 } },
_ => rsx! {},
}
" {label}"
}
}
}
}
button {
class: "btn btn-sm btn-ghost",
onclick: move |_| { selected_ids.set(Vec::new()); },
"Clear"
}
}
}
match &*findings.read() {
Some(Some(resp)) => {
let total_pages = resp.total.unwrap_or(0).div_ceil(20).max(1);
let all_ids: Vec<String> = resp.data.iter().filter_map(|f| f.id.as_ref().map(|id| id.to_hex())).collect();
rsx! {
div { class: "card",
div { class: "table-wrapper",
table {
thead {
tr {
th { "Severity" }
th { "Title" }
th { "Type" }
th {
style: "width: 40px;",
input {
r#type: "checkbox",
checked: !all_ids.is_empty() && selected_ids().len() == all_ids.len(),
onchange: move |_| {
if selected_ids().len() == all_ids.len() {
selected_ids.set(Vec::new());
} else {
selected_ids.set(all_ids.clone());
}
},
}
}
th {
style: "cursor: pointer; user-select: none;",
onclick: toggle_sort("severity"),
"Severity{sort_indicator(\"severity\")}"
}
th {
style: "cursor: pointer; user-select: none;",
onclick: toggle_sort("title"),
"Title{sort_indicator(\"title\")}"
}
th {
style: "cursor: pointer; user-select: none;",
onclick: toggle_sort("scan_type"),
"Type{sort_indicator(\"scan_type\")}"
}
th { "Scanner" }
th { "File" }
th { "Status" }
th {
style: "cursor: pointer; user-select: none;",
onclick: toggle_sort("status"),
"Status{sort_indicator(\"status\")}"
}
}
}
tbody {
for finding in &resp.data {
{
let id = finding.id.as_ref().map(|id| id.to_hex()).unwrap_or_default();
let id_for_check = id.clone();
let is_selected = selected_ids().contains(&id);
rsx! {
tr {
td {
input {
r#type: "checkbox",
checked: is_selected,
onchange: move |_| {
let mut ids = selected_ids();
if ids.contains(&id_for_check) {
ids.retain(|i| i != &id_for_check);
} else {
ids.push(id_for_check.clone());
}
selected_ids.set(ids);
},
}
}
td { SeverityBadge { severity: finding.severity.to_string() } }
td {
Link {
@@ -120,13 +271,29 @@ pub fn FindingsPage() -> Element {
}
}
td { "{finding.scan_type}" }
td { "{finding.scanner}" }
td {
style: "font-family: monospace; font-size: 12px;",
"{finding.file_path.as_deref().unwrap_or(\"-\")}"
Icon { icon: BsCpu, width: 14, height: 14 }
" {finding.scanner}"
}
td {
span { class: "badge badge-info", "{finding.status}" }
style: "font-family: monospace; font-size: 12px;",
Icon { icon: BsFileEarmarkCode, width: 14, height: 14 }
" {finding.file_path.as_deref().unwrap_or(\"-\")}"
}
td {
span { class: "badge badge-info",
{
use compliance_core::models::FindingStatus;
match &finding.status {
FindingStatus::Open => rsx! { Icon { icon: BsCircle, width: 12, height: 12 } },
FindingStatus::Triaged => rsx! { Icon { icon: BsEye, width: 12, height: 12 } },
FindingStatus::Resolved => rsx! { Icon { icon: BsCheckCircle, width: 12, height: 12 } },
FindingStatus::FalsePositive => rsx! { Icon { icon: BsXCircle, width: 12, height: 12 } },
FindingStatus::Ignored => rsx! { Icon { icon: BsDashCircle, width: 12, height: 12 } },
}
}
" {finding.status}"
}
}
}
}

View File

@@ -1,4 +1,6 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::components::code_inspector::CodeInspector;
use crate::components::file_tree::{build_file_tree, FileTree};
@@ -8,6 +10,36 @@ use crate::infrastructure::graph::{fetch_graph, search_nodes, trigger_graph_buil
#[component]
pub fn GraphExplorerPage(repo_id: String) -> Element {
rsx! {
div { class: "back-nav",
button {
class: "btn btn-ghost btn-back",
onclick: move |_| { navigator().go_back(); },
Icon { icon: BsArrowLeft, width: 16, height: 16 }
"Back"
}
}
PageHeader {
title: "Code Knowledge Graph",
description: "Interactive visualization of code structure and relationships",
}
GraphExplorerBody { repo_id: repo_id }
}
}
/// Inline variant without back button and page header — for embedding in other pages.
#[component]
pub fn GraphExplorerInline(repo_id: String) -> Element {
rsx! {
GraphExplorerBody { repo_id: repo_id }
}
}
/// Shared graph explorer body used by both the full page and inline variants.
#[component]
fn GraphExplorerBody(repo_id: String) -> Element {
let repo_id_clone = repo_id.clone();
let mut graph_data = use_resource(move || {
let rid = repo_id_clone.clone();
@@ -21,22 +53,15 @@ pub fn GraphExplorerPage(repo_id: String) -> Element {
let mut building = use_signal(|| false);
let mut toasts = use_context::<Toasts>();
// Selected node state
let mut selected_node = use_signal(|| Option::<serde_json::Value>::None);
let mut inspector_open = use_signal(|| false);
// Search state
let mut search_query = use_signal(String::new);
let mut search_results = use_signal(Vec::<serde_json::Value>::new);
let mut file_filter = use_signal(String::new);
// Store serialized graph JSON in signals so use_effect can react to them
let mut nodes_json = use_signal(String::new);
let mut edges_json = use_signal(String::new);
let mut graph_ready = use_signal(|| false);
// When resource resolves, serialize the data into signals
let graph_data_read = graph_data.read();
if let Some(Some(data)) = &*graph_data_read {
if !data.data.nodes.is_empty() && !graph_ready() {
@@ -48,7 +73,6 @@ pub fn GraphExplorerPage(repo_id: String) -> Element {
}
}
// Derive stats and file tree
let (node_count, edge_count, community_count, languages, file_tree_data) =
if let Some(Some(data)) = &*graph_data_read {
let build = data.data.build.clone().unwrap_or_default();
@@ -80,11 +104,8 @@ pub fn GraphExplorerPage(repo_id: String) -> Element {
};
let has_graph_data = matches!(&*graph_data_read, Some(Some(d)) if !d.data.nodes.is_empty());
// Drop the read guard before rendering
drop(graph_data_read);
// use_effect runs AFTER DOM commit — this is when #graph-canvas exists
use_effect(move || {
let ready = graph_ready();
if !ready {
@@ -96,7 +117,6 @@ pub fn GraphExplorerPage(repo_id: String) -> Element {
return;
}
spawn(async move {
// Register the click callback + load graph with a small delay for DOM paint
let js = format!(
r#"
window.__onNodeClick = function(nodeJson) {{
@@ -109,8 +129,6 @@ pub fn GraphExplorerPage(repo_id: String) -> Element {
setTimeout(function() {{
if (window.__loadGraph) {{
window.__loadGraph({nj}, {ej});
}} else {{
console.error('[graph-viz] __loadGraph not found — vis-network may not be loaded');
}}
}}, 300);
"#
@@ -119,7 +137,6 @@ pub fn GraphExplorerPage(repo_id: String) -> Element {
});
});
// Extract selected node fields
let sel = selected_node();
let sel_file = sel
.as_ref()
@@ -146,11 +163,6 @@ pub fn GraphExplorerPage(repo_id: String) -> Element {
.unwrap_or(0) as u32;
rsx! {
if repo_id.is_empty() {
div { class: "card",
p { "Select a repository to view its code graph." }

View File

@@ -1,4 +1,6 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::components::page_header::PageHeader;
use crate::infrastructure::graph::fetch_impact;
@@ -12,6 +14,15 @@ pub fn ImpactAnalysisPage(repo_id: String, finding_id: String) -> Element {
});
rsx! {
div { class: "back-nav",
button {
class: "btn btn-ghost btn-back",
onclick: move |_| { navigator().go_back(); },
Icon { icon: BsArrowLeft, width: 16, height: 16 }
"Back"
}
}
PageHeader {
title: "Impact Analysis",
description: "Blast radius and affected entry points for a security finding",

View File

@@ -0,0 +1,351 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::components::page_header::PageHeader;
use crate::components::toast::{ToastType, Toasts};
use crate::infrastructure::mcp::{
add_mcp_server, delete_mcp_server, fetch_mcp_servers, regenerate_mcp_token,
};
#[component]
pub fn McpServersPage() -> Element {
let mut servers = use_resource(|| async { fetch_mcp_servers().await.ok() });
let mut toasts = use_context::<Toasts>();
let mut show_form = use_signal(|| false);
let mut new_name = use_signal(String::new);
let mut new_endpoint = use_signal(String::new);
let mut new_transport = use_signal(|| "http".to_string());
let mut new_port = use_signal(|| "8090".to_string());
let mut new_description = use_signal(String::new);
let mut new_mongo_uri = use_signal(String::new);
let mut new_mongo_db = use_signal(String::new);
// Track which server's token is visible
let mut visible_token: Signal<Option<String>> = use_signal(|| None);
// Track which server is pending delete confirmation
let mut confirm_delete: Signal<Option<(String, String)>> = use_signal(|| None);
rsx! {
div { class: "back-nav",
button {
class: "btn btn-ghost btn-back",
onclick: move |_| { navigator().go_back(); },
Icon { icon: BsArrowLeft, width: 16, height: 16 }
"Back"
}
}
PageHeader {
title: "MCP Servers",
description: "Manage Model Context Protocol servers for LLM integrations",
}
div { class: "mb-4",
button {
class: "btn btn-primary",
onclick: move |_| show_form.set(!show_form()),
if show_form() { "Cancel" } else { "Register Server" }
}
}
if show_form() {
div { class: "card mb-4",
div { class: "card-header", "Register MCP Server" }
div { class: "mcp-form-grid",
div { class: "form-group",
label { "Name" }
input {
r#type: "text",
placeholder: "Production MCP",
value: "{new_name}",
oninput: move |e| new_name.set(e.value()),
}
}
div { class: "form-group",
label { "Endpoint URL" }
input {
r#type: "text",
placeholder: "https://mcp.example.com/mcp",
value: "{new_endpoint}",
oninput: move |e| new_endpoint.set(e.value()),
}
}
div { class: "form-group",
label { "Transport" }
select {
value: "{new_transport}",
oninput: move |e| new_transport.set(e.value()),
option { value: "http", "HTTP (Streamable)" }
option { value: "stdio", "Stdio" }
}
}
div { class: "form-group",
label { "Port" }
input {
r#type: "text",
placeholder: "8090",
value: "{new_port}",
oninput: move |e| new_port.set(e.value()),
}
}
div { class: "form-group",
label { "MongoDB URI" }
input {
r#type: "text",
placeholder: "mongodb://localhost:27017",
value: "{new_mongo_uri}",
oninput: move |e| new_mongo_uri.set(e.value()),
}
}
div { class: "form-group",
label { "Database Name" }
input {
r#type: "text",
placeholder: "compliance_scanner",
value: "{new_mongo_db}",
oninput: move |e| new_mongo_db.set(e.value()),
}
}
}
div { class: "form-group",
label { "Description" }
input {
r#type: "text",
placeholder: "Optional notes about this server",
value: "{new_description}",
oninput: move |e| new_description.set(e.value()),
}
}
button {
class: "btn btn-primary",
onclick: move |_| {
let name = new_name();
let endpoint = new_endpoint();
let transport = new_transport();
let port = new_port();
let desc = new_description();
let mongo_uri = new_mongo_uri();
let mongo_db = new_mongo_db();
spawn(async move {
match add_mcp_server(name, endpoint, transport, port, desc, mongo_uri, mongo_db).await {
Ok(_) => {
toasts.push(ToastType::Success, "MCP server registered");
servers.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
});
show_form.set(false);
new_name.set(String::new());
new_endpoint.set(String::new());
new_transport.set("http".to_string());
new_port.set("8090".to_string());
new_description.set(String::new());
new_mongo_uri.set(String::new());
new_mongo_db.set(String::new());
},
"Register"
}
}
}
// Delete confirmation modal
if let Some((ref del_id, ref del_name)) = *confirm_delete.read() {
div { class: "modal-overlay",
onclick: move |_| confirm_delete.set(None),
div { class: "modal-dialog",
onclick: move |e| e.stop_propagation(),
h3 { "Delete MCP Server" }
p { "Are you sure you want to remove " strong { "{del_name}" } "?" }
p { class: "text-secondary", "Connected LLM clients will lose access." }
div { class: "modal-actions",
button {
class: "btn btn-ghost",
onclick: move |_| confirm_delete.set(None),
"Cancel"
}
button {
class: "btn btn-danger",
onclick: {
let id = del_id.clone();
move |_| {
let id = id.clone();
spawn(async move {
match delete_mcp_server(id).await {
Ok(_) => {
toasts.push(ToastType::Success, "Server removed");
servers.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
});
confirm_delete.set(None);
}
},
"Delete"
}
}
}
}
}
match &*servers.read() {
Some(Some(resp)) => {
if resp.data.is_empty() {
rsx! {
div { class: "card",
p { style: "padding: 1rem; color: var(--text-secondary);", "No MCP servers registered. Add one to get started." }
}
}
} else {
rsx! {
div { class: "mcp-cards-grid",
for server in resp.data.iter() {
{
let sid = server.id.map(|id| id.to_hex()).unwrap_or_default();
let name = server.name.clone();
let status_class = match server.status {
compliance_core::models::McpServerStatus::Running => "running",
compliance_core::models::McpServerStatus::Stopped => "stopped",
compliance_core::models::McpServerStatus::Error => "error",
};
let status_label = server.status.to_string();
let is_token_visible = visible_token().as_deref() == Some(sid.as_str());
let created_str = server.created_at.format("%Y-%m-%d %H:%M").to_string();
let tools_count = server.tools_enabled.len();
rsx! {
div { class: "mcp-card",
// Header row: status dot + name + actions
div { class: "mcp-card-header",
div { class: "mcp-card-title",
span { class: "mcp-status-dot {status_class}" }
h3 { "{server.name}" }
span { class: "mcp-card-status {status_class}", "{status_label}" }
}
button {
class: "btn btn-sm btn-ghost btn-ghost-danger",
title: "Delete server",
onclick: {
let id = sid.clone();
let name = name.clone();
move |_| {
confirm_delete.set(Some((id.clone(), name.clone())));
}
},
Icon { icon: BsTrash, width: 14, height: 14 }
}
}
if let Some(ref desc) = server.description {
p { class: "mcp-card-desc", "{desc}" }
}
// Config details
div { class: "mcp-card-details",
div { class: "mcp-detail-row",
Icon { icon: BsGlobe, width: 13, height: 13 }
span { class: "mcp-detail-label", "Endpoint" }
code { class: "mcp-detail-value", "{server.endpoint_url}" }
}
div { class: "mcp-detail-row",
Icon { icon: BsHddNetwork, width: 13, height: 13 }
span { class: "mcp-detail-label", "Transport" }
span { class: "mcp-detail-value", "{server.transport}" }
}
if let Some(port) = server.port {
div { class: "mcp-detail-row",
Icon { icon: BsPlug, width: 13, height: 13 }
span { class: "mcp-detail-label", "Port" }
span { class: "mcp-detail-value", "{port}" }
}
}
}
// Tools
div { class: "mcp-card-tools",
span { class: "mcp-detail-label",
Icon { icon: BsTools, width: 13, height: 13 }
" {tools_count} tools"
}
div { class: "mcp-tools-list",
for tool in server.tools_enabled.iter() {
span { class: "mcp-tool-chip", "{tool}" }
}
}
}
// Token section
div { class: "mcp-card-token",
div { class: "mcp-token-display",
Icon { icon: BsKey, width: 13, height: 13 }
code { class: "mcp-token-code",
if is_token_visible {
"{server.access_token}"
} else {
"mcp_••••••••••••••••••••"
}
}
}
div { class: "mcp-token-actions",
button {
class: "btn btn-sm btn-ghost",
title: if is_token_visible { "Hide token" } else { "Reveal token" },
onclick: {
let id = sid.clone();
move |_| {
if visible_token().as_deref() == Some(id.as_str()) {
visible_token.set(None);
} else {
visible_token.set(Some(id.clone()));
}
}
},
if is_token_visible {
Icon { icon: BsEyeSlash, width: 14, height: 14 }
} else {
Icon { icon: BsEye, width: 14, height: 14 }
}
}
button {
class: "btn btn-sm btn-ghost",
title: "Regenerate token",
onclick: {
let id = sid.clone();
move |_| {
let id = id.clone();
spawn(async move {
match regenerate_mcp_token(id).await {
Ok(_) => {
toasts.push(ToastType::Success, "Token regenerated");
servers.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
});
}
},
Icon { icon: BsArrowRepeat, width: 14, height: 14 }
}
}
}
// Footer
div { class: "mcp-card-footer",
span { "Created {created_str}" }
}
}
}
}
}
}
}
}
},
Some(None) => rsx! { div { class: "card", p { style: "padding: 1rem;", "Failed to load MCP servers." } } },
None => rsx! { div { class: "loading", "Loading..." } },
}
}
}

View File

@@ -10,6 +10,7 @@ pub mod graph_explorer;
pub mod graph_index;
pub mod impact_analysis;
pub mod issues;
pub mod mcp_servers;
pub mod overview;
pub mod repositories;
pub mod sbom;
@@ -27,6 +28,7 @@ pub use graph_explorer::GraphExplorerPage;
pub use graph_index::GraphIndexPage;
pub use impact_analysis::ImpactAnalysisPage;
pub use issues::IssuesPage;
pub use mcp_servers::McpServersPage;
pub use overview::OverviewPage;
pub use repositories::RepositoriesPage;
pub use sbom::SbomPage;

View File

@@ -1,7 +1,12 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::app::Route;
use crate::components::page_header::PageHeader;
use crate::components::stat_card::StatCard;
use crate::infrastructure::mcp::fetch_mcp_servers;
use crate::infrastructure::repositories::fetch_repositories;
#[cfg(feature = "server")]
use crate::infrastructure::stats::fetch_overview_stats;
@@ -21,6 +26,9 @@ pub fn OverviewPage() -> Element {
}
});
let repos = use_resource(|| async { fetch_repositories(1).await.ok() });
let mcp_servers = use_resource(|| async { fetch_mcp_servers().await.ok() });
rsx! {
PageHeader {
title: "Overview",
@@ -66,6 +74,125 @@ pub fn OverviewPage() -> Element {
SeverityBar { label: "Low", count: s.low_findings, max: s.total_findings, color: "var(--success)" }
}
}
// AI Chat section
div { class: "card",
div { class: "card-header", "AI Chat" }
match &*repos.read() {
Some(Some(data)) => {
let repo_list = &data.data;
if repo_list.is_empty() {
rsx! {
p { style: "padding: 1rem; color: var(--text-secondary);",
"No repositories found. Add a repository to start chatting."
}
}
} else {
rsx! {
div {
class: "grid",
style: "display: grid; grid-template-columns: repeat(3, 1fr); gap: 1rem; padding: 1rem;",
for repo in repo_list {
{
let repo_id = repo.id.map(|id| id.to_hex()).unwrap_or_default();
let name = repo.name.clone();
rsx! {
Link {
to: Route::ChatPage { repo_id },
class: "graph-repo-card",
div { class: "graph-repo-card-header",
div { class: "graph-repo-card-icon",
Icon { icon: BsChatDots, width: 20, height: 20 }
}
h3 { class: "graph-repo-card-name", "{name}" }
}
}
}
}
}
}
}
}
},
Some(None) => rsx! {
p { style: "padding: 1rem; color: var(--text-secondary);",
"Failed to load repositories."
}
},
None => rsx! {
div { class: "loading", "Loading repositories..." }
},
}
}
// MCP Servers section
div { class: "card",
div { class: "card-header", "MCP Servers" }
match &*mcp_servers.read() {
Some(Some(resp)) => {
if resp.data.is_empty() {
rsx! {
p { style: "padding: 1rem; color: var(--text-secondary);",
"No MCP servers registered."
}
}
} else {
rsx! {
div {
style: "display: grid; grid-template-columns: repeat(3, 1fr); gap: 1rem; padding: 1rem;",
for server in resp.data.iter() {
{
let status_color = match server.status {
compliance_core::models::McpServerStatus::Running => "var(--success)",
compliance_core::models::McpServerStatus::Stopped => "var(--text-secondary)",
compliance_core::models::McpServerStatus::Error => "var(--danger)",
};
let status_label = server.status.to_string();
let endpoint = server.endpoint_url.clone();
let name = server.name.clone();
rsx! {
div { class: "card",
style: "padding: 0.75rem;",
div {
style: "display: flex; align-items: center; gap: 0.5rem; margin-bottom: 0.5rem;",
span {
style: "width: 8px; height: 8px; border-radius: 50%; background: {status_color}; display: inline-block;",
}
strong { "{name}" }
}
p {
style: "font-size: 0.8rem; color: var(--text-secondary); margin: 0; word-break: break-all;",
"{endpoint}"
}
p {
style: "font-size: 0.75rem; color: var(--text-secondary); margin-top: 0.25rem;",
"{status_label}"
}
}
}
}
}
}
div { style: "padding: 0 1rem 1rem;",
Link {
to: Route::McpServersPage {},
class: "btn btn-primary btn-sm",
"Manage"
}
}
}
}
},
Some(None) => rsx! {
p { style: "padding: 1rem; color: var(--text-secondary);",
"Failed to load MCP servers."
}
},
None => rsx! {
div { class: "loading", "Loading..." }
},
}
}
},
Some(None) => rsx! {
div { class: "card",

View File

@@ -1,9 +1,22 @@
use dioxus::prelude::*;
use dioxus_free_icons::icons::bs_icons::*;
use dioxus_free_icons::Icon;
use crate::app::Route;
use crate::components::page_header::PageHeader;
use crate::components::pagination::Pagination;
use crate::components::toast::{ToastType, Toasts};
use crate::pages::graph_explorer::GraphExplorerInline;
async fn async_sleep_5s() {
#[cfg(feature = "web")]
{
gloo_timers::future::TimeoutFuture::new(5_000).await;
}
#[cfg(not(feature = "web"))]
{
tokio::time::sleep(std::time::Duration::from_secs(5)).await;
}
}
#[component]
pub fn RepositoriesPage() -> Element {
@@ -12,8 +25,16 @@ pub fn RepositoriesPage() -> Element {
let mut name = use_signal(String::new);
let mut git_url = use_signal(String::new);
let mut branch = use_signal(|| "main".to_string());
let mut auth_token = use_signal(String::new);
let mut auth_username = use_signal(String::new);
let mut show_auth = use_signal(|| false);
let mut show_ssh_key = use_signal(|| false);
let mut ssh_public_key = use_signal(String::new);
let mut adding = use_signal(|| false);
let mut toasts = use_context::<Toasts>();
let mut confirm_delete = use_signal(|| Option::<(String, String)>::None); // (id, name)
let mut scanning_ids = use_signal(Vec::<String>::new);
let mut graph_repo_id = use_signal(|| Option::<String>::None);
let mut repos = use_resource(move || {
let p = page();
@@ -54,7 +75,7 @@ pub fn RepositoriesPage() -> Element {
label { "Git URL" }
input {
r#type: "text",
placeholder: "https://github.com/org/repo.git",
placeholder: "https://github.com/org/repo.git or git@github.com:org/repo.git",
value: "{git_url}",
oninput: move |e| git_url.set(e.value()),
}
@@ -68,26 +89,105 @@ pub fn RepositoriesPage() -> Element {
oninput: move |e| branch.set(e.value()),
}
}
// Private repo auth section
div { style: "margin-top: 8px;",
button {
class: "btn btn-ghost",
style: "font-size: 12px; padding: 4px 8px;",
onclick: move |_| {
show_auth.toggle();
if !show_ssh_key() {
// Fetch SSH key on first open
show_ssh_key.set(true);
spawn(async move {
match crate::infrastructure::repositories::fetch_ssh_public_key().await {
Ok(key) => ssh_public_key.set(key),
Err(_) => ssh_public_key.set("(not available)".to_string()),
}
});
}
},
if show_auth() { "Hide auth options" } else { "Private repository?" }
}
}
if show_auth() {
div { class: "auth-section", style: "margin-top: 12px; padding: 12px; border: 1px solid var(--border-subtle); border-radius: 8px;",
// SSH deploy key display
div { style: "margin-bottom: 12px;",
label { style: "font-size: 12px; color: var(--text-secondary);",
"For SSH URLs: add this deploy key (read-only) to your repository"
}
div {
style: "margin-top: 4px; padding: 8px; background: var(--bg-secondary); border-radius: 4px; font-family: monospace; font-size: 11px; word-break: break-all; user-select: all;",
if ssh_public_key().is_empty() {
"Loading..."
} else {
"{ssh_public_key}"
}
}
}
// HTTPS auth fields
p { style: "font-size: 12px; color: var(--text-secondary); margin-bottom: 8px;",
"For HTTPS URLs: provide an access token (PAT) or username/password"
}
div { class: "form-group",
label { "Auth Token / Password" }
input {
r#type: "password",
placeholder: "ghp_xxxx or personal access token",
value: "{auth_token}",
oninput: move |e| auth_token.set(e.value()),
}
}
div { class: "form-group",
label { "Username (optional, defaults to x-access-token)" }
input {
r#type: "text",
placeholder: "x-access-token",
value: "{auth_username}",
oninput: move |e| auth_username.set(e.value()),
}
}
}
}
button {
class: "btn btn-primary",
disabled: adding(),
onclick: move |_| {
let n = name();
let u = git_url();
let b = branch();
let tok = {
let v = auth_token();
if v.is_empty() { None } else { Some(v) }
};
let usr = {
let v = auth_username();
if v.is_empty() { None } else { Some(v) }
};
adding.set(true);
spawn(async move {
match crate::infrastructure::repositories::add_repository(n, u, b, tok, usr).await {
Ok(_) => {
toasts.push(ToastType::Success, "Repository added");
repos.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
adding.set(false);
});
show_add_form.set(false);
show_auth.set(false);
name.set(String::new());
git_url.set(String::new());
auth_token.set(String::new());
auth_username.set(String::new());
},
"Add"
if adding() { "Validating..." } else { "Add" }
}
}
}
@@ -158,6 +258,7 @@ pub fn RepositoriesPage() -> Element {
let repo_id_scan = repo_id.clone();
let repo_id_del = repo_id.clone();
let repo_name_del = repo.name.clone();
let is_scanning = scanning_ids().contains(&repo_id);
rsx! {
tr {
td { "{repo.name}" }
@@ -186,30 +287,68 @@ pub fn RepositoriesPage() -> Element {
}
}
td { style: "display: flex; gap: 4px;",
button {
class: if graph_repo_id().as_deref() == Some(repo_id.as_str()) { "btn btn-ghost btn-active" } else { "btn btn-ghost" },
title: "View graph",
onclick: {
let rid = repo_id.clone();
move |_| {
if graph_repo_id().as_deref() == Some(rid.as_str()) {
graph_repo_id.set(None);
} else {
graph_repo_id.set(Some(rid.clone()));
}
}
},
Icon { icon: BsDiagram3, width: 16, height: 16 }
}
button {
class: "btn btn-ghost",
class: if is_scanning { "btn btn-ghost btn-scanning" } else { "btn btn-ghost" },
title: "Trigger scan",
disabled: is_scanning,
onclick: move |_| {
let id = repo_id_scan.clone();
// Add to scanning set
let mut ids = scanning_ids();
ids.push(id.clone());
scanning_ids.set(ids);
spawn(async move {
match crate::infrastructure::repositories::trigger_repo_scan(id.clone()).await {
Ok(_) => {
toasts.push(ToastType::Success, "Scan triggered");
// Poll until scan completes
loop {
async_sleep_5s().await;
match crate::infrastructure::repositories::check_repo_scanning(id.clone()).await {
Ok(false) => break,
Ok(true) => continue,
Err(_) => break,
}
}
toasts.push(ToastType::Success, "Scan complete");
repos.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
// Remove from scanning set
let mut ids = scanning_ids();
ids.retain(|i| i != &id);
scanning_ids.set(ids);
});
},
"Scan"
if is_scanning {
span { class: "spinner" }
} else {
Icon { icon: BsPlayCircle, width: 16, height: 16 }
}
}
button {
class: "btn btn-ghost btn-ghost-danger",
title: "Delete repository",
onclick: move |_| {
confirm_delete.set(Some((repo_id_del.clone(), repo_name_del.clone())));
},
"Delete"
Icon { icon: BsTrash, width: 16, height: 16 }
}
}
}
@@ -225,6 +364,22 @@ pub fn RepositoriesPage() -> Element {
on_page_change: move |p| page.set(p),
}
}
// Inline graph explorer
if let Some(rid) = graph_repo_id() {
div { class: "card", style: "margin-top: 16px;",
div { class: "card-header", style: "display: flex; justify-content: space-between; align-items: center;",
span { "Code Graph" }
button {
class: "btn btn-sm btn-ghost",
title: "Close graph",
onclick: move |_| { graph_repo_id.set(None); },
Icon { icon: BsX, width: 18, height: 18 }
}
}
GraphExplorerInline { repo_id: rid }
}
}
}
},
Some(None) => rsx! {

compliance-mcp/Cargo.toml Normal file
View File

@@ -0,0 +1,21 @@
[package]
name = "compliance-mcp"
version = "0.1.0"
edition = "2021"
[dependencies]
compliance-core = { workspace = true, features = ["mongodb"] }
rmcp = { version = "0.16", features = ["server", "macros", "transport-io", "transport-streamable-http-server"] }
tokio = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
mongodb = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
dotenvy = "0.15"
thiserror = { workspace = true }
chrono = { workspace = true }
bson = { version = "2", features = ["chrono-0_4"] }
schemars = "1.0"
axum = "0.8"
tower-http = { version = "0.6", features = ["cors"] }

View File

@@ -0,0 +1,34 @@
use mongodb::{Client, Collection};
use compliance_core::models::*;
#[derive(Clone, Debug)]
pub struct Database {
inner: mongodb::Database,
}
impl Database {
pub async fn connect(uri: &str, db_name: &str) -> Result<Self, mongodb::error::Error> {
let client = Client::with_uri_str(uri).await?;
let db = client.database(db_name);
db.run_command(mongodb::bson::doc! { "ping": 1 }).await?;
tracing::info!("MCP server connected to MongoDB '{db_name}'");
Ok(Self { inner: db })
}
pub fn findings(&self) -> Collection<Finding> {
self.inner.collection("findings")
}
pub fn sbom_entries(&self) -> Collection<SbomEntry> {
self.inner.collection("sbom_entries")
}
pub fn dast_findings(&self) -> Collection<DastFinding> {
self.inner.collection("dast_findings")
}
pub fn dast_scan_runs(&self) -> Collection<DastScanRun> {
self.inner.collection("dast_scan_runs")
}
}

View File

@@ -0,0 +1,58 @@
mod database;
mod server;
mod tools;
use std::sync::Arc;
use database::Database;
use rmcp::transport::{
streamable_http_server::session::local::LocalSessionManager, StreamableHttpServerConfig,
StreamableHttpService,
};
use server::ComplianceMcpServer;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let _ = dotenvy::dotenv();
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::from_default_env()
.add_directive("compliance_mcp=info".parse()?),
)
.init();
let mongo_uri =
std::env::var("MONGODB_URI").unwrap_or_else(|_| "mongodb://localhost:27017".to_string());
let db_name =
std::env::var("MONGODB_DATABASE").unwrap_or_else(|_| "compliance_scanner".to_string());
let db = Database::connect(&mongo_uri, &db_name).await?;
// If MCP_PORT is set, run as Streamable HTTP server; otherwise use stdio.
if let Ok(port_str) = std::env::var("MCP_PORT") {
let port: u16 = port_str.parse()?;
tracing::info!("Starting MCP server on HTTP port {port}");
let db_clone = db.clone();
let service = StreamableHttpService::new(
move || Ok(ComplianceMcpServer::new(db_clone.clone())),
Arc::new(LocalSessionManager::default()),
StreamableHttpServerConfig::default(),
);
let router = axum::Router::new().nest_service("/mcp", service);
let listener = tokio::net::TcpListener::bind(("0.0.0.0", port)).await?;
tracing::info!("MCP HTTP server listening on 0.0.0.0:{port}");
axum::serve(listener, router).await?;
} else {
tracing::info!("Starting MCP server on stdio");
let server = ComplianceMcpServer::new(db);
let transport = rmcp::transport::stdio();
use rmcp::ServiceExt;
let handle = server.serve(transport).await?;
handle.waiting().await?;
}
Ok(())
}
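Since `main` selects stdio when `MCP_PORT` is unset, a local MCP client entry could point directly at the binary. This is a hypothetical client config sketch — the server key, binary path, and env values are assumptions, not taken from the repo:

```json
{
  "mcpServers": {
    "compliance": {
      "command": "compliance-mcp",
      "env": {
        "MONGODB_URI": "mongodb://localhost:27017",
        "MONGODB_DATABASE": "compliance_scanner"
      }
    }
  }
}
```

With `MCP_PORT` set instead, clients would connect to the Streamable HTTP endpoint mounted at `/mcp`.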

View File

@@ -0,0 +1,109 @@
use rmcp::{
handler::server::wrapper::Parameters, model::*, tool, tool_handler, tool_router, ServerHandler,
};
use crate::database::Database;
use crate::tools::{dast, findings, sbom};
pub struct ComplianceMcpServer {
db: Database,
#[allow(dead_code)]
tool_router: rmcp::handler::server::router::tool::ToolRouter<Self>,
}
#[tool_router]
impl ComplianceMcpServer {
pub fn new(db: Database) -> Self {
Self {
db,
tool_router: Self::tool_router(),
}
}
// ── Findings ──────────────────────────────────────────
#[tool(
description = "List security findings with optional filters for repo, severity, status, and scan type"
)]
async fn list_findings(
&self,
Parameters(params): Parameters<findings::ListFindingsParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
findings::list_findings(&self.db, params).await
}
#[tool(description = "Get a single finding by its ID")]
async fn get_finding(
&self,
Parameters(params): Parameters<findings::GetFindingParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
findings::get_finding(&self.db, params).await
}
#[tool(description = "Get a summary of findings counts grouped by severity and status")]
async fn findings_summary(
&self,
Parameters(params): Parameters<findings::FindingsSummaryParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
findings::findings_summary(&self.db, params).await
}
// ── SBOM ──────────────────────────────────────────────
#[tool(
description = "List SBOM packages with optional filters for repo, vulnerabilities, package manager, and license"
)]
async fn list_sbom_packages(
&self,
Parameters(params): Parameters<sbom::ListSbomPackagesParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
sbom::list_sbom_packages(&self.db, params).await
}
#[tool(
description = "Generate a vulnerability report for a repository showing all packages with known CVEs"
)]
async fn sbom_vuln_report(
&self,
Parameters(params): Parameters<sbom::SbomVulnReportParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
sbom::sbom_vuln_report(&self.db, params).await
}
// ── DAST ──────────────────────────────────────────────
#[tool(
description = "List DAST findings with optional filters for target, scan run, severity, exploitability, and vulnerability type"
)]
async fn list_dast_findings(
&self,
Parameters(params): Parameters<dast::ListDastFindingsParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
dast::list_dast_findings(&self.db, params).await
}
#[tool(description = "Get a summary of recent DAST scan runs and finding counts")]
async fn dast_scan_summary(
&self,
Parameters(params): Parameters<dast::DastScanSummaryParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
dast::dast_scan_summary(&self.db, params).await
}
}
#[tool_handler]
impl ServerHandler for ComplianceMcpServer {
fn get_info(&self) -> ServerInfo {
ServerInfo {
protocol_version: ProtocolVersion::V_2024_11_05,
capabilities: ServerCapabilities::builder()
.enable_tools()
.build(),
server_info: Implementation::from_build_env(),
instructions: Some(
"Compliance Scanner MCP server. Query security findings, SBOM data, and DAST results."
.to_string(),
),
}
}
}
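Once connected, clients invoke these handlers through standard MCP `tools/call` requests. A sketch of one such JSON-RPC message — the `severity` and `limit` argument names are assumptions based on the tool description and the parameter structs in `tools/`, not verified against `ListFindingsParams`:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_findings",
    "arguments": { "severity": "high", "limit": 10 }
  }
}
```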

View File

@@ -0,0 +1,154 @@
use mongodb::bson::doc;
use rmcp::{model::*, ErrorData as McpError};
use schemars::JsonSchema;
use serde::Deserialize;
use crate::database::Database;
const MAX_LIMIT: i64 = 200;
const DEFAULT_LIMIT: i64 = 50;
fn cap_limit(limit: Option<i64>) -> i64 {
limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT)
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct ListDastFindingsParams {
/// Filter by DAST target ID
pub target_id: Option<String>,
/// Filter by scan run ID
pub scan_run_id: Option<String>,
/// Filter by severity: info, low, medium, high, critical
pub severity: Option<String>,
/// Only show confirmed exploitable findings
pub exploitable: Option<bool>,
/// Filter by vulnerability type (e.g. sql_injection, xss, ssrf)
pub vuln_type: Option<String>,
/// Maximum number of results (default 50, max 200)
pub limit: Option<i64>,
}
pub async fn list_dast_findings(
db: &Database,
params: ListDastFindingsParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref target_id) = params.target_id {
filter.insert("target_id", target_id);
}
if let Some(ref scan_run_id) = params.scan_run_id {
filter.insert("scan_run_id", scan_run_id);
}
if let Some(ref severity) = params.severity {
filter.insert("severity", severity);
}
if let Some(exploitable) = params.exploitable {
filter.insert("exploitable", exploitable);
}
if let Some(ref vuln_type) = params.vuln_type {
filter.insert("vuln_type", vuln_type);
}
let limit = cap_limit(params.limit);
let mut cursor = db
.dast_findings()
.find(filter)
.sort(doc! { "created_at": -1 })
.limit(limit)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut results = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let finding = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
results.push(finding);
}
let json = serde_json::to_string_pretty(&results)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct DastScanSummaryParams {
/// Filter by DAST target ID
pub target_id: Option<String>,
}
pub async fn dast_scan_summary(
db: &Database,
params: DastScanSummaryParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref target_id) = params.target_id {
filter.insert("target_id", target_id);
}
// Get recent scan runs
let mut cursor = db
.dast_scan_runs()
.find(filter.clone())
.sort(doc! { "started_at": -1 })
.limit(10)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut scan_runs = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let run = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
scan_runs.push(serde_json::json!({
"id": run.id.map(|id| id.to_hex()),
"target_id": run.target_id,
"status": run.status,
"findings_count": run.findings_count,
"exploitable_count": run.exploitable_count,
"endpoints_discovered": run.endpoints_discovered,
"started_at": run.started_at.to_rfc3339(),
"completed_at": run.completed_at.map(|t| t.to_rfc3339()),
}));
}
// Count findings by severity
let mut findings_filter = doc! {};
if let Some(ref target_id) = params.target_id {
findings_filter.insert("target_id", target_id);
}
let total_findings = db
.dast_findings()
.count_documents(findings_filter.clone())
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut exploitable_filter = findings_filter.clone();
exploitable_filter.insert("exploitable", true);
let exploitable_count = db
.dast_findings()
.count_documents(exploitable_filter)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let summary = serde_json::json!({
"total_findings": total_findings,
"exploitable_findings": exploitable_count,
"recent_scan_runs": scan_runs,
});
let json = serde_json::to_string_pretty(&summary)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}


@@ -0,0 +1,163 @@
use mongodb::bson::doc;
use rmcp::{model::*, ErrorData as McpError};
use schemars::JsonSchema;
use serde::Deserialize;
use crate::database::Database;
const MAX_LIMIT: i64 = 200;
const DEFAULT_LIMIT: i64 = 50;
fn cap_limit(limit: Option<i64>) -> i64 {
limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT)
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct ListFindingsParams {
/// Filter by repository ID
pub repo_id: Option<String>,
/// Filter by severity: info, low, medium, high, critical
pub severity: Option<String>,
/// Filter by status: open, triaged, false_positive, resolved, ignored
pub status: Option<String>,
/// Filter by scan type: sast, sbom, cve, gdpr, oauth, secret_detection, lint, code_review
pub scan_type: Option<String>,
/// Maximum number of results (default 50, max 200)
pub limit: Option<i64>,
}
pub async fn list_findings(
db: &Database,
params: ListFindingsParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref repo_id) = params.repo_id {
filter.insert("repo_id", repo_id);
}
if let Some(ref severity) = params.severity {
filter.insert("severity", severity);
}
if let Some(ref status) = params.status {
filter.insert("status", status);
}
if let Some(ref scan_type) = params.scan_type {
filter.insert("scan_type", scan_type);
}
let limit = cap_limit(params.limit);
let mut cursor = db
.findings()
.find(filter)
.sort(doc! { "created_at": -1 })
.limit(limit)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut results = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let finding = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
results.push(finding);
}
let json = serde_json::to_string_pretty(&results)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct GetFindingParams {
/// Finding ID (MongoDB ObjectId hex string)
pub id: String,
}
pub async fn get_finding(
db: &Database,
params: GetFindingParams,
) -> Result<CallToolResult, McpError> {
let oid = bson::oid::ObjectId::parse_str(&params.id)
.map_err(|e| McpError::invalid_params(format!("invalid ObjectId: {e}"), None))?;
let finding = db
.findings()
.find_one(doc! { "_id": oid })
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?
.ok_or_else(|| McpError::invalid_params("finding not found", None))?;
let json = serde_json::to_string_pretty(&finding)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct FindingsSummaryParams {
/// Filter by repository ID
pub repo_id: Option<String>,
}
#[derive(serde::Serialize)]
struct SeverityCount {
severity: String,
count: u64,
}
pub async fn findings_summary(
db: &Database,
params: FindingsSummaryParams,
) -> Result<CallToolResult, McpError> {
let mut base_filter = doc! {};
if let Some(ref repo_id) = params.repo_id {
base_filter.insert("repo_id", repo_id);
}
let severities = ["critical", "high", "medium", "low", "info"];
let mut counts = Vec::new();
for sev in &severities {
let mut filter = base_filter.clone();
filter.insert("severity", sev);
let count = db
.findings()
.count_documents(filter)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
counts.push(SeverityCount {
severity: sev.to_string(),
count,
});
}
let total: u64 = counts.iter().map(|c| c.count).sum();
let mut status_counts = Vec::new();
for status in &["open", "triaged", "false_positive", "resolved", "ignored"] {
let mut filter = base_filter.clone();
filter.insert("status", status);
let count = db
.findings()
.count_documents(filter)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
status_counts.push(serde_json::json!({ "status": status, "count": count }));
}
let summary = serde_json::json!({
"total": total,
"by_severity": counts,
"by_status": status_counts,
});
let json = serde_json::to_string_pretty(&summary)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}


@@ -0,0 +1,3 @@
pub mod dast;
pub mod findings;
pub mod sbom;


@@ -0,0 +1,129 @@
use mongodb::bson::doc;
use rmcp::{model::*, ErrorData as McpError};
use schemars::JsonSchema;
use serde::Deserialize;
use crate::database::Database;
const MAX_LIMIT: i64 = 200;
const DEFAULT_LIMIT: i64 = 50;
fn cap_limit(limit: Option<i64>) -> i64 {
limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT)
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct ListSbomPackagesParams {
/// Filter by repository ID
pub repo_id: Option<String>,
/// Only show packages with known vulnerabilities
pub has_vulns: Option<bool>,
/// Filter by package manager (e.g. npm, cargo, pip)
pub package_manager: Option<String>,
/// Filter by license (e.g. MIT, Apache-2.0)
pub license: Option<String>,
/// Maximum number of results (default 50, max 200)
pub limit: Option<i64>,
}
pub async fn list_sbom_packages(
db: &Database,
params: ListSbomPackagesParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref repo_id) = params.repo_id {
filter.insert("repo_id", repo_id);
}
if let Some(ref pm) = params.package_manager {
filter.insert("package_manager", pm);
}
if let Some(ref license) = params.license {
filter.insert("license", license);
}
if params.has_vulns == Some(true) {
filter.insert("known_vulnerabilities.0", doc! { "$exists": true });
}
let limit = cap_limit(params.limit);
let mut cursor = db
.sbom_entries()
.find(filter)
.sort(doc! { "name": 1 })
.limit(limit)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut results = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let entry = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
results.push(entry);
}
let json = serde_json::to_string_pretty(&results)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct SbomVulnReportParams {
/// Repository ID to generate vulnerability report for
pub repo_id: String,
}
pub async fn sbom_vuln_report(
db: &Database,
params: SbomVulnReportParams,
) -> Result<CallToolResult, McpError> {
let filter = doc! {
"repo_id": &params.repo_id,
"known_vulnerabilities.0": { "$exists": true },
};
let mut cursor = db
.sbom_entries()
.find(filter)
.sort(doc! { "name": 1 })
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut vulnerable_packages = Vec::new();
let mut total_vulns = 0u64;
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let entry = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
total_vulns += entry.known_vulnerabilities.len() as u64;
vulnerable_packages.push(serde_json::json!({
"name": entry.name,
"version": entry.version,
"package_manager": entry.package_manager,
"license": entry.license,
"vulnerabilities": entry.known_vulnerabilities,
}));
}
let report = serde_json::json!({
"repo_id": params.repo_id,
"vulnerable_packages_count": vulnerable_packages.len(),
"total_vulnerabilities": total_vulns,
"packages": vulnerable_packages,
});
let json = serde_json::to_string_pretty(&report)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}

docs/.gitignore

@@ -0,0 +1,3 @@
node_modules
.vitepress/dist
.vitepress/cache


@@ -0,0 +1,56 @@
import { defineConfig } from 'vitepress'
export default defineConfig({
title: 'Compliance Scanner',
description: 'AI-powered security compliance scanning platform',
ignoreDeadLinks: [
/localhost/,
],
themeConfig: {
nav: [
{ text: 'Guide', link: '/guide/getting-started' },
{ text: 'Features', link: '/features/overview' },
{ text: 'Deployment', link: '/deployment/docker' },
],
sidebar: [
{
text: 'Guide',
items: [
{ text: 'Getting Started', link: '/guide/getting-started' },
{ text: 'Adding Repositories', link: '/guide/repositories' },
{ text: 'Running Scans', link: '/guide/scanning' },
{ text: 'Managing Findings', link: '/guide/findings' },
{ text: 'Configuration', link: '/guide/configuration' },
],
},
{
text: 'Features',
items: [
{ text: 'Dashboard Overview', link: '/features/overview' },
{ text: 'SBOM & License Compliance', link: '/features/sbom' },
{ text: 'Code Knowledge Graph', link: '/features/graph' },
{ text: 'Impact Analysis', link: '/features/impact-analysis' },
{ text: 'DAST Scanning', link: '/features/dast' },
{ text: 'AI Chat (RAG)', link: '/features/ai-chat' },
{ text: 'Issue Tracker Integration', link: '/features/issues' },
{ text: 'MCP Server', link: '/features/mcp-server' },
],
},
{
text: 'Deployment',
items: [
{ text: 'Docker Compose', link: '/deployment/docker' },
{ text: 'Environment Variables', link: '/deployment/environment' },
{ text: 'Keycloak Authentication', link: '/deployment/keycloak' },
{ text: 'OpenTelemetry', link: '/deployment/opentelemetry' },
],
},
],
socialLinks: [
{ icon: 'github', link: 'https://gitea.meghsakha.com/sharang/compliance-scanner-agent' },
],
footer: {
message: 'Compliance Scanner Documentation',
},
},
})

docs/deployment/docker.md

@@ -0,0 +1,125 @@
# Docker Compose Deployment
The recommended way to deploy Compliance Scanner is with Docker Compose.
## Prerequisites
- Docker and Docker Compose installed
- At least 4 GB of available RAM
- Git repository access (tokens configured in `.env`)
## Quick Start
```bash
# Clone the repository
git clone <repo-url> compliance-scanner
cd compliance-scanner
# Configure environment
cp .env.example .env
# Edit .env with your MongoDB credentials, tokens, etc.
# Start all services
docker-compose up -d
```
## Services
The `docker-compose.yml` includes these services:
| Service | Port | Description |
|---------|------|-------------|
| `mongo` | 27017 | MongoDB database |
| `agent` | 3001, 3002 | Compliance agent (REST API + webhooks) |
| `dashboard` | 8080 | Web dashboard |
| `chromium` | 3003 | Headless browser for DAST crawling |
| `otel-collector` | 4317, 4318 | OpenTelemetry collector (optional) |
## Volumes
| Volume | Purpose |
|--------|---------|
| `mongo_data` | Persistent MongoDB data |
| `repos_data` | Cloned repository files |
## Checking Status
```bash
# View running services
docker-compose ps
# View logs
docker-compose logs -f agent
docker-compose logs -f dashboard
# Restart a service
docker-compose restart agent
```
## Accessing the Dashboard
Once running, open [http://localhost:8080](http://localhost:8080) in your browser.
If Keycloak authentication is configured, you'll be redirected to sign in. Otherwise, the dashboard is accessible directly.
## Updating
```bash
# Pull latest changes
git pull
# Rebuild and restart
docker-compose up -d --build
```
## Production Considerations
### MongoDB
For production, use a managed MongoDB instance or configure replication:
```bash
MONGODB_URI=mongodb+srv://user:pass@cluster.mongodb.net/compliance_scanner
```
### Reverse Proxy
Place the dashboard behind a reverse proxy (nginx, Caddy, Traefik) with TLS:
```nginx
server {
listen 443 ssl;
server_name compliance.example.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
location / {
proxy_pass http://localhost:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
### Resource Limits
Add resource limits to Docker Compose for production:
```yaml
services:
agent:
deploy:
resources:
limits:
memory: 2G
cpus: '2.0'
dashboard:
deploy:
resources:
limits:
memory: 512M
cpus: '1.0'
```


@@ -0,0 +1,93 @@
# Environment Variables
Complete reference for all environment variables. See [Configuration](/guide/configuration) for detailed descriptions of each variable.
## Required
```bash
# MongoDB connection
MONGODB_URI=mongodb://root:example@localhost:27017/compliance_scanner?authSource=admin
```
## Agent
```bash
AGENT_PORT=3001
SCAN_SCHEDULE=0 0 */6 * * *        # 6-field cron (sec min hour dom mon dow): every 6 hours
CVE_MONITOR_SCHEDULE=0 0 0 * * *   # daily at midnight
GIT_CLONE_BASE_PATH=/tmp/compliance-scanner/repos
MONGODB_DATABASE=compliance_scanner
```
## Dashboard
```bash
DASHBOARD_PORT=8080
AGENT_API_URL=http://localhost:3001
```
## LLM / AI
```bash
LITELLM_URL=http://localhost:4000
LITELLM_API_KEY=
LITELLM_MODEL=gpt-4o
LITELLM_EMBED_MODEL=text-embedding-3-small
```
## Git Providers
```bash
# GitHub
GITHUB_TOKEN=
GITHUB_WEBHOOK_SECRET=
# GitLab
GITLAB_URL=https://gitlab.com
GITLAB_TOKEN=
GITLAB_WEBHOOK_SECRET=
```
## Issue Trackers
```bash
# Jira
JIRA_URL=
JIRA_EMAIL=
JIRA_API_TOKEN=
JIRA_PROJECT_KEY=
```
## External Services
```bash
SEARXNG_URL=http://localhost:8888
NVD_API_KEY=
```
## Authentication
```bash
KEYCLOAK_URL=http://localhost:8080
KEYCLOAK_REALM=compliance
KEYCLOAK_CLIENT_ID=compliance-dashboard
REDIRECT_URI=http://localhost:8080/auth/callback
APP_URL=http://localhost:8080
```
## MCP Server
```bash
MONGODB_URI=mongodb://root:example@localhost:27017/compliance_scanner?authSource=admin
MONGODB_DATABASE=compliance_scanner
# Set to enable HTTP transport (omit for stdio)
MCP_PORT=8090
```
## Observability
```bash
# Set to enable OpenTelemetry export (omit to disable)
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
OTEL_SERVICE_NAME=compliance-agent
```

docs/deployment/keycloak.md

@@ -0,0 +1,104 @@
# Keycloak Authentication
Compliance Scanner supports Keycloak for SSO authentication. When configured, all dashboard access requires signing in through Keycloak, and all API endpoints are protected.
## How It Works
### Dashboard (OAuth2/OIDC)
The dashboard implements a standard OAuth2 Authorization Code flow with PKCE:
1. User visits the dashboard
2. If not authenticated, a login page is shown with a "Sign in with Keycloak" button
3. User is redirected to Keycloak's login page
4. After authentication, Keycloak redirects back with an authorization code
5. The dashboard exchanges the code for tokens and creates a session
6. All subsequent `/api/` server function calls require a valid session
### Agent API (JWT)
The agent API validates JWT Bearer tokens from Keycloak:
1. Dashboard (or other clients) include the access token in requests: `Authorization: Bearer <token>`
2. The agent fetches Keycloak's JWKS (JSON Web Key Set) to validate the token signature
3. Token expiry and claims are verified
4. The health endpoint (`/api/v1/health`) is always public
If `KEYCLOAK_URL` and `KEYCLOAK_REALM` are not set on the agent, JWT validation is disabled and all endpoints are open.
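The gating rule can be sketched as a pure check. The function names below are hypothetical, not the agent's actual API; they only mirror the documented behavior that validation is off unless both variables are set.

```rust
use std::env;

/// Hypothetical helper: JWT validation is active only when both
/// KEYCLOAK_URL and KEYCLOAK_REALM are present and non-empty.
fn jwt_validation_enabled(url: Option<&str>, realm: Option<&str>) -> bool {
    matches!((url, realm), (Some(u), Some(r)) if !u.is_empty() && !r.is_empty())
}

/// Hypothetical wrapper reading the actual environment.
fn jwt_validation_enabled_from_env() -> bool {
    let url = env::var("KEYCLOAK_URL").ok();
    let realm = env::var("KEYCLOAK_REALM").ok();
    jwt_validation_enabled(url.as_deref(), realm.as_deref())
}
```

Treating an empty string the same as an unset variable avoids a half-configured state where validation appears enabled but no realm can be resolved.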
## Keycloak Setup
### 1. Create a Realm
In the Keycloak admin console:
1. Create a new realm (e.g. `compliance`)
2. Note the realm name — you'll need it for `KEYCLOAK_REALM`
### 2. Create a Client
1. Go to **Clients** > **Create client**
2. Set:
- **Client ID**: `compliance-dashboard`
- **Client type**: OpenID Connect
- **Client authentication**: Off (public client)
3. Under **Settings**:
- **Valid redirect URIs**: `http://localhost:8080/auth/callback` (adjust for your domain)
- **Valid post logout redirect URIs**: `http://localhost:8080`
- **Web origins**: `http://localhost:8080`
### 3. Create Users
1. Go to **Users** > **Create user**
2. Set username, email, first name, last name
3. Under **Credentials**, set a password
## Environment Variables
```bash
# Keycloak server URL (no trailing slash)
KEYCLOAK_URL=http://localhost:8080
# Realm name
KEYCLOAK_REALM=compliance
# Client ID (must match the client created above)
KEYCLOAK_CLIENT_ID=compliance-dashboard
# OAuth callback URL (must match valid redirect URI in Keycloak)
REDIRECT_URI=http://localhost:8080/auth/callback
# Application root URL (used for post-logout redirect)
APP_URL=http://localhost:8080
```
## Dashboard Features
When authenticated, the dashboard shows:
- **User avatar** in the sidebar (from Keycloak profile picture, or initials)
- **User name** from Keycloak profile
- **Logout** link that clears the session and redirects through Keycloak's logout flow
## Session Configuration
Sessions use signed cookies with these defaults:
- **Expiry**: 24 hours of inactivity
- **SameSite**: Lax (required for Keycloak redirect flow)
- **Secure**: Disabled by default (enable behind HTTPS)
- **Storage**: In-memory (resets on server restart)
::: tip
For production, consider persisting sessions to Redis or a database so they survive server restarts.
:::
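The inactivity rule can be sketched like this; the `Session` type below is a stand-in, not the dashboard's actual session struct:

```rust
use std::time::{Duration, Instant};

/// Stand-in session record for illustration only.
struct Session {
    last_seen: Instant,
}

/// 24 hours of inactivity, per the defaults above.
const SESSION_TTL: Duration = Duration::from_secs(24 * 60 * 60);

impl Session {
    /// A session expires once the inactivity window is exceeded.
    fn is_expired(&self, now: Instant) -> bool {
        now.duration_since(self.last_seen) > SESSION_TTL
    }

    /// Any authenticated request refreshes the window.
    fn touch(&mut self, now: Instant) {
        self.last_seen = now;
    }
}
```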
## Running Without Keycloak
If no Keycloak variables are set:
- The **dashboard** serves without authentication (all pages accessible)
- The **agent API** accepts all requests without token validation
- A warning is logged: `Keycloak not configured - API endpoints are unprotected`
This is suitable for local development and testing.


@@ -0,0 +1,139 @@
# OpenTelemetry Observability
Compliance Scanner exports traces and logs via OpenTelemetry Protocol (OTLP) for integration with observability platforms like SigNoz, Grafana (Tempo + Loki), Jaeger, and others.
## Enabling
Set the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable to enable OTLP export:
```bash
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```
When this variable is not set, telemetry export is disabled and only console logging is active.
## What Is Exported
### Traces
Distributed traces for:
- HTTP request handling (via `tower-http` `TraceLayer`)
- Database operations
- Scan pipeline phases
- External API calls (LiteLLM, Keycloak, Git providers)
### Logs
All `tracing::info!`, `tracing::warn!`, `tracing::error!` log events are exported as OTel log records, including structured fields.
## Configuration
| Variable | Description | Default |
|----------|-------------|---------|
| `OTEL_EXPORTER_OTLP_ENDPOINT` | Collector gRPC endpoint | *(disabled)* |
| `OTEL_SERVICE_NAME` | Service name in traces | `compliance-agent` or `compliance-dashboard` |
| `RUST_LOG` | Log level filter | `info` |
## Docker Compose Setup
The included `docker-compose.yml` provides an OTel Collector service:
```yaml
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
ports:
- "4317:4317" # gRPC
- "4318:4318" # HTTP
volumes:
- ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
```
The agent and dashboard are pre-configured to send telemetry to the collector:
```yaml
agent:
environment:
OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4317
OTEL_SERVICE_NAME: compliance-agent
dashboard:
environment:
OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4317
OTEL_SERVICE_NAME: compliance-dashboard
```
## Collector Configuration
Edit `otel-collector-config.yaml` to configure your backend. The default exports to debug (stdout) only.
### SigNoz
```yaml
exporters:
otlp/signoz:
endpoint: "signoz-otel-collector:4317"
tls:
insecure: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [otlp/signoz]
logs:
receivers: [otlp]
processors: [batch]
exporters: [otlp/signoz]
```
### Grafana Tempo (Traces) + Loki (Logs)
```yaml
exporters:
otlp/tempo:
endpoint: "tempo:4317"
tls:
insecure: true
loki:
endpoint: "http://loki:3100/loki/api/v1/push"
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [otlp/tempo]
logs:
receivers: [otlp]
processors: [batch]
exporters: [loki]
```
### Jaeger
```yaml
exporters:
otlp/jaeger:
endpoint: "jaeger:4317"
tls:
insecure: true
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [otlp/jaeger]
```
## Verifying
After starting with telemetry enabled, look for this log on startup:
```
OpenTelemetry OTLP export enabled endpoint=http://otel-collector:4317 service=compliance-agent
```
If the endpoint is unreachable, the application still starts normally — telemetry export fails silently without affecting functionality.

docs/features/ai-chat.md

@@ -0,0 +1,79 @@
# AI Chat (RAG)
The AI Chat feature lets you ask natural language questions about your codebase. It uses Retrieval-Augmented Generation (RAG) to find relevant code and provide accurate, source-referenced answers.
## How It Works
1. **Code graph** is built for the repository (functions, classes, modules)
2. **Embeddings** are generated for each code symbol using an LLM embedding model
3. When you ask a question, your query is **embedded** and compared against code embeddings
4. The **top 8 most relevant** code snippets are retrieved
5. These snippets are sent as context to the LLM along with your question
6. The LLM generates a response **grounded in your actual code**
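Step 4 can be sketched as a brute-force cosine-similarity ranking. The vectors and symbol names below are toy data; the real pipeline retrieves embeddings stored for the repository's code graph.

```rust
/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Rank code chunks against the query embedding and keep the top k
/// (the chat feature uses k = 8).
fn top_k<'a>(query: &[f32], chunks: &'a [(&'a str, Vec<f32>)], k: usize) -> Vec<(&'a str, f32)> {
    let mut scored: Vec<_> = chunks
        .iter()
        .map(|(name, emb)| (*name, cosine(query, emb)))
        .collect();
    // Highest similarity first.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.truncate(k);
    scored
}
```

The relevance score shown under each response corresponds to this similarity value (0.0 to 1.0).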
## Getting Started
### 1. Select a Repository
Navigate to **AI Chat** in the sidebar. You'll see a grid of repository cards. Click one to open the chat interface.
### 2. Build Embeddings
Before chatting, you need to build embeddings for the repository:
1. Click **Build Embeddings**
2. Wait for the process to complete — a progress bar shows `X/Y chunks`
3. Once the status shows **Embeddings ready**, the chat input is enabled
::: info
Embedding builds require:
- A code graph already built for the repository (via the Graph feature)
- A configured embedding model (`LITELLM_EMBED_MODEL`)
The default model is `text-embedding-3-small`.
:::
### 3. Ask Questions
Type your question in the input area and press Enter (or click Send). Examples:
- "How does authentication work in this codebase?"
- "What functions handle database connections?"
- "Explain the error handling pattern used in this project"
- "Where are the API routes defined?"
- "What does the `process_scan` function do?"
## Understanding Responses
### Answer
The AI response is a natural language answer to your question, grounded in the actual source code of your repository.
### Source References
Below each response, you'll see source references showing exactly which code was used to generate the answer:
- **Symbol name** — The qualified name of the function/class/module
- **File path** — Where the code is located, with line range
- **Code snippet** — The first ~10 lines of the relevant code
- **Relevance score** — How closely the code matched your question (0.0 to 1.0)
## Conversation Context
The chat maintains conversation history within a session. You can ask follow-up questions that reference previous answers. The system sends the last 10 messages as context to maintain coherence.
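The windowing rule is simple to sketch, using plain strings in place of the real chat message type:

```rust
/// Only the most recent 10 messages are sent as context.
const CONTEXT_WINDOW: usize = 10;

/// Return the trailing window of the conversation history.
fn context_messages(history: &[String]) -> &[String] {
    let start = history.len().saturating_sub(CONTEXT_WINDOW);
    &history[start..]
}
```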
## Configuration
| Variable | Description | Default |
|----------|-------------|---------|
| `LITELLM_URL` | LiteLLM proxy URL | `http://localhost:4000` |
| `LITELLM_API_KEY` | API key for the LLM provider | — |
| `LITELLM_MODEL` | Model for chat responses | `gpt-4o` |
| `LITELLM_EMBED_MODEL` | Model for code embeddings | `text-embedding-3-small` |
## Tips
- **Be specific** — "How does the JWT validation middleware work?" is better than "Tell me about auth"
- **Reference filenames** — "What does `server.rs` do?" helps the retrieval find relevant code
- **Ask about patterns** — "What error handling pattern does this project use?" works well with RAG
- **Rebuild after changes** — If the repository has been updated significantly, rebuild embeddings to include new code

docs/features/dast.md

@@ -0,0 +1,112 @@
# DAST Scanning
DAST (Dynamic Application Security Testing) performs black-box security testing against live web applications and APIs. Unlike SAST, which analyzes source code, DAST tests the running application by sending crafted requests and analyzing responses.
## DAST Overview
Navigate to **DAST** in the sidebar to see the overview page with:
- Total DAST scans performed
- Total DAST findings discovered
- Number of active targets
- Recent scan run history with status, phase, and finding counts
## Managing Targets
Navigate to **DAST > Targets** to configure applications to test.
### Adding a Target
1. Enter a **target name** (descriptive label)
2. Enter the **base URL** (e.g. `https://staging.example.com`)
3. Click **Add Target**
### Target Configuration
Each target supports these settings:
| Setting | Description | Default |
|---------|-------------|---------|
| **Target Type** | WebApp, REST API, or GraphQL | WebApp |
| **Max Crawl Depth** | How many link levels to follow | 5 |
| **Rate Limit** | Maximum requests per second | 10 |
| **Destructive Tests** | Allow DELETE/PUT requests | No |
| **Excluded Paths** | URL paths to skip during testing | — |
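As a rough illustration of the **Rate Limit** setting, a minimal limiter can enforce requests-per-second as a minimum spacing between requests. The scanner's actual limiter may be implemented differently (e.g. as a token bucket); this only shows the setting's effect.

```rust
use std::time::{Duration, Instant};

/// Minimal spacing-based rate limiter sketch.
struct RateLimiter {
    min_interval: Duration,
    next_allowed: Instant,
}

impl RateLimiter {
    /// `requests_per_second` must be non-zero (the default is 10).
    fn new(requests_per_second: u32, now: Instant) -> Self {
        Self {
            min_interval: Duration::from_secs(1) / requests_per_second,
            next_allowed: now,
        }
    }

    /// Returns true if a request may be sent at `now`, reserving the slot.
    fn try_acquire(&mut self, now: Instant) -> bool {
        if now >= self.next_allowed {
            self.next_allowed = now + self.min_interval;
            true
        } else {
            false
        }
    }
}
```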
### Authentication
DAST supports authenticated scanning with multiple methods:
| Method | Configuration |
|--------|--------------|
| **None** | No authentication |
| **Basic** | Username and password (HTTP Basic Auth) |
| **Bearer** | Bearer token (Authorization header) |
| **Cookie** | Session cookie value |
| **Form** | Login URL, username field, password field, and credentials |
::: warning
Authenticated scans access more of the application surface. Only test applications you own or have explicit authorization to test.
:::
## Running a DAST Scan
Click the **Scan** button on any target row. The scan runs through these phases:
1. **Crawl** — Discovers pages, forms, and API endpoints by following links and analyzing JavaScript
2. **Test** — Sends attack payloads to discovered parameters
3. **Report** — Collects results and generates findings
The scan uses a headless Chromium browser (the `chromium` service in Docker Compose) for JavaScript rendering during crawling.
## DAST Scan Agents
The scanner includes specialized testing agents:
### API Fuzzer
Tests API endpoints with malformed inputs, boundary values, and injection payloads.
### XSS Scanner
Detects Cross-Site Scripting vulnerabilities by injecting script payloads into form fields, URL parameters, and headers.
### SSRF Scanner
Tests for Server-Side Request Forgery by injecting internal URLs and cloud metadata endpoints into parameters.
### Auth Bypass Scanner
Tests for authentication and authorization bypass by manipulating tokens, sessions, and access control headers.
## DAST Findings
Navigate to **DAST > Findings** to see all discovered vulnerabilities.
### Finding List
Each finding shows:
| Column | Description |
|--------|-------------|
| Severity | Critical, High, Medium, or Low |
| Type | Vulnerability category (SQL Injection, XSS, SSRF, etc.) |
| Title | Description of the vulnerability |
| Endpoint | The HTTP path that is vulnerable |
| Method | HTTP method (GET, POST, PUT, DELETE) |
| Exploitable | Whether the vulnerability was confirmed exploitable |
### Finding Detail
Click a finding to see full details:
- **Vulnerability type** and CWE identifier
- **Endpoint URL** and HTTP method
- **Parameter** that is vulnerable
- **Exploitability** — Confirmed or Unconfirmed
- **Description** — What the vulnerability is and why it matters
- **Remediation** — How to fix the issue
- **Evidence** — One or more request/response pairs showing:
- The crafted HTTP request (method, URL, headers)
- The payload that triggered the vulnerability
- The HTTP response status and relevant snippet
::: tip
Findings marked as **Confirmed** exploitable were verified by the scanner with a successful attack. **Unconfirmed** findings show suspicious behavior that may indicate a vulnerability but could not be fully exploited.
:::

docs/features/graph.md

@@ -0,0 +1,92 @@
# Code Knowledge Graph
The Code Knowledge Graph feature parses your repository source code and builds an interactive graph of symbols (functions, classes, modules) and their relationships (calls, imports, inheritance).
## Graph Index
Navigate to **Code Graph** in the sidebar to see all repositories. Click a repository card to open its graph explorer.
## Building a Graph
Before exploring, you need to build the graph:
1. Open the graph explorer for a repository
2. Click **Build Graph**
3. The agent parses all source files and constructs the graph
4. A spinner shows build progress
The graph builder supports these languages:
- Rust
- TypeScript
- JavaScript
- Python
## Graph Explorer
The graph explorer provides an interactive network visualization.
### Canvas
The main area renders an interactive network diagram using vis-network:
- **Nodes** represent code symbols (functions, classes, structs, enums, traits, modules, files)
- **Edges** represent relationships between symbols
- Nodes are **color-coded by community** — clusters of highly connected symbols detected using Louvain community detection
- Pan by dragging the background, zoom with scroll wheel
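The community coloring can be illustrated with a toy label-propagation pass. This is a simplified stand-in for the Louvain method the explorer actually uses, with hypothetical node names — it only shows the idea of grouping densely connected symbols:

```python
from collections import defaultdict

def label_propagation(edges, iterations=10):
    """Toy community detection: each node repeatedly adopts the most
    common label among its neighbours until labels stabilize."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    labels = {n: n for n in adj}  # every node starts in its own community
    for _ in range(iterations):
        changed = False
        for node in sorted(adj):
            counts = defaultdict(int)
            for neighbour in adj[node]:
                counts[labels[neighbour]] += 1
            # pick the most frequent neighbour label (ties broken alphabetically)
            best = max(sorted(counts), key=lambda lab: counts[lab])
            if labels[node] != best:
                labels[node] = best
                changed = True
        if not changed:
            break
    communities = defaultdict(set)
    for node, label in labels.items():
        communities[label].add(node)
    return list(communities.values())
```

On a graph with two disconnected triangles, this converges to two communities — the same kind of clustering that drives the node colors on the canvas.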
### Node Types
| Type | Description |
|------|-------------|
| Function | Standalone functions |
| Method | Methods on classes/structs |
| Class | Classes (TypeScript, Python) |
| Struct | Structs (Rust) |
| Enum | Enumerations |
| Interface | Interfaces (TypeScript) |
| Trait | Traits (Rust) |
| Module | Modules and namespaces |
| File | Source files |
### Edge Types
| Type | Description |
|------|-------------|
| Calls | Function/method invocation |
| Imports | Module or symbol import |
| Inherits | Class inheritance |
| Implements | Interface/trait implementation |
| Contains | Parent-child containment (module contains function) |
| TypeRef | Type reference or usage |
### Statistics
The statistics panel shows:
- Total node and edge count
- Number of detected communities
- Languages found in the repository
- File tree of the codebase
### Search
Search for symbols by name:
1. Type at least 2 characters in the search box
2. Matching symbols appear in a dropdown
3. Click a result to highlight it on the canvas and open the inspector
### Code Inspector
When you click a node (on the canvas or from search), the inspector panel shows:
- **Symbol name** and kind (function, class, etc.)
- **File path** with line range
- **Source code** excerpt from the file
- **Connected nodes** — what this symbol calls, what calls it, etc.
## Use Cases
- **Onboarding** — Understand unfamiliar codebase structure at a glance
- **Architecture review** — Identify tightly coupled modules and circular dependencies
- **Security** — Trace data flow from entry points to sensitive operations
- **Refactoring** — See what depends on code you plan to change


@@ -0,0 +1,42 @@
# Impact Analysis
Impact Analysis uses the Code Knowledge Graph to determine the blast radius of a security finding. When a vulnerability is found in a specific function or file, impact analysis traces the call graph to show everything that could be affected.
## Accessing Impact Analysis
Impact analysis is linked from the Graph Explorer. When viewing a repository's graph with findings, you can navigate to:
```
/graph/{repo_id}/impact/{finding_id}
```
## What You See
### Blast Radius
A count of the total number of code symbols (functions, methods, classes) affected by the vulnerability, both directly and transitively.
### Entry Points Affected
A list of **public entry points** — main functions, HTTP handlers, API endpoints — that could be impacted by the vulnerable code. These represent the ways an attacker could potentially reach the vulnerability.
### Call Chains
Complete call chain paths showing how execution flows from entry points through intermediate functions to the vulnerable code. Each chain shows the sequence of function calls.
### Direct Callers
The immediate functions that call the vulnerable function. These are the first layer of impact.
## How It Works
1. The finding's file path and line number are matched to a node in the code graph
2. The graph is traversed **backwards** along call edges to find all callers
3. Entry points (functions with no callers, or known patterns like `main`, HTTP handlers) are identified
4. All paths from entry points to the vulnerable node are computed
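The backward traversal in steps 2–4 amounts to a reverse breadth-first search over call edges. A minimal sketch, with hypothetical symbol names (the agent's real implementation works on the stored graph, not in-memory tuples):

```python
from collections import defaultdict, deque

def blast_radius(call_edges, vulnerable):
    """Given (caller, callee) edges, return every symbol that can
    transitively reach the vulnerable one, plus the entry points."""
    # Invert the call graph: callee -> set of callers
    callers = defaultdict(set)
    for caller, callee in call_edges:
        callers[callee].add(caller)
    # Reverse BFS from the vulnerable symbol
    affected, queue = set(), deque([vulnerable])
    while queue:
        for c in callers[queue.popleft()]:
            if c not in affected:
                affected.add(c)
                queue.append(c)
    # Entry points: affected symbols that nobody calls
    entry_points = {n for n in affected if not callers[n]}
    return affected, entry_points
```

For edges `main → handler → parse → vuln` and `cli → vuln`, the blast radius is all four callers and the entry points are `main` and `cli`.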
## Use Cases
- **Prioritization** — A critical vulnerability in a function called by 50 entry points is more urgent than one in dead code
- **Remediation scoping** — Understand what tests need to run after a fix
- **Risk assessment** — Quantify the actual exposure of a vulnerability

docs/features/issues.md Normal file

@@ -0,0 +1,72 @@
# Issue Tracker Integration
Compliance Scanner automatically creates issues in your existing issue trackers when new security findings are discovered. This integrates security into your development workflow without requiring teams to check a separate tool.
## Supported Trackers
| Tracker | Configuration Variables |
|---------|----------------------|
| **GitHub Issues** | `GITHUB_TOKEN` |
| **GitLab Issues** | `GITLAB_URL`, `GITLAB_TOKEN` |
| **Jira** | `JIRA_URL`, `JIRA_EMAIL`, `JIRA_API_TOKEN`, `JIRA_PROJECT_KEY` |
## How It Works
1. A scan discovers new findings
2. For each new finding, the agent checks if an issue already exists (by fingerprint)
3. If not, it creates an issue in the configured tracker with:
- Title matching the finding title
- Description with vulnerability details, severity, and file location
- Link back to the finding in the dashboard
4. The finding is updated with the external issue URL
## Viewing Issues
Navigate to **Issues** in the sidebar to see all tracker issues across your repositories.
The issues table shows:
| Column | Description |
|--------|-------------|
| Tracker | Badge showing GitHub, GitLab, or Jira |
| External ID | Issue number in the external system |
| Title | Issue title |
| Status | Open, Closed, or tracker-specific status |
| Created | When the issue was created |
| Link | Direct link to the issue in the external tracker |
Click the **Open** link to go directly to the issue in GitHub, GitLab, or Jira.
## Configuration
### GitHub
```bash
GITHUB_TOKEN=ghp_xxxx
```
Issues are created in the same repository that was scanned.
### GitLab
```bash
GITLAB_URL=https://gitlab.com
GITLAB_TOKEN=glpat-xxxx
```
Issues are created in the same project that was scanned.
### Jira
```bash
JIRA_URL=https://your-org.atlassian.net
JIRA_EMAIL=security-bot@example.com
JIRA_API_TOKEN=your-api-token
JIRA_PROJECT_KEY=SEC
```
All issues are created in the specified Jira project (`JIRA_PROJECT_KEY`).
::: tip
Use a dedicated service account for issue creation so that security findings are clearly attributed to automated scanning rather than individual team members.
:::

docs/features/mcp-server.md Normal file

@@ -0,0 +1,155 @@
# MCP Server
The Model Context Protocol (MCP) server exposes compliance data to external LLMs and AI agents. Any MCP-compatible client — such as Claude, Cursor, or a custom agent — can connect and query findings, SBOM data, and DAST results without direct database access.
## How It Works
The `compliance-mcp` crate runs as a standalone service that connects to the same MongoDB database as the agent and dashboard. It registers a set of **tools** that LLM clients can discover and call through the MCP protocol.
```
LLM Client ──MCP──▶ compliance-mcp ──MongoDB──▶ compliance_scanner DB
```
The server supports two transport modes:
| Transport | Use Case | How to Enable |
|-----------|----------|---------------|
| **Stdio** | Local development, piped to a CLI tool | Default (no `MCP_PORT` set) |
| **Streamable HTTP** | Remote deployment, multiple clients | Set `MCP_PORT=8090` |
## Available Tools
The MCP server exposes seven tools:
### Findings
| Tool | Description |
|------|-------------|
| `list_findings` | Query findings with optional filters for repository, severity, status, and scan type. Returns up to 200 results (default 50). |
| `get_finding` | Retrieve a single finding by its MongoDB ObjectId. |
| `findings_summary` | Get finding counts grouped by severity and status, optionally filtered by repository. |
### SBOM
| Tool | Description |
|------|-------------|
| `list_sbom_packages` | List SBOM packages with filters for repository, vulnerabilities, package manager, and license. |
| `sbom_vuln_report` | Generate a vulnerability report for a repository showing all packages with known CVEs. |
### DAST
| Tool | Description |
|------|-------------|
| `list_dast_findings` | Query DAST findings with filters for target, scan run, severity, exploitability, and vulnerability type. |
| `dast_scan_summary` | Get a summary of recent DAST scan runs and finding counts. |
## Running Locally
### Stdio Mode
Run the MCP server directly — it reads from stdin and writes to stdout:
```bash
cd compliance-mcp
cargo run
```
Configure your MCP client to launch it as a subprocess. For example, in a Claude Code `mcp.json`:
```json
{
"mcpServers": {
"compliance": {
"command": "cargo",
"args": ["run", "-p", "compliance-mcp"],
"cwd": "/path/to/compliance-scanner"
}
}
}
```
### HTTP Mode
Set `MCP_PORT` to start the Streamable HTTP server:
```bash
MCP_PORT=8090 cargo run -p compliance-mcp
```
The server listens on `http://0.0.0.0:8090/mcp`. Point your MCP client to this endpoint.
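Under the hood, MCP messages are JSON-RPC 2.0. A hedged sketch of what a `tools/call` request body looks like — the exact framing and session handling are your MCP client's job, so this is illustrative only:

```python
import json

def mcp_tool_call(tool, arguments, request_id=1):
    """Build the JSON-RPC 2.0 envelope MCP uses for tool invocation."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

payload = mcp_tool_call("list_findings", {"severity": "critical", "limit": 10})
body = json.dumps(payload)  # what ultimately goes over stdio or HTTP
```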
## Configuration
| Variable | Description | Default |
|----------|-------------|---------|
| `MONGODB_URI` | MongoDB connection string | `mongodb://localhost:27017` |
| `MONGODB_DATABASE` | Database name | `compliance_scanner` |
| `MCP_PORT` | Port for HTTP transport (omit for stdio) | — |
| `RUST_LOG` | Log level filter | `compliance_mcp=info` |
Create a `.env` file in the project root or set these as environment variables.
## Deploying with Docker
The `Dockerfile.mcp` builds and runs the MCP server in HTTP mode on port 8090.
```bash
docker build -f Dockerfile.mcp -t compliance-mcp .
docker run -p 8090:8090 \
-e MONGODB_URI=mongodb://mongo:27017 \
-e MONGODB_DATABASE=compliance_scanner \
-e MCP_PORT=8090 \
compliance-mcp
```
### Coolify Deployment
1. Create a new service in your Coolify project
2. Set the **Dockerfile path** to `Dockerfile.mcp`
3. Set the **exposed port** to `8090`
4. Add environment variables: `MONGODB_URI`, `MONGODB_DATABASE`, `MCP_PORT=8090`
5. The MCP endpoint will be available at your configured domain under `/mcp`
The CI pipeline automatically deploys on changes to `compliance-core/`, `compliance-mcp/`, `Dockerfile.mcp`, or `Cargo.toml`/`Cargo.lock`. Add the `COOLIFY_WEBHOOK_MCP` secret to your Gitea repository.
## Managing MCP Servers in the Dashboard
Navigate to **MCP Servers** in the dashboard sidebar to:
- **Register** MCP server instances with their endpoint URL, transport type, port, and database connection
- **View** server configuration, enabled tools, and status
- **Manage access tokens** — reveal, copy, or regenerate bearer tokens for authentication
- **Delete** servers that are no longer needed
Each registered server is assigned a random access token on creation. Use this token in your MCP client configuration for authenticated access.
## Example: Querying Findings from an LLM
Once connected, an LLM can call any of the registered tools. For example:
**"Show me all critical findings"** triggers `list_findings` with `severity: "critical"`:
```json
{
"tool": "list_findings",
"arguments": {
"severity": "critical",
"limit": 10
}
}
```
**"What vulnerable packages does repo X have?"** triggers `sbom_vuln_report`:
```json
{
"tool": "sbom_vuln_report",
"arguments": {
"repo_id": "683abc..."
}
}
```
::: tip
The MCP server is read-only — it only queries data from MongoDB. It cannot modify findings, trigger scans, or change configuration. This makes it safe to expose to external LLM clients.
:::

docs/features/overview.md Normal file

@@ -0,0 +1,35 @@
# Dashboard Overview
The Overview page is the landing page of the Compliance Scanner dashboard. It gives you a high-level view of your security posture across all tracked repositories.
## Statistics
The top section displays key metrics:
| Metric | Description |
|--------|-------------|
| **Repositories** | Total number of tracked repositories |
| **Total Findings** | Combined count of all security findings |
| **Critical** | Findings with critical severity |
| **High** | Findings with high severity |
| **Medium** | Findings with medium severity |
| **Low** | Findings with low severity |
| **Dependencies** | Total SBOM entries across all repositories |
| **CVE Alerts** | Active CVE alerts from dependency monitoring |
| **Tracker Issues** | Issues created in external trackers (GitHub, GitLab, Jira) |
## Severity Distribution
A visual bar chart shows the distribution of findings by severity level, giving you an immediate sense of your risk profile.
## Recent Scan Runs
The bottom section lists the 10 most recent scan runs across all repositories, showing:
- Repository name
- Scan status (queued, running, completed, failed)
- Current phase
- Number of findings discovered
- Timestamp
This helps you monitor scanning activity and quickly spot failures.

docs/features/sbom.md Normal file

@@ -0,0 +1,106 @@
# SBOM & License Compliance
The SBOM (Software Bill of Materials) feature provides a complete inventory of all dependencies across your repositories, with vulnerability tracking and license compliance analysis.
The SBOM page has three tabs: **Packages**, **License Compliance**, and **Compare**.
## Packages Tab
The packages tab lists all dependencies discovered during scans.
### Filtering
Use the filter bar to narrow results:
- **Repository** — Select a specific repository or view all
- **Package Manager** — npm, cargo, pip, go, maven, nuget, composer, gem
- **Search** — Filter by package name
- **Vulnerabilities** — Show all packages, only those with vulnerabilities, or only clean packages
- **License** — Filter by specific license (MIT, Apache-2.0, BSD-3-Clause, GPL-3.0, etc.)
### Package Details
Each package row shows:
| Column | Description |
|--------|-------------|
| Package | Package name |
| Version | Installed version |
| Manager | Package manager (npm, cargo, pip, etc.) |
| License | License identifier with color-coded badge |
| Vulnerabilities | Count of known vulnerabilities (click to expand) |
### Vulnerability Details
Click the vulnerability count to expand inline details showing:
- Vulnerability ID (e.g. CVE-2024-1234)
- Source database
- Severity level
- Link to the advisory
### Export
Export your SBOM in industry-standard formats:
1. Select a format:
- **CycloneDX 1.5** — JSON format widely supported by security tools
- **SPDX 2.3** — Linux Foundation standard for license compliance
2. Click **Export**
3. The SBOM downloads as a JSON file
::: tip
SBOM exports are useful for compliance audits, customer security questionnaires, and supply chain transparency requirements.
:::
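For a sense of what a CycloneDX export contains, here is a minimal sketch of the document shape. This is illustrative only — real exports carry far more metadata, and the exporter determines the actual field set:

```python
def cyclonedx_skeleton(packages):
    """Minimal CycloneDX 1.5 JSON shape for a list of
    {name, version, manager} package dicts."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {
                "type": "library",
                "name": p["name"],
                "version": p["version"],
                # package URL: pkg:<ecosystem>/<name>@<version>
                "purl": f"pkg:{p['manager']}/{p['name']}@{p['version']}",
            }
            for p in packages
        ],
    }
```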
## License Compliance Tab
The license compliance tab helps you understand your licensing obligations.
### Copyleft Warning
If any dependencies use copyleft licenses (GPL, AGPL, LGPL, MPL), a warning banner appears listing the affected packages and noting that they may impose distribution requirements.
### License Distribution
A horizontal bar chart visualizes the percentage breakdown of licenses across your dependencies.
### License Table
A detailed table lists every license found, with:
| Column | Description |
|--------|-------------|
| License | License identifier |
| Type | **Copyleft** or **Permissive** badge |
| Packages | List of packages using this license |
| Count | Number of packages |
**Copyleft licenses** (flagged as potentially restrictive):
- GPL-2.0, GPL-3.0
- AGPL-3.0
- LGPL-2.1, LGPL-3.0
- MPL-2.0
**Permissive licenses** (generally safe for commercial use):
- MIT, Apache-2.0, BSD-2-Clause, BSD-3-Clause, ISC, etc.
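The copyleft/permissive split can be expressed as a simple lookup mirroring the lists above. This is a sketch — the scanner's real classification may cover more SPDX identifiers and compound license expressions:

```python
COPYLEFT = {"GPL-2.0", "GPL-3.0", "AGPL-3.0", "LGPL-2.1", "LGPL-3.0", "MPL-2.0"}

def classify_license(spdx_id):
    """Everything outside the copyleft set is treated as permissive here;
    real SPDX expressions (e.g. "MIT OR GPL-3.0") need fuller parsing."""
    return "Copyleft" if spdx_id in COPYLEFT else "Permissive"
```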
## Compare Tab
Compare the dependency profiles of two repositories side by side.
1. Select **Repository A** from the first dropdown
2. Select **Repository B** from the second dropdown
3. View the diff results:
| Section | Description |
|---------|-------------|
| **Only in A** | Packages present in repo A but not in repo B |
| **Only in B** | Packages present in repo B but not in repo A |
| **Version Diffs** | Same package, different versions between repos |
| **Common** | Count of packages that match exactly |
This is useful for:
- Auditing consistency across microservices
- Identifying dependency drift between environments
- Planning dependency upgrades across projects
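The diff itself is straightforward set arithmetic over `name -> version` maps. A sketch of the comparison (function and field names are hypothetical, not the dashboard's internals):

```python
def compare_sboms(packages_a, packages_b):
    """Diff two SBOMs given as {package_name: version} dicts."""
    only_a = sorted(set(packages_a) - set(packages_b))
    only_b = sorted(set(packages_b) - set(packages_a))
    shared = set(packages_a) & set(packages_b)
    version_diffs = {
        name: (packages_a[name], packages_b[name])
        for name in sorted(shared)
        if packages_a[name] != packages_b[name]
    }
    common = sum(1 for n in shared if packages_a[n] == packages_b[n])
    return {"only_in_a": only_a, "only_in_b": only_b,
            "version_diffs": version_diffs, "common": common}
```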

docs/guide/configuration.md Normal file

@@ -0,0 +1,153 @@
# Configuration
Compliance Scanner is configured through environment variables. Copy `.env.example` to `.env` and edit the values.
## Required Settings
### MongoDB
```bash
MONGODB_URI=mongodb://root:example@localhost:27017/compliance_scanner?authSource=admin
MONGODB_DATABASE=compliance_scanner
```
### Agent
```bash
AGENT_PORT=3001
```
### Dashboard
```bash
DASHBOARD_PORT=8080
AGENT_API_URL=http://localhost:3001
```
## LLM Configuration
The AI features (chat, remediation suggestions) use LiteLLM as a proxy to various LLM providers:
```bash
LITELLM_URL=http://localhost:4000
LITELLM_API_KEY=your-key
LITELLM_MODEL=gpt-4o
LITELLM_EMBED_MODEL=text-embedding-3-small
```
The embed model is used for the RAG/AI Chat feature to generate code embeddings.
## Git Provider Tokens
### GitHub
```bash
GITHUB_TOKEN=ghp_xxxx
GITHUB_WEBHOOK_SECRET=your-webhook-secret
```
### GitLab
```bash
GITLAB_URL=https://gitlab.com
GITLAB_TOKEN=glpat-xxxx
GITLAB_WEBHOOK_SECRET=your-webhook-secret
```
## Issue Tracker Integration
### Jira
```bash
JIRA_URL=https://your-org.atlassian.net
JIRA_EMAIL=user@example.com
JIRA_API_TOKEN=your-api-token
JIRA_PROJECT_KEY=SEC
```
When configured, new findings automatically create Jira issues in the specified project.
## Scan Schedules
Cron expressions for automated scanning:
```bash
# Scan every 6 hours
SCAN_SCHEDULE=0 0 */6 * * *
# Check for new CVEs daily at midnight
CVE_MONITOR_SCHEDULE=0 0 0 * * *
```
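Note these are six-field cron expressions with a leading seconds field, unlike classic five-field crontab — `0 0 */6 * * *` means second 0, minute 0, every sixth hour. A quick sanity check of the format (a hypothetical helper, not part of the agent):

```python
def parse_cron6(expr):
    """Split a six-field cron expression:
    second minute hour day-of-month month day-of-week."""
    fields = expr.split()
    if len(fields) != 6:
        raise ValueError("expected 6 fields (sec min hour dom mon dow)")
    return dict(zip(["second", "minute", "hour", "dom", "month", "dow"], fields))
```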
## Search Engine
SearXNG is used for CVE enrichment and vulnerability research:
```bash
SEARXNG_URL=http://localhost:8888
```
## NVD API
An NVD API key increases rate limits for CVE lookups:
```bash
NVD_API_KEY=your-nvd-api-key
```
Get a free key at [https://nvd.nist.gov/developers/request-an-api-key](https://nvd.nist.gov/developers/request-an-api-key).
## MCP Server
The MCP server exposes compliance data to external LLMs via the Model Context Protocol. See [MCP Server](/features/mcp-server) for full details.
```bash
# Set MCP_PORT to enable HTTP transport (omit for stdio mode)
MCP_PORT=8090
```
The MCP server shares the `MONGODB_URI` and `MONGODB_DATABASE` variables with the rest of the platform.
## Clone Path
Where the agent stores cloned repository files:
```bash
GIT_CLONE_BASE_PATH=/tmp/compliance-scanner/repos
```
## All Environment Variables
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `MONGODB_URI` | Yes | — | MongoDB connection string |
| `MONGODB_DATABASE` | No | `compliance_scanner` | Database name |
| `AGENT_PORT` | No | `3001` | Agent REST API port |
| `DASHBOARD_PORT` | No | `8080` | Dashboard web UI port |
| `AGENT_API_URL` | No | `http://localhost:3001` | Agent URL for dashboard |
| `LITELLM_URL` | No | `http://localhost:4000` | LiteLLM proxy URL |
| `LITELLM_API_KEY` | No | — | LiteLLM API key |
| `LITELLM_MODEL` | No | `gpt-4o` | LLM model for analysis |
| `LITELLM_EMBED_MODEL` | No | `text-embedding-3-small` | Embedding model for RAG |
| `GITHUB_TOKEN` | No | — | GitHub personal access token |
| `GITHUB_WEBHOOK_SECRET` | No | — | GitHub webhook signing secret |
| `GITLAB_URL` | No | `https://gitlab.com` | GitLab instance URL |
| `GITLAB_TOKEN` | No | — | GitLab access token |
| `GITLAB_WEBHOOK_SECRET` | No | — | GitLab webhook signing secret |
| `JIRA_URL` | No | — | Jira instance URL |
| `JIRA_EMAIL` | No | — | Jira account email |
| `JIRA_API_TOKEN` | No | — | Jira API token |
| `JIRA_PROJECT_KEY` | No | — | Jira project key for issues |
| `SEARXNG_URL` | No | `http://localhost:8888` | SearXNG instance URL |
| `NVD_API_KEY` | No | — | NVD API key for CVE lookups |
| `SCAN_SCHEDULE` | No | `0 0 */6 * * *` | Cron schedule for scans |
| `CVE_MONITOR_SCHEDULE` | No | `0 0 0 * * *` | Cron schedule for CVE checks |
| `GIT_CLONE_BASE_PATH` | No | `/tmp/compliance-scanner/repos` | Local clone directory |
| `KEYCLOAK_URL` | No | — | Keycloak server URL |
| `KEYCLOAK_REALM` | No | — | Keycloak realm name |
| `KEYCLOAK_CLIENT_ID` | No | — | Keycloak client ID |
| `REDIRECT_URI` | No | — | OAuth callback URL |
| `APP_URL` | No | — | Application root URL |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | No | — | OTLP collector endpoint |
| `OTEL_SERVICE_NAME` | No | — | OpenTelemetry service name |
| `MCP_PORT` | No | — | MCP HTTP transport port (omit for stdio) |

docs/guide/findings.md Normal file

@@ -0,0 +1,75 @@
# Managing Findings
Findings are security issues discovered during scans. The findings workflow lets you triage, track, and resolve vulnerabilities across all your repositories.
## Findings List
Navigate to **Findings** in the sidebar to see all findings. The table shows:
| Column | Description |
|--------|-------------|
| Severity | Color-coded badge: Critical (red), High (orange), Medium (yellow), Low (green) |
| Title | Short description of the vulnerability (clickable) |
| Type | SAST, SBOM, CVE, GDPR, or OAuth |
| Scanner | Tool that found the issue (e.g. semgrep, syft) |
| File | Source file path where the issue was found |
| Status | Current triage status |
## Filtering
Use the filter bar at the top to narrow results:
- **Repository** — Filter to a specific repository or view all
- **Severity** — Critical, High, Medium, Low, or Info
- **Type** — SAST, SBOM, CVE, GDPR, OAuth
- **Status** — Open, Triaged, Resolved, False Positive, Ignored
Filters can be combined. Results are paginated with 20 findings per page.
## Finding Detail
Click any finding title to view its full detail page, which includes:
### Metadata
- Severity level with CWE identifier and CVSS score (when available)
- Scanner tool and scan type
- File path and line number
### Description
Full explanation of the vulnerability, why it's a risk, and what conditions trigger it.
### Code Evidence
The source code snippet where the issue was found, with syntax highlighting and the file path.
### Remediation
Step-by-step guidance on how to fix the vulnerability.
### Suggested Fix
A code example showing the corrected implementation.
### Linked Issue
If the finding was pushed to an issue tracker (GitHub, GitLab, Jira), a direct link to the external issue.
## Updating Status
On the finding detail page, change the finding's status using the status buttons:
| Status | When to Use |
|--------|-------------|
| **Open** | New finding, not yet reviewed |
| **Triaged** | Reviewed and confirmed as a real issue, pending fix |
| **Resolved** | Fix has been applied |
| **False Positive** | Finding is not a real vulnerability in this context |
| **Ignored** | Known issue that won't be fixed (accepted risk) |
Status changes are persisted immediately.
## Severity Levels
| Severity | Description | Typical Examples |
|----------|-------------|-----------------|
| **Critical** | Immediate exploitation risk, data breach likely | SQL injection, RCE, hardcoded secrets |
| **High** | Serious vulnerability, exploitation probable | XSS, authentication bypass, SSRF |
| **Medium** | Moderate risk, exploitation requires specific conditions | Insecure deserialization, weak crypto |
| **Low** | Minor risk, limited impact | Information disclosure, verbose errors |
| **Info** | Informational, no direct security impact | Best practice recommendations |


@@ -0,0 +1,55 @@
# Getting Started
Compliance Scanner is a security compliance platform that scans your Git repositories for vulnerabilities, builds software bills of materials, performs dynamic application testing, and provides AI-powered code intelligence.
## Architecture
The platform consists of three main components:
- **Agent** — Background service that clones repositories, runs scans, builds graphs, and exposes a REST API
- **Dashboard** — Web UI built with Dioxus (Rust full-stack framework) for viewing results and managing repositories
- **MongoDB** — Database for storing all scan results, findings, SBOM data, and graph structures
## Quick Start with Docker Compose
The fastest way to get running:
```bash
# Clone the repository
git clone <repo-url> compliance-scanner
cd compliance-scanner
# Copy and configure environment variables
cp .env.example .env
# Edit .env with your settings (see Configuration)
# Start all services
docker-compose up -d
```
This starts:
- MongoDB on port `27017`
- Agent API on port `3001`
- Dashboard on port `8080`
- Chromium (for DAST crawling) on port `3003`
Open the dashboard at [http://localhost:8080](http://localhost:8080).
## What Happens During a Scan
When you add a repository and trigger a scan, the agent runs through these phases:
1. **Clone** — Clones or pulls the latest code from the Git remote
2. **SAST** — Runs static analysis using Semgrep with rules for OWASP, GDPR, OAuth, and general security
3. **SBOM** — Extracts all dependencies using Syft, identifying packages, versions, licenses, and known vulnerabilities
4. **CVE Check** — Cross-references dependencies against the NVD database for known CVEs
5. **Graph Build** — Parses the codebase to construct a code knowledge graph of functions, classes, and their relationships
6. **Issue Sync** — Creates or updates issues in connected trackers (GitHub, GitLab, Jira) for new findings
Each phase produces results visible in the dashboard immediately.
## Next Steps
- [Add your first repository](/guide/repositories)
- [Understand scan results](/guide/findings)
- [Configure integrations](/guide/configuration)


@@ -0,0 +1,62 @@
# Adding Repositories
Repositories are the core resource in Compliance Scanner. Each tracked repository is scanned on a schedule and its results are available across all features.
## Adding a Repository
1. Navigate to **Repositories** in the sidebar
2. Click **Add Repository** at the top of the page
3. Fill in the form:
- **Name** — A display name for the repository
- **Git URL** — The clone URL (HTTPS or SSH), e.g. `https://github.com/org/repo.git`
- **Default Branch** — The branch to scan, e.g. `main` or `master`
4. Click **Add**
The repository appears in the list immediately. It will not be scanned until you trigger a scan manually or the next scheduled scan runs.
::: tip
For private repositories, configure a GitHub token (`GITHUB_TOKEN`) or GitLab token (`GITLAB_TOKEN`) in your environment. The agent uses these tokens when cloning.
:::
## Repository List
The repositories page shows all tracked repositories with:
| Column | Description |
|--------|-------------|
| Name | Repository display name |
| Git URL | Clone URL |
| Branch | Default branch being scanned |
| Findings | Total number of security findings |
| Last Scanned | Relative timestamp of the most recent scan |
## Triggering a Scan
Click the **Scan** button on any repository row to trigger an immediate scan. The scan runs in the background through all phases (clone, SAST, SBOM, CVE, graph). You can monitor progress on the Overview page under recent scan runs.
## Deleting a Repository
Click the **Delete** button on a repository row. A confirmation dialog appears warning that this action permanently removes:
- All security findings
- SBOM entries and vulnerability data
- Scan run history
- Code graph data
- Embedding vectors (for AI chat)
- CVE alerts
This action cannot be undone.
## Automatic Scanning
Repositories are scanned automatically on a schedule configured by the `SCAN_SCHEDULE` environment variable (cron format). The default is every 6 hours:
```
SCAN_SCHEDULE=0 0 */6 * * *
```
CVE monitoring runs on a separate schedule (default: daily at midnight):
```
CVE_MONITOR_SCHEDULE=0 0 0 * * *
```

docs/guide/scanning.md Normal file

@@ -0,0 +1,83 @@
# Running Scans
Scans are the primary workflow in Compliance Scanner. Each scan analyzes a repository for security vulnerabilities, dependency risks, and code structure.
## Scan Types
A full scan consists of multiple phases, each producing different types of findings:
| Scan Type | What It Detects | Scanner |
|-----------|----------------|---------|
| **SAST** | Code-level vulnerabilities (injection, XSS, insecure crypto, etc.) | Semgrep |
| **SBOM** | Dependency inventory, outdated packages, known vulnerabilities | Syft |
| **CVE** | Known CVEs in dependencies cross-referenced against NVD | NVD API |
| **GDPR** | Personal data handling issues, consent violations | Custom rules |
| **OAuth** | OAuth/OIDC misconfigurations, insecure token handling | Custom rules |
## Triggering a Scan
### Manual Scan
1. Go to **Repositories**
2. Click **Scan** on the repository you want to scan
3. The scan starts immediately in the background
### Scheduled Scans
Scans run automatically based on the `SCAN_SCHEDULE` cron expression. The default scans every 6 hours:
```
SCAN_SCHEDULE=0 0 */6 * * *
```
### Webhook-Triggered Scans
Configure GitHub or GitLab webhooks to trigger scans on push events. Set the webhook URL to:
```
http://<agent-host>:3002/webhook/github
http://<agent-host>:3002/webhook/gitlab
```
And configure the corresponding webhook secret:
```
GITHUB_WEBHOOK_SECRET=your-secret
GITLAB_WEBHOOK_SECRET=your-secret
```
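On the GitHub side, push payloads are signed with HMAC-SHA256 and delivered in the `X-Hub-Signature-256` header. Verification looks roughly like this — a sketch of the standard scheme, not the agent's exact code:

```python
import hashlib
import hmac

def verify_github_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """GitHub sends: X-Hub-Signature-256: sha256=<hex HMAC of the raw body>."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, signature_header)
```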
## Scan Phases
Each scan progresses through these phases in order:
1. **Queued** — Scan is waiting to start
2. **Cloning** — Repository is being cloned or updated
3. **Scanning** — Static analysis and SBOM extraction are running
4. **Analyzing** — CVE cross-referencing and graph construction
5. **Reporting** — Creating tracker issues for new findings
6. **Completed** — All phases finished successfully
If any phase fails, the scan status is set to **Failed** with an error message.
## Viewing Scan History
The Overview page shows the 10 most recent scan runs across all repositories, including:
- Repository name
- Scan status
- Current phase
- Number of findings discovered
- Start time and duration
## Scan Run Statuses
| Status | Meaning |
|--------|---------|
| `queued` | Waiting to start |
| `running` | Currently executing |
| `completed` | Finished successfully |
| `failed` | Stopped due to an error |
## Deduplication
Findings are deduplicated using a fingerprint hash based on the scanner, file path, line number, and vulnerability type. Repeated scans will not create duplicate findings for the same issue.
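A fingerprint along these lines is just a stable hash of the identifying fields — illustrative only; the agent's actual key construction may differ:

```python
import hashlib

def finding_fingerprint(scanner, file_path, line, vuln_type):
    """Same inputs always produce the same fingerprint, so a re-scan
    matches the existing finding instead of creating a duplicate."""
    key = f"{scanner}|{file_path}|{line}|{vuln_type}"
    return hashlib.sha256(key.encode()).hexdigest()
```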

docs/index.md Normal file

@@ -0,0 +1,29 @@
---
layout: home
hero:
name: Compliance Scanner
text: AI-Powered Security Compliance
tagline: Automated SAST, SBOM, DAST, CVE monitoring, and code intelligence for your repositories
actions:
- theme: brand
text: Get Started
link: /guide/getting-started
- theme: alt
text: Features
link: /features/overview
features:
- title: Static Analysis (SAST)
details: Automated security scanning with Semgrep, detecting vulnerabilities across multiple languages including OWASP patterns, GDPR issues, and OAuth misconfigurations.
- title: SBOM & License Compliance
details: Full software bill of materials with dependency inventory, vulnerability tracking, license compliance analysis, and export to CycloneDX/SPDX formats.
- title: Dynamic Testing (DAST)
details: Black-box security testing of live web applications and APIs. Crawls endpoints, fuzzes parameters, and detects SQL injection, XSS, SSRF, and auth bypass vulnerabilities.
- title: Code Knowledge Graph
details: Interactive visualization of your codebase structure. Understand function calls, class hierarchies, and module dependencies with community detection.
- title: Impact Analysis
details: When a vulnerability is found, see exactly which entry points and call chains are affected. Understand blast radius before prioritizing fixes.
- title: AI-Powered Chat
details: Ask questions about your codebase using RAG-powered AI. Code is embedded as vectors and retrieved contextually to give accurate, source-referenced answers.
---

docs/nginx.conf Normal file

@@ -0,0 +1,15 @@
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
index index.html;
location / {
try_files $uri $uri/ $uri.html /index.html;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
}

docs/package-lock.json generated Normal file
File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff