3 Commits

Sharang Parnerkar
bf5a45c958 Add docs link in sidebar, Dockerfile for VitePress, make Keycloak optional
All checks were successful (Format 3s; Clippy 3m7s push / 3m5s pull_request; Tests and Security Audit skipped)
- Add external "Docs" link in sidebar with configurable DOCS_URL
- Add Dockerfile.docs for VitePress static site (nginx)
- Add DOCS_URL build arg to Dockerfile.dashboard
- Run cargo fmt

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 14:52:51 +01:00
Sharang Parnerkar
b8b0f13d8d Make Keycloak authentication optional for local development
Some checks failed (Format failing after 2s on push and pull_request; Clippy successful, ~3m; Tests and Security Audit skipped)
When KEYCLOAK_URL is not set, the dashboard runs without auth,
treating all users as authenticated "Local User". Auth middleware
and check-auth endpoint gracefully skip when Keycloak is unconfigured.
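The fallback described above can be sketched in isolation. This is an illustrative, dependency-free model only: `AuthUser` and `resolve_user` are hypothetical names for this sketch, not the dashboard's real middleware API.

```rust
/// Hypothetical stand-in for the dashboard's authenticated-user value.
#[derive(Debug, PartialEq)]
struct AuthUser {
    name: String,
}

/// `keycloak_url` mirrors the `KEYCLOAK_URL` env var; `session_user` is
/// whatever the session store yielded for the current request.
fn resolve_user(keycloak_url: Option<&str>, session_user: Option<&str>) -> Option<AuthUser> {
    match keycloak_url {
        // Keycloak configured: only a real session yields a user.
        Some(url) if !url.is_empty() => session_user.map(|n| AuthUser {
            name: n.to_string(),
        }),
        // Keycloak unconfigured: skip auth and treat the caller as "Local User".
        _ => Some(AuthUser {
            name: "Local User".to_string(),
        }),
    }
}

fn main() {
    // With KEYCLOAK_URL unset, every request resolves to the local fallback user.
    let url = std::env::var("KEYCLOAK_URL").ok();
    if let Some(user) = resolve_user(url.as_deref(), None) {
        println!("{}", user.name);
    }
}
```

The key property is that the unconfigured branch never returns `None`, which is why the check-auth endpoint can "gracefully skip" instead of redirecting to a login flow.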

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 14:32:29 +01:00
Sharang Parnerkar
94552d1626 Add VitePress documentation site with complete user guides
All checks were successful (Format 3s; Clippy 3m13s; Tests and Security Audit skipped)
Covers getting started, repositories, scanning, findings, configuration,
SBOM, code graph, impact analysis, DAST, AI chat, issue tracker integration,
Docker deployment, environment variables, Keycloak auth, and OpenTelemetry.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 01:18:58 +01:00
34 changed files with 84 additions and 2094 deletions


@@ -70,8 +70,6 @@ jobs:
        run: cargo clippy -p compliance-dashboard --features server --no-default-features -- -D warnings
      - name: Clippy (dashboard web)
        run: cargo clippy -p compliance-dashboard --features web --no-default-features -- -D warnings
-     - name: Clippy (mcp)
-       run: cargo clippy -p compliance-mcp -- -D warnings
      - name: Show sccache stats
        run: sccache --show-stats
        if: always()
@@ -126,119 +124,3 @@ jobs:
      - name: Show sccache stats
        run: sccache --show-stats
        if: always()
-
-  # ---------------------------------------------------------------------------
-  # Stage 3: Deploy (only on main, after tests pass)
-  # Each service only deploys when its relevant files changed.
-  # ---------------------------------------------------------------------------
-  detect-changes:
-    name: Detect Changes
-    runs-on: docker
-    if: github.ref == 'refs/heads/main'
-    needs: [test]
-    container:
-      image: alpine:latest
-    outputs:
-      agent: ${{ steps.changes.outputs.agent }}
-      dashboard: ${{ steps.changes.outputs.dashboard }}
-      docs: ${{ steps.changes.outputs.docs }}
-      mcp: ${{ steps.changes.outputs.mcp }}
-    steps:
-      - name: Install git
-        run: apk add --no-cache git
-      - name: Checkout
-        run: |
-          git init
-          git remote add origin "${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git"
-          git fetch --depth=2 origin "${GITHUB_SHA}"
-          git checkout FETCH_HEAD
-      - name: Detect changed paths
-        id: changes
-        run: |
-          CHANGED=$(git diff --name-only HEAD~1 HEAD 2>/dev/null || echo "")
-          echo "Changed files:"
-          echo "$CHANGED"
-          # Agent: core libs, agent code, agent Dockerfile
-          if echo "$CHANGED" | grep -qE '^(compliance-core/|compliance-agent/|compliance-graph/|compliance-dast/|Dockerfile\.agent|Cargo\.(toml|lock))'; then
-            echo "agent=true" >> "$GITHUB_OUTPUT"
-          else
-            echo "agent=false" >> "$GITHUB_OUTPUT"
-          fi
-          # Dashboard: core libs, dashboard code, dashboard Dockerfile, assets
-          if echo "$CHANGED" | grep -qE '^(compliance-core/|compliance-dashboard/|Dockerfile\.dashboard|Dioxus\.toml|assets/|bin/|Cargo\.(toml|lock))'; then
-            echo "dashboard=true" >> "$GITHUB_OUTPUT"
-          else
-            echo "dashboard=false" >> "$GITHUB_OUTPUT"
-          fi
-          # Docs: docs folder, docs Dockerfile
-          if echo "$CHANGED" | grep -qE '^(docs/|Dockerfile\.docs)'; then
-            echo "docs=true" >> "$GITHUB_OUTPUT"
-          else
-            echo "docs=false" >> "$GITHUB_OUTPUT"
-          fi
-          # MCP: core libs, mcp code, mcp Dockerfile
-          if echo "$CHANGED" | grep -qE '^(compliance-core/|compliance-mcp/|Dockerfile\.mcp|Cargo\.(toml|lock))'; then
-            echo "mcp=true" >> "$GITHUB_OUTPUT"
-          else
-            echo "mcp=false" >> "$GITHUB_OUTPUT"
-          fi
-
-  deploy-agent:
-    name: Deploy Agent
-    runs-on: docker
-    needs: [detect-changes]
-    if: needs.detect-changes.outputs.agent == 'true'
-    container:
-      image: alpine:latest
-    steps:
-      - name: Trigger Coolify deploy
-        run: |
-          apk add --no-cache curl
-          curl -sf "${{ secrets.COOLIFY_WEBHOOK_AGENT }}" \
-            -H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"
-
-  deploy-dashboard:
-    name: Deploy Dashboard
-    runs-on: docker
-    needs: [detect-changes]
-    if: needs.detect-changes.outputs.dashboard == 'true'
-    container:
-      image: alpine:latest
-    steps:
-      - name: Trigger Coolify deploy
-        run: |
-          apk add --no-cache curl
-          curl -sf "${{ secrets.COOLIFY_WEBHOOK_DASHBOARD }}" \
-            -H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"
-
-  deploy-docs:
-    name: Deploy Docs
-    runs-on: docker
-    needs: [detect-changes]
-    if: needs.detect-changes.outputs.docs == 'true'
-    container:
-      image: alpine:latest
-    steps:
-      - name: Trigger Coolify deploy
-        run: |
-          apk add --no-cache curl
-          curl -sf "${{ secrets.COOLIFY_WEBHOOK_DOCS }}" \
-            -H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"
-
-  deploy-mcp:
-    name: Deploy MCP
-    runs-on: docker
-    needs: [detect-changes]
-    if: needs.detect-changes.outputs.mcp == 'true'
-    container:
-      image: alpine:latest
-    steps:
-      - name: Trigger Coolify deploy
-        run: |
-          apk add --no-cache curl
-          curl -sf "${{ secrets.COOLIFY_WEBHOOK_MCP }}" \
-            -H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"
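The deleted detect-changes job above maps changed paths to the services that must redeploy. The same prefix-matching can be mirrored as a small standalone function; names here (`affected_services`, the `rules` table) are illustrative for this sketch, not the real CI code, and the prefixes are copied from the workflow's `grep -E` patterns.

```rust
/// Return the services whose path filters match at least one changed file.
/// Each rule pairs a service with the anchored path prefixes from the
/// workflow's `grep -qE '^(...)'` expressions.
fn affected_services(changed: &[&str]) -> Vec<&'static str> {
    let rules: &[(&str, &[&str])] = &[
        ("agent", &[
            "compliance-core/", "compliance-agent/", "compliance-graph/",
            "compliance-dast/", "Dockerfile.agent", "Cargo.toml", "Cargo.lock",
        ]),
        ("dashboard", &[
            "compliance-core/", "compliance-dashboard/", "Dockerfile.dashboard",
            "Dioxus.toml", "assets/", "bin/", "Cargo.toml", "Cargo.lock",
        ]),
        ("docs", &["docs/", "Dockerfile.docs"]),
        ("mcp", &[
            "compliance-core/", "compliance-mcp/", "Dockerfile.mcp",
            "Cargo.toml", "Cargo.lock",
        ]),
    ];
    rules
        .iter()
        .filter(|(_, prefixes)| {
            // `^` in the grep patterns anchors at the path start, so
            // starts_with is the equivalent check.
            changed
                .iter()
                .any(|path| prefixes.iter().any(|p| path.starts_with(p)))
        })
        .map(|(svc, _)| *svc)
        .collect()
}

fn main() {
    let changed = ["docs/guide/scanning.md", "compliance-core/src/lib.rs"];
    println!("{:?}", affected_services(&changed));
}
```

Note how a `Cargo.lock` change fans out to every Rust service but not to the docs site, which is exactly why the workflow kept per-service filters instead of one shared one.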

Cargo.lock generated

@@ -167,7 +167,7 @@ dependencies = [
  "sync_wrapper",
  "tokio",
  "tokio-tungstenite 0.28.0",
- "tower",
+ "tower 0.5.3",
  "tower-layer",
  "tower-service",
  "tracing",
@@ -413,17 +413,6 @@ version = "0.2.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
 
-[[package]]
-name = "chacha20"
-version = "0.10.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6f8d983286843e49675a4b7a2d174efe136dc93a18d69130dd18198a6c167601"
-dependencies = [
- "cfg-if",
- "cpufeatures 0.3.0",
- "rand_core 0.10.0",
-]
-
 [[package]]
 name = "charset"
 version = "0.1.5"
@@ -615,7 +604,6 @@ version = "0.1.0"
 dependencies = [
  "axum",
  "base64",
- "bson",
  "chrono",
  "compliance-core",
  "dioxus",
@@ -639,7 +627,6 @@
  "tower-sessions",
  "tracing",
  "url",
- "uuid",
  "web-sys",
 ]
@@ -688,27 +675,6 @@ dependencies = [
  "uuid",
 ]
 
-[[package]]
-name = "compliance-mcp"
-version = "0.1.0"
-dependencies = [
- "axum",
- "bson",
- "chrono",
- "compliance-core",
- "dotenvy",
- "mongodb",
- "rmcp",
- "schemars 1.2.1",
- "serde",
- "serde_json",
- "thiserror 2.0.18",
- "tokio",
- "tower-http",
- "tracing",
- "tracing-subscriber",
-]
-
 [[package]]
 name = "console_error_panic_hook"
 version = "0.1.7"
@@ -903,15 +869,6 @@ dependencies = [
  "libc",
 ]
 
-[[package]]
-name = "cpufeatures"
-version = "0.3.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8b2a41393f66f16b0823bb79094d54ac5fbd34ab292ddafb9a0456ac9f87d201"
-dependencies = [
- "libc",
-]
-
 [[package]]
 name = "crc32fast"
 version = "1.5.0"
@@ -1015,18 +972,8 @@ version = "0.21.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "9cdf337090841a411e2a7f3deb9187445851f91b309c0c0a29e05f74a00a48c0"
 dependencies = [
- "darling_core 0.21.3",
- "darling_macro 0.21.3",
-]
-
-[[package]]
-name = "darling"
-version = "0.23.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "25ae13da2f202d56bd7f91c25fba009e7717a1e4a1cc98a76d844b65ae912e9d"
-dependencies = [
- "darling_core 0.23.0",
- "darling_macro 0.23.0",
+ "darling_core",
+ "darling_macro",
 ]
 
 [[package]]
@@ -1043,37 +990,13 @@ dependencies = [
  "syn",
 ]
 
-[[package]]
-name = "darling_core"
-version = "0.23.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9865a50f7c335f53564bb694ef660825eb8610e0a53d3e11bf1b0d3df31e03b0"
-dependencies = [
- "ident_case",
- "proc-macro2",
- "quote",
- "strsim",
- "syn",
-]
-
 [[package]]
 name = "darling_macro"
 version = "0.21.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "d38308df82d1080de0afee5d069fa14b0326a88c14f15c5ccda35b4a6c414c81"
 dependencies = [
- "darling_core 0.21.3",
- "quote",
- "syn",
-]
-
-[[package]]
-name = "darling_macro"
-version = "0.23.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ac3984ec7bd6cfa798e62b4a642426a5be0e68f9401cfc2a01e3fa9ea2fcdb8d"
-dependencies = [
- "darling_core 0.23.0",
+ "darling_core",
  "quote",
  "syn",
 ]
@@ -1412,7 +1335,7 @@ dependencies = [
  "tokio-stream",
  "tokio-tungstenite 0.27.0",
  "tokio-util",
- "tower",
+ "tower 0.5.3",
  "tower-http",
  "tower-layer",
  "tracing",
@@ -1703,7 +1626,7 @@ dependencies = [
  "tokio",
  "tokio-tungstenite 0.27.0",
  "tokio-util",
- "tower",
+ "tower 0.5.3",
  "tower-http",
  "tracing",
  "tracing-futures",
@@ -1904,7 +1827,7 @@ version = "0.14.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "f43e744e4ea338060faee68ed933e46e722fb7f3617e722a5772d7e856d8b3ce"
 dependencies = [
- "darling 0.21.3",
+ "darling",
  "proc-macro2",
  "quote",
  "syn",
@@ -2181,7 +2104,6 @@ dependencies = [
  "cfg-if",
  "libc",
  "r-efi",
- "rand_core 0.10.0",
  "wasip2",
  "wasip3",
 ]
@@ -3575,7 +3497,7 @@ dependencies = [
  "serde_urlencoded",
  "snafu",
  "tokio",
- "tower",
+ "tower 0.5.3",
  "tower-http",
  "tracing",
  "url",
@@ -3677,6 +3599,8 @@ dependencies = [
  "prost",
  "reqwest",
  "thiserror 2.0.18",
+ "tokio",
+ "tonic",
  "tracing",
 ]
@@ -3744,12 +3668,6 @@ dependencies = [
  "windows-link",
 ]
 
-[[package]]
-name = "pastey"
-version = "0.2.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b867cad97c0791bbd3aaa6472142568c6c9e8f71937e98379f584cfb0cf35bec"
-
 [[package]]
 name = "pbkdf2"
 version = "0.12.2"
@@ -4087,17 +4005,6 @@ dependencies = [
  "rand_core 0.9.5",
 ]
 
-[[package]]
-name = "rand"
-version = "0.10.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bc266eb313df6c5c09c1c7b1fbe2510961e5bcd3add930c1e31f7ed9da0feff8"
-dependencies = [
- "chacha20",
- "getrandom 0.4.1",
- "rand_core 0.10.0",
-]
-
 [[package]]
 name = "rand_chacha"
 version = "0.3.1"
@@ -4136,12 +4043,6 @@ dependencies = [
  "getrandom 0.3.4",
 ]
 
-[[package]]
-name = "rand_core"
-version = "0.10.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0c8d0fd677905edcbeedbf2edb6494d676f0e98d54d5cf9bda0b061cb8fb8aba"
-
 [[package]]
 name = "rand_distr"
 version = "0.4.3"
@@ -4262,7 +4163,6 @@ dependencies = [
  "pin-project-lite",
  "quinn",
  "rustls",
- "rustls-native-certs",
  "rustls-pki-types",
  "serde",
  "serde_json",
@@ -4271,7 +4171,7 @@ dependencies = [
  "tokio",
  "tokio-rustls",
  "tokio-util",
- "tower",
+ "tower 0.5.3",
  "tower-http",
  "tower-service",
  "url",
@@ -4302,50 +4202,6 @@ dependencies = [
  "windows-sys 0.52.0",
 ]
 
-[[package]]
-name = "rmcp"
-version = "0.16.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "cc4c9c94680f75470ee8083a0667988b5d7b5beb70b9f998a8e51de7c682ce60"
-dependencies = [
- "async-trait",
- "base64",
- "bytes",
- "chrono",
- "futures",
- "http",
- "http-body",
- "http-body-util",
- "pastey",
- "pin-project-lite",
- "rand 0.10.0",
- "rmcp-macros",
- "schemars 1.2.1",
- "serde",
- "serde_json",
- "sse-stream",
- "thiserror 2.0.18",
- "tokio",
- "tokio-stream",
- "tokio-util",
- "tower-service",
- "tracing",
- "uuid",
-]
-
-[[package]]
-name = "rmcp-macros"
-version = "0.16.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "90c23c8f26cae4da838fbc3eadfaecf2d549d97c04b558e7bd90526a9c28b42a"
-dependencies = [
- "darling 0.23.0",
- "proc-macro2",
- "quote",
- "serde_json",
- "syn",
-]
-
 [[package]]
 name = "rust-stemmers"
 version = "1.2.0"
@@ -4509,26 +4365,12 @@ version = "1.2.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "a2b42f36aa1cd011945615b92222f6bf73c599a102a300334cd7f8dbeec726cc"
 dependencies = [
- "chrono",
  "dyn-clone",
  "ref-cast",
- "schemars_derive",
  "serde",
  "serde_json",
 ]
 
-[[package]]
-name = "schemars_derive"
-version = "1.2.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "7d115b50f4aaeea07e79c1912f645c7513d81715d0420f8bc77a18c6260b307f"
-dependencies = [
- "proc-macro2",
- "quote",
- "serde_derive_internals",
- "syn",
-]
-
 [[package]]
 name = "scopeguard"
 version = "1.2.0"
@@ -4668,17 +4510,6 @@ dependencies = [
  "syn",
 ]
 
-[[package]]
-name = "serde_derive_internals"
-version = "0.29.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "18d26a20a969b9e3fdf2fc2d9f21eda6c40e2de84c9408bb5d3b05d499aae711"
-dependencies = [
- "proc-macro2",
- "quote",
- "syn",
-]
-
 [[package]]
 name = "serde_json"
 version = "1.0.149"
@@ -4763,7 +4594,7 @@ version = "3.17.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "a6d4e30573c8cb306ed6ab1dca8423eec9a463ea0e155f45399455e0368b27e0"
 dependencies = [
- "darling 0.21.3",
+ "darling",
  "proc-macro2",
  "quote",
  "syn",
@@ -4785,7 +4616,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "e3bf829a2d51ab4a5ddf1352d8470c140cadc8301b2ae1789db023f01cedd6ba"
 dependencies = [
  "cfg-if",
- "cpufeatures 0.2.17",
+ "cpufeatures",
  "digest",
 ]
@@ -4796,7 +4627,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "a7507d819769d01a365ab707794a4084392c824f54a7a6a7862f8c3d0892b283"
 dependencies = [
  "cfg-if",
- "cpufeatures 0.2.17",
+ "cpufeatures",
  "digest",
 ]
@@ -4950,19 +4781,6 @@ version = "0.9.8"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "6980e8d7511241f8acf4aebddbb1ff938df5eebe98691418c4468d0b72a96a67"
 
-[[package]]
-name = "sse-stream"
-version = "0.2.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "eb4dc4d33c68ec1f27d386b5610a351922656e1fdf5c05bbaad930cd1519479a"
-dependencies = [
- "bytes",
- "futures-util",
- "http-body",
- "http-body-util",
- "pin-project-lite",
-]
-
 [[package]]
 name = "stable_deref_trait"
 version = "1.2.1"
@@ -5546,10 +5364,35 @@ dependencies = [
  "http",
  "http-body",
  "http-body-util",
+ "hyper",
+ "hyper-timeout",
+ "hyper-util",
  "percent-encoding",
  "pin-project",
  "prost",
+ "tokio",
  "tokio-stream",
+ "tower 0.4.13",
+ "tower-layer",
+ "tower-service",
+ "tracing",
+]
+
+[[package]]
+name = "tower"
+version = "0.4.13"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b8fa9be0de6cf49e536ce1851f987bd21a43b771b09473c3549a6c853db37c1c"
+dependencies = [
+ "futures-core",
+ "futures-util",
+ "indexmap 1.9.3",
+ "pin-project",
+ "pin-project-lite",
+ "rand 0.8.5",
+ "slab",
+ "tokio",
+ "tokio-util",
  "tower-layer",
  "tower-service",
  "tracing",
@@ -5610,7 +5453,7 @@ dependencies = [
  "pin-project-lite",
  "tokio",
  "tokio-util",
- "tower",
+ "tower 0.5.3",
  "tower-layer",
  "tower-service",
  "tracing",


@@ -5,7 +5,6 @@ members = [
     "compliance-dashboard",
     "compliance-graph",
     "compliance-dast",
-    "compliance-mcp",
 ]
 
 resolver = "2"


@@ -16,7 +16,6 @@ WORKDIR /app
 COPY --from=builder /app/target/dx/compliance-dashboard/release/web/compliance-dashboard /app/compliance-dashboard
 COPY --from=builder /app/target/dx/compliance-dashboard/release/web/public /app/public
 
-ENV IP=0.0.0.0
 EXPOSE 8080
 ENTRYPOINT ["./compliance-dashboard"]


@@ -8,7 +8,5 @@ COPY docs/ .
 RUN npm run build
 
 FROM nginx:alpine
-RUN rm /etc/nginx/conf.d/default.conf
-COPY docs/nginx.conf /etc/nginx/conf.d/default.conf
 COPY --from=builder /app/.vitepress/dist /usr/share/nginx/html
 EXPOSE 80


@@ -1,16 +0,0 @@
-FROM rust:1.89-bookworm AS builder
-WORKDIR /app
-
-COPY . .
-RUN cargo build --release -p compliance-mcp
-
-FROM debian:bookworm-slim
-RUN apt-get update && apt-get install -y ca-certificates libssl3 && rm -rf /var/lib/apt/lists/*
-
-COPY --from=builder /app/target/release/compliance-mcp /usr/local/bin/compliance-mcp
-
-EXPOSE 8090
-ENV MCP_PORT=8090
-ENTRYPOINT ["compliance-mcp"]


@@ -300,84 +300,6 @@ tr:hover {
   color: var(--text-secondary);
 }
 
-/* Sidebar User Section */
-.sidebar-user {
-  display: flex;
-  align-items: center;
-  gap: 10px;
-  padding: 12px 14px;
-  margin: 8px;
-  border-top: 1px solid var(--border);
-  padding-top: 16px;
-}
-
-.sidebar-user-collapsed {
-  flex-direction: column;
-  gap: 8px;
-  padding: 12px 4px;
-  margin: 8px 4px;
-}
-
-.user-avatar {
-  width: 34px;
-  height: 34px;
-  border-radius: 10px;
-  background: linear-gradient(135deg, rgba(56, 189, 248, 0.2), rgba(56, 189, 248, 0.08));
-  border: 1px solid rgba(56, 189, 248, 0.15);
-  display: flex;
-  align-items: center;
-  justify-content: center;
-  flex-shrink: 0;
-}
-
-.avatar-initials {
-  font-size: 13px;
-  font-weight: 700;
-  color: var(--accent);
-  line-height: 1;
-}
-
-.avatar-img {
-  width: 100%;
-  height: 100%;
-  border-radius: 10px;
-  object-fit: cover;
-}
-
-.user-name {
-  flex: 1;
-  font-size: 13px;
-  font-weight: 500;
-  color: var(--text-primary);
-  white-space: nowrap;
-  overflow: hidden;
-  text-overflow: ellipsis;
-  min-width: 0;
-}
-
-.logout-btn {
-  display: flex;
-  align-items: center;
-  justify-content: center;
-  width: 32px;
-  height: 32px;
-  border-radius: 8px;
-  color: var(--text-secondary);
-  text-decoration: none;
-  flex-shrink: 0;
-  transition: all 0.15s;
-}
-
-.logout-btn:hover {
-  background: rgba(239, 68, 68, 0.12);
-  color: #fca5a5;
-}
-
-.logout-btn-collapsed {
-  width: 34px;
-  height: 34px;
-}
-
 @media (max-width: 768px) {
   .sidebar {
     transform: translateX(-100%);
@@ -391,216 +313,3 @@ tr:hover {
     padding: 16px;
   }
 }
-
-/* ── Utility classes ────────────────────────────────────── */
-.mb-3 { margin-bottom: 12px; }
-.mb-4 { margin-bottom: 16px; }
-.text-secondary { color: var(--text-secondary); }
-
-.btn-sm {
-  padding: 4px 10px;
-  font-size: 12px;
-}
-
-.btn-danger {
-  background: var(--danger);
-  color: #fff;
-}
-
-.btn-danger:hover {
-  background: #dc2626;
-}
-
-.btn-secondary {
-  background: var(--bg-secondary);
-  color: var(--text-primary);
-  border: 1px solid var(--border);
-}
-
-.btn-secondary:hover {
-  background: var(--bg-primary);
-}
-
-/* ── Modal ──────────────────────────────────────────────── */
-.modal-overlay {
-  position: fixed;
-  inset: 0;
-  background: rgba(0, 0, 0, 0.6);
-  backdrop-filter: blur(4px);
-  display: flex;
-  align-items: center;
-  justify-content: center;
-  z-index: 1000;
-}
-
-.modal-dialog {
-  background: var(--bg-secondary);
-  border: 1px solid var(--border);
-  border-radius: 12px;
-  padding: 24px;
-  max-width: 440px;
-  width: 90%;
-}
-
-.modal-dialog h3 {
-  margin-bottom: 12px;
-}
-
-.modal-dialog p {
-  margin-bottom: 8px;
-  font-size: 14px;
-  color: var(--text-secondary);
-}
-
-.modal-warning {
-  color: var(--warning) !important;
-  font-size: 13px !important;
-}
-
-.modal-actions {
-  display: flex;
-  gap: 8px;
-  justify-content: flex-end;
-  margin-top: 16px;
-}
-
-/* ── MCP Servers ────────────────────────────────────────── */
-.mcp-server-card {
-  padding: 20px;
-}
-
-.mcp-server-header {
-  display: flex;
-  justify-content: space-between;
-  align-items: flex-start;
-  margin-bottom: 12px;
-}
-
-.mcp-server-title {
-  display: flex;
-  align-items: center;
-  gap: 10px;
-}
-
-.mcp-server-title h3 {
-  font-size: 16px;
-  font-weight: 600;
-  margin: 0;
-}
-
-.mcp-server-actions {
-  display: flex;
-  gap: 6px;
-}
-
-.mcp-status {
-  display: inline-flex;
-  align-items: center;
-  padding: 2px 10px;
-  border-radius: 20px;
-  font-size: 11px;
-  font-weight: 600;
-  text-transform: uppercase;
-  letter-spacing: 0.5px;
-}
-
-.mcp-status-running {
-  background: rgba(34, 197, 94, 0.15);
-  color: var(--success);
-}
-
-.mcp-status-stopped {
-  background: rgba(148, 163, 184, 0.15);
-  color: var(--text-secondary);
-}
-
-.mcp-status-error {
-  background: rgba(239, 68, 68, 0.15);
-  color: var(--danger);
-}
-
-.mcp-config-grid {
-  display: grid;
-  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
-  gap: 12px;
-  margin-bottom: 16px;
-}
-
-.mcp-config-item {
-  display: flex;
-  flex-direction: column;
-  gap: 4px;
-}
-
-.mcp-config-label {
-  font-size: 11px;
-  font-weight: 600;
-  text-transform: uppercase;
-  letter-spacing: 0.5px;
-  color: var(--text-secondary);
-}
-
-.mcp-config-value {
-  font-size: 13px;
-  color: var(--text-primary);
-  word-break: break-all;
-}
-
-.mcp-form-grid {
-  display: grid;
-  grid-template-columns: 1fr 1fr;
-  gap: 0 16px;
-}
-
-.mcp-tools-section {
-  margin-bottom: 16px;
-}
-
-.mcp-tools-list {
-  display: flex;
-  flex-wrap: wrap;
-  gap: 6px;
-  margin-top: 6px;
-}
-
-.mcp-tool-badge {
-  display: inline-block;
-  padding: 3px 10px;
-  background: rgba(56, 189, 248, 0.1);
-  border: 1px solid rgba(56, 189, 248, 0.2);
-  border-radius: 6px;
-  font-size: 12px;
-  font-family: 'JetBrains Mono', monospace;
-  color: var(--accent);
-}
-
-.mcp-token-section {
-  margin-bottom: 12px;
-}
-
-.mcp-token-row {
-  display: flex;
-  align-items: center;
-  gap: 8px;
-  margin-top: 6px;
-}
-
-.mcp-token-value {
-  flex: 1;
-  padding: 6px 10px;
-  background: var(--bg-primary);
-  border: 1px solid var(--border);
-  border-radius: 6px;
-  font-size: 12px;
-  font-family: 'JetBrains Mono', monospace;
-  color: var(--text-secondary);
-  word-break: break-all;
-}
-
-.mcp-meta {
-  padding-top: 12px;
-  border-top: 1px solid var(--border);
-  font-size: 12px;
-}


@@ -32,7 +32,7 @@ bson = { version = "2", features = ["chrono-0_4"] }
 mongodb = { workspace = true, optional = true }
 opentelemetry = { version = "0.29", optional = true }
 opentelemetry_sdk = { version = "0.29", features = ["rt-tokio"], optional = true }
-opentelemetry-otlp = { version = "0.29", features = ["http", "reqwest-rustls"], optional = true }
+opentelemetry-otlp = { version = "0.29", features = ["grpc-tonic"], optional = true }
 opentelemetry-appender-tracing = { version = "0.29", optional = true }
 tracing-opentelemetry = { version = "0.30", optional = true }
 tracing-subscriber = { workspace = true, optional = true }


@@ -1,67 +0,0 @@
-use chrono::{DateTime, Utc};
-use serde::{Deserialize, Serialize};
-
-/// Transport mode for MCP server
-#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
-#[serde(rename_all = "snake_case")]
-pub enum McpTransport {
-    Stdio,
-    Http,
-}
-
-impl std::fmt::Display for McpTransport {
-    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        match self {
-            Self::Stdio => write!(f, "stdio"),
-            Self::Http => write!(f, "http"),
-        }
-    }
-}
-
-/// Status of a running MCP server
-#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
-#[serde(rename_all = "snake_case")]
-pub enum McpServerStatus {
-    Running,
-    Stopped,
-    Error,
-}
-
-impl std::fmt::Display for McpServerStatus {
-    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        match self {
-            Self::Running => write!(f, "running"),
-            Self::Stopped => write!(f, "stopped"),
-            Self::Error => write!(f, "error"),
-        }
-    }
-}
-
-/// Configuration for a registered MCP server instance
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct McpServerConfig {
-    #[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
-    pub id: Option<bson::oid::ObjectId>,
-    /// Display name for this MCP server
-    pub name: String,
-    /// Endpoint URL (e.g. https://mcp.example.com/mcp)
-    pub endpoint_url: String,
-    /// Transport type
-    pub transport: McpTransport,
-    /// Port number (for HTTP transport)
-    pub port: Option<u16>,
-    /// Current status
-    pub status: McpServerStatus,
-    /// Bearer access token for authentication
-    pub access_token: String,
-    /// Which tools are enabled on this server
-    pub tools_enabled: Vec<String>,
-    /// Optional description / notes
-    pub description: Option<String>,
-    /// MongoDB URI this server connects to
-    pub mongodb_uri: Option<String>,
-    /// Database name
-    pub mongodb_database: Option<String>,
-    pub created_at: DateTime<Utc>,
-    pub updated_at: DateTime<Utc>,
-}


@@ -6,7 +6,6 @@ pub mod embedding;
 pub mod finding;
 pub mod graph;
 pub mod issue;
-pub mod mcp;
 pub mod repository;
 pub mod sbom;
 pub mod scan;
@@ -24,7 +23,6 @@ pub use graph::{
     CodeEdge, CodeEdgeKind, CodeNode, CodeNodeKind, GraphBuildRun, GraphBuildStatus, ImpactAnalysis,
 };
 pub use issue::{IssueStatus, TrackerIssue, TrackerType};
-pub use mcp::{McpServerConfig, McpServerStatus, McpTransport};
 pub use repository::{ScanTrigger, TrackedRepository};
 pub use sbom::{SbomEntry, VulnRef};
 pub use scan::{ScanPhase, ScanRun, ScanRunStatus, ScanType};


@@ -1,6 +1,6 @@
//! OpenTelemetry initialization for traces and logs. //! OpenTelemetry initialization for traces and logs.
//! //!
//! Exports traces and logs via OTLP/HTTP when `OTEL_EXPORTER_OTLP_ENDPOINT` //! Exports traces and logs via OTLP (gRPC) when `OTEL_EXPORTER_OTLP_ENDPOINT`
//! is set. Always includes a `tracing_subscriber::fmt` layer for console output. //! is set. Always includes a `tracing_subscriber::fmt` layer for console output.
//! //!
//! Compatible with SigNoz, Grafana Tempo/Loki, Jaeger, and any OTLP-compatible //! Compatible with SigNoz, Grafana Tempo/Loki, Jaeger, and any OTLP-compatible
@@ -10,7 +10,7 @@
//! //!
//! | Variable | Description | Default | //! | Variable | Description | Default |
//! |---|---|---| //! |---|---|---|
//! | `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint (e.g. `https://otel.example.com`) | *(disabled)* | //! | `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint (e.g. `http://localhost:4317`) | *(disabled)* |
//! | `OTEL_SERVICE_NAME` | Service name for resource | `service_name` param | //! | `OTEL_SERVICE_NAME` | Service name for resource | `service_name` param |
 //! | `RUST_LOG` / standard `EnvFilter` | Log level filter | `info` |
@@ -58,7 +58,7 @@ fn build_resource(service_name: &str) -> Resource {
 /// Initialize telemetry (tracing + logging).
 ///
 /// If `OTEL_EXPORTER_OTLP_ENDPOINT` is set, traces and logs are exported
-/// via OTLP/HTTP. Console fmt output is always enabled.
+/// via OTLP/gRPC. Console fmt output is always enabled.
 ///
 /// Returns a [`TelemetryGuard`] that must be held alive for the application
 /// lifetime. Dropping it triggers a graceful shutdown of OTel providers.
@@ -76,14 +76,11 @@ pub fn init_telemetry(service_name: &str) -> TelemetryGuard {
         Some(ref endpoint) => {
             let resource = build_resource(service_name);
-            let traces_endpoint = format!("{endpoint}/v1/traces");
-            let logs_endpoint = format!("{endpoint}/v1/logs");
             // Traces
             #[allow(clippy::expect_used)]
             let span_exporter = SpanExporter::builder()
-                .with_http()
-                .with_endpoint(&traces_endpoint)
+                .with_tonic()
+                .with_endpoint(endpoint)
                 .build()
                 .expect("failed to create OTLP span exporter");
@@ -99,8 +96,8 @@ pub fn init_telemetry(service_name: &str) -> TelemetryGuard {
             // Logs
             #[allow(clippy::expect_used)]
             let log_exporter = LogExporter::builder()
-                .with_http()
-                .with_endpoint(&logs_endpoint)
+                .with_tonic()
+                .with_endpoint(endpoint)
                 .build()
                 .expect("failed to create OTLP log exporter");
@@ -114,6 +111,7 @@ pub fn init_telemetry(service_name: &str) -> TelemetryGuard {
     // Filter to prevent telemetry-induced-telemetry loops
     let otel_filter = EnvFilter::new("info")
         .add_directive("hyper=off".parse().unwrap_or_default())
+        .add_directive("tonic=off".parse().unwrap_or_default())
         .add_directive("h2=off".parse().unwrap_or_default())
         .add_directive("reqwest=off".parse().unwrap_or_default());
@@ -127,7 +125,7 @@ pub fn init_telemetry(service_name: &str) -> TelemetryGuard {
     tracing::info!(
         endpoint = endpoint.as_str(),
         service = service_name,
-        "OpenTelemetry OTLP/HTTP export enabled"
+        "OpenTelemetry OTLP export enabled"
     );
     TelemetryGuard {
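Behind the `.with_http()` to `.with_tonic()` swap is an OTLP convention worth noting: the HTTP exporter posts each signal to its own path (`/v1/traces`, `/v1/logs`), while the gRPC exporter dials the base endpoint directly, which is why the two `format!` lines disappear. A minimal sketch of that convention (illustrative Python; host names are made up, 4318/4317 being the usual OTLP/HTTP and OTLP/gRPC default ports):

```python
# Illustrative only: how OTLP endpoint URLs differ per transport.
def otlp_http_endpoints(base: str) -> dict:
    # OTLP/HTTP: a per-signal path is appended to the base endpoint.
    return {"traces": f"{base}/v1/traces", "logs": f"{base}/v1/logs"}

def otlp_grpc_endpoint(base: str) -> str:
    # OTLP/gRPC: the base endpoint is used as-is, no path suffix.
    return base

print(otlp_http_endpoints("http://collector:4318")["traces"])
print(otlp_grpc_endpoint("http://collector:4317"))
```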

View File

@@ -34,8 +34,6 @@ server = [
     "dep:url",
     "dep:sha2",
     "dep:base64",
-    "dep:uuid",
-    "dep:bson",
 ]
 [dependencies]
@@ -69,5 +67,3 @@ rand = { version = "0.9", optional = true }
 url = { version = "2", optional = true }
 sha2 = { workspace = true, optional = true }
 base64 = { version = "0.22", optional = true }
-uuid = { workspace = true, optional = true }
-bson = { version = "2", features = ["chrono-0_4"], optional = true }

View File

@@ -38,8 +38,6 @@ pub enum Route {
     DastFindingsPage {},
     #[route("/dast/findings/:id")]
     DastFindingDetailPage { id: String },
-    #[route("/mcp-servers")]
-    McpServersPage {},
     #[route("/settings")]
     SettingsPage {},
 }

View File

@@ -24,13 +24,12 @@ pub fn AppShell() -> Element {
                 }
             }
         }
-        Some(Ok(_)) | Some(Err(_)) => {
-            // Not authenticated — redirect to Keycloak login
-            rsx! {
-                document::Script {
-                    dangerous_inner_html: "window.location.href = '/auth';"
-                }
-            }
+        Some(Ok(_)) => {
+            rsx! { LoginPage {} }
+        }
+        Some(Err(e)) => {
+            tracing::error!("Auth check failed: {e}");
+            rsx! { LoginPage {} }
         }
         None => {
             rsx! {
@@ -41,3 +40,20 @@ pub fn AppShell() -> Element {
         }
     }
 }
+
+#[component]
+fn LoginPage() -> Element {
+    rsx! {
+        div { class: "flex items-center justify-center h-screen bg-gray-950",
+            div { class: "text-center",
+                h1 { class: "text-3xl font-bold text-white mb-4", "Compliance Scanner" }
+                p { class: "text-gray-400 mb-8", "Sign in to access the dashboard" }
+                a {
+                    href: "/auth",
+                    class: "px-6 py-3 bg-blue-600 text-white rounded-lg hover:bg-blue-500 transition-colors font-medium",
+                    "Sign in with Keycloak"
+                }
+            }
+        }
+    }
+}

View File

@@ -57,11 +57,6 @@ pub fn Sidebar() -> Element {
         route: Route::DastOverviewPage {},
         icon: rsx! { Icon { icon: BsBug, width: 18, height: 18 } },
     },
-    NavItem {
-        label: "MCP Servers",
-        route: Route::McpServersPage {},
-        icon: rsx! { Icon { icon: BsPlug, width: 18, height: 18 } },
-    },
     NavItem {
         label: "Settings",
         route: Route::SettingsPage {},
@@ -135,9 +130,8 @@ pub fn Sidebar() -> Element {
     let auth_info = use_context::<Signal<AuthInfo>>();
     let info = auth_info();
     let initials = info.name.chars().next().unwrap_or('U').to_uppercase().to_string();
-    let user_class = if collapsed() { "sidebar-user sidebar-user-collapsed" } else { "sidebar-user" };
     rsx! {
-        div { class: "{user_class}",
+        div { class: "sidebar-user",
            div { class: "user-avatar",
                if info.avatar_url.is_empty() {
                    span { class: "avatar-initials", "{initials}" }
@@ -146,13 +140,15 @@ pub fn Sidebar() -> Element {
            }
        }
        if !collapsed() {
+           div { class: "user-info",
               span { class: "user-name", "{info.name}" }
+           }
           a {
               href: "/logout",
-              class: if collapsed() { "logout-btn logout-btn-collapsed" } else { "logout-btn" },
-              title: "Sign out",
-              Icon { icon: BsBoxArrowRight, width: 16, height: 16 }
+              class: "logout-link",
+              Icon { icon: BsBoxArrowRight, width: 14, height: 14 }
+              " Logout"
           }
        }
    }

View File

@@ -42,8 +42,4 @@ impl Database {
     pub fn tracker_issues(&self) -> Collection<TrackerIssue> {
         self.inner.collection("tracker_issues")
     }
-    pub fn mcp_servers(&self) -> Collection<McpServerConfig> {
-        self.inner.collection("mcp_servers")
-    }
 }

View File

@@ -1,160 +0,0 @@
use dioxus::prelude::*;
use serde::{Deserialize, Serialize};
use compliance_core::models::McpServerConfig;
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct McpServersResponse {
pub data: Vec<McpServerConfig>,
}
#[server]
pub async fn fetch_mcp_servers() -> Result<McpServersResponse, ServerFnError> {
use mongodb::bson::doc;
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let mut cursor = state
.db
.mcp_servers()
.find(doc! {})
.sort(doc! { "created_at": -1 })
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
let mut data = Vec::new();
while cursor
.advance()
.await
.map_err(|e| ServerFnError::new(e.to_string()))?
{
let server = cursor
.deserialize_current()
.map_err(|e| ServerFnError::new(e.to_string()))?;
data.push(server);
}
Ok(McpServersResponse { data })
}
#[server]
pub async fn add_mcp_server(
name: String,
endpoint_url: String,
transport: String,
port: String,
description: String,
mongodb_uri: String,
mongodb_database: String,
) -> Result<(), ServerFnError> {
use chrono::Utc;
use compliance_core::models::{McpServerStatus, McpTransport};
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let transport = match transport.as_str() {
"http" => McpTransport::Http,
_ => McpTransport::Stdio,
};
let port_num: Option<u16> = port.parse().ok();
// Generate a random access token
let token = format!("mcp_{}", uuid::Uuid::new_v4().to_string().replace('-', ""));
let all_tools = vec![
"list_findings".to_string(),
"get_finding".to_string(),
"findings_summary".to_string(),
"list_sbom_packages".to_string(),
"sbom_vuln_report".to_string(),
"list_dast_findings".to_string(),
"dast_scan_summary".to_string(),
];
let now = Utc::now();
let server = McpServerConfig {
id: None,
name,
endpoint_url,
transport,
port: port_num,
status: McpServerStatus::Stopped,
access_token: token,
tools_enabled: all_tools,
description: if description.is_empty() {
None
} else {
Some(description)
},
mongodb_uri: if mongodb_uri.is_empty() {
None
} else {
Some(mongodb_uri)
},
mongodb_database: if mongodb_database.is_empty() {
None
} else {
Some(mongodb_database)
},
created_at: now,
updated_at: now,
};
state
.db
.mcp_servers()
.insert_one(server)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(())
}
#[server]
pub async fn delete_mcp_server(server_id: String) -> Result<(), ServerFnError> {
use mongodb::bson::doc;
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let oid = bson::oid::ObjectId::parse_str(&server_id)
.map_err(|e| ServerFnError::new(e.to_string()))?;
state
.db
.mcp_servers()
.delete_one(doc! { "_id": oid })
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(())
}
#[server]
pub async fn regenerate_mcp_token(server_id: String) -> Result<String, ServerFnError> {
use chrono::Utc;
use mongodb::bson::doc;
let state: super::server_state::ServerState =
dioxus_fullstack::FullstackContext::extract().await?;
let oid = bson::oid::ObjectId::parse_str(&server_id)
.map_err(|e| ServerFnError::new(e.to_string()))?;
let new_token = format!("mcp_{}", uuid::Uuid::new_v4().to_string().replace('-', ""));
state
.db
.mcp_servers()
.update_one(
doc! { "_id": oid },
doc! { "$set": { "access_token": &new_token, "updated_at": Utc::now().to_rfc3339() } },
)
.await
.map_err(|e| ServerFnError::new(e.to_string()))?;
Ok(new_token)
}
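Both `add_mcp_server` and `regenerate_mcp_token` in the removed module above derive tokens the same way: a v4 UUID with the dashes stripped, prefixed with `mcp_`. An equivalent one-liner, sketched in Python for illustration (the function name is mine, not from the source):

```python
import uuid

def make_mcp_token() -> str:
    # Same shape as the removed Rust helper: "mcp_" + 32 lowercase hex chars.
    return "mcp_" + uuid.uuid4().hex
```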

View File

@@ -6,7 +6,6 @@ pub mod dast;
pub mod findings; pub mod findings;
pub mod graph; pub mod graph;
pub mod issues; pub mod issues;
pub mod mcp;
pub mod repositories; pub mod repositories;
pub mod sbom; pub mod sbom;
pub mod scans; pub mod scans;

View File

@@ -45,8 +45,7 @@ pub fn server_start(app: fn() -> Element) -> Result<(), DashboardError> {
.with_expiry(tower_sessions::Expiry::OnInactivity(Duration::hours(24))) .with_expiry(tower_sessions::Expiry::OnInactivity(Duration::hours(24)))
.with_signed(key); .with_signed(key);
let port = dioxus_cli_config::server_port().unwrap_or(8080); let addr = dioxus_cli_config::fullstack_address_or_localhost();
let addr = std::net::SocketAddr::from(([0, 0, 0, 0], port));
let listener = tokio::net::TcpListener::bind(addr) let listener = tokio::net::TcpListener::bind(addr)
.await .await
.map_err(|e| DashboardError::Other(format!("Failed to bind: {e}")))?; .map_err(|e| DashboardError::Other(format!("Failed to bind: {e}")))?;

View File

@@ -1,328 +0,0 @@
use dioxus::prelude::*;
use crate::components::page_header::PageHeader;
use crate::components::toast::{ToastType, Toasts};
use crate::infrastructure::mcp::{
add_mcp_server, delete_mcp_server, fetch_mcp_servers, regenerate_mcp_token,
};
#[component]
pub fn McpServersPage() -> Element {
let mut servers = use_resource(|| async { fetch_mcp_servers().await.ok() });
let mut toasts = use_context::<Toasts>();
let mut show_form = use_signal(|| false);
let mut new_name = use_signal(String::new);
let mut new_endpoint = use_signal(String::new);
let mut new_transport = use_signal(|| "http".to_string());
let mut new_port = use_signal(|| "8090".to_string());
let mut new_description = use_signal(String::new);
let mut new_mongo_uri = use_signal(String::new);
let mut new_mongo_db = use_signal(String::new);
// Track which server's token is visible
let mut visible_token: Signal<Option<String>> = use_signal(|| None);
// Track which server is pending delete confirmation
let mut confirm_delete: Signal<Option<(String, String)>> = use_signal(|| None);
rsx! {
PageHeader {
title: "MCP Servers",
description: "Manage Model Context Protocol servers for LLM integrations",
}
div { class: "mb-4",
button {
class: "btn btn-primary",
onclick: move |_| show_form.set(!show_form()),
if show_form() { "Cancel" } else { "Register Server" }
}
}
if show_form() {
div { class: "card mb-4",
div { class: "card-header", "Register MCP Server" }
div { class: "mcp-form-grid",
div { class: "form-group",
label { "Name" }
input {
r#type: "text",
placeholder: "Production MCP",
value: "{new_name}",
oninput: move |e| new_name.set(e.value()),
}
}
div { class: "form-group",
label { "Endpoint URL" }
input {
r#type: "text",
placeholder: "https://mcp.example.com/mcp",
value: "{new_endpoint}",
oninput: move |e| new_endpoint.set(e.value()),
}
}
div { class: "form-group",
label { "Transport" }
select {
value: "{new_transport}",
oninput: move |e| new_transport.set(e.value()),
option { value: "http", "HTTP (Streamable)" }
option { value: "stdio", "Stdio" }
}
}
div { class: "form-group",
label { "Port" }
input {
r#type: "text",
placeholder: "8090",
value: "{new_port}",
oninput: move |e| new_port.set(e.value()),
}
}
div { class: "form-group",
label { "MongoDB URI" }
input {
r#type: "text",
placeholder: "mongodb://localhost:27017",
value: "{new_mongo_uri}",
oninput: move |e| new_mongo_uri.set(e.value()),
}
}
div { class: "form-group",
label { "Database Name" }
input {
r#type: "text",
placeholder: "compliance_scanner",
value: "{new_mongo_db}",
oninput: move |e| new_mongo_db.set(e.value()),
}
}
}
div { class: "form-group",
label { "Description" }
input {
r#type: "text",
placeholder: "Optional notes about this server",
value: "{new_description}",
oninput: move |e| new_description.set(e.value()),
}
}
button {
class: "btn btn-primary",
onclick: move |_| {
let name = new_name();
let endpoint = new_endpoint();
let transport = new_transport();
let port = new_port();
let desc = new_description();
let mongo_uri = new_mongo_uri();
let mongo_db = new_mongo_db();
spawn(async move {
match add_mcp_server(name, endpoint, transport, port, desc, mongo_uri, mongo_db).await {
Ok(_) => {
toasts.push(ToastType::Success, "MCP server registered");
servers.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
});
show_form.set(false);
new_name.set(String::new());
new_endpoint.set(String::new());
new_transport.set("http".to_string());
new_port.set("8090".to_string());
new_description.set(String::new());
new_mongo_uri.set(String::new());
new_mongo_db.set(String::new());
},
"Register"
}
}
}
// Delete confirmation modal
if let Some((ref del_id, ref del_name)) = *confirm_delete.read() {
div { class: "modal-overlay",
onclick: move |_| confirm_delete.set(None),
div { class: "modal-dialog",
onclick: move |e| e.stop_propagation(),
h3 { "Delete MCP Server" }
p { "Are you sure you want to remove " strong { "{del_name}" } "?" }
p { class: "text-secondary", "Connected LLM clients will lose access." }
div { class: "modal-actions",
button {
class: "btn btn-ghost",
onclick: move |_| confirm_delete.set(None),
"Cancel"
}
button {
class: "btn btn-danger",
onclick: {
let id = del_id.clone();
move |_| {
let id = id.clone();
spawn(async move {
match delete_mcp_server(id).await {
Ok(_) => {
toasts.push(ToastType::Success, "Server removed");
servers.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
});
confirm_delete.set(None);
}
},
"Delete"
}
}
}
}
}
match &*servers.read() {
Some(Some(resp)) => {
if resp.data.is_empty() {
rsx! {
div { class: "card",
p { class: "text-secondary", "No MCP servers registered. Add one to get started." }
}
}
} else {
rsx! {
for server in resp.data.iter() {
{
let sid = server.id.map(|id| id.to_hex()).unwrap_or_default();
let name = server.name.clone();
let status_class = match server.status {
compliance_core::models::McpServerStatus::Running => "mcp-status-running",
compliance_core::models::McpServerStatus::Stopped => "mcp-status-stopped",
compliance_core::models::McpServerStatus::Error => "mcp-status-error",
};
let is_token_visible = visible_token().as_deref() == Some(sid.as_str());
let created_str = server.created_at.format("%Y-%m-%d %H:%M").to_string();
rsx! {
div { class: "card mcp-server-card mb-4",
div { class: "mcp-server-header",
div { class: "mcp-server-title",
h3 { "{server.name}" }
span { class: "mcp-status {status_class}",
"{server.status}"
}
}
div { class: "mcp-server-actions",
button {
class: "btn btn-sm btn-ghost",
title: "Delete server",
onclick: {
let id = sid.clone();
let name = name.clone();
move |_| {
confirm_delete.set(Some((id.clone(), name.clone())));
}
},
"Delete"
}
}
}
if let Some(ref desc) = server.description {
p { class: "text-secondary mb-3", "{desc}" }
}
div { class: "mcp-config-grid",
div { class: "mcp-config-item",
span { class: "mcp-config-label", "Endpoint" }
code { class: "mcp-config-value", "{server.endpoint_url}" }
}
div { class: "mcp-config-item",
span { class: "mcp-config-label", "Transport" }
span { class: "mcp-config-value", "{server.transport}" }
}
if let Some(port) = server.port {
div { class: "mcp-config-item",
span { class: "mcp-config-label", "Port" }
span { class: "mcp-config-value", "{port}" }
}
}
if let Some(ref db) = server.mongodb_database {
div { class: "mcp-config-item",
span { class: "mcp-config-label", "Database" }
span { class: "mcp-config-value", "{db}" }
}
}
}
div { class: "mcp-tools-section",
span { class: "mcp-config-label", "Enabled Tools" }
div { class: "mcp-tools-list",
for tool in server.tools_enabled.iter() {
span { class: "mcp-tool-badge", "{tool}" }
}
}
}
div { class: "mcp-token-section",
span { class: "mcp-config-label", "Access Token" }
div { class: "mcp-token-row",
code { class: "mcp-token-value",
if is_token_visible {
"{server.access_token}"
} else {
"mcp_••••••••••••••••••••••••••••"
}
}
button {
class: "btn btn-sm btn-ghost",
onclick: {
let id = sid.clone();
move |_| {
if visible_token().as_deref() == Some(id.as_str()) {
visible_token.set(None);
} else {
visible_token.set(Some(id.clone()));
}
}
},
if is_token_visible { "Hide" } else { "Reveal" }
}
button {
class: "btn btn-sm btn-ghost",
onclick: {
let id = sid.clone();
move |_| {
let id = id.clone();
spawn(async move {
match regenerate_mcp_token(id).await {
Ok(_) => {
toasts.push(ToastType::Success, "Token regenerated");
servers.restart();
}
Err(e) => toasts.push(ToastType::Error, e.to_string()),
}
});
}
},
"Regenerate"
}
}
}
div { class: "mcp-meta",
span { class: "text-secondary",
"Created {created_str}"
}
}
}
}
}
}
}
}
},
Some(None) => rsx! { div { class: "card", p { "Failed to load MCP servers." } } },
None => rsx! { div { class: "card", p { "Loading..." } } },
}
}
}

View File

@@ -10,7 +10,6 @@ pub mod graph_explorer;
pub mod graph_index; pub mod graph_index;
pub mod impact_analysis; pub mod impact_analysis;
pub mod issues; pub mod issues;
pub mod mcp_servers;
pub mod overview; pub mod overview;
pub mod repositories; pub mod repositories;
pub mod sbom; pub mod sbom;
@@ -28,7 +27,6 @@ pub use graph_explorer::GraphExplorerPage;
pub use graph_index::GraphIndexPage; pub use graph_index::GraphIndexPage;
pub use impact_analysis::ImpactAnalysisPage; pub use impact_analysis::ImpactAnalysisPage;
pub use issues::IssuesPage; pub use issues::IssuesPage;
pub use mcp_servers::McpServersPage;
pub use overview::OverviewPage; pub use overview::OverviewPage;
pub use repositories::RepositoriesPage; pub use repositories::RepositoriesPage;
pub use sbom::SbomPage; pub use sbom::SbomPage;

View File

@@ -1,21 +0,0 @@
[package]
name = "compliance-mcp"
version = "0.1.0"
edition = "2021"
[dependencies]
compliance-core = { workspace = true, features = ["mongodb"] }
rmcp = { version = "0.16", features = ["server", "macros", "transport-io", "transport-streamable-http-server"] }
tokio = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
mongodb = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
dotenvy = "0.15"
thiserror = { workspace = true }
chrono = { workspace = true }
bson = { version = "2", features = ["chrono-0_4"] }
schemars = "1.0"
axum = "0.8"
tower-http = { version = "0.6", features = ["cors"] }

View File

@@ -1,34 +0,0 @@
use mongodb::{Client, Collection};
use compliance_core::models::*;
#[derive(Clone, Debug)]
pub struct Database {
inner: mongodb::Database,
}
impl Database {
pub async fn connect(uri: &str, db_name: &str) -> Result<Self, mongodb::error::Error> {
let client = Client::with_uri_str(uri).await?;
let db = client.database(db_name);
db.run_command(mongodb::bson::doc! { "ping": 1 }).await?;
tracing::info!("MCP server connected to MongoDB '{db_name}'");
Ok(Self { inner: db })
}
pub fn findings(&self) -> Collection<Finding> {
self.inner.collection("findings")
}
pub fn sbom_entries(&self) -> Collection<SbomEntry> {
self.inner.collection("sbom_entries")
}
pub fn dast_findings(&self) -> Collection<DastFinding> {
self.inner.collection("dast_findings")
}
pub fn dast_scan_runs(&self) -> Collection<DastScanRun> {
self.inner.collection("dast_scan_runs")
}
}

View File

@@ -1,58 +0,0 @@
mod database;
mod server;
mod tools;
use std::sync::Arc;
use database::Database;
use rmcp::transport::{
streamable_http_server::session::local::LocalSessionManager, StreamableHttpServerConfig,
StreamableHttpService,
};
use server::ComplianceMcpServer;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let _ = dotenvy::dotenv();
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::from_default_env()
.add_directive("compliance_mcp=info".parse()?),
)
.init();
let mongo_uri =
std::env::var("MONGODB_URI").unwrap_or_else(|_| "mongodb://localhost:27017".to_string());
let db_name =
std::env::var("MONGODB_DATABASE").unwrap_or_else(|_| "compliance_scanner".to_string());
let db = Database::connect(&mongo_uri, &db_name).await?;
// If MCP_PORT is set, run as Streamable HTTP server; otherwise use stdio.
if let Ok(port_str) = std::env::var("MCP_PORT") {
let port: u16 = port_str.parse()?;
tracing::info!("Starting MCP server on HTTP port {port}");
let db_clone = db.clone();
let service = StreamableHttpService::new(
move || Ok(ComplianceMcpServer::new(db_clone.clone())),
Arc::new(LocalSessionManager::default()),
StreamableHttpServerConfig::default(),
);
let router = axum::Router::new().nest_service("/mcp", service);
let listener = tokio::net::TcpListener::bind(("0.0.0.0", port)).await?;
tracing::info!("MCP HTTP server listening on 0.0.0.0:{port}");
axum::serve(listener, router).await?;
} else {
tracing::info!("Starting MCP server on stdio");
let server = ComplianceMcpServer::new(db);
let transport = rmcp::transport::stdio();
use rmcp::ServiceExt;
let handle = server.serve(transport).await?;
handle.waiting().await?;
}
Ok(())
}
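The removed `main` picks its transport from the environment: when `MCP_PORT` is set it serves Streamable HTTP on that port, otherwise it falls back to stdio. The selection logic, restated as a tiny Python sketch (function name is mine, not from the source):

```python
def choose_transport(env: dict) -> tuple:
    # Mirrors the removed main(): MCP_PORT set -> HTTP on that port, else stdio.
    port = env.get("MCP_PORT")
    return ("http", int(port)) if port else ("stdio", None)
```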

View File

@@ -1,109 +0,0 @@
use rmcp::{
handler::server::wrapper::Parameters, model::*, tool, tool_handler, tool_router, ServerHandler,
};
use crate::database::Database;
use crate::tools::{dast, findings, sbom};
pub struct ComplianceMcpServer {
db: Database,
#[allow(dead_code)]
tool_router: rmcp::handler::server::router::tool::ToolRouter<Self>,
}
#[tool_router]
impl ComplianceMcpServer {
pub fn new(db: Database) -> Self {
Self {
db,
tool_router: Self::tool_router(),
}
}
// ── Findings ──────────────────────────────────────────
#[tool(
description = "List security findings with optional filters for repo, severity, status, and scan type"
)]
async fn list_findings(
&self,
Parameters(params): Parameters<findings::ListFindingsParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
findings::list_findings(&self.db, params).await
}
#[tool(description = "Get a single finding by its ID")]
async fn get_finding(
&self,
Parameters(params): Parameters<findings::GetFindingParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
findings::get_finding(&self.db, params).await
}
#[tool(description = "Get a summary of findings counts grouped by severity and status")]
async fn findings_summary(
&self,
Parameters(params): Parameters<findings::FindingsSummaryParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
findings::findings_summary(&self.db, params).await
}
// ── SBOM ──────────────────────────────────────────────
#[tool(
description = "List SBOM packages with optional filters for repo, vulnerabilities, package manager, and license"
)]
async fn list_sbom_packages(
&self,
Parameters(params): Parameters<sbom::ListSbomPackagesParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
sbom::list_sbom_packages(&self.db, params).await
}
#[tool(
description = "Generate a vulnerability report for a repository showing all packages with known CVEs"
)]
async fn sbom_vuln_report(
&self,
Parameters(params): Parameters<sbom::SbomVulnReportParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
sbom::sbom_vuln_report(&self.db, params).await
}
// ── DAST ──────────────────────────────────────────────
#[tool(
description = "List DAST findings with optional filters for target, scan run, severity, exploitability, and vulnerability type"
)]
async fn list_dast_findings(
&self,
Parameters(params): Parameters<dast::ListDastFindingsParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
dast::list_dast_findings(&self.db, params).await
}
#[tool(description = "Get a summary of recent DAST scan runs and finding counts")]
async fn dast_scan_summary(
&self,
Parameters(params): Parameters<dast::DastScanSummaryParams>,
) -> Result<CallToolResult, rmcp::ErrorData> {
dast::dast_scan_summary(&self.db, params).await
}
}
#[tool_handler]
impl ServerHandler for ComplianceMcpServer {
fn get_info(&self) -> ServerInfo {
ServerInfo {
protocol_version: ProtocolVersion::V_2024_11_05,
capabilities: ServerCapabilities::builder()
.enable_tools()
.build(),
server_info: Implementation::from_build_env(),
instructions: Some(
"Compliance Scanner MCP server. Query security findings, SBOM data, and DAST results."
.to_string(),
),
}
}
}

View File

@@ -1,154 +0,0 @@
use mongodb::bson::doc;
use rmcp::{model::*, ErrorData as McpError};
use schemars::JsonSchema;
use serde::Deserialize;
use crate::database::Database;
const MAX_LIMIT: i64 = 200;
const DEFAULT_LIMIT: i64 = 50;
fn cap_limit(limit: Option<i64>) -> i64 {
limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT)
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct ListDastFindingsParams {
/// Filter by DAST target ID
pub target_id: Option<String>,
/// Filter by scan run ID
pub scan_run_id: Option<String>,
/// Filter by severity: info, low, medium, high, critical
pub severity: Option<String>,
/// Only show confirmed exploitable findings
pub exploitable: Option<bool>,
/// Filter by vulnerability type (e.g. sql_injection, xss, ssrf)
pub vuln_type: Option<String>,
/// Maximum number of results (default 50, max 200)
pub limit: Option<i64>,
}
pub async fn list_dast_findings(
db: &Database,
params: ListDastFindingsParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref target_id) = params.target_id {
filter.insert("target_id", target_id);
}
if let Some(ref scan_run_id) = params.scan_run_id {
filter.insert("scan_run_id", scan_run_id);
}
if let Some(ref severity) = params.severity {
filter.insert("severity", severity);
}
if let Some(exploitable) = params.exploitable {
filter.insert("exploitable", exploitable);
}
if let Some(ref vuln_type) = params.vuln_type {
filter.insert("vuln_type", vuln_type);
}
let limit = cap_limit(params.limit);
let mut cursor = db
.dast_findings()
.find(filter)
.sort(doc! { "created_at": -1 })
.limit(limit)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut results = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let finding = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
results.push(finding);
}
let json = serde_json::to_string_pretty(&results)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct DastScanSummaryParams {
/// Filter by DAST target ID
pub target_id: Option<String>,
}
pub async fn dast_scan_summary(
db: &Database,
params: DastScanSummaryParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref target_id) = params.target_id {
filter.insert("target_id", target_id);
}
// Get recent scan runs
let mut cursor = db
.dast_scan_runs()
.find(filter.clone())
.sort(doc! { "started_at": -1 })
.limit(10)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut scan_runs = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let run = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
scan_runs.push(serde_json::json!({
"id": run.id.map(|id| id.to_hex()),
"target_id": run.target_id,
"status": run.status,
"findings_count": run.findings_count,
"exploitable_count": run.exploitable_count,
"endpoints_discovered": run.endpoints_discovered,
"started_at": run.started_at.to_rfc3339(),
"completed_at": run.completed_at.map(|t| t.to_rfc3339()),
}));
}
// Count findings by severity
let mut findings_filter = doc! {};
if let Some(ref target_id) = params.target_id {
findings_filter.insert("target_id", target_id);
}
let total_findings = db
.dast_findings()
.count_documents(findings_filter.clone())
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut exploitable_filter = findings_filter.clone();
exploitable_filter.insert("exploitable", true);
let exploitable_count = db
.dast_findings()
.count_documents(exploitable_filter)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let summary = serde_json::json!({
"total_findings": total_findings,
"exploitable_findings": exploitable_count,
"recent_scan_runs": scan_runs,
});
let json = serde_json::to_string_pretty(&summary)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
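Each of the removed tool modules repeats the same `cap_limit` guard: a missing limit defaults to 50, and any supplied value is clamped into [1, 200]. The behavior, restated as a small Python sketch:

```python
MAX_LIMIT = 200
DEFAULT_LIMIT = 50

def cap_limit(limit=None):
    # Default to 50, then clamp into [1, 200]; mirrors Rust's clamp(1, MAX_LIMIT).
    value = limit if limit is not None else DEFAULT_LIMIT
    return max(1, min(value, MAX_LIMIT))
```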

View File

@@ -1,163 +0,0 @@
use mongodb::bson::doc;
use rmcp::{model::*, ErrorData as McpError};
use schemars::JsonSchema;
use serde::Deserialize;
use crate::database::Database;
const MAX_LIMIT: i64 = 200;
const DEFAULT_LIMIT: i64 = 50;
fn cap_limit(limit: Option<i64>) -> i64 {
limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT)
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct ListFindingsParams {
/// Filter by repository ID
pub repo_id: Option<String>,
/// Filter by severity: info, low, medium, high, critical
pub severity: Option<String>,
/// Filter by status: open, triaged, false_positive, resolved, ignored
pub status: Option<String>,
/// Filter by scan type: sast, sbom, cve, gdpr, oauth
pub scan_type: Option<String>,
/// Maximum number of results (default 50, max 200)
pub limit: Option<i64>,
}
pub async fn list_findings(
db: &Database,
params: ListFindingsParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref repo_id) = params.repo_id {
filter.insert("repo_id", repo_id);
}
if let Some(ref severity) = params.severity {
filter.insert("severity", severity);
}
if let Some(ref status) = params.status {
filter.insert("status", status);
}
if let Some(ref scan_type) = params.scan_type {
filter.insert("scan_type", scan_type);
}
let limit = cap_limit(params.limit);
let mut cursor = db
.findings()
.find(filter)
.sort(doc! { "created_at": -1 })
.limit(limit)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut results = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let finding = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
results.push(finding);
}
let json = serde_json::to_string_pretty(&results)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct GetFindingParams {
/// Finding ID (MongoDB ObjectId hex string)
pub id: String,
}
pub async fn get_finding(
db: &Database,
params: GetFindingParams,
) -> Result<CallToolResult, McpError> {
let oid = bson::oid::ObjectId::parse_str(&params.id)
.map_err(|e| McpError::invalid_params(format!("invalid ObjectId: {e}"), None))?;
let finding = db
.findings()
.find_one(doc! { "_id": oid })
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?
.ok_or_else(|| McpError::invalid_params("finding not found", None))?;
let json = serde_json::to_string_pretty(&finding)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct FindingsSummaryParams {
/// Filter by repository ID
pub repo_id: Option<String>,
}
#[derive(serde::Serialize)]
struct SeverityCount {
severity: String,
count: u64,
}
pub async fn findings_summary(
db: &Database,
params: FindingsSummaryParams,
) -> Result<CallToolResult, McpError> {
let mut base_filter = doc! {};
if let Some(ref repo_id) = params.repo_id {
base_filter.insert("repo_id", repo_id);
}
let severities = ["critical", "high", "medium", "low", "info"];
let mut counts = Vec::new();
for sev in &severities {
let mut filter = base_filter.clone();
filter.insert("severity", sev);
let count = db
.findings()
.count_documents(filter)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
counts.push(SeverityCount {
severity: sev.to_string(),
count,
});
}
let total: u64 = counts.iter().map(|c| c.count).sum();
let mut status_counts = Vec::new();
for status in &["open", "triaged", "false_positive", "resolved", "ignored"] {
let mut filter = base_filter.clone();
filter.insert("status", status);
let count = db
.findings()
.count_documents(filter)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
status_counts.push(serde_json::json!({ "status": status, "count": count }));
}
let summary = serde_json::json!({
"total": total,
"by_severity": counts,
"by_status": status_counts,
});
let json = serde_json::to_string_pretty(&summary)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}

View File

@@ -1,3 +0,0 @@
pub mod dast;
pub mod findings;
pub mod sbom;

View File

@@ -1,129 +0,0 @@
use mongodb::bson::doc;
use rmcp::{model::*, ErrorData as McpError};
use schemars::JsonSchema;
use serde::Deserialize;
use crate::database::Database;
const MAX_LIMIT: i64 = 200;
const DEFAULT_LIMIT: i64 = 50;
fn cap_limit(limit: Option<i64>) -> i64 {
limit.unwrap_or(DEFAULT_LIMIT).clamp(1, MAX_LIMIT)
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct ListSbomPackagesParams {
/// Filter by repository ID
pub repo_id: Option<String>,
/// Only show packages with known vulnerabilities
pub has_vulns: Option<bool>,
/// Filter by package manager (e.g. npm, cargo, pip)
pub package_manager: Option<String>,
/// Filter by license (e.g. MIT, Apache-2.0)
pub license: Option<String>,
/// Maximum number of results (default 50, max 200)
pub limit: Option<i64>,
}
pub async fn list_sbom_packages(
db: &Database,
params: ListSbomPackagesParams,
) -> Result<CallToolResult, McpError> {
let mut filter = doc! {};
if let Some(ref repo_id) = params.repo_id {
filter.insert("repo_id", repo_id);
}
if let Some(ref pm) = params.package_manager {
filter.insert("package_manager", pm);
}
if let Some(ref license) = params.license {
filter.insert("license", license);
}
if params.has_vulns == Some(true) {
filter.insert("known_vulnerabilities.0", doc! { "$exists": true });
}
let limit = cap_limit(params.limit);
let mut cursor = db
.sbom_entries()
.find(filter)
.sort(doc! { "name": 1 })
.limit(limit)
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut results = Vec::new();
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let entry = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
results.push(entry);
}
let json = serde_json::to_string_pretty(&results)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}
#[derive(Debug, Deserialize, JsonSchema)]
pub struct SbomVulnReportParams {
/// Repository ID to generate vulnerability report for
pub repo_id: String,
}
pub async fn sbom_vuln_report(
db: &Database,
params: SbomVulnReportParams,
) -> Result<CallToolResult, McpError> {
let filter = doc! {
"repo_id": &params.repo_id,
"known_vulnerabilities.0": { "$exists": true },
};
let mut cursor = db
.sbom_entries()
.find(filter)
.sort(doc! { "name": 1 })
.await
.map_err(|e| McpError::internal_error(format!("DB error: {e}"), None))?;
let mut vulnerable_packages = Vec::new();
let mut total_vulns = 0u64;
while cursor
.advance()
.await
.map_err(|e| McpError::internal_error(format!("cursor error: {e}"), None))?
{
let entry = cursor
.deserialize_current()
.map_err(|e| McpError::internal_error(format!("deserialize error: {e}"), None))?;
total_vulns += entry.known_vulnerabilities.len() as u64;
vulnerable_packages.push(serde_json::json!({
"name": entry.name,
"version": entry.version,
"package_manager": entry.package_manager,
"license": entry.license,
"vulnerabilities": entry.known_vulnerabilities,
}));
}
let report = serde_json::json!({
"repo_id": params.repo_id,
"vulnerable_packages_count": vulnerable_packages.len(),
"total_vulnerabilities": total_vulns,
"packages": vulnerable_packages,
});
let json = serde_json::to_string_pretty(&report)
.map_err(|e| McpError::internal_error(format!("json error: {e}"), None))?;
Ok(CallToolResult::success(vec![Content::text(json)]))
}

View File

@@ -33,7 +33,6 @@ export default defineConfig({
{ text: 'DAST Scanning', link: '/features/dast' },
{ text: 'AI Chat (RAG)', link: '/features/ai-chat' },
{ text: 'Issue Tracker Integration', link: '/features/issues' },
{ text: 'MCP Server', link: '/features/mcp-server' },
],
},
{

View File

@@ -75,15 +75,6 @@ REDIRECT_URI=http://localhost:8080/auth/callback
APP_URL=http://localhost:8080
```
## MCP Server
```bash
MONGODB_URI=mongodb://root:example@localhost:27017/compliance_scanner?authSource=admin
MONGODB_DATABASE=compliance_scanner
# Set to enable HTTP transport (omit for stdio)
MCP_PORT=8090
```
## Observability
```bash

View File

@@ -1,155 +0,0 @@
# MCP Server
The Model Context Protocol (MCP) server exposes compliance data to external LLMs and AI agents. Any MCP-compatible client — such as Claude, Cursor, or a custom agent — can connect and query findings, SBOM data, and DAST results without direct database access.
## How It Works
The `compliance-mcp` crate runs as a standalone service that connects to the same MongoDB database as the agent and dashboard. It registers a set of **tools** that LLM clients can discover and call through the MCP protocol.
```
LLM Client ──MCP──▶ compliance-mcp ──MongoDB──▶ compliance_scanner DB
```
The server supports two transport modes:
| Transport | Use Case | How to Enable |
|-----------|----------|---------------|
| **Stdio** | Local development, piped to a CLI tool | Default (no `MCP_PORT` set) |
| **Streamable HTTP** | Remote deployment, multiple clients | Set `MCP_PORT=8090` |
## Available Tools
The MCP server exposes seven tools:
### Findings
| Tool | Description |
|------|-------------|
| `list_findings` | Query findings with optional filters for repository, severity, status, and scan type. Returns up to 200 results (default 50). |
| `get_finding` | Retrieve a single finding by its MongoDB ObjectId. |
| `findings_summary` | Get finding counts grouped by severity and status, optionally filtered by repository. |
### SBOM
| Tool | Description |
|------|-------------|
| `list_sbom_packages` | List SBOM packages with filters for repository, vulnerabilities, package manager, and license. |
| `sbom_vuln_report` | Generate a vulnerability report for a repository showing all packages with known CVEs. |
### DAST
| Tool | Description |
|------|-------------|
| `list_dast_findings` | Query DAST findings with filters for target, scan run, severity, exploitability, and vulnerability type. |
| `dast_scan_summary` | Get a summary of recent DAST scan runs and finding counts. |
## Running Locally
### Stdio Mode
Run the MCP server directly — it reads from stdin and writes to stdout:
```bash
cd compliance-mcp
cargo run
```
Configure your MCP client to launch it as a subprocess. For example, in a Claude Code `mcp.json`:
```json
{
"mcpServers": {
"compliance": {
"command": "cargo",
"args": ["run", "-p", "compliance-mcp"],
"cwd": "/path/to/compliance-scanner"
}
}
}
```
### HTTP Mode
Set `MCP_PORT` to start the Streamable HTTP server:
```bash
MCP_PORT=8090 cargo run -p compliance-mcp
```
The server listens on `http://0.0.0.0:8090/mcp`. Point your MCP client to this endpoint.
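As a sketch of the wire format, the first message an MCP client sends over Streamable HTTP is a JSON-RPC `initialize` request POSTed to the `/mcp` endpoint (shape per the MCP specification; the version string and client info below are illustrative placeholders):
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}
```
After the handshake, the client can discover the seven tools via `tools/list` and invoke them with `tools/call`.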
## Configuration
| Variable | Description | Default |
|----------|-------------|---------|
| `MONGODB_URI` | MongoDB connection string | `mongodb://localhost:27017` |
| `MONGODB_DATABASE` | Database name | `compliance_scanner` |
| `MCP_PORT` | Port for HTTP transport (omit for stdio) | — |
| `RUST_LOG` | Log level filter | `compliance_mcp=info` |
Create a `.env` file in the project root or set these as environment variables.
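For example, a minimal `.env` for local HTTP mode might look like this (credentials are placeholders matching the defaults used elsewhere in these docs):
```bash
MONGODB_URI=mongodb://root:example@localhost:27017/compliance_scanner?authSource=admin
MONGODB_DATABASE=compliance_scanner
MCP_PORT=8090
RUST_LOG=compliance_mcp=debug
```
Omit `MCP_PORT` to fall back to stdio transport.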
## Deploying with Docker
The `Dockerfile.mcp` builds and runs the MCP server in HTTP mode on port 8090.
```bash
docker build -f Dockerfile.mcp -t compliance-mcp .
docker run -p 8090:8090 \
-e MONGODB_URI=mongodb://mongo:27017 \
-e MONGODB_DATABASE=compliance_scanner \
-e MCP_PORT=8090 \
compliance-mcp
```
### Coolify Deployment
1. Create a new service in your Coolify project
2. Set the **Dockerfile path** to `Dockerfile.mcp`
3. Set the **exposed port** to `8090`
4. Add environment variables: `MONGODB_URI`, `MONGODB_DATABASE`, `MCP_PORT=8090`
5. The MCP endpoint will be available at your configured domain under `/mcp`
The CI pipeline automatically deploys on changes to `compliance-core/`, `compliance-mcp/`, `Dockerfile.mcp`, or `Cargo.toml`/`Cargo.lock`. Add the `COOLIFY_WEBHOOK_MCP` secret to your Gitea repository.
## Managing MCP Servers in the Dashboard
Navigate to **MCP Servers** in the dashboard sidebar to:
- **Register** MCP server instances with their endpoint URL, transport type, port, and database connection
- **View** server configuration, enabled tools, and status
- **Manage access tokens** — reveal, copy, or regenerate bearer tokens for authentication
- **Delete** servers that are no longer needed
Each registered server is assigned a random access token on creation. Use this token in your MCP client configuration for authenticated access.
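For HTTP transport, a client that supports custom headers can pass the token as a bearer credential. A sketch, assuming your MCP client accepts a `headers` field alongside the endpoint `url` (the exact key varies by client; check its documentation):
```json
{
  "mcpServers": {
    "compliance": {
      "url": "https://your-mcp-domain.example/mcp",
      "headers": {
        "Authorization": "Bearer <access-token>"
      }
    }
  }
}
```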
## Example: Querying Findings from an LLM
Once connected, an LLM can call any of the registered tools. For example:
**"Show me all critical findings"** triggers `list_findings` with `severity: "critical"`:
```json
{
"tool": "list_findings",
"arguments": {
"severity": "critical",
"limit": 10
}
}
```
**"What vulnerable packages does repo X have?"** triggers `sbom_vuln_report`:
```json
{
"tool": "sbom_vuln_report",
"arguments": {
"repo_id": "683abc..."
}
}
```
::: tip
The MCP server is read-only — it only queries data from MongoDB. It cannot modify findings, trigger scans, or change configuration. This makes it safe to expose to external LLM clients.
:::

View File

@@ -97,17 +97,6 @@ NVD_API_KEY=your-nvd-api-key
Get a free key at [https://nvd.nist.gov/developers/request-an-api-key](https://nvd.nist.gov/developers/request-an-api-key).
## MCP Server
The MCP server exposes compliance data to external LLMs via the Model Context Protocol. See [MCP Server](/features/mcp-server) for full details.
```bash
# Set MCP_PORT to enable HTTP transport (omit for stdio mode)
MCP_PORT=8090
```
The MCP server shares the `MONGODB_URI` and `MONGODB_DATABASE` variables with the rest of the platform.
## Clone Path
Where the agent stores cloned repository files:
@@ -150,4 +139,3 @@ GIT_CLONE_BASE_PATH=/tmp/compliance-scanner/repos
| `APP_URL` | No | — | Application root URL |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | No | — | OTLP collector endpoint |
| `OTEL_SERVICE_NAME` | No | — | OpenTelemetry service name |
| `MCP_PORT` | No | — | MCP HTTP transport port (omit for stdio) |

View File

@@ -1,15 +0,0 @@
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
index index.html;
location / {
try_files $uri $uri/ $uri.html /index.html;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
}