diff --git a/.claude/CLAUDE.md b/.claude/CLAUDE.md
index 664642d..724410e 100644
--- a/.claude/CLAUDE.md
+++ b/.claude/CLAUDE.md
@@ -14,53 +14,72 @@
 ## Development environment (IMPORTANT - ALWAYS READ FIRST)
 
-### Two-machine setup + Hetzner
+### Two-machine setup + Coolify
 
 | Device | Role | Tasks |
 |--------|------|-------|
 | **MacBook** | Development | Claude terminal, code development, browser (frontend tests) |
-| **Mac Mini** | Local server | Docker for local dev/tests (NO LONGER for production!) |
-| **Hetzner** | Production | CI/CD build + deploy via Gitea Actions |
+| **Mac Mini** | Local server | Docker for local dev/tests (NOT for production!) |
+| **Coolify** | Production | Automatic build + deploy on push to gitea |
 
-**IMPORTANT:** Code is edited on the MacBook. Production deployment runs automatically on Hetzner via CI/CD.
+**IMPORTANT:** Code is edited on the MacBook. Production deployment runs automatically through Coolify.
 
-### Development workflow (CI/CD, since 2026-03-11)
+### Development workflow (CI/CD via Coolify)
 
 ```bash
 # 1. Edit code on the MacBook (this directory)
 # 2. Commit and push to BOTH remotes:
 git push origin main && git push gitea main
 
-# 3. DONE! Gitea Actions on Hetzner takes over automatically:
-#    Push to main → lint → tests → build → deploy
-#    Pipeline: .gitea/workflows/ci.yaml
+# 3. DONE! A push to gitea automatically triggers:
+#    - Gitea Actions: lint → tests → validation
+#    - Coolify: build → deploy
 # Duration: about 3 minutes
 
 # Check status: https://gitea.meghsakha.com/Benjamin_Boenisch/breakpilot-compliance/actions
 ```
 
-**NO LONGER NEEDED:** Manual `ssh macmini "docker compose build"` — the CI/CD pipeline handles this now!
+**NO LONGER NEEDED:** Manual `ssh macmini "docker compose build"` for production.
+**NEVER** click "Redeploy" manually in Coolify — Gitea Actions triggers Coolify automatically.
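The deploy wait this file prescribes (poll the dev health endpoints every 20 seconds, give up after 5 minutes, then point at the Coolify logs) can be sketched as a small helper. This is a sketch, not project code: `waitForDeploy` and the injectable `check` callback are hypothetical names; the endpoint URLs and the 20 s / 5 min cadence are taken from the monitoring section of this file.

```typescript
// Sketch of the post-push deploy monitor described above.
// `check` is injectable so the loop can be exercised without the network.
type Check = (url: string) => Promise<boolean>

const ENDPOINTS = [
  'https://api-dev.breakpilot.ai/health', // Backend Compliance
  'https://sdk-dev.breakpilot.ai/health', // AI Compliance SDK
]

async function waitForDeploy(
  check: Check,
  { intervalMs = 20_000, timeoutMs = 300_000 } = {},
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    // A failed request counts as "not healthy yet", not as a fatal error.
    const results = await Promise.all(
      ENDPOINTS.map(url => check(url).catch(() => false)),
    )
    if (results.every(Boolean)) return true // all services healthy
    await new Promise(resolve => setTimeout(resolve, intervalMs))
  }
  return false // timed out: caller should surface the Coolify logs
}
```

In real use `check` would wrap `fetch(url)` and test `response.ok`, mirroring `curl -sf`.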
-### CI/CD pipeline (Gitea Actions → Hetzner)
+### Post-push deploy monitoring (MANDATORY after every push to gitea)
+
+**WHENEVER Claude pushes to gitea, deploy monitoring MUST run automatically afterwards:**
+
+1. Tell the user immediately: "Deploy started, I am monitoring the status..."
+2. Poll the health checks in the background (every 20 seconds, max 5 minutes):
+   ```bash
+   # Compliance health endpoints:
+   curl -sf https://api-dev.breakpilot.ai/health   # Backend Compliance
+   curl -sf https://sdk-dev.breakpilot.ai/health   # AI Compliance SDK
+   ```
+3. As soon as ALL endpoints are healthy, report to the user in chat:
+   **"Deploy complete! You can test now: https://admin-dev.breakpilot.ai"**
+4. If still not healthy after 5 minutes → error message pointing to the Coolify logs.
+
+**Terminal flow:**
+```
+> git push gitea main ✓
+> "Deploy started, I am monitoring the status..."
+> [background polling runs]
+> "Deploy complete! All services healthy. You can test now."
+```
+
+### CI/CD pipeline (Gitea Actions → Coolify)
 
 ```
-Push to main → go-lint/python-lint/nodejs-lint (PRs only)
-             → test-go-ai-compliance
-             → test-python-backend-compliance
-             → test-python-document-crawler
-             → test-python-dsms-gateway
-             → deploy-hetzner (only if ALL tests are green)
+Push to gitea main → go-lint/python-lint/nodejs-lint (PRs only)
+                   → test-go-ai-compliance
+                   → test-python-backend-compliance
+                   → test-python-document-crawler
+                   → test-python-dsms-gateway
+                   → validate-canonical-controls
+                   → Coolify: build + deploy (automatic on push)
 ```
 
 **Files:**
-- `.gitea/workflows/ci.yaml` — pipeline definition
-- `docker-compose.hetzner.yml` — override: arm64→amd64 for Hetzner (x86_64)
-- Deploy path on Hetzner: `/opt/breakpilot-compliance/`
-
-**deploy-hetzner flow:**
-1. `git pull` in the deploy dir
-2. `docker compose -f docker-compose.yml -f docker-compose.hetzner.yml build --parallel`
-3. `docker compose up -d --remove-orphans`
-4.
-   Health checks
+- `.gitea/workflows/ci.yaml` — pipeline definition (tests + validation)
+- `docker-compose.yml` — main compose file
+- `docker-compose.hetzner.yml` — override: arm64→amd64 for Coolify production (x86_64)
 
 ### Local development (Mac Mini — optional)
 
@@ -88,20 +107,18 @@ rsync -avz --exclude node_modules --exclude .next --exclude .git \
 - RAG service (vector search for compliance documents)
 - Nginx (reverse proxy)
 
-**External services (Hetzner/meghshakka) — since 2026-03-06:**
-- PostgreSQL 17 @ `46.225.100.82:54321` (sslmode=require) — schemas: `compliance` (51), `public` (compliance_* + training_* + ucca_* + academy_*)
+**External services (production):**
+- PostgreSQL 17 (sslmode=require) — schemas: `compliance`, `public`
 - Qdrant @ `qdrant-dev.breakpilot.ai` (HTTPS, API key)
-- Object storage @ `nbg1.your-objectstorage.com` (S3-compatible, TLS)
+- Object storage (S3-compatible, TLS)
 
-Config via `.env` on the Mac Mini (not in the repo): `COMPLIANCE_DATABASE_URL`, `QDRANT_URL`, `QDRANT_API_KEY`
-
-Check: `curl -sf http://macmini:8099/health`
+Config via `.env` (not in the repo): `COMPLIANCE_DATABASE_URL`, `QDRANT_URL`, `QDRANT_API_KEY`
 
 ---
 
 ## Main URLs
 
-### Production (Hetzner — primary)
+### Production (Coolify-deployed)
 
 | URL | Service | Description |
 |-----|---------|--------------|
@@ -157,18 +174,6 @@ Check: `curl -sf http://macmini:8099/health`
 | docs | MkDocs/nginx | 8011 | bp-compliance-docs |
 | core-wait | curl health-check | - | bp-compliance-core-wait |
 
-### compliance-tts-service
-- Piper TTS + FFmpeg for training videos
-- Stores audio/video in Hetzner object storage (nbg1.your-objectstorage.com)
-- TTS model: `de_DE-thorsten-high.onnx`
-- Files: `main.py`, `tts_engine.py`, `video_generator.py`, `storage.py`
-
-### document-crawler
-- Document analysis: PDF, DOCX, XLSX, PPTX
-- Gap analysis between existing documents and compliance requirements
-- IPFS archiving via dsms-gateway
-- Communicates with ai-compliance-sdk (LLM gateway)
-
 ### Docker network
 Uses the external core network:
 ```yaml
@@ -214,8 +219,8 @@ breakpilot-compliance/
 ├── dsms-gateway/              # IPFS gateway
 ├── scripts/                   # Helper scripts
 ├── docker-compose.yml         # Compliance compose (~10 services, platform: arm64)
-├── docker-compose.hetzner.yml # Override: arm64→amd64 for Hetzner
-└── .gitea/workflows/ci.yaml   # CI/CD pipeline (lint → tests → deploy)
+├── docker-compose.hetzner.yml # Override: arm64→amd64 for Coolify production
+└── .gitea/workflows/ci.yaml   # CI/CD pipeline (lint → tests → validation)
 ```
 
 ---
 
@@ -225,7 +230,7 @@ breakpilot-compliance/
 ### Deployment (CI/CD — the standard path)
 
 ```bash
-# Commit and push → CI/CD deploys automatically to Hetzner:
+# Commit and push → Coolify deploys automatically:
 git push origin main && git push gitea main
 
 # Check CI status (in the browser):
@@ -338,10 +343,6 @@ DELETE /api/v1/projects/{project_id} → archive project (soft delete)
 - `app/sdk/layout.tsx` — reads `?project=` from searchParams
 - `app/api/sdk/v1/projects/` — Next.js proxy to the backend
 
-**Multi-tab:** Tab A (project X) and Tab B (project Y) do not interfere — separate BroadcastChannel + localStorage keys.
-
-**Master-data copy:** New project with `copy_from_project_id` → the backend copies `companyProfile` from the source state. Independently editable afterwards.
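The master-data copy note in the hunk above (a new project created with `copy_from_project_id` gets its own copy of `companyProfile`, which is then edited independently) implies copy-then-diverge semantics. A minimal sketch under that assumption — the `copyProjectState` helper and the `ProjectState` shape are hypothetical, not the backend's actual types:

```typescript
// Hypothetical sketch of the copy-then-diverge behaviour described above:
// master data is deep-copied once, so later edits to the new project
// never leak back into the source project.
interface ProjectState {
  companyProfile: Record<string, unknown>
}

function copyProjectState(source: ProjectState): ProjectState {
  // Deep copy, no structural sharing with the source state.
  return { companyProfile: structuredClone(source.companyProfile) }
}
```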
-
 ### Backend-compliance APIs
 ```
 POST/GET /api/v1/compliance/risks
@@ -352,7 +353,7 @@ POST/GET /api/v1/dsr/requests
 POST/GET /api/v1/gdpr/exports
 POST/GET /api/v1/consent/admin
 
-# Master data, versioning & change requests (phases 1-6, 2026-03-07)
+# Master data, versioning & change requests
 GET/POST/DELETE /api/compliance/company-profile
 GET             /api/compliance/company-profile/template-context
 GET             /api/compliance/change-requests
@@ -370,24 +371,6 @@ GET /api/compliance/{doc}/{id}/versions
 - UUID format, no more `"default"`
 - Header `X-Tenant-ID` > query `tenant_id` > ENV fallback
 
-### Migrations (035-038)
-| No. | File | Description |
-|-----|------|-------------|
-| 035 | `migrations/035_vvt_tenant_isolation.sql` | VVT tenant_id + DSFA/vendor default→UUID |
-| 036 | `migrations/036_company_profile_extend.sql` | Master data JSONB + regulation flags |
-| 037 | `migrations/037_document_versions.sql` | 5 version tables + current_version |
-| 038 | `migrations/038_change_requests.sql` | Change requests + audit log |
-
-### New backend modules
-| File | Description |
-|------|-------------|
-| `compliance/api/tenant_utils.py` | Shared tenant-ID dependency |
-| `compliance/api/versioning_utils.py` | Shared versioning helper |
-| `compliance/api/change_request_routes.py` | CR CRUD + accept/reject/edit |
-| `compliance/api/change_request_engine.py` | Rule-based CR generation |
-| `compliance/api/generation_routes.py` | Document generation from master data |
-| `compliance/api/document_templates/` | 5 template generators (DSFA, VVT, TOM, etc.)
 |
-
 ---
 
 ## Important files (reference)
 
@@ -395,9 +378,7 @@ GET /api/compliance/{doc}/{id}/versions
 | File | Description |
 |------|-------------|
 | `admin-compliance/app/(sdk)/` | All 37+ SDK routes |
-| `admin-compliance/app/(sdk)/sdk/change-requests/page.tsx` | Change-request inbox |
-| `admin-compliance/components/sdk/Sidebar/SDKSidebar.tsx` | SDK navigation (with CR badge) |
-| `admin-compliance/components/sdk/VersionHistory.tsx` | Version-timeline component |
+| `admin-compliance/components/sdk/Sidebar/SDKSidebar.tsx` | SDK navigation |
 | `admin-compliance/components/sdk/CommandBar.tsx` | Command palette |
 | `admin-compliance/lib/sdk/context.tsx` | SDK state (provider) |
 | `backend-compliance/compliance/` | Main package (50+ files) |
diff --git a/.gitea/workflows/rag-ingest.yaml b/.gitea/workflows/rag-ingest.yaml
index 84563e3..0bfde96 100644
--- a/.gitea/workflows/rag-ingest.yaml
+++ b/.gitea/workflows/rag-ingest.yaml
@@ -5,8 +5,8 @@
 #
 # Phases: gesetze, eu, templates, datenschutz, verbraucherschutz, verify, version, all
 #
-# Prerequisite: the RAG service and Qdrant must be running on Hetzner.
-# The BreakPilot services must be deployed (ci.yaml deploy-hetzner).
+# Prerequisite: the RAG service and Qdrant must be running on Coolify.
+# The BreakPilot services must be deployed (ci.yaml deploy-coolify).
 name: RAG Ingestion
diff --git a/.gitignore b/.gitignore
index 073575e..a8dffe4 100644
--- a/.gitignore
+++ b/.gitignore
@@ -18,6 +18,7 @@ __pycache__/
 venv/
 .venv/
 .coverage
+coverage.out
 test_*.db
 
 # Docker
diff --git a/admin-compliance/__tests__/ingest-industry-compliance.test.ts b/admin-compliance/__tests__/ingest-industry-compliance.test.ts
index 76505d0..b4f9879 100644
--- a/admin-compliance/__tests__/ingest-industry-compliance.test.ts
+++ b/admin-compliance/__tests__/ingest-industry-compliance.test.ts
@@ -48,12 +48,12 @@ describe('Ingestion Script: ingest-industry-compliance.sh', () => {
     expect(scriptContent).toContain('chunk_strategy=recursive')
   })
 
-  it('should use chunk_size=512', () => {
-    expect(scriptContent).toContain('chunk_size=512')
+  it('should use chunk_size=1024', () => {
+    expect(scriptContent).toContain('chunk_size=1024')
   })
 
-  it('should use chunk_overlap=50', () => {
-    expect(scriptContent).toContain('chunk_overlap=50')
+  it('should use chunk_overlap=128', () => {
+    expect(scriptContent).toContain('chunk_overlap=128')
   })
 
   it('should validate minimum file size', () => {
diff --git a/admin-compliance/app/api/sdk/drafting-engine/draft/route.ts b/admin-compliance/app/api/sdk/drafting-engine/draft/route.ts
index b1293ff..272a015 100644
--- a/admin-compliance/app/api/sdk/drafting-engine/draft/route.ts
+++ b/admin-compliance/app/api/sdk/drafting-engine/draft/route.ts
@@ -591,12 +591,43 @@ async function handleV2Draft(body: Record<string, unknown>): Promise<NextResponse> {
+      /* fire-and-forget */})
+  } catch {
+    // LLM audit persistence failure should not block the response
+  }
+
   return NextResponse.json({
     draft,
     constraintCheck,
     tokensUsed: Math.round(totalTokens),
     pipelineVersion: 'v2',
     auditTrail,
+    truthLabel,
   })
 }
diff --git a/admin-compliance/app/api/sdk/drafting-engine/validate/route.ts b/admin-compliance/app/api/sdk/drafting-engine/validate/route.ts
index 7d07f33..e440f6e 100644
--- a/admin-compliance/app/api/sdk/drafting-engine/validate/route.ts
+++
 b/admin-compliance/app/api/sdk/drafting-engine/validate/route.ts
@@ -14,6 +14,76 @@ import { buildCrossCheckPrompt } from '@/lib/sdk/drafting-engine/prompts/validat
 const OLLAMA_URL = process.env.OLLAMA_URL || 'http://host.docker.internal:11434'
 const LLM_MODEL = process.env.COMPLIANCE_LLM_MODEL || 'qwen2.5vl:32b'
 
+/**
+ * Anti-fake-evidence: forbidden formulations
+ *
+ * Flags formulations that falsely claim compliance without evidence.
+ * Only allowed when: control_status=pass AND confidence >= E2 AND
+ * truth_status in (validated_internal, accepted_by_auditor).
+ */
+interface EvidenceContext {
+  controlStatus?: string
+  confidenceLevel?: string
+  truthStatus?: string
+}
+
+const FORBIDDEN_PATTERNS: Array<{
+  pattern: RegExp
+  label: string
+  safeAlternative: string
+}> = [
+  { pattern: /ist\s+compliant/gi, label: 'ist compliant', safeAlternative: 'soll compliant sein' },
+  { pattern: /erfüllt\s+vollständig/gi, label: 'erfüllt vollständig', safeAlternative: 'soll vollständig erfüllt werden' },
+  { pattern: /wurde\s+geprüft/gi, label: 'wurde geprüft', safeAlternative: 'soll geprüft werden' },
+  { pattern: /wurde\s+umgesetzt/gi, label: 'wurde umgesetzt', safeAlternative: 'ist zur Umsetzung vorgesehen' },
+  { pattern: /ist\s+auditiert/gi, label: 'ist auditiert', safeAlternative: 'soll auditiert werden' },
+  { pattern: /vollständig\s+implementiert/gi, label: 'vollständig implementiert', safeAlternative: 'Implementierung ist vorgesehen' },
+  { pattern: /nachweislich\s+konform/gi, label: 'nachweislich konform', safeAlternative: 'Konformität ist nachzuweisen' },
+]
+
+const CONFIDENCE_ORDER: Record<string, number> = { E0: 0, E1: 1, E2: 2, E3: 3, E4: 4 }
+const VALID_TRUTH_STATUSES = new Set(['validated_internal', 'accepted_by_auditor'])
+
+function checkForbiddenFormulations(
+  content: string,
+  evidenceContext?: EvidenceContext,
+): ValidationFinding[] {
+  const findings: ValidationFinding[] = []
+
+  if (!content) return findings
+
+  // If evidence context shows sufficient
+  // proof, allow the formulations
+  if (evidenceContext) {
+    const { controlStatus, confidenceLevel, truthStatus } = evidenceContext
+    const confLevel = CONFIDENCE_ORDER[confidenceLevel ?? 'E0'] ?? 0
+    if (
+      controlStatus === 'pass' &&
+      confLevel >= CONFIDENCE_ORDER.E2 &&
+      VALID_TRUTH_STATUSES.has(truthStatus ?? '')
+    ) {
+      return findings // Formulations are backed by real evidence
+    }
+  }
+
+  for (const { pattern, label, safeAlternative } of FORBIDDEN_PATTERNS) {
+    // Reset regex state for global patterns
+    pattern.lastIndex = 0
+    if (pattern.test(content)) {
+      findings.push({
+        id: `AFE-FORBIDDEN-${label.replace(/\s+/g, '_').toUpperCase()}`,
+        severity: 'error',
+        category: 'forbidden_formulation' as ValidationFinding['category'],
+        title: `Verbotene Formulierung: "${label}"`,
+        description: `Die Formulierung "${label}" impliziert eine nachgewiesene Compliance, die ohne ausreichenden Nachweis (Evidence >= E2, validiert) nicht verwendet werden darf.`,
+        documentType: 'vvt' as ScopeDocumentType,
+        suggestion: `Verwende stattdessen: "${safeAlternative}"`,
+      })
+    }
+  }
+
+  return findings
+}
+
 /**
  * Stage 1: deterministic check
  */
@@ -221,10 +291,18 @@ export async function POST(request: NextRequest) {
     // LLM unavailable, continue with deterministic results only
   }
 
+  // ---------------------------------------------------------------
+  // Stage 1b: forbidden formulations (anti-fake-evidence)
+  // ---------------------------------------------------------------
+  const forbiddenFindings = checkForbiddenFormulations(
+    draftContent || '',
+    validationContext.evidenceContext,
+  )
+
   // ---------------------------------------------------------------
   // Combine results
   // ---------------------------------------------------------------
-  const allFindings = [...deterministicFindings, ...llmFindings]
+  const allFindings = [...deterministicFindings, ...forbiddenFindings, ...llmFindings]
   const errors = allFindings.filter(f => f.severity === 'error')
   const warnings =
 allFindings.filter(f => f.severity === 'warning')
   const suggestions = allFindings.filter(f => f.severity === 'suggestion')
diff --git a/admin-compliance/app/api/sdk/v1/canonical/route.ts b/admin-compliance/app/api/sdk/v1/canonical/route.ts
index 2a79300..19c662b 100644
--- a/admin-compliance/app/api/sdk/v1/canonical/route.ts
+++ b/admin-compliance/app/api/sdk/v1/canonical/route.ts
@@ -25,16 +25,44 @@ export async function GET(request: NextRequest) {
       break
 
     case 'controls': {
-      const severity = searchParams.get('severity')
-      const domain = searchParams.get('domain')
-      const params = new URLSearchParams()
-      if (severity) params.set('severity', severity)
-      if (domain) params.set('domain', domain)
-      const qs = params.toString()
+      const controlParams = new URLSearchParams()
+      const passthrough = ['severity', 'domain', 'release_state', 'verification_method', 'category', 'evidence_type',
+        'target_audience', 'source', 'search', 'control_type', 'exclude_duplicates', 'sort', 'order', 'limit', 'offset']
+      for (const key of passthrough) {
+        const val = searchParams.get(key)
+        if (val) controlParams.set(key, val)
+      }
+      const qs = controlParams.toString()
       backendPath = `/api/compliance/v1/canonical/controls${qs ? `?${qs}` : ''}`
       break
     }
 
+    case 'controls-count': {
+      const countParams = new URLSearchParams()
+      const countPassthrough = ['severity', 'domain', 'release_state', 'verification_method', 'category', 'evidence_type',
+        'target_audience', 'source', 'search', 'control_type', 'exclude_duplicates']
+      for (const key of countPassthrough) {
+        const val = searchParams.get(key)
+        if (val) countParams.set(key, val)
+      }
+      const countQs = countParams.toString()
+      backendPath = `/api/compliance/v1/canonical/controls-count${countQs ?
 `?${countQs}` : ''}`
+      break
+    }
+
+    case 'controls-meta': {
+      const metaParams = new URLSearchParams()
+      const metaPassthrough = ['severity', 'domain', 'release_state', 'verification_method', 'category', 'evidence_type',
+        'target_audience', 'source', 'search', 'control_type', 'exclude_duplicates']
+      for (const key of metaPassthrough) {
+        const val = searchParams.get(key)
+        if (val) metaParams.set(key, val)
+      }
+      const metaQs = metaParams.toString()
+      backendPath = `/api/compliance/v1/canonical/controls-meta${metaQs ? `?${metaQs}` : ''}`
+      break
+    }
+
     case 'control': {
       const controlId = searchParams.get('id')
       if (!controlId) {
@@ -76,10 +104,63 @@ export async function GET(request: NextRequest) {
       backendPath = '/api/compliance/v1/canonical/generate/processed-stats'
       break
 
+    case 'categories':
+      backendPath = '/api/compliance/v1/canonical/categories'
+      break
+
+    case 'traceability': {
+      const traceId = searchParams.get('id')
+      if (!traceId) {
+        return NextResponse.json({ error: 'Missing control id' }, { status: 400 })
+      }
+      backendPath = `/api/compliance/v1/canonical/controls/${encodeURIComponent(traceId)}/traceability`
+      break
+    }
+
+    case 'provenance': {
+      const provId = searchParams.get('id')
+      if (!provId) {
+        return NextResponse.json({ error: 'Missing control id' }, { status: 400 })
+      }
+      backendPath = `/api/compliance/v1/canonical/controls/${encodeURIComponent(provId)}/provenance`
+      break
+    }
+
+    case 'atomic-stats':
+      backendPath = '/api/compliance/v1/canonical/controls/atomic-stats'
+      break
+
+    case 'similar': {
+      const simControlId = searchParams.get('id')
+      if (!simControlId) {
+        return NextResponse.json({ error: 'Missing control id' }, { status: 400 })
+      }
+      const simThreshold = searchParams.get('threshold') || '0.85'
+      backendPath = `/api/compliance/v1/canonical/controls/${encodeURIComponent(simControlId)}/similar?threshold=${simThreshold}`
+      break
+    }
+
     case 'blocked-sources':
       backendPath = '/api/compliance/v1/canonical/blocked-sources'
       break
 
+    case
 'v1-matches': {
+      const matchId = searchParams.get('id')
+      if (!matchId) {
+        return NextResponse.json({ error: 'Missing control id' }, { status: 400 })
+      }
+      backendPath = `/api/compliance/v1/canonical/controls/${encodeURIComponent(matchId)}/v1-matches`
+      break
+    }
+
+    case 'v1-enrichment-stats':
+      backendPath = '/api/compliance/v1/canonical/controls/v1-enrichment-stats'
+      break
+
+    case 'obligation-dedup-stats':
+      backendPath = '/api/compliance/v1/canonical/obligations/dedup-stats'
+      break
+
     case 'controls-customer': {
       const custSeverity = searchParams.get('severity')
       const custDomain = searchParams.get('domain')
@@ -142,8 +223,20 @@ export async function POST(request: NextRequest) {
       return NextResponse.json({ error: 'Missing control id' }, { status: 400 })
     }
     backendPath = `/api/compliance/v1/canonical/generate/review/${encodeURIComponent(controlId)}`
+  } else if (endpoint === 'bulk-review') {
+    backendPath = '/api/compliance/v1/canonical/generate/bulk-review'
   } else if (endpoint === 'blocked-sources-cleanup') {
     backendPath = '/api/compliance/v1/canonical/blocked-sources/cleanup'
+  } else if (endpoint === 'enrich-v1-matches') {
+    const dryRun = searchParams.get('dry_run') ?? 'true'
+    const batchSize = searchParams.get('batch_size') ?? '100'
+    const enrichOffset = searchParams.get('offset') ?? '0'
+    backendPath = `/api/compliance/v1/canonical/controls/enrich-v1-matches?dry_run=${dryRun}&batch_size=${batchSize}&offset=${enrichOffset}`
+  } else if (endpoint === 'obligation-dedup') {
+    const dryRun = searchParams.get('dry_run') ?? 'true'
+    const batchSize = searchParams.get('batch_size') ?? '0'
+    const dedupOffset = searchParams.get('offset') ??
 '0'
+    backendPath = `/api/compliance/v1/canonical/obligations/dedup?dry_run=${dryRun}&batch_size=${batchSize}&offset=${dedupOffset}`
   } else if (endpoint === 'similarity-check') {
     const controlId = searchParams.get('id')
     if (!controlId) {
diff --git a/admin-compliance/app/api/sdk/v1/compliance/evidence-checks/[[...path]]/route.ts b/admin-compliance/app/api/sdk/v1/compliance/evidence-checks/[[...path]]/route.ts
new file mode 100644
index 0000000..0369ead
--- /dev/null
+++ b/admin-compliance/app/api/sdk/v1/compliance/evidence-checks/[[...path]]/route.ts
@@ -0,0 +1,129 @@
+/**
+ * Evidence Checks API Proxy - Catch-all route
+ * Proxies all /api/sdk/v1/compliance/evidence-checks/* requests to backend-compliance
+ */
+
+import { NextRequest, NextResponse } from 'next/server'
+
+const BACKEND_URL = process.env.BACKEND_URL || 'http://backend-compliance:8002'
+
+async function proxyRequest(
+  request: NextRequest,
+  pathSegments: string[] | undefined,
+  method: string
+) {
+  const pathStr = pathSegments?.join('/') || ''
+  const searchParams = request.nextUrl.searchParams.toString()
+  const basePath = `${BACKEND_URL}/api/compliance/evidence-checks`
+  const url = pathStr
+    ? `${basePath}/${pathStr}${searchParams ? `?${searchParams}` : ''}`
+    : `${basePath}${searchParams ?
 `?${searchParams}` : ''}`
+
+  try {
+    const headers: HeadersInit = {
+      'Content-Type': 'application/json',
+      'X-Tenant-Id': '9282a473-5c95-4b3a-bf78-0ecc0ec71d3e',
+      'X-User-Id': 'admin',
+    }
+
+    const authHeader = request.headers.get('authorization')
+    if (authHeader) {
+      headers['Authorization'] = authHeader
+    }
+
+    const tenantHeader = request.headers.get('x-tenant-id')
+    if (tenantHeader) {
+      headers['X-Tenant-Id'] = tenantHeader
+    }
+
+    const userIdHeader = request.headers.get('x-user-id')
+    if (userIdHeader) {
+      headers['X-User-Id'] = userIdHeader
+    }
+
+    const fetchOptions: RequestInit = {
+      method,
+      headers,
+      signal: AbortSignal.timeout(30000),
+    }
+
+    if (['POST', 'PUT', 'PATCH'].includes(method)) {
+      const contentType = request.headers.get('content-type')
+      if (contentType?.includes('application/json')) {
+        try {
+          const text = await request.text()
+          if (text && text.trim()) {
+            fetchOptions.body = text
+          }
+        } catch {
+          // Empty or invalid body
+        }
+      }
+    }
+
+    const response = await fetch(url, fetchOptions)
+
+    if (!response.ok) {
+      const errorText = await response.text()
+      let errorJson
+      try {
+        errorJson = JSON.parse(errorText)
+      } catch {
+        errorJson = { error: errorText }
+      }
+      return NextResponse.json(
+        { error: `Backend Error: ${response.status}`, ...errorJson },
+        { status: response.status }
+      )
+    }
+
+    const data = await response.json()
+    return NextResponse.json(data)
+  } catch (error) {
+    console.error('Evidence Checks API proxy error:', error)
+    return NextResponse.json(
+      { error: 'Verbindung zum Backend fehlgeschlagen' },
+      { status: 503 }
+    )
+  }
+}
+
+export async function GET(
+  request: NextRequest,
+  { params }: { params: Promise<{ path?: string[] }> }
+) {
+  const { path } = await params
+  return proxyRequest(request, path, 'GET')
+}
+
+export async function POST(
+  request: NextRequest,
+  { params }: { params: Promise<{ path?: string[] }> }
+) {
+  const { path } = await params
+  return proxyRequest(request, path, 'POST')
+}
+
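The proxy above seeds fallback `X-Tenant-Id` / `X-User-Id` values and then lets headers from the incoming request override them. That precedence rule can be isolated into a pure function for illustration — `buildProxyHeaders` is a hypothetical helper, not part of the route; the header names and fallback values are taken from the code above:

```typescript
// Sketch of the header-precedence rule used by the proxy route above:
// defaults first, then incoming request headers win if present.
function buildProxyHeaders(incoming: Map<string, string>): Record<string, string> {
  const headers: Record<string, string> = {
    'Content-Type': 'application/json',
    'X-Tenant-Id': '9282a473-5c95-4b3a-bf78-0ecc0ec71d3e', // fallback tenant
    'X-User-Id': 'admin',                                   // fallback user
  }
  const auth = incoming.get('authorization')
  if (auth) headers['Authorization'] = auth // only forwarded when present
  const tenant = incoming.get('x-tenant-id')
  if (tenant) headers['X-Tenant-Id'] = tenant
  const user = incoming.get('x-user-id')
  if (user) headers['X-User-Id'] = user
  return headers
}
```

One design consequence worth noting: because the tenant fallback is hard-coded, an unauthenticated caller without an `x-tenant-id` header is silently mapped to that default tenant.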
+export async function PUT(
+  request: NextRequest,
+  { params }: { params: Promise<{ path?: string[] }> }
+) {
+  const { path } = await params
+  return proxyRequest(request, path, 'PUT')
+}
+
+export async function PATCH(
+  request: NextRequest,
+  { params }: { params: Promise<{ path?: string[] }> }
+) {
+  const { path } = await params
+  return proxyRequest(request, path, 'PATCH')
+}
+
+export async function DELETE(
+  request: NextRequest,
+  { params }: { params: Promise<{ path?: string[] }> }
+) {
+  const { path } = await params
+  return proxyRequest(request, path, 'DELETE')
+}
diff --git a/admin-compliance/app/api/sdk/v1/compliance/process-tasks/[[...path]]/route.ts b/admin-compliance/app/api/sdk/v1/compliance/process-tasks/[[...path]]/route.ts
new file mode 100644
index 0000000..df5f11e
--- /dev/null
+++ b/admin-compliance/app/api/sdk/v1/compliance/process-tasks/[[...path]]/route.ts
@@ -0,0 +1,129 @@
+/**
+ * Process Tasks API Proxy - Catch-all route
+ * Proxies all /api/sdk/v1/compliance/process-tasks/* requests to backend-compliance
+ */
+
+import { NextRequest, NextResponse } from 'next/server'
+
+const BACKEND_URL = process.env.BACKEND_URL || 'http://backend-compliance:8002'
+
+async function proxyRequest(
+  request: NextRequest,
+  pathSegments: string[] | undefined,
+  method: string
+) {
+  const pathStr = pathSegments?.join('/') || ''
+  const searchParams = request.nextUrl.searchParams.toString()
+  const basePath = `${BACKEND_URL}/api/compliance/process-tasks`
+  const url = pathStr
+    ? `${basePath}/${pathStr}${searchParams ? `?${searchParams}` : ''}`
+    : `${basePath}${searchParams ?
 `?${searchParams}` : ''}`
+
+  try {
+    const headers: HeadersInit = {
+      'Content-Type': 'application/json',
+      'X-Tenant-Id': '9282a473-5c95-4b3a-bf78-0ecc0ec71d3e',
+      'X-User-Id': 'admin',
+    }
+
+    const authHeader = request.headers.get('authorization')
+    if (authHeader) {
+      headers['Authorization'] = authHeader
+    }
+
+    const tenantHeader = request.headers.get('x-tenant-id')
+    if (tenantHeader) {
+      headers['X-Tenant-Id'] = tenantHeader
+    }
+
+    const userIdHeader = request.headers.get('x-user-id')
+    if (userIdHeader) {
+      headers['X-User-Id'] = userIdHeader
+    }
+
+    const fetchOptions: RequestInit = {
+      method,
+      headers,
+      signal: AbortSignal.timeout(30000),
+    }
+
+    if (['POST', 'PUT', 'PATCH'].includes(method)) {
+      const contentType = request.headers.get('content-type')
+      if (contentType?.includes('application/json')) {
+        try {
+          const text = await request.text()
+          if (text && text.trim()) {
+            fetchOptions.body = text
+          }
+        } catch {
+          // Empty or invalid body
+        }
+      }
+    }
+
+    const response = await fetch(url, fetchOptions)
+
+    if (!response.ok) {
+      const errorText = await response.text()
+      let errorJson
+      try {
+        errorJson = JSON.parse(errorText)
+      } catch {
+        errorJson = { error: errorText }
+      }
+      return NextResponse.json(
+        { error: `Backend Error: ${response.status}`, ...errorJson },
+        { status: response.status }
+      )
+    }
+
+    const data = await response.json()
+    return NextResponse.json(data)
+  } catch (error) {
+    console.error('Process Tasks API proxy error:', error)
+    return NextResponse.json(
+      { error: 'Verbindung zum Backend fehlgeschlagen' },
+      { status: 503 }
+    )
+  }
+}
+
+export async function GET(
+  request: NextRequest,
+  { params }: { params: Promise<{ path?: string[] }> }
+) {
+  const { path } = await params
+  return proxyRequest(request, path, 'GET')
+}
+
+export async function POST(
+  request: NextRequest,
+  { params }: { params: Promise<{ path?: string[] }> }
+) {
+  const { path } = await params
+  return proxyRequest(request, path, 'POST')
+}
+
+export async function PUT(
+  request: NextRequest,
+  { params }: { params: Promise<{ path?: string[] }> }
+) {
+  const { path } = await params
+  return proxyRequest(request, path, 'PUT')
+}
+
+export async function PATCH(
+  request: NextRequest,
+  { params }: { params: Promise<{ path?: string[] }> }
+) {
+  const { path } = await params
+  return proxyRequest(request, path, 'PATCH')
+}
+
+export async function DELETE(
+  request: NextRequest,
+  { params }: { params: Promise<{ path?: string[] }> }
+) {
+  const { path } = await params
+  return proxyRequest(request, path, 'DELETE')
+}
diff --git a/admin-compliance/app/api/sdk/v1/training/[[...path]]/route.ts b/admin-compliance/app/api/sdk/v1/training/[[...path]]/route.ts
index f448906..83111d8 100644
--- a/admin-compliance/app/api/sdk/v1/training/[[...path]]/route.ts
+++ b/admin-compliance/app/api/sdk/v1/training/[[...path]]/route.ts
@@ -53,7 +53,18 @@ async function proxyRequest(
       }
     }
 
-    const response = await fetch(url, fetchOptions)
+    const response = await fetch(url, {
+      ...fetchOptions,
+      redirect: 'manual',
+    })
+
+    // Handle redirects (e.g.
 media stream presigned URL)
+    if (response.status === 307 || response.status === 302) {
+      const location = response.headers.get('location')
+      if (location) {
+        return NextResponse.redirect(location)
+      }
+    }
 
     if (!response.ok) {
       const errorText = await response.text()
@@ -69,6 +80,19 @@ async function proxyRequest(
       )
     }
 
+    // Handle binary responses (PDF, octet-stream)
+    const contentType = response.headers.get('content-type') || ''
+    if (contentType.includes('application/pdf') || contentType.includes('application/octet-stream')) {
+      const buffer = await response.arrayBuffer()
+      return new NextResponse(buffer, {
+        status: response.status,
+        headers: {
+          'Content-Type': contentType,
+          'Content-Disposition': response.headers.get('content-disposition') || '',
+        },
+      })
+    }
+
     const data = await response.json()
     return NextResponse.json(data)
   } catch (error) {
diff --git a/admin-compliance/app/sdk/architecture/architecture-data.ts b/admin-compliance/app/sdk/architecture/architecture-data.ts
index 1256c89..71a6335 100644
--- a/admin-compliance/app/sdk/architecture/architecture-data.ts
+++ b/admin-compliance/app/sdk/architecture/architecture-data.ts
@@ -160,6 +160,8 @@ export const ARCH_SERVICES: ArchService[] = [
     'security_backlog', 'quality_entries', 'notfallplan_incidents', 'notfallplan_templates',
     'data_processing_agreement',
+    'vendor_vendors', 'vendor_contracts', 'vendor_findings',
+    'vendor_control_instances',
     'compliance_templates', 'compliance_isms_scope', 'compliance_isms_context',
     'compliance_isms_policy', 'compliance_security_objectives',
     'compliance_soa', 'compliance_audit_findings', 'compliance_corrective_actions',
@@ -178,6 +180,10 @@ export const ARCH_SERVICES: ArchService[] = [
     'CRUD /api/compliance/vvt',
     'CRUD /api/compliance/loeschfristen',
     'CRUD /api/compliance/obligations',
+    'CRUD /api/sdk/v1/vendor-compliance/vendors',
+    'CRUD /api/sdk/v1/vendor-compliance/contracts',
+    'CRUD /api/sdk/v1/vendor-compliance/findings',
+    'CRUD
 /api/sdk/v1/vendor-compliance/control-instances',
     'CRUD /api/isms/scope',
     'CRUD /api/isms/policies',
     'CRUD /api/isms/objectives',
diff --git a/admin-compliance/app/sdk/assertions/page.tsx b/admin-compliance/app/sdk/assertions/page.tsx
new file mode 100644
index 0000000..d1fa6cf
--- /dev/null
+++ b/admin-compliance/app/sdk/assertions/page.tsx
@@ -0,0 +1,468 @@
+'use client'
+
+import React, { useState, useEffect } from 'react'
+
+// =============================================================================
+// TYPES
+// =============================================================================
+
+interface Assertion {
+  id: string
+  tenant_id: string | null
+  entity_type: string
+  entity_id: string
+  sentence_text: string
+  sentence_index: number
+  assertion_type: string // 'assertion' | 'fact' | 'rationale'
+  evidence_ids: string[]
+  confidence: number
+  normative_tier: string | null // 'pflicht' | 'empfehlung' | 'kann'
+  verified_by: string | null
+  verified_at: string | null
+  created_at: string | null
+  updated_at: string | null
+}
+
+interface AssertionSummary {
+  total_assertions: number
+  total_facts: number
+  total_rationale: number
+  unverified_count: number
+}
+
+// =============================================================================
+// CONSTANTS
+// =============================================================================
+
+const TIER_COLORS: Record<string, string> = {
+  pflicht: 'bg-red-100 text-red-700',
+  empfehlung: 'bg-yellow-100 text-yellow-700',
+  kann: 'bg-blue-100 text-blue-700',
+}
+
+const TIER_LABELS: Record<string, string> = {
+  pflicht: 'Pflicht',
+  empfehlung: 'Empfehlung',
+  kann: 'Kann',
+}
+
+const TYPE_COLORS: Record<string, string> = {
+  assertion: 'bg-orange-100 text-orange-700',
+  fact: 'bg-green-100 text-green-700',
+  rationale: 'bg-purple-100 text-purple-700',
+}
+
+const TYPE_LABELS: Record<string, string> = {
+  assertion: 'Behauptung',
+  fact: 'Fakt',
+  rationale: 'Begruendung',
+}
+
+const API_BASE = '/api/sdk/v1/compliance'
+
+type TabKey = 'overview' | 'list' |
'extract' + +// ============================================================================= +// ASSERTION CARD +// ============================================================================= + +function AssertionCard({ + assertion, + onVerify, +}: { + assertion: Assertion + onVerify: (id: string) => void +}) { + const tierColor = assertion.normative_tier ? TIER_COLORS[assertion.normative_tier] || 'bg-gray-100 text-gray-600' : 'bg-gray-100 text-gray-600' + const tierLabel = assertion.normative_tier ? TIER_LABELS[assertion.normative_tier] || assertion.normative_tier : '—' + const typeColor = TYPE_COLORS[assertion.assertion_type] || 'bg-gray-100 text-gray-600' + const typeLabel = TYPE_LABELS[assertion.assertion_type] || assertion.assertion_type + + return ( +
+    <div className="bg-white border border-gray-200 rounded-lg p-4">
+      <div className="flex items-start justify-between gap-4">
+        <div className="flex-1">
+          <div className="flex flex-wrap items-center gap-2 mb-2">
+            <span className={`text-xs px-2 py-0.5 rounded ${tierColor}`}>
+              {tierLabel}
+            </span>
+            <span className={`text-xs px-2 py-0.5 rounded ${typeColor}`}>
+              {typeLabel}
+            </span>
+            {assertion.entity_type && (
+              <span className="text-xs text-gray-500">
+                {assertion.entity_type}: {assertion.entity_id?.slice(0, 8) || '—'}
+              </span>
+            )}
+            {assertion.confidence > 0 && (
+              <span className="text-xs text-gray-500">
+                Konfidenz: {(assertion.confidence * 100).toFixed(0)}%
+              </span>
+            )}
+          </div>
+          <p className="text-sm text-gray-800">
+            “{assertion.sentence_text}”
+          </p>
+          <div className="mt-2 flex flex-wrap gap-3 text-xs text-gray-500">
+            {assertion.verified_by && (
+              <span>
+                Verifiziert von {assertion.verified_by} am {assertion.verified_at ? new Date(assertion.verified_at).toLocaleDateString('de-DE') : '—'}
+              </span>
+            )}
+            {assertion.evidence_ids.length > 0 && (
+              <span>
+                {assertion.evidence_ids.length} Evidence verknuepft
+              </span>
+            )}
+          </div>
+        </div>
+        <div>
+          {assertion.assertion_type !== 'fact' && (
+            <button onClick={() => onVerify(assertion.id)}>
+              Verifizieren
+            </button>
+          )}
+        </div>
+      </div>
+    </div>
+  )
+}
+
+// =============================================================================
+// MAIN PAGE
+// =============================================================================
+
+export default function AssertionsPage() {
+  const [activeTab, setActiveTab] = useState<TabKey>('overview')
+  const [summary, setSummary] = useState<AssertionSummary | null>(null)
+  const [assertions, setAssertions] = useState<Assertion[]>([])
+  const [loading, setLoading] = useState(true)
+  const [error, setError] = useState<string | null>(null)
+
+  // Filters
+  const [filterEntityType, setFilterEntityType] = useState('')
+  const [filterAssertionType, setFilterAssertionType] = useState('')
+
+  // Extract tab
+  const [extractText, setExtractText] = useState('')
+  const [extractEntityType, setExtractEntityType] = useState('control')
+  const [extractEntityId, setExtractEntityId] = useState('')
+  const [extracting, setExtracting] = useState(false)
+  const [extractedAssertions, setExtractedAssertions] = useState<Assertion[]>([])
+
+  // Verify dialog
+  const [verifyingId, setVerifyingId] = useState<string | null>(null)
+  const [verifyEmail, setVerifyEmail] = useState('')
+
+  useEffect(() => {
+    loadSummary()
+  }, [])
+
+  useEffect(() => {
+    if (activeTab === 'list') loadAssertions()
+  }, [activeTab, filterEntityType, filterAssertionType]) // eslint-disable-line react-hooks/exhaustive-deps
+
+  const loadSummary = async () => {
+    try {
+      const res = await fetch(`${API_BASE}/assertions/summary`)
+      if (res.ok) setSummary(await res.json())
+    } catch { /* silent */ }
+    finally { setLoading(false) }
+  }
+
+  const loadAssertions = async () => {
+    setLoading(true)
+    try {
+      const params = new URLSearchParams()
+      if (filterEntityType) params.set('entity_type', filterEntityType)
+      if (filterAssertionType) params.set('assertion_type', filterAssertionType)
+      params.set('limit', '200')
+
+      const res = await fetch(`${API_BASE}/assertions?${params}`)
+      if (res.ok) {
+        const data = await res.json()
+        setAssertions(data.assertions || [])
+      }
+    } catch {
+      setError('Assertions konnten nicht geladen werden')
+    } finally {
+      setLoading(false)
+    }
+  }
+
+  const handleExtract = async () => {
+    if (!extractText.trim()) { setError('Bitte Text eingeben'); return }
+    setExtracting(true)
+    setError(null)
+    setExtractedAssertions([])
+    try {
+      const res = await fetch(`${API_BASE}/assertions/extract`, {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify({
+          text: extractText,
+          entity_type: extractEntityType || 'control',
+          entity_id: extractEntityId || undefined,
+        }),
+      })
+      if (!res.ok) {
+        const err = await res.json().catch(() => ({ detail: 'Extraktion fehlgeschlagen' }))
+        throw new Error(typeof err.detail === 'string' ? err.detail : JSON.stringify(err.detail))
+      }
+      const data = await res.json()
+      setExtractedAssertions(data.assertions || [])
+      // Refresh summary
+      loadSummary()
+    } catch (err) {
+      setError(err instanceof Error ? err.message : 'Extraktion fehlgeschlagen')
+    } finally {
+      setExtracting(false)
+    }
+  }
+
+  const handleVerify = async (assertionId: string) => {
+    setVerifyingId(assertionId)
+  }
+
+  const submitVerify = async () => {
+    if (!verifyingId || !verifyEmail.trim()) return
+    try {
+      const res = await fetch(`${API_BASE}/assertions/${verifyingId}/verify?verified_by=${encodeURIComponent(verifyEmail)}`, {
+        method: 'POST',
+      })
+      if (res.ok) {
+        setVerifyingId(null)
+        setVerifyEmail('')
+        loadAssertions()
+        loadSummary()
+      } else {
+        const err = await res.json().catch(() => ({ detail: 'Verifizierung fehlgeschlagen' }))
+        setError(typeof err.detail === 'string' ? err.detail : 'Verifizierung fehlgeschlagen')
+      }
+    } catch {
+      setError('Netzwerkfehler')
+    }
+  }
+
+  const tabs: { key: TabKey; label: string }[] = [
+    { key: 'overview', label: 'Uebersicht' },
+    { key: 'list', label: 'Assertion-Liste' },
+    { key: 'extract', label: 'Extraktion' },
+  ]
+
+  return (
+    <div className="p-6 max-w-5xl mx-auto">
+      {/* Header */}
+      <div className="mb-6">
+        <h1 className="text-xl font-semibold text-gray-900">
+          Assertions
+        </h1>
+        <p className="text-sm text-gray-500">
+          Behauptungen vs. Fakten in Compliance-Texten trennen und verifizieren.
+        </p>
+      </div>
+
+      {/* Tabs */}
+      <div className="mb-6 border-b border-gray-200">
+        <div className="flex gap-4">
+          {tabs.map(tab => (
+            <button
+              key={tab.key}
+              onClick={() => setActiveTab(tab.key)}
+              className={activeTab === tab.key ? 'pb-2 text-sm font-medium border-b-2 border-purple-600 text-purple-700' : 'pb-2 text-sm text-gray-500'}
+            >
+              {tab.label}
+            </button>
+          ))}
+        </div>
+      </div>
+
+      {/* Error */}
+      {error && (
+        <div className="mb-4 flex items-center justify-between rounded-lg border border-red-200 bg-red-50 px-4 py-2 text-sm text-red-700">
+          <span>{error}</span>
+          <button onClick={() => setError(null)}>×</button>
+        </div>
+      )}
+
+      {/* ============================================================ */}
+      {/* TAB: Uebersicht */}
+      {/* ============================================================ */}
+      {activeTab === 'overview' && (
+        <>
+          {loading ? (
+            <div className="flex justify-center py-12">
+              <div className="h-8 w-8 animate-spin rounded-full border-2 border-purple-500 border-t-transparent" />
+            </div>
+          ) : summary ? (
+            <div className="grid grid-cols-2 gap-4 md:grid-cols-4">
+              <div className="rounded-lg border border-gray-200 bg-white p-4">
+                <div className="text-xs text-gray-500">Gesamt Assertions</div>
+                <div className="text-2xl font-semibold">{summary.total_assertions}</div>
+              </div>
+              <div className="rounded-lg border border-gray-200 bg-white p-4">
+                <div className="text-xs text-gray-500">Verifizierte Fakten</div>
+                <div className="text-2xl font-semibold">{summary.total_facts}</div>
+              </div>
+              <div className="rounded-lg border border-gray-200 bg-white p-4">
+                <div className="text-xs text-gray-500">Begruendungen</div>
+                <div className="text-2xl font-semibold">{summary.total_rationale}</div>
+              </div>
+              <div className="rounded-lg border border-gray-200 bg-white p-4">
+                <div className="text-xs text-gray-500">Unverifiziert</div>
+                <div className="text-2xl font-semibold">{summary.unverified_count}</div>
+              </div>
+            </div>
+          ) : (
+            <div className="rounded-lg border border-gray-200 bg-white p-8 text-center">
+              <p className="text-sm text-gray-500">
+                Keine Assertions vorhanden. Nutzen Sie die Extraktion, um Behauptungen aus Texten zu identifizieren.
+              </p>
+            </div>
+          )}
+        </>
+      )}
+
+      {/* ============================================================ */}
+      {/* TAB: Assertion-Liste */}
+      {/* ============================================================ */}
+      {activeTab === 'list' && (
+        <>
+          {/* Filters */}
+          <div className="mb-4 flex flex-wrap gap-3">
+            <div>
+              <label className="mb-1 block text-xs text-gray-500">Entity-Typ</label>
+              <select
+                value={filterEntityType}
+                onChange={(e) => setFilterEntityType(e.target.value)}
+                className="rounded-lg border border-gray-300 px-3 py-2 text-sm"
+              >
+                <option value="">Alle</option>
+                <option value="control">control</option>
+              </select>
+            </div>
+            <div>
+              <label className="mb-1 block text-xs text-gray-500">Assertion-Typ</label>
+              <select
+                value={filterAssertionType}
+                onChange={(e) => setFilterAssertionType(e.target.value)}
+                className="rounded-lg border border-gray-300 px-3 py-2 text-sm"
+              >
+                <option value="">Alle</option>
+                <option value="assertion">Behauptung</option>
+                <option value="fact">Fakt</option>
+                <option value="rationale">Begruendung</option>
+              </select>
+            </div>
+          </div>
+
+          {loading ? (
+            <div className="flex justify-center py-12">
+              <div className="h-8 w-8 animate-spin rounded-full border-2 border-purple-500 border-t-transparent" />
+            </div>
+          ) : assertions.length === 0 ? (
+            <div className="rounded-lg border border-gray-200 bg-white p-8 text-center">
+              <p className="text-sm text-gray-500">Keine Assertions gefunden.</p>
+            </div>
+          ) : (
+            <div className="space-y-3">
+              <p className="text-xs text-gray-500">{assertions.length} Assertions</p>
+              {assertions.map(a => (
+                <AssertionCard key={a.id} assertion={a} onVerify={handleVerify} />
+              ))}
+            </div>
+          )}
+        </>
+      )}
+
+      {/* ============================================================ */}
+      {/* TAB: Extraktion */}
+      {/* ============================================================ */}
+      {activeTab === 'extract' && (
+        <div className="rounded-lg border border-gray-200 bg-white p-6">
+          <h2 className="text-sm font-semibold">
+            Assertions aus Text extrahieren
+          </h2>
+          <p className="mb-4 text-sm text-gray-500">
+            Geben Sie einen Compliance-Text ein. Das System identifiziert automatisch Behauptungen, Fakten und Begruendungen.
+          </p>
+
+          <div className="mb-4 grid gap-4 md:grid-cols-2">
+            <div>
+              <label className="mb-1 block text-xs text-gray-500">Entity-Typ</label>
+              <select
+                value={extractEntityType}
+                onChange={(e) => setExtractEntityType(e.target.value)}
+                className="w-full border border-gray-300 rounded-lg px-3 py-2 text-sm focus:ring-2 focus:ring-purple-500 focus:border-transparent"
+              >
+                <option value="control">control</option>
+              </select>
+            </div>
+            <div>
+              <label className="mb-1 block text-xs text-gray-500">Entity-ID (optional)</label>
+              <input
+                type="text"
+                value={extractEntityId}
+                onChange={(e) => setExtractEntityId(e.target.value)}
+                placeholder="z.B. GOV-001 oder UUID"
+                className="w-full border border-gray-300 rounded-lg px-3 py-2 text-sm focus:ring-2 focus:ring-purple-500 focus:border-transparent" />
+            </div>
+          </div>
+