docs: Use-Case Compiler instruction for next session
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
# Assignment: Use-Case Compiler — Turn MCs into Interactive Audits

**Priority:** HIGH — next major building block after the MC Quality Overhaul

**Repos:** breakpilot-compliance (main work) + breakpilot-core (MC data)

**Prerequisite:** 13,588 Master Controls in the DB (local + production), working Gap Engine

---
## Problem

The 13,588 MCs are a knowledge base, but not a product. When a user wants to run a "vendor check", they get 4,036 controls, but no concrete questions, no questionnaire, and no evidence collection.
## Goal

A **Use-Case Compiler** that automatically generates interactive audits from MCs:

```
Use case (e.g. "Vendor check for a cloud provider")
→ filter relevant MCs
→ per MC: derive 1-3 binary check questions
→ generate questionnaire (sorted, grouped)
→ user answers + uploads evidence
→ compliance score + gap report
→ automatically identify missing legal sources
```
## What Already Exists (REUSE!)

### Master Controls (Core DB)
```sql
-- 13,588 MCs with 83,073 members
SELECT mc.canonical_name, mc.total_controls, mc.phases_covered
FROM compliance.master_controls mc;

-- Members with their regulation source
SELECT cc.control_id, cc.title, cc.objective, mcm.phase, mcm.action,
       pc.source_citation::jsonb->>'source' AS regulation
FROM compliance.master_control_members mcm
JOIN compliance.canonical_controls cc ON cc.id = mcm.control_uuid
LEFT JOIN compliance.canonical_controls pc ON pc.id = cc.parent_control_uuid;
```
### doc_check_controls (1,874 controls with check_question)
```sql
-- Binary check questions that already exist
SELECT doc_type, control_id, check_question, pass_criteria, fail_criteria, severity
FROM compliance.doc_check_controls WHERE doc_type = 'dse';
```
→ This is the TEMPLATE: for each use case we need the same thing.
### Gap Engine (Compliance SDK)
- `internal/gap/` — Product Profile, Classifier, Priority Engine
- `internal/gap/gap_engine.go` — assessGapStatus with current-state assessment
- `internal/gap/norm_mapping.go` — mapping of 20 norms → MC topics
- API: `POST /sdk/v1/gap/analyze`

### IACE Pattern Engine
- Tag-based pattern matching → domain-agnostic
- `PatternEngine.Match(input)` → requirements
- Completeness gates (25 checkpoints)

### ZeroClaw Agent
- `backend-compliance/compliance/api/agent_doc_check_routes.py` — doc-check agent
- LLM-based verification with tool calling
- `ai-compliance-sdk/internal/llm/` — Ollama/Claude adapters

### Tenant Document Upload (Core RAG)
- `rag-service/api/tenant_documents.py` — tenant-isolated upload
- Works on the Mac Mini

---
## Architecture

### New Files (Go, in the Compliance Repo)

```
ai-compliance-sdk/internal/usecase/
├── models.go             — UseCaseTemplate, Question, Evidence, ComplianceResult
├── compiler.go           — compile MCs into questions
├── question_generator.go — derive check questions from MC + controls
├── scoring.go            — answers → compliance score
├── gap_detector.go       — identify missing legal sources
├── store.go              — DB CRUD (audits, answers, evidence)
└── templates.go          — predefined use-case templates
```

```
ai-compliance-sdk/internal/api/handlers/
└── usecase_handler.go — API endpoints
```

```
ai-compliance-sdk/migrations/
└── 026_usecase_audits.sql — tables for audits + answers
```

### Frontend (Next.js, in the Compliance Repo)

```
admin-compliance/app/sdk/use-case-audit/
├── page.tsx — select or create a use case
├── [auditId]/
│   └── page.tsx — fill in the questionnaire + view the result
└── _components/
    ├── UseCaseSelector.tsx — template selection or custom
    ├── QuestionnaireView.tsx — dynamic questionnaire
    └── AuditResult.tsx — score + gaps + report
```

---
## Data Model

### UseCaseTemplate
```go
type UseCaseTemplate struct {
    ID          string     // "vendor_check", "sast_dast_audit", "dsgvo_quick_check"
    Name        string     // "Vendor Check (cloud providers)"
    Description string
    MCFilters   []string   // ["third_party_management_*", "data_processing_agreement_*"]
    Regulations []string   // ["dsgvo", "nis2"] — which regulations are relevant
    Questions   []Question // pre-generated questions (or empty → LLM generates them)
}
```
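The `MCFilters` entries use a trailing-`*` wildcard. A minimal matcher sketch — `matchMCFilter` is an illustrative name, not an existing function in the SDK:

```go
package main

import (
	"fmt"
	"strings"
)

// matchMCFilter reports whether an MC canonical name matches one of the
// template's filter patterns. Only the trailing-* wildcard seen in the
// template examples is supported; anything else is an exact match.
func matchMCFilter(filters []string, mcName string) bool {
	for _, f := range filters {
		if strings.HasSuffix(f, "*") {
			if strings.HasPrefix(mcName, strings.TrimSuffix(f, "*")) {
				return true
			}
		} else if f == mcName {
			return true
		}
	}
	return false
}

func main() {
	filters := []string{"third_party_management_*", "data_processing_agreement_*"}
	fmt.Println(matchMCFilter(filters, "third_party_management_vendor_assessment")) // true
	fmt.Println(matchMCFilter(filters, "encryption_at_rest"))                       // false
}
```

A prefix match keeps the compiler independent of any regex syntax in the templates; if richer patterns are ever needed, `path.Match` would be the next step up.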

### Question (compiled from an MC)
```go
type Question struct {
    ID               string   // "Q1"
    MCID             string   // "MC-19012"
    MCName           string   // "third_party_management_vendor_assessment"
    Question         string   // "Does the provider hold ISO 27001?"
    QuestionType     string   // "yes_no", "file_upload", "text", "multi_select"
    EvidenceRequired bool     // true → user must upload evidence
    PassCriteria     []string // ["valid ISO 27001 certificate on file"]
    FailCriteria     []string // ["no certificate, self-assessment only"]
    Severity         string   // "HIGH", "MEDIUM", "LOW"
    Regulation       string   // "GDPR Art. 28"
    DependsOn        string   // "Q3" → only show if Q3 = "yes"
}
```
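The `DependsOn` field implies conditional display logic in the questionnaire. A minimal sketch, assuming yes/no answers are tracked as booleans — `visibleQuestions` is hypothetical, not part of the existing code:

```go
package main

import "fmt"

// Question is reduced here to the two fields visibility logic needs.
type Question struct {
	ID        string
	DependsOn string // empty → always visible
}

// visibleQuestions returns the questions to show given the yes/no answers
// recorded so far: a question with DependsOn = "Q3" only appears once Q3
// has been answered "yes".
func visibleQuestions(qs []Question, answers map[string]bool) []Question {
	var out []Question
	for _, q := range qs {
		if q.DependsOn == "" || answers[q.DependsOn] {
			out = append(out, q)
		}
	}
	return out
}

func main() {
	qs := []Question{{ID: "Q3"}, {ID: "Q4", DependsOn: "Q3"}}
	fmt.Println(len(visibleQuestions(qs, map[string]bool{})))           // 1: only Q3
	fmt.Println(len(visibleQuestions(qs, map[string]bool{"Q3": true}))) // 2: Q4 unlocked
}
```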

### Audit (an audit in progress)
```sql
CREATE TABLE compliance.usecase_audits (
    id UUID PRIMARY KEY,
    tenant_id UUID NOT NULL,
    template_id VARCHAR(100) NOT NULL,
    name VARCHAR(200) NOT NULL,            -- "Vendor Check: AWS"
    target_name VARCHAR(200),              -- "Amazon Web Services"
    status VARCHAR(20) DEFAULT 'draft',    -- draft, in_progress, completed
    total_questions INT DEFAULT 0,
    answered_questions INT DEFAULT 0,
    compliance_score FLOAT DEFAULT 0,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    completed_at TIMESTAMPTZ
);

CREATE TABLE compliance.usecase_answers (
    id UUID PRIMARY KEY,
    audit_id UUID REFERENCES compliance.usecase_audits(id),
    question_id VARCHAR(50) NOT NULL,
    mc_id VARCHAR(50),
    answer JSONB NOT NULL,                 -- {"value": true, "comment": "..."}
    evidence_ids JSONB DEFAULT '[]',       -- UUIDs of uploaded files
    status VARCHAR(20) DEFAULT 'answered', -- answered, skipped, escalated
    answered_at TIMESTAMPTZ DEFAULT NOW()
);
```
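The `compliance_score` column needs a scoring rule, and the spec above only says "answers → compliance score". A severity-weighted pass rate is one plausible scheme — a sketch, not the decided formula; the weights are assumptions:

```go
package main

import "fmt"

// severityWeight is one plausible weighting; the spec only fixes the
// HIGH/MEDIUM/LOW labels, not the numbers.
var severityWeight = map[string]float64{"HIGH": 3, "MEDIUM": 2, "LOW": 1}

type scoredAnswer struct {
	Severity string
	Passed   bool
}

// complianceScore returns the severity-weighted pass rate in percent.
func complianceScore(answers []scoredAnswer) float64 {
	var got, max float64
	for _, a := range answers {
		w := severityWeight[a.Severity]
		max += w
		if a.Passed {
			got += w
		}
	}
	if max == 0 {
		return 0
	}
	return 100 * got / max
}

func main() {
	answers := []scoredAnswer{
		{"HIGH", true}, {"HIGH", false}, {"MEDIUM", true}, {"LOW", true},
	}
	fmt.Printf("%.0f%%\n", complianceScore(answers)) // 67%: 6 of 9 weighted points
}
```

Weighting by severity means one failed HIGH question costs as much as three failed LOW ones, which matches how the gap report should prioritize.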

---

## Use-Case Templates (Initial Set)

### 1. Vendor Check (Cloud Providers)
```yaml
id: vendor_check_cloud
mc_filters: ["third_party_management_*", "data_processing_agreement_*", "data_transfer_*"]
regulations: ["dsgvo", "nis2"]
questions:
  - "Does the provider hold ISO 27001 or a comparable certification?"
  - "Is a data processing agreement (DPA) per Art. 28 GDPR in place?"
  - "Is data transferred to third countries?"
  - "Are SCCs or an adequacy decision in place?"
  - "Does the provider have a vulnerability management process?"
  - "Is there an incident response process?"
  - "Are sub-processors documented?"
  ...
```
### 2. SAST/DAST Security Audit
```yaml
id: sast_dast_audit
mc_filters: ["secure_development_*", "vulnerability_*", "input_validation_*", "api_security_*"]
regulations: ["cra", "owasp"]
```

### 3. GDPR Quick Check
```yaml
id: dsgvo_quick_check
mc_filters: ["personal_data_*", "consent_*", "data_subject_rights_*", "dpia_*", "data_retention_*"]
regulations: ["dsgvo"]
```

### 4. NIS2 Readiness
```yaml
id: nis2_readiness
mc_filters: ["critical_infrastructure_*", "incident_*", "network_security_*", "risk_management_*"]
regulations: ["nis2"]
```

### 5. CRA Product Check
```yaml
id: cra_product_check
mc_filters: ["vulnerability_*", "patch_management_*", "encryption_*", "asset_management_inventory*"]
regulations: ["cra"]
```

---
## Gap Detection

### Automatic Identification of Missing Legal Sources

```go
func (d *GapDetector) DetectMissingRegulations(templateID string) []MissingSource {
	// 1. Load all MCs for this use case.
	// 2. For each MC: count controls that have a source_citation.
	// 3. If an MC has >20 controls but <3 with a source → GAP.
	// 4. If the MC topic has a known regulation that never appears
	//    in source_citation → MISSING SOURCE.
	//
	// Example:
	//   MC "aml_transaction_monitoring" → 18 controls
	//   Source citations: only "Geldwäschegesetz (GwG)"
	//   MISSING: "5th Anti-Money Laundering Directive (EU) 2024/1624"
	//   → Ingestion task: EUR-Lex CELEX:32024L1624
	return nil // spec only; implementation pending
}
```
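Steps 2-3 of the heuristic can be sketched as a pure function; the counts would come from the master-control queries earlier in this document, and `MCStats` is an illustrative type, not an existing one:

```go
package main

import "fmt"

// MCStats carries the two counts the heuristic needs per Master Control.
type MCStats struct {
	Name            string
	TotalControls   int
	SourcedControls int // controls whose parent carries a source_citation
}

// hasSourceGap applies the rule from the spec: an MC with more than 20
// controls but fewer than 3 sourced ones is flagged as a gap.
func hasSourceGap(mc MCStats) bool {
	return mc.TotalControls > 20 && mc.SourcedControls < 3
}

func main() {
	fmt.Println(hasSourceGap(MCStats{"third_party_management_vendor_assessment", 25, 1})) // true
	fmt.Println(hasSourceGap(MCStats{"consent_management", 18, 0}))                       // false: ≤20 controls
}
```

Keeping the rule a pure function over precomputed stats makes the thresholds trivial to tune and to unit-test separately from the DB layer.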

### Output: Ingestion Tasks
```json
{
  "missing_sources": [
    {
      "regulation": "5th Anti-Money Laundering Directive (EU) 2024/1624",
      "affects_mcs": ["aml_transaction_monitoring", "aml_customer_due_diligence"],
      "estimated_controls": 50,
      "source_url": "https://eur-lex.europa.eu/legal-content/DE/TXT/?uri=CELEX:32024L1624",
      "priority": "high"
    }
  ]
}
```

---
## API Endpoints

```
POST /sdk/v1/use-case/templates         — create a new template
GET  /sdk/v1/use-case/templates         — list available templates
GET  /sdk/v1/use-case/templates/:id     — template detail + questions
POST /sdk/v1/use-case/compile           — compile MCs into questions (ad hoc)
POST /sdk/v1/use-case/audits            — start a new audit
GET  /sdk/v1/use-case/audits            — my audits
GET  /sdk/v1/use-case/audits/:id        — audit detail + answers
POST /sdk/v1/use-case/audits/:id/answer — answer a question
GET  /sdk/v1/use-case/audits/:id/score  — compute compliance score
GET  /sdk/v1/use-case/audits/:id/report — generate gap report
GET  /sdk/v1/use-case/audits/:id/gaps   — missing legal sources
```
---

## Question Generation (2 Modes)

### Mode A: Deterministic (No LLM Needed)
For MCs that already have doc_check_controls, use `check_question` directly.
For MCs with a `scanner_hint`, convert it into a check question (regex/template).
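The `scanner_hint` conversion could start as a single template; a sketch under the assumption that hints are short declarative check phrases (the real format may need a small set of regex rules instead):

```go
package main

import (
	"fmt"
	"strings"
)

// hintToQuestion wraps a scanner hint in a fixed binary-question template.
// That hints are short declarative phrases is an assumption about the data,
// not a documented property of scanner_hint.
func hintToQuestion(hint string) string {
	hint = strings.TrimSuffix(strings.TrimSpace(hint), ".")
	return fmt.Sprintf("Is the following check satisfied: %s?", hint)
}

func main() {
	fmt.Println(hintToQuestion("TLS 1.2+ is enforced on all public endpoints."))
	// Is the following check satisfied: TLS 1.2+ is enforced on all public endpoints?
}
```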

### Mode B: LLM-Based (for New Use Cases)
If no check_question exists, Haiku/Qwen generates one from the MC title + objective:
```
Input:  MC "third_party_management_vendor_assessment",
        Title: "Review cybersecurity policies of critical suppliers"
Output: {
  "question": "Does the supplier have formalized cybersecurity policies?",
  "pass_criteria": ["Documented security policy exists", "Last update < 12 months ago"],
  "fail_criteria": ["No documented policy", "Policy older than 2 years"]
}
```

Cost: ~$0.001/question with Haiku → $0.05 for 50 questions per template.
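Whatever model produces the JSON, the output should be validated before it becomes a `Question`, so malformed generations can be retried cheaply. A minimal parsing sketch — `generatedQuestion` mirrors the example output above and is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// generatedQuestion matches the JSON shape the Mode B prompt asks for.
type generatedQuestion struct {
	Question     string   `json:"question"`
	PassCriteria []string `json:"pass_criteria"`
	FailCriteria []string `json:"fail_criteria"`
}

// parseGenerated parses and validates one model generation; an error
// means the caller should retry the generation.
func parseGenerated(raw []byte) (generatedQuestion, error) {
	var q generatedQuestion
	if err := json.Unmarshal(raw, &q); err != nil {
		return q, err
	}
	if q.Question == "" || len(q.PassCriteria) == 0 || len(q.FailCriteria) == 0 {
		return q, fmt.Errorf("incomplete generation, retry")
	}
	return q, nil
}

func main() {
	raw := []byte(`{"question":"Does the supplier have formalized cybersecurity policies?",
	 "pass_criteria":["Documented security policy exists"],
	 "fail_criteria":["No documented policy"]}`)
	q, err := parseGenerated(raw)
	fmt.Println(err == nil, q.Question != "") // true true
}
```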

---

## Implementation Order

| Phase | What | Effort |
|-------|------|--------|
| 1 | Data model + store + migration | 1h |
| 2 | Compiler (MC → questions) + templates (5 of them) | 2h |
| 3 | API endpoints (11 of them) | 1.5h |
| 4 | Frontend: template selection + questionnaire | 2h |
| 5 | Scoring engine + report | 1h |
| 6 | Gap detection | 1h |
| 7 | LLM-based question generation | 1h |
| **Total** | | **~10h (2 sessions)** |

---
## Verification

Test scenario: vendor check for AWS

1. Select the "vendor_check_cloud" template
2. System generates 25 questions from MCs
3. User answers all questions + uploads the DPA
4. Score: 85% (3 gaps: SCCs missing, sub-processor list incomplete, deletion concept missing)
5. Export the report as PDF
6. Gap detection: "Controls from the ENISA guidelines are missing for NIS2 supply chain"

---
## Connection to Existing Infrastructure

| Existing | What the compiler uses it for |
|----------|-------------------------------|
| Master Controls (13,588) | MC → question mapping |
| doc_check_controls (1,874) | Ready-made check questions (DSE, cookies, etc.) |
| Gap Engine | Current-state assessment + priority |
| IACE Pattern Engine | CE-specific checks |
| Applicability Engine | Scope filters (industry, size, signals) |
| RAG Service | Context for LLM question generation |
| Tenant Document Upload | Evidence storage |
| ZeroClaw Agent | LLM-based verification |
| DSMS/IPFS | Tamper-proof archiving |

---
## DB Access

```bash
# Local (Mac Mini)
ssh macmini "/usr/local/bin/docker exec bp-core-postgres psql -U breakpilot -d breakpilot_db"

# Production
PROD_DB="postgresql://postgres:GmyFD3wnU1NrKBdpU1nwLdE8MLts0A0eez8L5XXdvUCe05lWnWfVp3C6JJ8Yrmt2@46.225.100.82:54321/postgres?sslmode=require"
```