
Compliance AI Integration - Quick Start

Quick Start (5 Minutes)

1. Set environment variables

# In backend/.env
COMPLIANCE_LLM_PROVIDER=mock  # for testing without an API key
# OR
COMPLIANCE_LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...

2. Start the backend

cd backend
docker-compose up -d
# OR
uvicorn main:app --reload

3. Seed the database

# Load requirements and modules
curl -X POST http://localhost:8000/api/v1/compliance/seed \
  -H "Content-Type: application/json" \
  -d '{"force": false}'

curl -X POST http://localhost:8000/api/v1/compliance/modules/seed \
  -H "Content-Type: application/json" \
  -d '{"force": false}'

4. Test the AI features

# Run the test script
python backend/scripts/test_compliance_ai_endpoints.py

API Endpoints

All endpoints are served under: http://localhost:8000/api/v1/compliance/ai/

1. Check status

curl http://localhost:8000/api/v1/compliance/ai/status

2. Interpret a requirement

curl -X POST http://localhost:8000/api/v1/compliance/ai/interpret \
  -H "Content-Type: application/json" \
  -d '{
    "requirement_id": "YOUR_REQUIREMENT_ID"
  }'

3. Suggest controls

curl -X POST http://localhost:8000/api/v1/compliance/ai/suggest-controls \
  -H "Content-Type: application/json" \
  -d '{
    "requirement_id": "YOUR_REQUIREMENT_ID"
  }'

4. Assess module risk

curl -X POST http://localhost:8000/api/v1/compliance/ai/assess-risk \
  -H "Content-Type: application/json" \
  -d '{
    "module_id": "consent-service"
  }'

5. Gap analysis

curl -X POST http://localhost:8000/api/v1/compliance/ai/gap-analysis \
  -H "Content-Type: application/json" \
  -d '{
    "requirement_id": "YOUR_REQUIREMENT_ID"
  }'

6. Batch interpretation

curl -X POST http://localhost:8000/api/v1/compliance/ai/batch-interpret \
  -H "Content-Type: application/json" \
  -d '{
    "requirement_ids": ["id1", "id2"],
    "rate_limit": 1.0
  }'
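
For larger requirement sets, it can help to split the ID list into smaller batches before calling the endpoint. A minimal sketch (the chunk size and helper names are illustrative, not part of the API; only the payload shape matches the batch-interpret request above):

```python
import json
from typing import Iterator


def chunked(ids: list[str], size: int) -> Iterator[list[str]]:
    """Yield successive slices of at most `size` requirement IDs."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]


def batch_payloads(ids: list[str], size: int = 10, rate_limit: float = 1.0) -> list[str]:
    """Build one JSON request body per chunk, matching the batch-interpret schema."""
    return [
        json.dumps({"requirement_ids": chunk, "rate_limit": rate_limit})
        for chunk in chunked(ids, size)
    ]
```

Each resulting payload can then be POSTed to /api/v1/compliance/ai/batch-interpret, as in the curl example above.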

Provider Configuration

Option 1: Mock (testing)

export COMPLIANCE_LLM_PROVIDER=mock

Pros:

  • No API keys required
  • Fast
  • Deterministic

Cons:

  • No real AI responses

Option 2: Anthropic Claude (recommended)

export COMPLIANCE_LLM_PROVIDER=anthropic
export ANTHROPIC_API_KEY=sk-ant-...
export ANTHROPIC_MODEL=claude-sonnet-4-20250514

Pros:

  • Best quality
  • Reliable
  • Optimized for Breakpilot

Cons:

  • API costs (~$3 per 1M input tokens)

Option 3: Self-Hosted (Ollama/vLLM)

export COMPLIANCE_LLM_PROVIDER=self_hosted
export SELF_HOSTED_LLM_URL=http://localhost:11434
export SELF_HOSTED_LLM_MODEL=llama3.1:8b

Pros:

  • Free
  • Private (runs on-premise)
  • No rate limits

Cons:

  • Lower quality than Claude
  • Requires GPU/CPU resources
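
Since the three options above differ only in environment variables, provider selection can be driven entirely by the environment. A minimal sketch of that idea (the function name and return shape are illustrative; the actual logic lives in backend/compliance/services/llm_provider.py):

```python
import os


def resolve_provider() -> dict:
    """Read the provider choice and its settings from the environment,
    falling back to the mock provider when nothing is configured."""
    provider = os.environ.get("COMPLIANCE_LLM_PROVIDER", "mock")
    if provider == "anthropic":
        return {
            "provider": provider,
            "api_key": os.environ.get("ANTHROPIC_API_KEY", ""),
            "model": os.environ.get("ANTHROPIC_MODEL", "claude-sonnet-4-20250514"),
        }
    if provider == "self_hosted":
        return {
            "provider": provider,
            "url": os.environ.get("SELF_HOSTED_LLM_URL", "http://localhost:11434"),
            "model": os.environ.get("SELF_HOSTED_LLM_MODEL", "llama3.1:8b"),
        }
    # Anything else (including unset) falls back to the deterministic mock.
    return {"provider": "mock"}
```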

Example Responses

Interpretation

{
  "requirement_id": "req-123",
  "summary": "Art. 32 GDPR requires appropriate technical measures for data security.",
  "applicability": "Applies to all Breakpilot modules that process personal data.",
  "technical_measures": [
    "Encryption of personal data (AES-256)",
    "TLS 1.3 for data in transit",
    "Regular security audits",
    "Access control via IAM"
  ],
  "affected_modules": [
    "consent-service",
    "klausur-service",
    "backend"
  ],
  "risk_level": "high",
  "implementation_hints": [
    "SOPS with Age keys for secret management",
    "PostgreSQL transparent encryption",
    "Review the Nginx TLS configuration"
  ],
  ],
  "confidence_score": 0.85,
  "error": null
}
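
A client will typically want to act only on confident, error-free results. As an illustrative sketch (the helper name and the 0.8 threshold are assumptions, not part of the API), filtering an interpretation response might look like:

```python
def high_confidence_measures(interpretation: dict, threshold: float = 0.8) -> list[str]:
    """Return the suggested technical measures, but only when the response
    reports no error and a confidence score at or above the threshold."""
    if interpretation.get("error") is not None:
        return []
    if interpretation.get("confidence_score", 0.0) < threshold:
        return []
    return interpretation.get("technical_measures", [])
```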

Control Suggestion

{
  "requirement_id": "req-123",
  "suggestions": [
    {
      "control_id": "PRIV-042",
      "domain": "priv",
      "title": "Encryption of personal data",
      "description": "All personal data must be stored encrypted",
      "pass_criteria": "100% of PII in PostgreSQL is encrypted with AES-256",
      "implementation_guidance": "Use SOPS with Age keys for secrets. Enable PostgreSQL transparent data encryption.",
      "is_automated": true,
      "automation_tool": "SOPS",
      "priority": "high",
      "confidence_score": 0.9
    }
  ]
}
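
Suggestions flagged is_automated carry an automation_tool, which makes it straightforward to route them to tooling. A hedged sketch (the helper name is illustrative) that groups automatable control IDs by their tool:

```python
from collections import defaultdict


def automatable_by_tool(response: dict) -> dict[str, list[str]]:
    """Map each automation tool to the control IDs it can enforce,
    skipping suggestions that are manual or lack a tool."""
    grouped: dict[str, list[str]] = defaultdict(list)
    for suggestion in response.get("suggestions", []):
        if suggestion.get("is_automated") and suggestion.get("automation_tool"):
            grouped[suggestion["automation_tool"]].append(suggestion["control_id"])
    return dict(grouped)
```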

Troubleshooting

Problem: "AI provider is not available"

Solution:

# Check status
curl http://localhost:8000/api/v1/compliance/ai/status

# Check environment variables
echo $COMPLIANCE_LLM_PROVIDER
echo $ANTHROPIC_API_KEY

# Fall back to the mock provider
export COMPLIANCE_LLM_PROVIDER=mock

Problem: "Requirement not found"

Solution:

# Seed the database
curl -X POST http://localhost:8000/api/v1/compliance/seed \
  -H "Content-Type: application/json" \
  -d '{"force": false}'

# List requirements
curl http://localhost:8000/api/v1/compliance/requirements

Problem: Timeouts with Anthropic

Solution:

# Increase the timeout
export COMPLIANCE_LLM_TIMEOUT=120.0

# Or use the mock provider
export COMPLIANCE_LLM_PROVIDER=mock

Unit Tests

cd backend

# All tests
pytest tests/test_compliance_ai.py -v

# Mock tests only
pytest tests/test_compliance_ai.py::TestMockProvider -v

# Integration tests (requires an API key)
pytest tests/test_compliance_ai.py -v --integration

Further Documentation

  • Full documentation: backend/docs/compliance_ai_integration.md
  • API Schemas: backend/compliance/api/schemas.py
  • LLM Provider: backend/compliance/services/llm_provider.py
  • AI Assistant: backend/compliance/services/ai_compliance_assistant.py

Support

If you run into problems:

  1. Check /api/v1/compliance/ai/status
  2. Check the logs: docker logs breakpilot-backend
  3. Test with the mock provider: COMPLIANCE_LLM_PROVIDER=mock
  4. See: backend/docs/compliance_ai_integration.md