# Compliance AI Integration - Quick Start

## Quick Start (5 Minutes)

### 1. Set environment variables

```bash
# In backend/.env
COMPLIANCE_LLM_PROVIDER=mock  # For testing without an API key
# OR
COMPLIANCE_LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
```

### 2. Start the backend

```bash
cd backend
docker-compose up -d
# OR
uvicorn main:app --reload
```

### 3. Seed the database

```bash
# Load requirements and modules
curl -X POST http://localhost:8000/api/v1/compliance/seed \
  -H "Content-Type: application/json" \
  -d '{"force": false}'

curl -X POST http://localhost:8000/api/v1/compliance/modules/seed \
  -H "Content-Type: application/json" \
  -d '{"force": false}'
```

### 4. Test the AI features

```bash
# Run the test script
python backend/scripts/test_compliance_ai_endpoints.py
```

## API Endpoints

All endpoints live under `http://localhost:8000/api/v1/compliance/ai/`.

### 1. Check status

```bash
curl http://localhost:8000/api/v1/compliance/ai/status
```

### 2. Interpret a requirement

```bash
curl -X POST http://localhost:8000/api/v1/compliance/ai/interpret \
  -H "Content-Type: application/json" \
  -d '{"requirement_id": "YOUR_REQUIREMENT_ID"}'
```

### 3. Suggest controls

```bash
curl -X POST http://localhost:8000/api/v1/compliance/ai/suggest-controls \
  -H "Content-Type: application/json" \
  -d '{"requirement_id": "YOUR_REQUIREMENT_ID"}'
```

### 4. Assess module risk

```bash
curl -X POST http://localhost:8000/api/v1/compliance/ai/assess-risk \
  -H "Content-Type: application/json" \
  -d '{"module_id": "consent-service"}'
```

### 5. Gap analysis

```bash
curl -X POST http://localhost:8000/api/v1/compliance/ai/gap-analysis \
  -H "Content-Type: application/json" \
  -d '{"requirement_id": "YOUR_REQUIREMENT_ID"}'
```

### 6. Batch interpretation

```bash
curl -X POST http://localhost:8000/api/v1/compliance/ai/batch-interpret \
  -H "Content-Type: application/json" \
  -d '{"requirement_ids": ["id1", "id2"], "rate_limit": 1.0}'
```

## Provider Configuration

### Option 1: Mock (testing)

```bash
export COMPLIANCE_LLM_PROVIDER=mock
```

Pros:
- No API key required
- Fast
- Deterministic

Cons:
- No real AI responses

### Option 2: Anthropic Claude (recommended)

```bash
export COMPLIANCE_LLM_PROVIDER=anthropic
export ANTHROPIC_API_KEY=sk-ant-...
export ANTHROPIC_MODEL=claude-sonnet-4-20250514
```

Pros:
- Best quality
- Reliable
- Optimized for Breakpilot

Cons:
- API costs (~$3 per 1M input tokens)

### Option 3: Self-hosted (Ollama/vLLM)

```bash
export COMPLIANCE_LLM_PROVIDER=self_hosted
export SELF_HOSTED_LLM_URL=http://localhost:11434
export SELF_HOSTED_LLM_MODEL=llama3.1:8b
```

Pros:
- Free
- Privacy (on-premise)
- No rate limits

Cons:
- Lower quality than Claude
- Requires GPU/CPU resources

## Example Responses

### Interpretation

```json
{
  "requirement_id": "req-123",
  "summary": "Art. 32 GDPR requires appropriate technical measures for data security.",
  "applicability": "Applies to all Breakpilot modules that process personal data.",
  "technical_measures": [
    "Encryption of personal data (AES-256)",
    "TLS 1.3 for data in transit",
    "Regular security audits",
    "Access control via IAM"
  ],
  "affected_modules": [
    "consent-service",
    "klausur-service",
    "backend"
  ],
  "risk_level": "high",
  "implementation_hints": [
    "SOPS with Age keys for secret management",
    "PostgreSQL transparent encryption",
    "Review the Nginx TLS configuration"
  ],
  "confidence_score": 0.85,
  "error": null
}
```

### Control Suggestion

```json
{
  "requirement_id": "req-123",
  "suggestions": [
    {
      "control_id": "PRIV-042",
      "domain": "priv",
      "title": "Encryption of personal data",
      "description": "All personal data must be stored encrypted",
      "pass_criteria": "100% of PII in PostgreSQL is AES-256 encrypted",
      "implementation_guidance": "Use SOPS with Age keys for secrets. Enable PostgreSQL transparent data encryption.",
      "is_automated": true,
      "automation_tool": "SOPS",
      "priority": "high",
      "confidence_score": 0.9
    }
  ]
}
```

## Troubleshooting

### Problem: "AI provider is not available"

Solution:

```bash
# Check the provider status
curl http://localhost:8000/api/v1/compliance/ai/status

# Check the environment variables
echo $COMPLIANCE_LLM_PROVIDER
echo $ANTHROPIC_API_KEY

# Fall back to the mock provider
export COMPLIANCE_LLM_PROVIDER=mock
```

### Problem: "Requirement not found"

Solution:

```bash
# Seed the database
curl -X POST http://localhost:8000/api/v1/compliance/seed \
  -d '{"force": false}'

# List the available requirements
curl http://localhost:8000/api/v1/compliance/requirements
```

### Problem: Timeouts with Anthropic

Solution:

```bash
# Increase the timeout
export COMPLIANCE_LLM_TIMEOUT=120.0

# Or switch to the mock provider
export COMPLIANCE_LLM_PROVIDER=mock
```

## Unit Tests

```bash
cd backend

# All tests
pytest tests/test_compliance_ai.py -v

# Mock tests only
pytest tests/test_compliance_ai.py::TestMockProvider -v

# Integration tests (requires an API key)
pytest tests/test_compliance_ai.py -v --integration
```

## Further Documentation

- **Full documentation**: `backend/docs/compliance_ai_integration.md`
- **API schemas**: `backend/compliance/api/schemas.py`
- **LLM provider**: `backend/compliance/services/llm_provider.py`
- **AI assistant**: `backend/compliance/services/ai_compliance_assistant.py`

## Support

If you run into problems:

1. Check `/api/v1/compliance/ai/status`
2. Check the logs: `docker logs breakpilot-backend`
3. Test with the mock provider: `COMPLIANCE_LLM_PROVIDER=mock`
4. See `backend/docs/compliance_ai_integration.md`
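## Calling the API from Python

The curl calls above can also be issued from Python, which is convenient for scripting. This is a minimal sketch using only the standard library; the endpoint paths and request bodies come from this guide, while the helper names (`build_request`, `interpret_requirement`) are illustrative and not part of the actual codebase.

```python
# Sketch of a tiny client for the compliance AI endpoints documented above.
# Helper names are illustrative; endpoint paths and payloads follow this guide.
import json
import urllib.request

BASE_URL = "http://localhost:8000/api/v1/compliance/ai"


def build_request(action: str, payload: dict) -> urllib.request.Request:
    """Build a JSON POST request for an AI endpoint such as 'interpret',
    'suggest-controls', 'assess-risk', 'gap-analysis' or 'batch-interpret'."""
    return urllib.request.Request(
        f"{BASE_URL}/{action}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def interpret_requirement(requirement_id: str, timeout: float = 120.0) -> dict:
    """POST to /interpret and return the parsed JSON response as a dict."""
    req = build_request("interpret", {"requirement_id": requirement_id})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```

With the backend running and seeded, `interpret_requirement("req-123")` should return a dict shaped like the Interpretation example above (`summary`, `technical_measures`, `risk_level`, ...).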