Compare commits: ffa3540d1a...main (101 commits)
@@ -1,24 +1,57 @@
 # BreakPilot PWA - Project Context for Claude
 
-## SSH Connection (IMPORTANT - ALWAYS READ FIRST)
+## Development Environment (IMPORTANT - ALWAYS READ FIRST)
+
+### Two-Machine Setup
+
+| Device | Role | Tasks |
+|-------|-------|----------|
+| **MacBook** | Client | Claude terminal, browser (frontend tests) |
+| **Mac Mini** | Server | Docker, all services, code execution, tests, Git |
+
+**IMPORTANT:** All development happens on the **Mac Mini**!
+- Run all commands (docker, git, tests, builds) on the Mac Mini via SSH
+- The MacBook serves only as a terminal and as a browser for frontend tests
+- Files are edited on the Mac Mini, not locally on the MacBook
+
+### SSH Connection
 
 ```bash
 # Connection to the Mac Mini on the local network
 ssh macmini
 
-# Project directory
+# Project directory on the Mac Mini
 cd /Users/benjaminadmin/Projekte/breakpilot-pwa
 
-# Or directly:
-ssh macmini "cd /Users/benjaminadmin/Projekte/breakpilot-pwa && <befehl>"
+# Or directly (PREFERRED for single commands):
+ssh macmini "<befehl>"
 ```
 
 **Hostname:** `macmini` (on the local network via Bonjour)
 **User:** `benjaminadmin`
 **Project:** `/Users/benjaminadmin/Projekte/breakpilot-pwa`
 
 ---
+
+### Examples of Correct Command Execution
+
+```bash
+# ✅ CORRECT: run commands on the Mac Mini
+ssh macmini "docker compose ps"
+ssh macmini "cd /Users/benjaminadmin/Projekte/breakpilot-pwa && git status"
+ssh macmini "cd /Users/benjaminadmin/Projekte/breakpilot-pwa/backend && source venv/bin/activate && pytest -v"
+
+# ❌ WRONG: local commands on the MacBook (Docker/services do not run there!)
+docker compose ps
+pytest -v
+```
+
+### Browser Tests (on the MacBook)
+
+Test the frontend in the browser via:
+- https://macmini/ (Studio)
+- https://macmini:3002/ (Admin)
+- https://macmini:3000/ (Website)
+
+---
 
 ## Core Principles (ALWAYS FOLLOW)
 
 ### 1. Open Source Policy
@@ -57,6 +90,35 @@ All security tools must run through the pipeline:
 
 ---
 
+## Three Docker Compose Projects (IMPORTANT!)
+
+The system consists of **three separate Docker Compose projects** on the Mac Mini:
+
+| Project | Path | Container prefix | Description |
+|---------|------|-------------------|--------------|
+| **breakpilot-pwa** | `/Users/benjaminadmin/Projekte/breakpilot-pwa/` | `breakpilot-pwa-*` | Main repo: Studio, Admin, backend, all services |
+| **breakpilot-core** | `/Users/benjaminadmin/Projekte/breakpilot-core/` | `bp-core-*` | Nginx reverse proxy (`bp-core-nginx`) |
+| **breakpilot-compliance** | `/Users/benjaminadmin/Projekte/breakpilot-compliance/` | `bp-compliance-*` | Compliance system: developer portal, admin, backend, AI SDK |
+
+### Important Notes on the Compose Projects
+
+- **Nginx** (`bp-core-nginx`) runs in `breakpilot-core`, NOT in `breakpilot-pwa`
+- **Developer Portal** (`bp-compliance-developer-portal`) runs in `breakpilot-compliance`
+- If a container does not exist in `breakpilot-pwa`, check the other projects!
+
+```bash
+# Manage breakpilot-pwa containers
+ssh macmini "/usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-pwa/docker-compose.yml <cmd>"
+
+# Manage breakpilot-core containers (Nginx)
+ssh macmini "/usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-core/docker-compose.yml <cmd>"
+
+# Manage breakpilot-compliance containers (developer portal, compliance)
+ssh macmini "/usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-compliance/docker-compose.yml <cmd>"
+```
+
+---
+
 ## Main URLs (HTTPS via Nginx)
 
 | URL | Service | Description |
@@ -68,6 +130,7 @@ All security tools must run through the pipeline:
 | https://macmini:8086/ | Klausur Service | Exam service |
 | https://macmini:8443/ | Jitsi Meet | Video conferencing |
 | wss://macmini:8091/ | Voice Service | Voice input WebSocket |
+| https://macmini:3002/infrastructure/night-mode | Night Mode | Night shutdown UI |
 
 ### AI Compliance SDK (GDPR Tools)
@@ -81,6 +144,19 @@ All security tools must run through the pipeline:
 | https://macmini:3002/developers | Developer Portal | API documentation for customers |
 | https://macmini:8093/ | SDK API | Backend API for the SDK |
 
+### Developer Portal (Compliance Documentation)
+
+| URL | Description |
+|-----|--------------|
+| https://macmini:3006/ | Developer portal start page |
+| https://macmini:3006/development/docs | **Compliance service system documentation** |
+| https://macmini:3006/sdk | SDK documentation |
+| https://macmini:3006/api | API reference |
+| https://macmini:3006/guides | Guides |
+| https://macmini:3006/changelog | Changelog |
+
+**Note:** The developer portal runs as `bp-compliance-developer-portal` in the `breakpilot-compliance` compose project on port 3006 (via `bp-core-nginx`).
+
 ### Internal Services
 
 | URL | Service |
 |-----|---------|
@@ -91,26 +167,32 @@ All security tools must run through the pipeline:
 | http://macmini:8200/ | Vault UI (secrets) |
 | http://macmini:8025/ | Mailpit (e-mail dev) |
 | http://macmini:9001/ | MinIO Console (S3) |
 | http://macmini:3003/ | Gitea (Git server) |
 | http://macmini:8090/ | Woodpecker CI |
 | http://macmini:8089/ | Camunda (BPMN) |
 | http://macmini:8096/ | Night Scheduler API |
 | http://macmini:8009/ | MkDocs (project docs) |
 
-### Studio URLs
-
-| URL | Description |
-|-----|--------------|
-| https://macmini/korrektur | Teacher correction platform |
-| https://macmini:8000/app | Dashboard (old version) |
+### AI Tools (Admin v2)
+
+| URL | Tool | Description |
+|-----|------|--------------|
+| https://macmini:3002/ai/llm-compare | LLM comparison | Compare AI providers |
+| https://macmini:3002/ai/ocr-compare | OCR comparison | OCR methods & vocabulary extraction |
+| https://macmini:3002/ai/ocr-labeling | OCR labeling | Create training data |
+| https://macmini:3002/ai/test-quality | Test Quality (BQAS) | Golden suite & tests |
+| https://macmini:3002/ai/gpu | GPU infrastructure | vast.ai management |
+| https://macmini:3002/ai/rag-pipeline | RAG pipeline | Retrieval Augmented Generation |
+| https://macmini:3002/ai/magic-help | Magic Help | AI assistant |
+
+### Teacher Tools (Studio v2)
+
+| URL | Tool | Description |
+|-----|------|--------------|
+| https://macmini/vocab-worksheet | Vocabulary worksheet | OCR scan & worksheet generator |
+| https://macmini/korrektur | Correction platform | Abitur exam correction |
+| https://macmini:8000/app | Dashboard (old) | Legacy dashboard |
 
 ---
 
-## Services (49 containers)
+## Services
 
 ### Core Applications
@@ -129,7 +211,6 @@ All security tools must run through the pipeline:
 | `klausur-service` | Python/FastAPI | 8086 | Exams, OCR, RAG |
 | `school-service` | Python | 8082 | School administration |
 | `edu-search-service` | Python | 8088 | Education search |
 | `breakpilot-drive` | Node.js | 8087 | File storage (IPFS) |
 | `geo-service` | Python | 8084 | Geo data (PostGIS) |
 | `voice-service` | Python | 8091 | Voice input |
@@ -142,6 +223,15 @@ All security tools must run through the pipeline:
 | `paddleocr-service` | Python | - | OCR for documents |
 | `transcription-worker` | Python | - | Audio transcription |
 
+### Compliance (breakpilot-compliance project)
+
+| Service | Tech | Port | Container |
+|---------|------|------|-----------|
+| `developer-portal` | Next.js | 3006 | `bp-compliance-developer-portal` |
+| `compliance-admin` | Next.js | - | `bp-compliance-admin` |
+| `compliance-backend` | Go | - | `bp-compliance-backend` |
+| `compliance-ai-sdk` | Go | 8090 | `bp-compliance-ai-sdk` |
+
 ### Communication
 
 | Service | Tech | Port | Description |
@@ -166,7 +256,7 @@ All security tools must run through the pipeline:
 
 | Service | Tech | Port | Description |
 |---------|------|------|--------------|
-| `nginx` | Nginx | 80/443 | Reverse proxy + TLS |
+| `nginx` | Nginx | 80/443 | Reverse proxy + TLS (in breakpilot-core!) |
 | `vault` | HashiCorp Vault | 8200 | Secrets management |
 | `vault-agent` | Vault | - | Certificate renewal |
 | `gitea` | Gitea | 3003 | Git server |
@@ -175,14 +265,13 @@ All security tools must run through the pipeline:
 | `night-scheduler` | Python/FastAPI | 8096 | Auto shutdown/startup |
 | `mailpit` | Mailpit | 8025/1025 | E-mail (dev) |
 
-### ERP & Billing
+### ERP
 
 | Service | Tech | Port | Description |
 |---------|------|------|--------------|
 | `erpnext-frontend` | ERPNext | 8009 | ERP frontend |
 | `erpnext-backend` | ERPNext | - | ERP backend |
 | `erpnext-db` | MariaDB | - | ERP database |
-| `billing-service` | Python | - | Billing service |
 
 ### DSMS (Data Sharing)
@@ -215,12 +304,12 @@ All security tools must run through the pipeline:
 - `night-scheduler`: FastAPI
 
 ### TypeScript/Next.js
-- `studio-v2`: Next.js 14, React, TailwindCSS
-- `admin-v2`: Next.js 14, React, TailwindCSS, shadcn/ui
+- `studio-v2`: Next.js 15, React, TailwindCSS
+- `admin-v2`: Next.js 15, React, TailwindCSS
 - `website`: Next.js 14
+- `developer-portal`: Next.js, React, TailwindCSS (in breakpilot-compliance)
 
 ### Node.js
 - `breakpilot-drive`: Express, IPFS
 - `dsms-node`: IPFS
 - `dsms-gateway`: Express
@@ -235,20 +324,27 @@ breakpilot-pwa/
 │   ├── rules/                      # Automatic rules
 │   │   ├── testing.md
 │   │   ├── documentation.md
-│   │   └── night-scheduler.md
+│   │   ├── night-scheduler.md
+│   │   ├── open-source-policy.md
+│   │   ├── compliance-checklist.md
+│   │   ├── abiturkorrektur.md
+│   │   ├── vocab-worksheet.md
+│   │   ├── multi-agent-architecture.md
+│   │   └── experimental-dashboard.md
 │   └── settings.json
 ├── admin-v2/                       # Admin dashboard (Next.js)
 ├── studio-v2/                      # Teacher/student studio (Next.js)
 ├── website/                        # Public website (Next.js)
+├── developer-portal/               # Developer portal (Next.js, also in breakpilot-compliance)
 ├── backend/                        # Python backend (FastAPI)
 ├── consent-service/                # Go consent service
 ├── klausur-service/                # Exam/OCR service
 ├── ai-compliance-sdk/              # AI compliance SDK
 ├── breakpilot-compliance-sdk/      # Compliance SDK (monorepo)
 ├── voice-service/                  # Voice input
 ├── geo-service/                    # Geo data
 ├── school-service/                 # School administration
 ├── edu-search-service/             # Education search
 ├── breakpilot-drive/               # File storage
 ├── night-scheduler/                # Auto shutdown
 ├── nginx/                          # Reverse proxy config
 ├── vault/                          # Vault config
@@ -258,6 +354,10 @@ breakpilot-pwa/
 └── mkdocs.yml                      # MkDocs config
 ```
 
+**Removed/no longer active directories (blocked in .gitignore):**
+- `BreakpilotDrive/` (old Unity project, no longer in development)
+- `billing-service/` (not needed)
+
 ---
 
 ## Documentation (MkDocs)
@@ -290,42 +390,55 @@ mkdocs build
 
 ## Common Commands
 
-### Docker
+### Docker (via SSH on the Mac Mini)
 
 ```bash
-# Start all services
-docker compose up -d
+# Start all services (breakpilot-pwa)
+ssh macmini "/usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-pwa/docker-compose.yml up -d"
 
-# Rebuild a single service
-docker compose build --no-cache <service-name>
-docker compose up -d <service-name>
+# Rebuild & restart a single service
+ssh macmini "/usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-pwa/docker-compose.yml build --no-cache <service-name>"
+ssh macmini "/usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-pwa/docker-compose.yml up -d <service-name>"
 
 # Show logs
-docker compose logs -f <service-name>
+ssh macmini "/usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-pwa/docker-compose.yml logs -f <service-name>"
 
 # Status of all containers
-docker compose ps
+ssh macmini "/usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-pwa/docker-compose.yml ps"
+
+# Developer portal (in breakpilot-compliance!)
+ssh macmini "/usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-compliance/docker-compose.yml build --no-cache developer-portal"
+ssh macmini "/usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-compliance/docker-compose.yml up -d developer-portal"
+
+# Nginx (in breakpilot-core!)
+ssh macmini "/usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-core/docker-compose.yml restart nginx"
 ```
 
+**IMPORTANT:** The Docker path on the Mac Mini is `/usr/local/bin/docker` (not in the default PATH over SSH).
+
-### Tests
+### Tests (via SSH)
 
 ```bash
 # Go tests (consent service)
-cd consent-service && go test -v ./...
+ssh macmini "cd /Users/benjaminadmin/Projekte/breakpilot-pwa/consent-service && go test -v ./..."
 
 # Python tests
-cd backend && source venv/bin/activate && pytest -v
-
-# With coverage
-pytest --cov=. --cov-report=html
+ssh macmini "cd /Users/benjaminadmin/Projekte/breakpilot-pwa/backend && source venv/bin/activate && pytest -v"
 ```
 
-### Git (via Gitea)
+### Git
 
 ```bash
-# Remote is localhost because Gitea runs in a container
 git remote -v
-# origin  http://localhost:3003/pilotadmin/breakpilot-pwa.git
+# Two remotes configured - ALWAYS push to both!
+# origin: http://macmini:3003/pilotadmin/breakpilot-pwa.git (local Gitea on the Mac Mini)
+# gitea:  git@gitea.meghsakha.com:Benjamin_Boenisch/breakpilot-pwa.git (external Gitea server)
+
+# Push to both remotes (MANDATORY on every push):
+git push origin main && git push gitea main
+
+# Run git commands on the Mac Mini (without cd):
+ssh macmini "git -C /Users/benjaminadmin/Projekte/breakpilot-pwa status"
+ssh macmini "git -C /Users/benjaminadmin/Projekte/breakpilot-pwa pull --no-rebase origin main"
 ```
 
 ---
@@ -367,6 +480,15 @@ git remote -v
 - Vault tokens
 - SSL certificates
 
+**NEVER commit to Git (blocked via .gitignore):**
+- `*.pdf`, `*.docx`, `*.xlsx`, `*.pptx` (documents stay local on the Mac Mini only)
+- Compiled Go binaries (`consent-service/server`, etc.)
+- Large media files (videos, audio, images >1 MB)
+- `BreakpilotDrive/` (old Unity project)
+- `billing-service/` (not needed)
+
+**Note:** The Git history was cleaned with `git-filter-repo` on 2026-02-12. All PDFs, Word/Excel files, BreakpilotDrive/, and billing-service/ were removed from the entire history; the repo shrank from 1.7 GB to 11 MB.
+
 ---
 
 ## Contacts
.claude/rules/abiturkorrektur.md (new file, 614 lines)
@@ -0,0 +1,614 @@
# Abiturkorrektur System - Developer Documentation

**IMPORTANT: This file is read on every compacting. Document all implementation details here!**

---

## 1. Project Goal

Development of an AI-assisted correction system for Abitur exams in the subject German (Deutsch):
- **Target group**: teachers in Lower Saxony (pilot), later all federal states
- **Core problem**: the first correction takes 6 hours per exam paper
- **Solution**: the AI proposes grades, the teacher confirms/corrects

---

## 2. Architecture Overview
```
┌────────────────────────────────────────────────────────────┐
│  Frontend (Next.js)                                        │
│  /website/app/admin/klausur-korrektur/                     │
│  - page.tsx (exam list)                                    │
│  - [klausurId]/page.tsx (student list)                     │
│  - [klausurId]/[studentId]/page.tsx (correction workspace) │
└────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│  klausur-service (FastAPI)                                 │
│  Port 8086 - /klausur-service/backend/main.py              │
│  - Klausur CRUD (/api/v1/klausuren)                        │
│  - Student work (/api/v1/students)                         │
│  - Annotations (/api/v1/annotations) [NEW]                 │
│  - Gutachten generation                                    │
│  - Fairness analysis                                       │
└────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌────────────────────────────────────────────────────────────┐
│  Infrastructure                                            │
│  - Qdrant (vector DB for RAG)                              │
│  - MinIO (file storage)                                    │
│  - PostgreSQL (metadata)                                   │
│  - Embedding service (port 8087)                           │
└────────────────────────────────────────────────────────────┘
```

---

## 3. Existing Backend Components (USE THEM!)

### 3.1 Klausur Service API (main.py)

```python
# Already implemented:
GET/POST /api/v1/klausuren                         # Klausur CRUD
GET      /api/v1/klausuren/{id}                    # Klausur details
POST     /api/v1/klausuren/{id}/students           # Upload student work
GET      /api/v1/klausuren/{id}/students           # Student list
PUT      /api/v1/students/{id}/criteria            # Score criteria
PUT      /api/v1/students/{id}/gutachten           # Save Gutachten
POST     /api/v1/students/{id}/gutachten/generate  # Generate Gutachten (AI)
GET      /api/v1/klausuren/{id}/fairness           # Fairness analysis
GET      /api/v1/grade-info                        # Grading-system info
```

### 3.2 Data Models (main.py)

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Klausur:
    id: str
    title: str
    subject: str = "Deutsch"
    year: int = 2025
    semester: str = "Abitur"
    modus: str = "abitur"        # or "vorabitur"
    eh_id: Optional[str] = None  # Erwartungshorizont (marking rubric) reference

@dataclass
class StudentKlausur:
    id: str
    klausur_id: str
    anonym_id: str
    file_path: str
    ocr_text: str = ""
    criteria_scores: Dict[str, int] = field(default_factory=dict)
    gutachten: str = ""
    status: str = "UPLOADED"
    raw_points: int = 0
    grade_points: int = 0

# Status workflow:
# UPLOADED → OCR_PROCESSING → OCR_COMPLETE → ANALYZING →
# FIRST_EXAMINER → SECOND_EXAMINER → COMPLETED
```

### 3.3 Grading System (15-point scale)

```python
GRADE_THRESHOLDS = {
    15: 95, 14: 90, 13: 85, 12: 80, 11: 75,
    10: 70, 9: 65, 8: 60, 7: 55, 6: 50,
    5: 45, 4: 40, 3: 33, 2: 27, 1: 20, 0: 0
}

DEFAULT_CRITERIA = {
    "rechtschreibung": {"name": "Rechtschreibung", "weight": 15},
    "grammatik": {"name": "Grammatik", "weight": 15},
    "inhalt": {"name": "Inhalt", "weight": 40},
    "struktur": {"name": "Struktur", "weight": 15},
    "stil": {"name": "Stil", "weight": 15}
}
```
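
As a sketch of how these two tables might combine into a final grade (an assumption for illustration; the actual scoring code lives in `main.py` and the helper names below are hypothetical): the weighted average of the per-criterion scores is mapped onto the 15-point scale via `GRADE_THRESHOLDS`.

```python
# Hypothetical helpers (not the actual main.py code): combine per-criterion
# scores (each 0-100) into a weighted percentage, then map it to grade points.
GRADE_THRESHOLDS = {
    15: 95, 14: 90, 13: 85, 12: 80, 11: 75,
    10: 70, 9: 65, 8: 60, 7: 55, 6: 50,
    5: 45, 4: 40, 3: 33, 2: 27, 1: 20, 0: 0,
}
DEFAULT_CRITERIA = {
    "rechtschreibung": {"name": "Rechtschreibung", "weight": 15},
    "grammatik": {"name": "Grammatik", "weight": 15},
    "inhalt": {"name": "Inhalt", "weight": 40},
    "struktur": {"name": "Struktur", "weight": 15},
    "stil": {"name": "Stil", "weight": 15},
}

def weighted_percentage(criteria_scores):
    """Weighted average of per-criterion scores, each on a 0-100 scale."""
    total = sum(c["weight"] for c in DEFAULT_CRITERIA.values())
    return sum(criteria_scores.get(k, 0) * c["weight"]
               for k, c in DEFAULT_CRITERIA.items()) / total

def grade_points(percentage):
    """Highest grade whose threshold the percentage reaches."""
    for grade in sorted(GRADE_THRESHOLDS, reverse=True):
        if percentage >= GRADE_THRESHOLDS[grade]:
            return grade
    return 0

# Example: the scores shown in the UI mock-up of section 4.3
scores = {"rechtschreibung": 70, "grammatik": 80, "inhalt": 65,
          "struktur": 75, "stil": 70}
print(weighted_percentage(scores))  # 70.25
print(grade_points(70.25))          # 10
```

With these numbers the weighted average is 70.25%, which clears the 70% threshold for 10 grade points, matching the "10 points" total in the section 4.3 mock-up.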

---

## 4. NEW TO IMPLEMENT

### Phase 1: Correction Workspace MVP

#### 4.1 Frontend Structure

```
/website/app/admin/klausur-korrektur/
├── page.tsx                    # Exam overview (list of all Klausuren)
├── types.ts                    # TypeScript interfaces
├── [klausurId]/
│   ├── page.tsx                # Student list of one exam
│   └── [studentId]/
│       └── page.tsx            # Correction workspace (2/3-1/3)
└── components/
    ├── KlausurCard.tsx         # Exam card in the list
    ├── StudentList.tsx         # Student overview
    ├── DocumentViewer.tsx      # PDF/image display (left, 2/3)
    ├── AnnotationLayer.tsx     # SVG overlay for markings
    ├── AnnotationToolbar.tsx   # Tools
    ├── CorrectionPanel.tsx     # Scoring panel (right, 1/3)
    ├── CriteriaScoreCard.tsx   # Single criterion
    ├── EHSuggestionPanel.tsx   # EH suggestions via RAG
    ├── GutachtenEditor.tsx     # Edit the Gutachten
    └── StudentNavigation.tsx   # Prev/next navigation
```

#### 4.2 Annotations Backend (NEW in main.py)

```python
# New data model:
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Annotation:
    id: str
    student_work_id: str
    page: int
    position: dict    # {x, y, width, height} in % (0-100)
    type: str         # 'rechtschreibung' | 'grammatik' | 'inhalt' | 'struktur' | 'stil' | 'comment'
    text: str         # comment text
    severity: str     # 'minor' | 'major' | 'critical'
    suggestion: str   # correction suggestion (for spelling/grammar)
    created_by: str   # user ID (EK or ZK)
    created_at: datetime
    role: str         # 'first_examiner' | 'second_examiner'
    linked_criterion: Optional[str]  # link to a criterion

# New endpoints:
# POST   /api/v1/students/{id}/annotations  # create
# GET    /api/v1/students/{id}/annotations  # fetch
# PUT    /api/v1/annotations/{id}           # update
# DELETE /api/v1/annotations/{id}           # delete
```
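
Because `position` is stored in percent, the frontend has to convert it to pixel coordinates before drawing the SVG overlay. A minimal illustrative helper (shown in Python for brevity; the actual conversion would live in `AnnotationLayer.tsx`, and the function name is hypothetical):

```python
# Illustrative only: maps a percent-based annotation position onto the
# rendered page size. Not the actual frontend code.
def position_to_pixels(position, page_width, page_height):
    """Map {x, y, width, height} given in % (0-100) to pixels."""
    return {
        "x": position["x"] / 100 * page_width,
        "y": position["y"] / 100 * page_height,
        "width": position["width"] / 100 * page_width,
        "height": position["height"] / 100 * page_height,
    }

print(position_to_pixels({"x": 50, "y": 25, "width": 10, "height": 5}, 800, 1000))
# {'x': 400.0, 'y': 250.0, 'width': 80.0, 'height': 50.0}
```

Storing positions in percent keeps annotations independent of zoom level and render size, which matters for the zoomable document viewer.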

#### 4.3 UI Layout Specification

```
┌──────────────────────────────────────────────────────────────────────┐
│ Header: exam title | Student: Anonym-123 | [← Prev] [5/24] [Next →]  │
├─────────────────────────────────────────┬────────────────────────────┤
│                                         │ Tabs: [Criteria|Gutachten] │
│  ┌─────────────────────────────────┐    │                            │
│  │                                 │    │ ▼ Rechtschreibung (15%)    │
│  │        Document display         │    │   [====|====] 70/100       │
│  │       (PDF/image with zoom)     │    │   12 errors marked         │
│  │                                 │    │                            │
│  │    + annotation overlay         │    │ ▼ Grammatik (15%)          │
│  │      (SVG layer)                │    │   [====|====] 80/100       │
│  │                                 │    │                            │
│  │                                 │    │ ▼ Inhalt (40%)             │
│  │                                 │    │   [====|====] 65/100       │
│  │                                 │    │   EH suggestions: [Load]   │
│  └─────────────────────────────────┘    │                            │
│                                         │ ▼ Struktur (15%)           │
│  Toolbar: [RS] [Gram] [Comment]         │   [====|====] 75/100       │
│           [Zoom+] [Zoom-] [Fit]         │                            │
│                                         │ ▼ Stil (15%)               │
│  Pages: [1] [2] [3] [4] [5]             │   [====|====] 70/100       │
│                                         │                            │
│                                         │ ━━━━━━━━━━━━━━━━━━━━━━━━  │
│                                         │ Total: 10 points (2-)      │
│                                         │ [Generate Gutachten]       │
│                                         │ [Save] [Finalize]          │
├─────────────────────────────────────────┴────────────────────────────┤
│              2/3 width                  │         1/3 width          │
└──────────────────────────────────────────────────────────────────────┘
```

---

## 5. Implementation Order

### Phase 1.1: Skeleton (CURRENT)
1. ✅ Write documentation
2. [ ] `/website/app/admin/klausur-korrektur/page.tsx` - exam list
3. [ ] `/website/app/admin/klausur-korrektur/types.ts` - TypeScript types
4. [ ] Add navigation to AdminLayout.tsx
5. [ ] Deploy + test

### Phase 1.2: Correction Workspace
1. [ ] `[klausurId]/page.tsx` - student list
2. [ ] `[klausurId]/[studentId]/page.tsx` - workspace
3. [ ] `components/DocumentViewer.tsx` - image/PDF display
4. [ ] `components/CorrectionPanel.tsx` - scoring panel
5. [ ] Deploy + test with a teacher

### Phase 1.3: Annotation System
1. [ ] Backend: annotation endpoints in main.py
2. [ ] `components/AnnotationLayer.tsx` - SVG overlay
3. [ ] `components/AnnotationToolbar.tsx` - tools
4. [ ] Color coding: spelling=red, grammar=blue, content=green
5. [ ] Deploy + test

### Phase 1.4: EH Integration
1. [ ] `components/EHSuggestionPanel.tsx`
2. [ ] Backend: `/api/v1/students/{id}/eh-suggestions`
3. [ ] RAG query with the student text
4. [ ] Deploy + test

### Phase 1.5: Gutachten Editor
1. [ ] `components/GutachtenEditor.tsx`
2. [ ] Evidence links to annotations
3. [ ] Gutachten generation button
4. [ ] Deploy + test

---

## 6. API Configuration

```typescript
// Frontend API base URL
const KLAUSUR_SERVICE = process.env.NEXT_PUBLIC_KLAUSUR_SERVICE_URL || 'http://localhost:8086'

// Endpoints:
// Klausuren
GET    ${KLAUSUR_SERVICE}/api/v1/klausuren
POST   ${KLAUSUR_SERVICE}/api/v1/klausuren
GET    ${KLAUSUR_SERVICE}/api/v1/klausuren/{id}
GET    ${KLAUSUR_SERVICE}/api/v1/klausuren/{id}/students

// Students
GET    ${KLAUSUR_SERVICE}/api/v1/students/{id}
GET    ${KLAUSUR_SERVICE}/api/v1/students/{id}/file       // document download
PUT    ${KLAUSUR_SERVICE}/api/v1/students/{id}/criteria
PUT    ${KLAUSUR_SERVICE}/api/v1/students/{id}/gutachten
POST   ${KLAUSUR_SERVICE}/api/v1/students/{id}/gutachten/generate

// Annotations (NEW)
GET    ${KLAUSUR_SERVICE}/api/v1/students/{id}/annotations
POST   ${KLAUSUR_SERVICE}/api/v1/students/{id}/annotations
PUT    ${KLAUSUR_SERVICE}/api/v1/annotations/{id}
DELETE ${KLAUSUR_SERVICE}/api/v1/annotations/{id}

// System
GET    ${KLAUSUR_SERVICE}/api/v1/grade-info
```

---

## 7. Deployment Process

```bash
# 1. Sync files to the Mac Mini
rsync -avz --delete \
  --exclude 'node_modules' --exclude '.next' --exclude '.git' \
  /Users/benjaminadmin/Projekte/breakpilot-pwa/website/ \
  macmini:/Users/benjaminadmin/Projekte/breakpilot-pwa/website/

# 2. Rebuild the website container
ssh macmini "/usr/local/bin/docker compose \
  -f /Users/benjaminadmin/Projekte/breakpilot-pwa/docker-compose.yml \
  build --no-cache website"

# 3. Restart the container
ssh macmini "/usr/local/bin/docker compose \
  -f /Users/benjaminadmin/Projekte/breakpilot-pwa/docker-compose.yml \
  up -d website"

# 4. Test at:
# http://macmini:3000/admin/klausur-korrektur
```

---

## 8. Federal-State Specifics (Lower Saxony Pilot)

```json
// /klausur-service/backend/policies/bundeslaender.json
{
  "NI": {
    "name": "Niedersachsen",
    "grading_mode": "points_15",
    "requires_gutachten": true,
    "zk_visibility": "full",          // ZK sees the EK correction
    "third_correction_threshold": 4,  // from a 4-point difference
    "colors": {
      "first_examiner": "#dc2626",    // red
      "second_examiner": "#16a34a"    // green
    },
    "criteria_weights": {
      "rechtschreibung": 15,
      "grammatik": 15,
      "inhalt": 40,
      "struktur": 15,
      "stil": 15
    }
  }
}
```

---

## 9. Important Files (Reference)

| File | Description |
|-------|--------------|
| `/klausur-service/backend/main.py` | Main API, all endpoints |
| `/klausur-service/backend/eh_pipeline.py` | BYOEH processing |
| `/klausur-service/backend/qdrant_service.py` | RAG vector search |
| `/klausur-service/backend/hybrid_search.py` | Hybrid search |
| `/website/components/admin/AdminLayout.tsx` | Admin navigation |
| `/website/app/admin/ocr-labeling/page.tsx` | Reference for the 2/3-1/3 layout |

---

## 10. Testing Checklist

### After every phase:
- [ ] The page loads without errors
- [ ] API calls work (DevTools Network tab)
- [ ] Responsive layout is correct
- [ ] A teacher can complete the workflow

### Teacher test scenarios:
1. Create an exam
2. Upload 3+ student papers
3. Correct the first paper (all criteria)
4. Set annotations
5. Generate the Gutachten
6. Navigate to the next paper
7. Fairness check after all papers

---

## 11. Phase 2: Second-Correction System (NEW)

### 11.1 New Backend Endpoints (main.py)

```python
# Second-correction (ZK) workflow
POST /api/v1/students/{id}/start-zweitkorrektur    # start ZK (after EK)
POST /api/v1/students/{id}/submit-zweitkorrektur   # submit ZK result

# Agreement (at a 3-point difference)
POST /api/v1/students/{id}/einigung                # submit agreement

# Third correction (at a difference >= 4 points)
POST /api/v1/students/{id}/assign-drittkorrektor   # assign DK
POST /api/v1/students/{id}/submit-drittkorrektur   # DK result (final)

# Workflow status & visibility filtering
GET /api/v1/students/{id}/examiner-workflow        # fetch workflow status
GET /api/v1/students/{id}/annotations-filtered     # policy-filtered annotations
```

### 11.2 Workflow Status

```python
class ExaminerWorkflowStatus(str, Enum):
    NOT_STARTED = "not_started"
    EK_IN_PROGRESS = "ek_in_progress"
    EK_COMPLETED = "ek_completed"
    ZK_ASSIGNED = "zk_assigned"
    ZK_IN_PROGRESS = "zk_in_progress"
    ZK_COMPLETED = "zk_completed"
    EINIGUNG_REQUIRED = "einigung_required"
    EINIGUNG_COMPLETED = "einigung_completed"
    DRITTKORREKTUR_REQUIRED = "drittkorrektur_required"
    DRITTKORREKTUR_ASSIGNED = "drittkorrektur_assigned"
    DRITTKORREKTUR_IN_PROGRESS = "drittkorrektur_in_progress"
    COMPLETED = "completed"
```
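
The forward transitions between these states can be validated with a small lookup table. The table below is inferred from the workflow described in this section, not copied from the backend:

```python
# Inferred transition table; the authoritative logic lives in main.py.
TRANSITIONS = {
    "not_started": {"ek_in_progress"},
    "ek_in_progress": {"ek_completed"},
    "ek_completed": {"zk_assigned"},
    "zk_assigned": {"zk_in_progress"},
    "zk_in_progress": {"zk_completed"},
    # After ZK, the point difference decides the branch (see 11.4).
    "zk_completed": {"completed", "einigung_required", "drittkorrektur_required"},
    "einigung_required": {"einigung_completed"},
    "einigung_completed": {"completed"},
    "drittkorrektur_required": {"drittkorrektur_assigned"},
    "drittkorrektur_assigned": {"drittkorrektur_in_progress"},
    "drittkorrektur_in_progress": {"completed"},
    "completed": set(),
}

def can_transition(current: str, target: str) -> bool:
    """Check whether a status change is a legal forward step."""
    return target in TRANSITIONS.get(current, set())
```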

### 11.3 Visibility Rules (from bundeslaender.json)

| Mode | ZK sees EK annotations | ZK sees EK grade | ZK sees EK report |
|------|------------------------|------------------|-------------------|
| `blind` | No | No | No |
| `semi` (Bavaria) | Yes | No | No |
| `full` (NI, default) | Yes | Yes | Yes |
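
These rules map naturally onto a server-side filter applied before data reaches the second examiner; a minimal sketch (the record field names are illustrative, the real filtering happens behind the annotations-filtered endpoint):

```python
# Illustrative visibility policy derived from the table above.
POLICY = {
    "blind": {"annotations": False, "grade": False, "report": False},
    "semi":  {"annotations": True,  "grade": False, "report": False},
    "full":  {"annotations": True,  "grade": True,  "report": True},
}

def filter_for_zk(record: dict, mode: str) -> dict:
    """Strip first-examiner fields the second examiner must not see."""
    policy = POLICY[mode]
    visible = dict(record)  # never mutate the stored record
    if not policy["annotations"]:
        visible.pop("ek_annotations", None)
    if not policy["grade"]:
        visible.pop("ek_grade", None)
    if not policy["report"]:
        visible.pop("ek_gutachten", None)
    return visible
```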

### 11.4 Consensus Rules

| EK-ZK difference | Action |
|------------------|--------|
| 0-2 points | Auto-consensus (average) |
| 3 points | Agreement procedure required |
| >= 4 points | Third correction required |

---

## 12. Current Status

**Date**: 2026-01-21
**Phase**: All phases complete
**Status**: MVP complete, ready for production testing

### Completed:
- [x] Phase 1: Correction workspace MVP
- [x] Phase 1.1: Scaffolding (exam list, student list)
- [x] Phase 1.2: Annotation system
- [x] Phase 1.3: Spelling/grammar overlays
- [x] Phase 1.4: EH suggestions via RAG
- [x] Phase 2.1 backend: Second-correction endpoints
- [x] Phase 2.2 backend: Agreement endpoint
- [x] Phase 2.3 backend: Third-correction trigger
- [x] Phase 2.1 frontend: ZK mode UI
- [x] Phase 2.2 frontend: Agreement screen
- [x] Phase 3.1: Fairness dashboard frontend
- [x] Phase 3.2: Outlier list with quick adjust
- [x] Phase 3.3: Grade histogram & heatmap
- [x] Phase 4.1: PDF export backend (reportlab)
- [x] Phase 4.2: PDF export frontend
- [x] Phase 4.3: Vorabitur mode with EH templates

### URLs:
- Exam correction: `/admin/klausur-korrektur`
- Fairness dashboard: `/admin/klausur-korrektur/[klausurId]/fairness`

### PDF Export Endpoints:
- `GET /api/v1/students/{id}/export/gutachten` - Single report as PDF
- `GET /api/v1/students/{id}/export/annotations` - Annotations as PDF
- `GET /api/v1/klausuren/{id}/export/overview` - Grade overview as PDF
- `GET /api/v1/klausuren/{id}/export/all-gutachten` - All reports as one PDF

### Vorabitur Mode Endpoints:
- `GET /api/v1/vorabitur/templates` - List all EH templates
- `GET /api/v1/vorabitur/templates/{aufgabentyp}` - Template details
- `POST /api/v1/klausuren/{id}/vorabitur-eh` - Create a custom EH
- `GET /api/v1/klausuren/{id}/vorabitur-eh` - Fetch the linked EH
- `PUT /api/v1/klausuren/{id}/vorabitur-eh` - Update the EH

### Available Task Types:
- `textanalyse_pragmatisch` - Non-fiction texts, speeches, commentaries
- `gedichtanalyse` - Poetry
- `prosaanalyse` - Novels, short stories
- `dramenanalyse` - Dramatic texts
- `eroerterung_textgebunden` - Text-based argumentative essay

---

## 13. Teacher Guide (Step by Step)

### 13.1 Accessing the System

**Option 1: Via the main dashboard**
1. Open `http://macmini:8000/app` in your browser
2. Click the "Abiturklausuren" tile
3. You are redirected to the correction interface automatically

**Option 2: Direct access**
1. Open `http://macmini:3000/admin/klausur-korrektur` directly

### 13.2 Two Entry Options

On your first visit, the welcome page offers two options:

#### Option A: Quick start (upload directly)
- Ideal if you want to get started immediately
- No manual exam creation required
- The system creates an exam in the background automatically

**Steps:**
1. Click "Schnellstart - Direkt hochladen"
2. **Step 1**: Drag your scanned papers (PDF/JPG/PNG) into the upload area
3. **Step 2**: Optionally select the task type and describe the assignment
4. **Step 3**: Review the summary and click "Korrektur starten"
5. You are redirected to the correction view automatically

#### Option B: Create a new exam (standard)
- Recommended for regular use
- Full metadata (subject, year, course, mode)
- Supports the second-correction workflow

**Steps:**
1. Click "Neue Klausur erstellen"
2. Enter title, subject, year, and semester
3. Select the mode:
   - **Abitur**: For official Abitur exams with the NiBiS EH
   - **Vorabitur**: For practice exams with your own EH
4. For Vorabitur: select the task type and describe the assignment
5. Click "Klausur erstellen"

### 13.3 Uploading Papers

After creating the exam:
1. Open the exam from the list
2. Click "Arbeiten hochladen"
3. Select the scanned files (PDF or images)
4. Optionally assign anonymous IDs (e.g. "Arbeit-1", "Arbeit-2")
5. The system starts OCR recognition automatically

### 13.4 Correcting

**Correction workspace (2/3-1/3 layout):**
- Left (2/3): the original document with zoom
- Right (1/3): the assessment panel with the criteria

**Step by step:**
1. Open a paper by clicking "Korrigieren"
2. Read the paper in the left pane (zoom with +/-)
3. Set annotations by clicking on the document
4. Select the annotation type:
   - **RS** (red): spelling errors
   - **Gram** (blue): grammar errors
   - **Inhalt** (green): content remarks
   - **Kommentar**: general comments
5. Rate the 5 criteria in the right panel:
   - Spelling (15%)
   - Grammar (15%)
   - Content (40%)
   - Structure (15%)
   - Style (15%)
6. Click "EH-Vorschlaege laden" for AI assistance
7. Click "Gutachten generieren" for an AI draft
8. Edit the report as needed
9. Click "Speichern", then "Naechste Arbeit"

### 13.5 Fairness Analysis

After correcting several papers:
1. Click "Fairness-Dashboard" in the exam view
2. Check:
   - **Grade histogram**: Is the distribution realistic?
   - **Outliers**: Are there unusually high or low grades?
   - **Criteria heatmap**: Are the criteria rated consistently?
3. Use "Quick-Adjust" to make adjustments

### 13.6 PDF Export

1. In the exam view, click "PDF-Export"
2. Choose:
   - **Einzelgutachten**: PDF for one student
   - **Alle Gutachten**: one combined PDF for all papers
   - **Notenuebersicht**: overview of all grades
   - **Anmerkungen**: all annotations as PDF

### 13.7 Second Correction (Optional)

For official Abitur exams:
1. Finish the first correction (status: "Abgeschlossen")
2. Click "Zweitkorrektur starten"
3. The second examiner grades independently
4. At a difference of exactly 3 points: an agreement procedure is required
5. At a difference of 4 or more points: a third correction is triggered automatically

### 13.8 Frequently Asked Questions

**Q: Can I interrupt a correction and resume later?**
A: Yes, all changes are saved automatically.

**Q: What happens to my data?**
A: All data is stored locally on the school server. No cloud storage.

**Q: Can I completely overwrite the AI draft?**
A: Yes, the report is fully editable. The AI draft is only a starting point.

**Q: How does OCR recognition work?**
A: The system recognizes handwriting automatically. If legibility is poor, you can correct the text manually.

---

## 14. Dashboard Integration (Port 8000)

### 14.1 Changes in dashboard.py

The `openKlausurService()` function was updated:

```javascript
// Old version: opened port 8086 (backend)
// New version: opens port 3000 (Next.js frontend)
function openKlausurService() {
    let baseUrl;
    if (window.location.hostname === 'macmini') {
        baseUrl = 'http://macmini:3000';
    } else {
        baseUrl = 'http://localhost:3000';
    }
    window.open(baseUrl + '/admin/klausur-korrektur', '_blank');
}
```

### 14.2 New Frontend Features

- **Welcome tab**: first tab for new users, explaining the workflow
- **Direct-upload wizard**: 3-step wizard for the quick start
- **Drag & drop**: upload papers via drag & drop
- **localStorage persistence**: the system remembers returning users

---

**New file:** `.claude/rules/experimental-dashboard.md` (250 lines)

# Experimental Dashboard - Apple Weather Style UI

**Status:** In development
**Last updated:** 2026-01-24
**URL:** http://macmini:3001/dashboard-experimental

---

## Overview

The experimental dashboard implements an **Apple Weather app style** with:
- Ultra-transparent glassmorphism cards (~8% opacity)
- A dark starry-sky background with parallax
- White text on a monochrome design
- Floating messages (FloatingMessage) with a ~4% background
- Useful widgets: clock, weather, compass, charts

---

## Design Principles

| Principle | Implementation |
|-----------|----------------|
| **Transparency** | Cards at 8% opacity, messages at 4% |
| **Blending** | Elements blend into the background |
| **Monochrome** | White text, no colored accents |
| **Subtlety** | Restrained hover effects, gentle animations |
| **Usefulness** | Real information (time, weather) |

---

## File Structure

```
/studio-v2/
├── app/
│   └── dashboard-experimental/
│       └── page.tsx               # Main dashboard (740 lines)
│
├── components/
│   └── spatial-ui/
│       ├── index.ts               # Exports
│       ├── SpatialCard.tsx        # Original SpatialCard (unused)
│       └── FloatingMessage.tsx    # Floating messages
│
└── lib/
    └── spatial-ui/
        ├── index.ts               # Exports
        ├── depth-system.ts        # Design tokens
        ├── PerformanceContext.tsx # Adaptive quality
        └── FocusContext.tsx       # Focus mode
```

---

## Components

### GlassCard
Ultra-transparent card for all content.

```typescript
interface GlassCardProps {
  children: React.ReactNode
  className?: string
  onClick?: () => void
  size?: 'sm' | 'md' | 'lg'  // Padding: 16px, 20px, 24px
  delay?: number             // Fade-in delay in ms
}
```

**Styling:**
- Background: `rgba(255, 255, 255, 0.08)` (8%)
- Hover: `rgba(255, 255, 255, 0.12)` (12%)
- Border: `1px solid rgba(255, 255, 255, 0.1)`
- Blur: 24px (adaptive)
- Border radius: 24px (rounded-3xl)

### AnalogClock
Analog clock with a second hand.

- Hour hand: white, thick
- Minute hand: white/80%, thin
- Second hand: orange (#fb923c)
- 12 hour markers
- Updates every second

### Compass
Compass in the Apple Weather style.

```typescript
interface CompassProps {
  direction?: number  // Degrees (0 = north, 90 = east, etc.)
}
```

- North needle: red (#ef4444)
- South needle: white
- Cardinal directions: N (red), S, W, O

### BarChart
Bar chart for weekly statistics.

```typescript
interface BarChartProps {
  data: { label: string; value: number; highlight?: boolean }[]
  maxValue?: number
}
```

- Highlighted bars use a gradient (blue → purple)
- Normal bars: 20% white
- Labels below, values above

### ProgressRing
Circular progress indicator.

```typescript
interface ProgressRingProps {
  progress: number      // 0-100
  size?: number         // Default: 80px
  strokeWidth?: number  // Default: 6px
  label: string
  value: string
  color?: string        // Progress color
}
```
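
Rings like this are usually drawn as an SVG circle whose `stroke-dashoffset` encodes the percentage. The underlying math, sketched in Python for clarity (the component itself is TypeScript; the formula is the standard SVG dash trick, not code from the repo):

```python
import math

def ring_geometry(progress: float, size: float = 80, stroke_width: float = 6):
    """Return (circumference, dash_offset) for an SVG progress ring.

    stroke-dasharray is set to the circumference; stroke-dashoffset hides
    the fraction of the ring that is not yet filled.
    """
    radius = (size - stroke_width) / 2  # keep the stroke inside the viewBox
    circumference = 2 * math.pi * radius
    dash_offset = circumference * (1 - progress / 100)
    return circumference, dash_offset
```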

### TemperatureDisplay
Weather display with icon and temperature.

```typescript
interface TemperatureDisplayProps {
  temp: number
  condition: 'sunny' | 'cloudy' | 'rainy' | 'snowy' | 'partly_cloudy'
}
```

### FloatingMessage
Floating notifications sliding in from the right.

**Current state:**
- Background: 4% opacity
- Blur: 24px
- Border: `1px solid rgba(255, 255, 255, 0.12)`
- Auto-dismiss with a progress bar
- 3 response options: Antworten, Oeffnen, Spaeter
- Typewriter effect for the text

---

## Color Palette

| Element | Value |
|---------|------|
| Background | `from-slate-900 via-indigo-950 to-slate-900` |
| Card background | `rgba(255, 255, 255, 0.08)` |
| Card hover | `rgba(255, 255, 255, 0.12)` |
| Message background | `rgba(255, 255, 255, 0.04)` |
| Border | `rgba(255, 255, 255, 0.1)` |
| Text primary | `text-white` |
| Text secondary | `text-white/50` to `text-white/40` |
| Accent blue | `#60a5fa` |
| Accent purple | `#a78bfa` |
| Accent orange | `#fb923c` (second hand) |
| Accent red | `#ef4444` (compass north) |

---

## Performance System

The dashboard uses the **PerformanceContext** for adaptive quality:

| Quality level | Blur | Parallax | Animations |
|---------------|------|----------|------------|
| high | 24px | Yes | Spring |
| medium | 17px | Yes | Standard |
| low | 0px | No | Reduced |
| minimal | 0px | No | None |
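
Conceptually, the measured frame rate is bucketed into one of these quality levels. A sketch of such a mapping; the actual thresholds live in `PerformanceContext.tsx` and the cut-offs below are assumptions for illustration only:

```python
# Assumed FPS thresholds, NOT the values used by PerformanceContext.tsx.
def quality_for_fps(fps: float) -> str:
    """Bucket a measured FPS value into a quality level."""
    if fps >= 50:
        return "high"
    if fps >= 35:
        return "medium"
    if fps >= 20:
        return "low"
    return "minimal"
```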

The **FPS monitor** in the bottom-left corner shows:
- Current FPS
- Quality level
- Blur/parallax status

---

## Deployment

```bash
# 1. Sync to the Mac mini
rsync -avz --delete \
  --exclude 'node_modules' --exclude '.next' --exclude '.git' \
  /Users/benjaminadmin/Projekte/breakpilot-pwa/studio-v2/ \
  macmini:/Users/benjaminadmin/Projekte/breakpilot-pwa/studio-v2/

# 2. Build
ssh macmini "/usr/local/bin/docker compose \
  -f /Users/benjaminadmin/Projekte/breakpilot-pwa/docker-compose.yml \
  build --no-cache studio-v2"

# 3. Deploy
ssh macmini "/usr/local/bin/docker compose \
  -f /Users/benjaminadmin/Projekte/breakpilot-pwa/docker-compose.yml \
  up -d studio-v2"

# 4. Test in the browser:
# http://macmini:3001/dashboard-experimental
```

---

## Open Items / Ideas

### Short term
- [ ] Integrate real weather data via an API
- [ ] Dynamic compass direction (GPS or manual)
- [ ] Clicking a card leads to a detail page
- [ ] Light mode support (currently dark only)

### Medium term
- [ ] Drag & drop for card arrangement
- [ ] More widgets: calendar, appointments, reminders
- [ ] Animated transitions between pages
- [ ] Sound feedback on interactions

### Long term
- [ ] Customizable widgets
- [ ] Dashboard as the default start page
- [ ] Mobile-optimized version
- [ ] Integration with Apple Health / fitness data

---

## References

- **Apple Weather app** (iOS) - main inspiration
- **Dribbble shot:** https://dribbble.com/shots/26339637-Smart-Home-Dashboard-Glassmorphism-UI
- **Design tokens:** `/studio-v2/lib/spatial-ui/depth-system.ts`

---

## Change History

| Date | Change |
|------|--------|
| 2026-01-24 | FloatingMessage reduced to 4% opacity |
| 2026-01-24 | Compass, bar chart, analog clock added |
| 2026-01-24 | Cards reduced to 8% opacity |
| 2026-01-24 | Apple Weather style implemented |
| 2026-01-24 | First spatial UI system created |

---

**New file:** `.claude/rules/multi-agent-architecture.md` (295 lines)

# Multi-Agent Architecture - Developer Documentation

**Status:** Implemented
**Last updated:** 2025-01-15
**Module:** `/agent-core/`

---

## 1. Overview

The multi-agent architecture extends Breakpilot with a distributed agent system based on Mission Control concepts.

### Core Components

| Component | Path | Description |
|-----------|------|-------------|
| Session management | `/agent-core/sessions/` | Lifecycle & recovery |
| Shared brain | `/agent-core/brain/` | Long-term memory |
| Orchestrator | `/agent-core/orchestrator/` | Coordination |
| SOUL files | `/agent-core/soul/` | Agent personalities |

---

## 2. Agent Types

| Agent | Task | SOUL file |
|-------|------|-----------|
| **TutorAgent** | Learning support, answering questions | `tutor-agent.soul.md` |
| **GraderAgent** | Exam correction, grading | `grader-agent.soul.md` |
| **QualityJudge** | BQAS quality checks | `quality-judge.soul.md` |
| **AlertAgent** | Monitoring, notifications | `alert-agent.soul.md` |
| **Orchestrator** | Task coordination | `orchestrator.soul.md` |

---

## 3. Important Files

### Session Management
```
agent-core/sessions/
├── session_manager.py  # AgentSession, SessionManager, SessionState
├── heartbeat.py        # HeartbeatMonitor, HeartbeatClient
└── checkpoint.py       # CheckpointManager
```

### Shared Brain
```
agent-core/brain/
├── memory_store.py     # MemoryStore, Memory (with TTL)
├── context_manager.py  # ConversationContext, ContextManager
└── knowledge_graph.py  # KnowledgeGraph, Entity, Relationship
```

### Orchestrator
```
agent-core/orchestrator/
├── message_bus.py      # MessageBus, AgentMessage, MessagePriority
├── supervisor.py       # AgentSupervisor, AgentInfo, AgentStatus
└── task_router.py      # TaskRouter, RoutingRule, RoutingResult
```

---

## 4. Database Schema

The migration lives in:
`/backend/migrations/add_agent_core_tables.sql`

### Tables

1. **agent_sessions** - session data with checkpoints
2. **agent_memory** - long-term memory with TTL
3. **agent_messages** - audit trail for inter-agent communication

### Helper Functions

```sql
-- Clean up expired memories
SELECT cleanup_expired_agent_memory();

-- Clean up stale sessions
SELECT cleanup_stale_agent_sessions(48);  -- 48 hours
```

---

## 5. Voice-Service Integration

The `EnhancedTaskOrchestrator` extends the existing `TaskOrchestrator`:

```python
# voice-service/services/enhanced_task_orchestrator.py

from agent_core.sessions import SessionManager
from agent_core.orchestrator import MessageBus

class EnhancedTaskOrchestrator(TaskOrchestrator):
    # Uses session checkpoints for recovery
    # Routes complex tasks to specialized agents
    # Runs quality checks via BQAS
    ...
```

**Important:** The enhanced orchestrator is backward compatible and can run in parallel with the original.

---

## 6. BQAS Integration

The `QualityJudgeAgent` connects BQAS to the multi-agent system:

```python
# voice-service/bqas/quality_judge_agent.py

from bqas.judge import LLMJudge
from agent_core.orchestrator import MessageBus

class QualityJudgeAgent:
    # Evaluates responses in real time
    # Uses memory for consistent ratings
    # Receives evaluation requests via the message bus
    ...
```

---

## 7. Code Examples

### Creating a session

```python
from agent_core.sessions import SessionManager

manager = SessionManager(redis_client=redis, db_pool=pool)
session = await manager.create_session(
    agent_type="tutor-agent",
    user_id="user-123"
)
```

### Storing a memory

```python
from agent_core.brain import MemoryStore

store = MemoryStore(redis_client=redis, db_pool=pool)
await store.remember(
    key="student:123:progress",
    value={"level": 5, "score": 85},
    agent_id="tutor-agent",
    ttl_days=30
)
```

### Sending a message

```python
from agent_core.orchestrator import MessageBus, AgentMessage

bus = MessageBus(redis_client=redis)
await bus.publish(AgentMessage(
    sender="orchestrator",
    receiver="grader-agent",
    message_type="grade_request",
    payload={"exam_id": "exam-1"}
))
```

---

## 8. Running Tests

```bash
# All agent-core tests
cd agent-core && pytest -v

# With a coverage report
pytest --cov=. --cov-report=html

# Individual modules
pytest tests/test_session_manager.py -v
pytest tests/test_message_bus.py -v
```

---

## 9. Deployment Steps

### 1. Run the migration

```bash
psql -h localhost -U breakpilot -d breakpilot \
  -f backend/migrations/add_agent_core_tables.sql
```

### 2. Update the voice service

```bash
# Sync to the server
rsync -avz --exclude 'node_modules' --exclude '.git' \
  /path/to/breakpilot-pwa/ server:/path/to/breakpilot-pwa/

# Rebuild the container
docker compose build --no-cache voice-service

# Start it
docker compose up -d voice-service
```

### 3. Verify

```bash
# Check the session table
psql -c "SELECT COUNT(*) FROM agent_sessions;"

# Check the memory table
psql -c "SELECT COUNT(*) FROM agent_memory;"
```

---

## 10. Monitoring

### Metrics

| Metric | Description |
|--------|-------------|
| `agent_session_count` | Number of active sessions |
| `agent_heartbeat_delay_ms` | Time since the last heartbeat |
| `agent_message_latency_ms` | Message latency |
| `agent_memory_count` | Stored memories |
| `agent_routing_success_rate` | Successful routings |

### Health-Check Endpoints

```
GET /api/v1/agents/health        # Supervisor status
GET /api/v1/agents/sessions      # Active sessions
GET /api/v1/agents/memory/stats  # Memory statistics
```

---

## 11. Troubleshooting

### Problem: session not found

1. Check that Valkey is running: `redis-cli ping`
2. Check the session timeout (default 24h)
3. Check the heartbeat status

### Problem: message bus timeout

1. Check the Redis pub/sub status
2. Is the target agent registered?
3. Increase the timeout (default 30s)

### Problem: memory not found

1. Is the namespace correct?
2. Has the TTL expired?
3. Has the cleanup job run?

---

## 12. Extensions

### Adding a new agent

1. Create a SOUL file in `/agent-core/soul/`
2. Add a routing rule in `task_router.py`
3. Register a handler with the supervisor
4. Write tests
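
Conceptually, a routing rule pairs a match condition with a target agent. The sketch below is a self-contained stand-in to illustrate the idea; the real `RoutingRule` fields are defined in `task_router.py` and may differ:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """Illustrative stand-in for RoutingRule in task_router.py."""
    pattern: str       # message_type prefix to match
    target_agent: str  # agent that should handle the task

RULES = [
    Rule(pattern="grade_", target_agent="grader-agent"),
    Rule(pattern="tutor_", target_agent="tutor-agent"),
]

def route(message_type: str, default: str = "orchestrator") -> str:
    """Return the first matching agent, falling back to a default."""
    for rule in RULES:
        if message_type.startswith(rule.pattern):
            return rule.target_agent
    return default
```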

### Adding a new memory type

1. Define a key schema (e.g. `student:*:progress`)
2. Set a TTL
3. Document the access pattern

---
## 13. References

- **Agent-core README:** `/agent-core/README.md`
- **Migration:** `/backend/migrations/add_agent_core_tables.sql`
- **Voice-service integration:** `/voice-service/services/enhanced_task_orchestrator.py`
- **BQAS integration:** `/voice-service/bqas/quality_judge_agent.py`
- **Tests:** `/agent-core/tests/`

---

## 14. Change History

| Date | Version | Change |
|------|---------|--------|
| 2025-01-15 | 1.0.0 | Initial release |


---

**New file:** `.claude/rules/vocab-worksheet.md` (205 lines)

# Vocabulary Worksheet Generator - Developer Documentation

**Status:** Production
**Last updated:** 2026-02-08
**URL:** https://macmini/vocab-worksheet

---

## Overview

The vocabulary worksheet generator lets teachers:
- Scan textbook pages (PDF/image)
- Extract vocabulary automatically via OCR
- Generate print-ready worksheets in several formats

---

## Architecture

```
Browser (studio-v2)            klausur-service (Port 8086)     PostgreSQL
        │                               │                          │
        │ POST /upload-pdf-info         │                          │
        │ POST /process-single-page     │                          │
        │ POST /generate                │                          │
        │ POST /generate-nru            │ ──── vocab_sessions ────▶│
        │ GET  /worksheets/{id}/pdf     │ ──── vocab_entries ─────▶│
        │                               │ ──── vocab_worksheets ──▶│
        └───────────────────────────────┘                          │
```

---

## Worksheet Formats

### Standard Format

Classic worksheet with selectable exercise types:
- **English → German**: translate English words
- **German → English**: translate German words
- **Copy exercise**: write each word several times
- **Gap-fill sentences**: complete sentences with gaps

### NRU Format (new: 2026-02-08)

A special format for structured vocabulary practice:

**Page 1 (per scanned page): vocabulary table**
| English | German | Correction |
|---------|--------|-----------|
| word | (empty) | (empty) |

- The child writes the German translation
- A parent corrects it; the child writes the corrected version if needed

**Page 2 (per scanned page): practice sentences**
| German sentence |
|-----------------------------------|
| (2 empty lines for the EN translation) |

- The German sentence is given
- The child writes the English translation

**Automatic split:**
- Single words/phrases → vocabulary table
- Sentences (ending in `.!?` or longer than 50 characters) → practice sentences
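
The split rule is simple enough to state directly in code. A minimal sketch; the function name is illustrative and the real logic lives in the klausur-service backend:

```python
def is_sentence(entry: str) -> bool:
    """Route an OCR-extracted entry: sentences go to the practice-sentence
    pages, everything else to the vocabulary table."""
    text = entry.strip()
    return text.endswith((".", "!", "?")) or len(text) > 50

print(is_sentence("to look forward to"))               # → False
print(is_sentence("She has lived here for years."))    # → True
```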

---

## API Endpoints

### Standard Format
```
POST /api/v1/vocab/sessions/{session_id}/generate
Body: {
  "worksheet_types": ["en_to_de", "de_to_en", "copy", "gap_fill"],
  "title": "Vokabeln Unit 3",
  "include_solutions": true,
  "line_height": "normal" | "large" | "extra-large"
}
Response: { "id": "worksheet-uuid", ... }
```

### NRU Format
```
POST /api/v1/vocab/sessions/{session_id}/generate-nru
Body: {
  "title": "Vokabeltest",
  "include_solutions": true,
  "specific_pages": [1, 2]  // optional, 1-indexed
}
Response: {
  "worksheet_id": "uuid",
  "statistics": {
    "total_entries": 96,
    "vocabulary_count": 75,
    "sentence_count": 21,
    "source_pages": [1, 2, 3],
    "worksheet_pages": 6
  },
  "download_url": "/api/v1/vocab/worksheets/{id}/pdf",
  "solution_url": "/api/v1/vocab/worksheets/{id}/solution"
}
```

### PDF Download
```
GET /api/v1/vocab/worksheets/{worksheet_id}/pdf
GET /api/v1/vocab/worksheets/{worksheet_id}/solution
```

---

## Files

### Backend (klausur-service)

| File | Description |
|------|-------------|
| `vocab_worksheet_api.py` | Main API router with all endpoints |
| `nru_worksheet_generator.py` | NRU-format HTML/PDF generator |
| `vocab_session_store.py` | PostgreSQL database operations |
| `hybrid_vocab_extractor.py` | OCR extraction (PaddleOCR + LLM) |
| `tesseract_vocab_extractor.py` | Tesseract OCR fallback |

### Frontend (studio-v2)

| File | Description |
|------|-------------|
| `app/vocab-worksheet/page.tsx` | Main UI with template selection |

---

## Database Schema

```sql
-- Sessions
CREATE TABLE vocab_sessions (
    id UUID PRIMARY KEY,
    name VARCHAR(255),
    status VARCHAR(50),
    vocabulary_count INT,
    source_language VARCHAR(10),
    target_language VARCHAR(10),
    created_at TIMESTAMP
);

-- Vocabulary entries
CREATE TABLE vocab_entries (
    id UUID PRIMARY KEY,
    session_id UUID REFERENCES vocab_sessions(id),
    english TEXT,
    german TEXT,
    example_sentence TEXT,
    source_page INT,
    source_row INT,
    source_column INT
);

-- Generated worksheets
CREATE TABLE vocab_worksheets (
    id UUID PRIMARY KEY,
    session_id UUID REFERENCES vocab_sessions(id),
    worksheet_types JSONB,
    pdf_path VARCHAR(500),
    solution_path VARCHAR(500),
    generated_at TIMESTAMP
);
```
|
||||
|
||||
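The `statistics` object in the API response can be derived from rows of this schema. A sketch of that aggregation (the helper name is illustrative; field names mirror `vocab_entries` above):

```python
def worksheet_statistics(entries: list[dict]) -> dict:
    """Aggregate vocab_entries rows into the statistics object the API returns.

    Assumption: an entry with a non-empty example_sentence counts as a
    sentence, everything else as plain vocabulary.
    """
    sentences = [e for e in entries if e.get("example_sentence")]
    pages = sorted({e["source_page"] for e in entries if e.get("source_page")})
    return {
        "total_entries": len(entries),
        "vocabulary_count": len(entries) - len(sentences),
        "sentence_count": len(sentences),
        "source_pages": pages,
    }
```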
---

## Deployment

```bash
# 1. Sync backend
rsync -avz klausur-service/backend/ macmini:.../klausur-service/backend/

# 2. Sync frontend
rsync -avz studio-v2/app/vocab-worksheet/ macmini:.../studio-v2/app/vocab-worksheet/

# 3. Rebuild containers
ssh macmini "docker compose build --no-cache klausur-service studio-v2"

# 4. Start containers
ssh macmini "docker compose up -d klausur-service studio-v2"
```

---

## Extension: Adding New Formats

1. **Backend**: Create a new generator in `klausur-service/backend/`
2. **API**: Add a new endpoint in `vocab_worksheet_api.py`
3. **Frontend**: Add the format to the `worksheetFormats` array in `page.tsx`
4. **Docs**: Update this file

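Steps 1-2 can be sketched as a registry pattern, so each new generator only has to register itself under a format name. All names here are illustrative assumptions, not the actual code in `vocab_worksheet_api.py`:

```python
from typing import Callable

# format name -> generator producing worksheet HTML from entry rows
WORKSHEET_GENERATORS: dict[str, Callable[[list[dict]], str]] = {}

def register_format(name: str):
    """Decorator: register a generator under a format name."""
    def wrap(fn):
        WORKSHEET_GENERATORS[name] = fn
        return fn
    return wrap

@register_format("nru")
def generate_nru(entries):
    # The real implementation lives in nru_worksheet_generator.py.
    return "<html>...</html>"
```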
---

## Change History

| Date | Change |
|------|--------|
| 2026-02-08 | Added NRU format and template selection |
| 2026-02-07 | Initial implementation with standard format |
.claude/session-status-2026-01-25.md (Normal file, 117 lines)
@@ -0,0 +1,117 @@

# Session Status - January 25, 2026 (Updated)

## Summary

Open Data school import implemented successfully. School count increased from 17,610 to 30,355.

---

## Completed Tasks

### 1. Studio-v2 Build Error (Previous Session)
- **Status:** Done
- **Problem:** `Module not found: Can't resolve 'pdf-lib'`
- **Solution:** Replaced the wrong package.json on macmini, rsync with --delete

### 2. Open Data School Importer
- **Status:** Done
- **File:** `/edu-search-service/scripts/import_open_data.py`
- **Successfully imported:**
  - **NRW:** 5,637 schools (CSV from schulministerium.nrw.de)
  - **Berlin:** 930 schools (WFS/GeoJSON from gdi.berlin.de)
  - **Hamburg:** 543 schools (WFS/GML from geodienste.hamburg.de)

---

## Current School Statistics

```
Total: 30,355 schools

By federal state:
NW: 14,962 (incl. Open Data import)
BY: 2,803
NI: 2,192
BE: 1,475 (incl. WFS import)
SN: 1,425
SH: 1,329
HE: 1,290
RP: 1,066
HH: 902 (incl. WFS import)
TH: 799
BB: 562
SL: 533
MV: 367
ST: 250
BW: 200 (JedeSchule.de only - BW data must be purchased!)
HB: 200
```

---

## Open Data Importer - Available Sources

| Federal State | Status | Source | Format |
|---------------|--------|--------|--------|
| NW | Works | schulministerium.nrw.de | CSV |
| BE | Works | gdi.berlin.de | WFS/GeoJSON |
| HH | Works | geodienste.hamburg.de | WFS/GML |
| SN | 404 error | schuldatenbank.sachsen.de | API |
| BW | Paid | LOBW | - |
| BY | No open data | - | - |

---

## Importer Usage

```bash
# Import all available sources
cd /Users/benjaminadmin/Projekte/breakpilot-pwa/edu-search-service/scripts
python3 import_open_data.py --all --url http://macmini:8088

# Single federal state (dry run)
python3 import_open_data.py --state NW --dry-run

# With server URL
python3 import_open_data.py --state HH --url http://macmini:8088
```
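The importer's command-line surface can be sketched with argparse. This mirrors the flags shown in the usage examples; the default `--url` value and the mutually-exclusive grouping are assumptions, not taken from the real script:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="import_open_data.py")
    # Either import everything or a single federal state.
    group = p.add_mutually_exclusive_group(required=True)
    group.add_argument("--all", action="store_true", help="import all available sources")
    group.add_argument("--state", help="single federal state, e.g. NW, BE, HH")
    p.add_argument("--url", default="http://localhost:8088", help="edu-search-service URL (assumed default)")
    p.add_argument("--dry-run", action="store_true", help="parse only, do not POST")
    return p
```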

---

## Open Items

### Federal states without open data
- **BW:** School data must be PURCHASED (LOBW)
- **BY:** No open data API found
- **NI, HE, RP, etc.:** No central open data sources known

### Possible additional sources
- OSM (OpenStreetMap) - amenity=school
- Statistisches Bundesamt (Federal Statistical Office)
- Local school authority directories

---

## Container Status on macmini

| Container | Port | Status |
|-----------|------|--------|
| website | 3000 | Running |
| studio-v2 | 3001 | Running |
| edu-search-service | 8088 | Running |

---

## Important URLs

- School Directory: http://macmini:3000/admin/school-directory
- School Stats API: http://macmini:8088/api/v1/schools/stats
- School Search API: http://macmini:8088/api/v1/schools?q=NAME

---

## Possible Next Steps

1. **Test OSM import** - OpenStreetMap has school data (amenity=school)
2. **Search for more WFS sources** - other federal states may have geo portals
3. **Deduplication** - check whether multiple imports created duplicates
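For next step 3, deduplication could start with a normalized key per school. The key choice (lowercased name plus postal code) and the field names are assumptions for illustration; no such script exists yet:

```python
def dedupe_schools(schools: list[dict]) -> list[dict]:
    """Keep the first school seen per normalized (name, zip) key."""
    seen: set[tuple[str, str]] = set()
    unique = []
    for s in schools:
        key = (s.get("name", "").strip().lower(), s.get("zip", ""))
        if key not in seen:
            seen.add(key)
            unique.append(s)
    return unique
```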
.claude/settings.local.json (Normal file, 82 lines)
@@ -0,0 +1,82 @@

{
  "permissions": {
    "allow": [
      "Bash(textutil -convert txt:*)",
      "Bash(find:*)",
      "Bash(grep:*)",
      "Bash(wc:*)",
      "Bash(/bin/bash -c \"source venv/bin/activate && pip install pyjwt --quiet 2>/dev/null && python -c \"\"import sys; sys.path.insert(0, ''.''); from llm_gateway.models.chat import ChatMessage; print(''Models import OK'')\"\"\")",
      "Bash(/Users/benjaminadmin/Projekte/breakpilot-pwa/backend/venv/bin/python:*)",
      "Bash(./venv/bin/pip install:*)",
      "Bash(brew install:*)",
      "Bash(brew services start:*)",
      "Bash(ollama list:*)",
      "Bash(ollama pull:*)",
      "Bash(export LLM_GATEWAY_ENABLED=true)",
      "Bash(export LLM_GATEWAY_DEBUG=true)",
      "Bash(export LLM_API_KEYS=test-key-123)",
      "Bash(export ANTHROPIC_API_KEY=\"$ANTHROPIC_API_KEY\")",
      "Bash(source:*)",
      "Bash(pytest:*)",
      "Bash(./venv/bin/pytest:*)",
      "Bash(python3 -m pytest:*)",
      "Bash(export TAVILY_API_KEY=\"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\")",
      "Bash(python3:*)",
      "Bash(curl:*)",
      "Bash(pip3 install:*)",
      "WebSearch",
      "Bash(export ALERTS_AGENT_ENABLED=true)",
      "Bash(export LLM_API_KEYS=test-key)",
      "WebFetch(domain:docs.vast.ai)",
      "Bash(docker compose:*)",
      "Bash(docker ps:*)",
      "Bash(docker inspect:*)",
      "Bash(docker logs:*)",
      "Bash(ls:*)",
      "Bash(docker exec:*)",
      "WebFetch(domain:www.librechat.ai)",
      "Bash(export TAVILY_API_KEY=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)",
      "Bash(/Users/benjaminadmin/Projekte/breakpilot-pwa/backend/venv/bin/pip install:*)",
      "Bash(/Users/benjaminadmin/Projekte/breakpilot-pwa/backend/venv/bin/pytest -v tests/test_integration/test_librechat_tavily.py -x)",
      "WebFetch(domain:vast.ai)",
      "Bash(/Users/benjaminadmin/Projekte/breakpilot-pwa/backend/venv/bin/pytest tests/test_infra/test_vast_client.py tests/test_infra/test_vast_power.py -v --tb=short)",
      "Bash(go build:*)",
      "Bash(go test:*)",
      "Bash(npm install)",
      "Bash(/usr/local/bin/node:*)",
      "Bash(/opt/homebrew/bin/node --version)",
      "Bash(docker --version:*)",
      "Bash(docker build:*)",
      "Bash(docker images:*)",
      "Bash(/Users/benjaminadmin/Projekte/breakpilot-pwa/backend/venv/bin/pytest:*)",
      "Bash(npm test:*)",
      "Bash(/opt/homebrew/bin/node /opt/homebrew/bin/npm test -- --passWithNoTests)",
      "Bash(/usr/libexec/java_home:*)",
      "Bash(/opt/homebrew/bin/node:*)",
      "Bash(docker restart:*)",
      "Bash(tree:*)",
      "Bash(go mod tidy:*)",
      "Bash(go mod vendor:*)",
      "Bash(python -m pytest:*)",
      "Bash(lsof:*)",
      "Bash(python scripts/load_initial_seeds.py:*)",
      "Bash(python:*)",
      "Bash(docker cp:*)",
      "Bash(node --check:*)",
      "Bash(cat:*)",
      "Bash(DATABASE_URL='postgresql://breakpilot:breakpilot123@localhost:5432/breakpilot_db' python3:*)",
      "Bash(docker volume:*)",
      "Bash(docker stop:*)",
      "Bash(docker rm:*)",
      "Bash(docker run:*)",
      "Bash(docker network:*)",
      "Bash(breakpilot-edu-search:latest)",
      "Bash(jq:*)",
      "Bash(docker port:*)",
      "Bash(/dev/null curl -X POST http://localhost:8086/v1/crawl/queue -H 'Authorization: Bearer dev-key' -H 'Content-Type: application/json' -d '{\"\"\"\"university_id\"\"\"\": \"\"\"\"783333a1-91a3-4015-9299-45d10537dae4\"\"\"\", \"\"\"\"priority\"\"\"\": 10}')",
      "Bash(1)",
      "WebFetch(domain:uol.de)",
      "Bash(xargs:*)"
    ]
  }
}
.docker/build-ci-images.sh (Executable file, 31 lines)
@@ -0,0 +1,31 @@

#!/bin/bash
# Build CI Docker Images for BreakPilot
# Run this script on the Mac Mini to build the custom CI images

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "$SCRIPT_DIR")"

echo "=== Building BreakPilot CI Images ==="
echo "Project directory: $PROJECT_DIR"

cd "$PROJECT_DIR"

# Build Python CI image with WeasyPrint
echo ""
echo "Building breakpilot/python-ci:3.12 ..."
docker build \
  -t breakpilot/python-ci:3.12 \
  -t breakpilot/python-ci:latest \
  -f .docker/python-ci.Dockerfile \
  .

echo ""
echo "=== Build complete ==="
echo ""
echo "Images built:"
docker images | grep breakpilot/python-ci

echo ""
echo "To use in Woodpecker CI, the image is already configured in .woodpecker/main.yml"
.docker/python-ci.Dockerfile (Normal file, 51 lines)
@@ -0,0 +1,51 @@

# Custom Python CI Image with WeasyPrint Dependencies
# Build: docker build -t breakpilot/python-ci:3.12 -f .docker/python-ci.Dockerfile .
#
# This image includes all system libraries needed for:
# - WeasyPrint (PDF generation)
# - psycopg2 (PostgreSQL)
# - General Python testing

FROM python:3.12-slim

LABEL maintainer="BreakPilot Team"
LABEL description="Python 3.12 with WeasyPrint and test dependencies for CI"

# Install system dependencies in a single layer
RUN apt-get update && apt-get install -y --no-install-recommends \
    # WeasyPrint dependencies
    libpango-1.0-0 \
    libpangocairo-1.0-0 \
    libpangoft2-1.0-0 \
    libgdk-pixbuf-2.0-0 \
    libffi-dev \
    libcairo2 \
    libcairo2-dev \
    libgirepository1.0-dev \
    gir1.2-pango-1.0 \
    # PostgreSQL client (for psycopg2)
    libpq-dev \
    # Build tools (for some pip packages)
    gcc \
    g++ \
    # Useful utilities
    curl \
    git \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

# Pre-install commonly used Python packages for faster CI
RUN pip install --no-cache-dir \
    pytest \
    pytest-cov \
    pytest-asyncio \
    pytest-json-report \
    psycopg2-binary \
    weasyprint \
    httpx

# Set working directory
WORKDIR /app

# Default command
CMD ["python", "--version"]
.env.dev (Normal file, 115 lines)
@@ -0,0 +1,115 @@

# ============================================
# BreakPilot PWA - DEVELOPMENT Environment
# ============================================
# Usage: cp .env.dev .env
# Or: ./scripts/env-switch.sh dev
# ============================================

# ============================================
# Environment Identifier
# ============================================
ENVIRONMENT=development
COMPOSE_PROJECT_NAME=breakpilot-dev

# ============================================
# HashiCorp Vault (Secrets Management)
# ============================================
# In development, use the local Vault instance with dev token
VAULT_ADDR=http://localhost:8200
VAULT_DEV_TOKEN=breakpilot-dev-token

# ============================================
# Database
# ============================================
POSTGRES_USER=breakpilot
POSTGRES_PASSWORD=breakpilot_dev_123
POSTGRES_DB=breakpilot_dev
DATABASE_URL=postgres://breakpilot:breakpilot_dev_123@postgres:5432/breakpilot_dev?sslmode=disable

# Synapse DB (Matrix)
SYNAPSE_DB_PASSWORD=synapse_dev_123

# ============================================
# Authentication
# ============================================
# Development only - NOT for production!
JWT_SECRET=dev-jwt-secret-not-for-production-32chars
JWT_REFRESH_SECRET=dev-refresh-secret-32chars-change-me

# ============================================
# Service URLs (Development)
# ============================================
FRONTEND_URL=http://localhost:8000
BACKEND_URL=http://localhost:8000
CONSENT_SERVICE_URL=http://localhost:8081
BILLING_SERVICE_URL=http://localhost:8083
SCHOOL_SERVICE_URL=http://localhost:8084
KLAUSUR_SERVICE_URL=http://localhost:8086
WEBSITE_URL=http://localhost:3000

# ============================================
# E-Mail (Mailpit for Development)
# ============================================
# Mailpit catches all emails - view at http://localhost:8025
SMTP_HOST=mailpit
SMTP_PORT=1025
SMTP_USERNAME=
SMTP_PASSWORD=
SMTP_FROM_NAME=BreakPilot Dev
SMTP_FROM_ADDR=dev@breakpilot.local

# ============================================
# MinIO (Object Storage)
# ============================================
MINIO_ROOT_USER=breakpilot_dev
MINIO_ROOT_PASSWORD=breakpilot_dev_123
MINIO_ENDPOINT=localhost:9000

# ============================================
# Qdrant (Vector DB)
# ============================================
QDRANT_URL=http://localhost:6333

# ============================================
# API Keys (Optional for Dev)
# ============================================
# Leave empty for offline development
# Or add your test keys here
ANTHROPIC_API_KEY=
ANTHROPIC_DEFAULT_MODEL=claude-sonnet-4-20250514
ANTHROPIC_ENABLED=false

VAST_API_KEY=
VAST_INSTANCE_ID=
CONTROL_API_KEY=
VAST_AUTO_SHUTDOWN=true
VAST_AUTO_SHUTDOWN_MINUTES=30

VLLM_BASE_URL=
VLLM_ENABLED=false

# ============================================
# Embedding Configuration
# ============================================
# "local" = sentence-transformers (no API key needed)
# "openai" = OpenAI API (requires OPENAI_API_KEY)
EMBEDDING_BACKEND=local

# ============================================
# Stripe (Billing - Test Mode)
# ============================================
STRIPE_SECRET_KEY=
STRIPE_PUBLISHABLE_KEY=
STRIPE_WEBHOOK_SECRET=

# ============================================
# Debug Settings
# ============================================
DEBUG=true
GIN_MODE=debug
LOG_LEVEL=debug

# ============================================
# Jitsi (Video Conferencing)
# ============================================
JITSI_PUBLIC_URL=http://localhost:8443
.env.example (Normal file, 124 lines)
@@ -0,0 +1,124 @@

# BreakPilot PWA - Environment Configuration
# Copy this file to .env and adjust the values

# ================================================
# General
# ================================================
ENVIRONMENT=development
# ENVIRONMENT=production

# ================================================
# Security
# ================================================
# IMPORTANT: Use secure keys in production!
# Generate with: openssl rand -hex 32
JWT_SECRET=CHANGE_ME_RUN_openssl_rand_hex_32
JWT_REFRESH_SECRET=CHANGE_ME_RUN_openssl_rand_hex_32

# ================================================
# Keycloak (optional - recommended for production)
# ================================================
# If Keycloak is configured, it is used for authentication.
# Without Keycloak, local JWT is used (fine for development).
#
# KEYCLOAK_SERVER_URL=https://keycloak.breakpilot.app
# KEYCLOAK_REALM=breakpilot
# KEYCLOAK_CLIENT_ID=breakpilot-backend
# KEYCLOAK_CLIENT_SECRET=your-client-secret
# KEYCLOAK_VERIFY_SSL=true

# ================================================
# E-Mail Configuration
# ================================================

# === DEVELOPMENT (Mailpit - default values) ===
# Mailpit catches all e-mails and shows them at http://localhost:8025
SMTP_HOST=mailpit
SMTP_PORT=1025
SMTP_USERNAME=
SMTP_PASSWORD=
SMTP_FROM_NAME=BreakPilot
SMTP_FROM_ADDR=noreply@breakpilot.app
FRONTEND_URL=http://localhost:8000

# === PRODUCTION (examples for various providers) ===

# --- Option 1: Own mail server ---
# SMTP_HOST=mail.ihredomain.de
# SMTP_PORT=587
# SMTP_USERNAME=noreply@ihredomain.de
# SMTP_PASSWORD=ihr-sicheres-passwort
# SMTP_FROM_NAME=BreakPilot
# SMTP_FROM_ADDR=noreply@ihredomain.de
# FRONTEND_URL=https://app.ihredomain.de

# --- Option 2: SendGrid ---
# SMTP_HOST=smtp.sendgrid.net
# SMTP_PORT=587
# SMTP_USERNAME=apikey
# SMTP_PASSWORD=SG.xxxxxxxxxxxxxxxxxxxxx
# SMTP_FROM_NAME=BreakPilot
# SMTP_FROM_ADDR=noreply@ihredomain.de

# --- Option 3: Mailgun ---
# SMTP_HOST=smtp.mailgun.org
# SMTP_PORT=587
# SMTP_USERNAME=postmaster@mg.ihredomain.de
# SMTP_PASSWORD=ihr-mailgun-passwort
# SMTP_FROM_NAME=BreakPilot
# SMTP_FROM_ADDR=noreply@mg.ihredomain.de

# --- Option 4: Amazon SES ---
# SMTP_HOST=email-smtp.eu-central-1.amazonaws.com
# SMTP_PORT=587
# SMTP_USERNAME=AKIAXXXXXXXXXXXXXXXX
# SMTP_PASSWORD=ihr-ses-secret
# SMTP_FROM_NAME=BreakPilot
# SMTP_FROM_ADDR=noreply@ihredomain.de

# ================================================
# Database
# ================================================
POSTGRES_USER=breakpilot
POSTGRES_PASSWORD=breakpilot123
POSTGRES_DB=breakpilot_db
DATABASE_URL=postgres://breakpilot:breakpilot123@localhost:5432/breakpilot_db?sslmode=disable

# ================================================
# Optional: AI Integration
# ================================================
# ANTHROPIC_API_KEY=your-anthropic-api-key-here

# ================================================
# Breakpilot Drive - Learning Game
# ================================================
# Enables database storage for game sessions
GAME_USE_DATABASE=true

# LLM for quiz question generation (optional)
# If not set, static questions are used
GAME_LLM_MODEL=llama-3.1-8b
GAME_LLM_FALLBACK_MODEL=claude-3-haiku

# Feature flags
GAME_REQUIRE_AUTH=false
GAME_REQUIRE_BILLING=false
GAME_ENABLE_LEADERBOARDS=true

# Task costs for billing (when enabled)
GAME_SESSION_TASK_COST=1.0
GAME_QUICK_SESSION_TASK_COST=0.5

# ================================================
# Woodpecker CI/CD
# ================================================
# URL of the Woodpecker server
WOODPECKER_URL=http://woodpecker-server:8000
# API token for dashboard integration (pipeline start)
# Create at: http://macmini:8090 → User Settings → Personal Access Tokens
WOODPECKER_TOKEN=

# ================================================
# Debug
# ================================================
DEBUG=false
.env.staging (Normal file, 113 lines)
@@ -0,0 +1,113 @@

# ============================================
# BreakPilot PWA - STAGING Environment
# ============================================
# Usage: cp .env.staging .env
# Or: ./scripts/env-switch.sh staging
# ============================================

# ============================================
# Environment Identifier
# ============================================
ENVIRONMENT=staging
COMPOSE_PROJECT_NAME=breakpilot-staging

# ============================================
# HashiCorp Vault (Secrets Management)
# ============================================
# In staging, still use dev token but with staging secrets path
VAULT_ADDR=http://localhost:8200
VAULT_DEV_TOKEN=breakpilot-staging-token

# ============================================
# Database (Separate from Dev!)
# ============================================
POSTGRES_USER=breakpilot
POSTGRES_PASSWORD=staging_secure_password_change_this
POSTGRES_DB=breakpilot_staging
DATABASE_URL=postgres://breakpilot:staging_secure_password_change_this@postgres:5432/breakpilot_staging?sslmode=disable

# Synapse DB (Matrix)
SYNAPSE_DB_PASSWORD=synapse_staging_secure_123

# ============================================
# Authentication
# ============================================
# Staging secrets - more secure than dev, but not production
JWT_SECRET=staging-jwt-secret-32chars-change-me-now
JWT_REFRESH_SECRET=staging-refresh-secret-32chars-secure

# ============================================
# Service URLs (Staging - Different Ports)
# ============================================
FRONTEND_URL=http://localhost:8001
BACKEND_URL=http://localhost:8001
CONSENT_SERVICE_URL=http://localhost:8091
BILLING_SERVICE_URL=http://localhost:8093
SCHOOL_SERVICE_URL=http://localhost:8094
KLAUSUR_SERVICE_URL=http://localhost:8096
WEBSITE_URL=http://localhost:3001

# ============================================
# E-Mail (Still Mailpit for Safety)
# ============================================
# Mailpit catches all emails - no accidental sends to real users
SMTP_HOST=mailpit
SMTP_PORT=1025
SMTP_USERNAME=
SMTP_PASSWORD=
SMTP_FROM_NAME=BreakPilot Staging
SMTP_FROM_ADDR=staging@breakpilot.local

# ============================================
# MinIO (Object Storage)
# ============================================
MINIO_ROOT_USER=breakpilot_staging
MINIO_ROOT_PASSWORD=staging_minio_secure_123
MINIO_ENDPOINT=localhost:9002

# ============================================
# Qdrant (Vector DB)
# ============================================
QDRANT_URL=http://localhost:6335

# ============================================
# API Keys (Test Keys for Staging)
# ============================================
# Use test/sandbox API keys here
ANTHROPIC_API_KEY=
ANTHROPIC_DEFAULT_MODEL=claude-sonnet-4-20250514
ANTHROPIC_ENABLED=false

VAST_API_KEY=
VAST_INSTANCE_ID=
CONTROL_API_KEY=
VAST_AUTO_SHUTDOWN=true
VAST_AUTO_SHUTDOWN_MINUTES=30

VLLM_BASE_URL=
VLLM_ENABLED=false

# ============================================
# Embedding Configuration
# ============================================
EMBEDDING_BACKEND=local

# ============================================
# Stripe (Billing - Test Mode)
# ============================================
# Use Stripe TEST keys (sk_test_...)
STRIPE_SECRET_KEY=
STRIPE_PUBLISHABLE_KEY=
STRIPE_WEBHOOK_SECRET=

# ============================================
# Debug Settings (Reduced in Staging)
# ============================================
DEBUG=false
GIN_MODE=release
LOG_LEVEL=info

# ============================================
# Jitsi (Video Conferencing)
# ============================================
JITSI_PUBLIC_URL=http://localhost:8444
.github/dependabot.yml (vendored, Normal file, 132 lines)
@@ -0,0 +1,132 @@

# Dependabot Configuration for BreakPilot PWA
# This file configures Dependabot to automatically check for outdated dependencies
# and create pull requests to update them

version: 2
updates:
  # Go dependencies (consent-service)
  - package-ecosystem: "gomod"
    directory: "/consent-service"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    open-pull-requests-limit: 5
    labels:
      - "dependencies"
      - "go"
      - "security"
    commit-message:
      prefix: "deps(go):"
    groups:
      go-minor:
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"

  # Python dependencies (backend)
  - package-ecosystem: "pip"
    directory: "/backend"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    open-pull-requests-limit: 5
    labels:
      - "dependencies"
      - "python"
      - "security"
    commit-message:
      prefix: "deps(python):"
    groups:
      python-minor:
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"

  # Node.js dependencies (website)
  - package-ecosystem: "npm"
    directory: "/website"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    open-pull-requests-limit: 5
    labels:
      - "dependencies"
      - "javascript"
      - "security"
    commit-message:
      prefix: "deps(npm):"
    groups:
      npm-minor:
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"

  # GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    open-pull-requests-limit: 5
    labels:
      - "dependencies"
      - "github-actions"
    commit-message:
      prefix: "deps(actions):"

  # Docker base images
  - package-ecosystem: "docker"
    directory: "/consent-service"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    labels:
      - "dependencies"
      - "docker"
      - "security"
    commit-message:
      prefix: "deps(docker):"

  - package-ecosystem: "docker"
    directory: "/backend"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    labels:
      - "dependencies"
      - "docker"
      - "security"
    commit-message:
      prefix: "deps(docker):"

  - package-ecosystem: "docker"
    directory: "/website"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    labels:
      - "dependencies"
      - "docker"
      - "security"
    commit-message:
      prefix: "deps(docker):"
.github/workflows/ci.yml (vendored, Normal file, 503 lines)
@@ -0,0 +1,503 @@

name: CI/CD Pipeline
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: [main, develop]
|
||||
pull_request:
|
||||
branches: [main, develop]
|
||||
|
||||
env:
|
||||
GO_VERSION: '1.21'
|
||||
PYTHON_VERSION: '3.11'
|
||||
NODE_VERSION: '20'
|
||||
POSTGRES_USER: breakpilot
|
||||
POSTGRES_PASSWORD: breakpilot123
|
||||
POSTGRES_DB: breakpilot_test
|
||||
REGISTRY: ghcr.io
|
||||
IMAGE_PREFIX: ${{ github.repository_owner }}/breakpilot
|
||||
|
||||
jobs:
|
||||
# ==========================================
|
||||
# Go Consent Service Tests
|
||||
# ==========================================
|
||||
go-tests:
|
||||
name: Go Tests
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
services:
|
||||
postgres:
|
||||
image: postgres:16-alpine
|
||||
env:
|
||||
POSTGRES_USER: ${{ env.POSTGRES_USER }}
|
||||
POSTGRES_PASSWORD: ${{ env.POSTGRES_PASSWORD }}
|
||||
POSTGRES_DB: ${{ env.POSTGRES_DB }}
|
||||
ports:
|
||||
- 5432:5432
|
||||
options: >-
|
||||
--health-cmd pg_isready
|
||||
--health-interval 10s
|
||||
--health-timeout 5s
|
||||
--health-retries 5
|
||||
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Set up Go
|
||||
uses: actions/setup-go@v5
|
||||
with:
|
||||
go-version: ${{ env.GO_VERSION }}
|
||||
cache-dependency-path: consent-service/go.sum
|
||||
|
||||
- name: Download dependencies
|
||||
working-directory: ./consent-service
|
||||
run: go mod download
|
||||
|
||||
- name: Run Go Vet
|
||||
working-directory: ./consent-service
|
||||
run: go vet ./...
|
||||
|
||||
- name: Run Unit Tests
|
||||
working-directory: ./consent-service
|
||||
run: go test -v -race -coverprofile=coverage.out ./...
|
||||
env:
|
||||
DATABASE_URL: postgres://${{ env.POSTGRES_USER }}:${{ env.POSTGRES_PASSWORD }}@localhost:5432/${{ env.POSTGRES_DB }}?sslmode=disable
|
||||
JWT_SECRET: test-jwt-secret-for-ci
|
||||
JWT_REFRESH_SECRET: test-refresh-secret-for-ci
|
||||
|
||||
- name: Check Coverage
|
||||
working-directory: ./consent-service
|
||||
run: |
|
||||
go tool cover -func=coverage.out
|
||||
COVERAGE=$(go tool cover -func=coverage.out | grep total | awk '{print $3}' | sed 's/%//')
|
||||
echo "Total coverage: ${COVERAGE}%"
|
||||
if (( $(echo "$COVERAGE < 50" | bc -l) )); then
|
||||
echo "::warning::Coverage is below 50%"
|
||||
fi

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          files: ./consent-service/coverage.out
          flags: go
          name: go-coverage
        continue-on-error: true

  # ==========================================
  # Python Backend Tests
  # ==========================================
  python-tests:
    name: Python Tests
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          cache: 'pip'
          cache-dependency-path: backend/requirements.txt

      - name: Install dependencies
        working-directory: ./backend
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest pytest-cov pytest-asyncio httpx

      - name: Run Python Tests
        working-directory: ./backend
        run: pytest -v --cov=. --cov-report=xml --cov-report=term-missing
        continue-on-error: true

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          files: ./backend/coverage.xml
          flags: python
          name: python-coverage
        continue-on-error: true

  # ==========================================
  # Node.js Website Tests
  # ==========================================
  website-tests:
    name: Website Tests
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
          cache-dependency-path: website/package-lock.json

      - name: Install dependencies
        working-directory: ./website
        run: npm ci

      - name: Run TypeScript check
        working-directory: ./website
        run: npx tsc --noEmit
        continue-on-error: true

      - name: Run ESLint
        working-directory: ./website
        run: npm run lint
        continue-on-error: true

      - name: Build website
        working-directory: ./website
        run: npm run build
        env:
          NEXT_PUBLIC_BILLING_API_URL: http://localhost:8083
          NEXT_PUBLIC_APP_URL: http://localhost:3000

  # ==========================================
  # Linting
  # ==========================================
  lint:
    name: Linting
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ env.GO_VERSION }}

      - name: Run golangci-lint
        uses: golangci/golangci-lint-action@v4
        with:
          version: latest
          working-directory: ./consent-service
          args: --timeout=5m
        continue-on-error: true

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install Python linters
        run: pip install flake8 black isort

      - name: Run flake8
        working-directory: ./backend
        run: flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
        continue-on-error: true

      - name: Check Black formatting
        working-directory: ./backend
        run: black --check --diff .
        continue-on-error: true

  # ==========================================
  # Security Scan
  # ==========================================
  security:
    name: Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          severity: 'CRITICAL,HIGH'
          exit-code: '0'
        continue-on-error: true

      - name: Run Go security check
        uses: securego/gosec@master
        with:
          args: '-no-fail -fmt sarif -out results.sarif ./consent-service/...'
        continue-on-error: true

  # ==========================================
  # Docker Build & Push
  # ==========================================
  docker-build:
    name: Docker Build & Push
    runs-on: ubuntu-latest
    needs: [go-tests, python-tests, website-tests]
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GitHub Container Registry
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata for consent-service
        id: meta-consent
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-consent-service
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=sha,prefix=
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}

      - name: Build and push consent-service
        uses: docker/build-push-action@v5
        with:
          context: ./consent-service
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta-consent.outputs.tags }}
          labels: ${{ steps.meta-consent.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Extract metadata for backend
        id: meta-backend
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-backend
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=sha,prefix=
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}

      - name: Build and push backend
        uses: docker/build-push-action@v5
        with:
          context: ./backend
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta-backend.outputs.tags }}
          labels: ${{ steps.meta-backend.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Extract metadata for website
        id: meta-website
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-website
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=sha,prefix=
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}

      - name: Build and push website
        uses: docker/build-push-action@v5
        with:
          context: ./website
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta-website.outputs.tags }}
          labels: ${{ steps.meta-website.outputs.labels }}
          build-args: |
            NEXT_PUBLIC_BILLING_API_URL=${{ vars.NEXT_PUBLIC_BILLING_API_URL || 'http://localhost:8083' }}
            NEXT_PUBLIC_APP_URL=${{ vars.NEXT_PUBLIC_APP_URL || 'http://localhost:3000' }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  # ==========================================
  # Integration Tests
  # ==========================================
  integration-tests:
    name: Integration Tests
    runs-on: ubuntu-latest
    needs: [docker-build]

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Start services with Docker Compose
        run: |
          docker compose up -d postgres mailpit
          sleep 10

      - name: Run consent-service
        working-directory: ./consent-service
        run: |
          go build -o consent-service ./cmd/server
          ./consent-service &
          sleep 5
        env:
          DATABASE_URL: postgres://breakpilot:breakpilot123@localhost:5432/breakpilot_db?sslmode=disable
          JWT_SECRET: test-jwt-secret
          JWT_REFRESH_SECRET: test-refresh-secret
          SMTP_HOST: localhost
          SMTP_PORT: 1025

      - name: Health Check
        run: |
          curl -f http://localhost:8081/health || exit 1

      - name: Run Integration Tests
        run: |
          # Test Auth endpoints
          curl -s http://localhost:8081/api/v1/auth/health

          # Test Document endpoints
          curl -s http://localhost:8081/api/v1/documents
        continue-on-error: true

      - name: Stop services
        if: always()
        run: docker compose down

  # ==========================================
  # Deploy to Staging
  # ==========================================
  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: [docker-build, integration-tests]
    if: github.ref == 'refs/heads/develop' && github.event_name == 'push'
    environment:
      name: staging
      url: https://staging.breakpilot.app

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Deploy to staging server
        env:
          STAGING_HOST: ${{ secrets.STAGING_HOST }}
          STAGING_USER: ${{ secrets.STAGING_USER }}
          STAGING_SSH_KEY: ${{ secrets.STAGING_SSH_KEY }}
        run: |
          # This is a placeholder for actual deployment
          # Configure based on your staging infrastructure
          echo "Deploying to staging environment..."
          echo "Images to deploy:"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-consent-service:develop"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-backend:develop"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-website:develop"

          # Example: SSH deployment (uncomment when configured)
          # mkdir -p ~/.ssh
          # echo "$STAGING_SSH_KEY" > ~/.ssh/id_rsa
          # chmod 600 ~/.ssh/id_rsa
          # ssh -o StrictHostKeyChecking=no $STAGING_USER@$STAGING_HOST "cd /opt/breakpilot && docker compose pull && docker compose up -d"

      - name: Notify deployment
        run: |
          echo "## Staging Deployment" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "Successfully deployed to staging environment" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Deployed images:**" >> $GITHUB_STEP_SUMMARY
          echo "- consent-service: \`develop\`" >> $GITHUB_STEP_SUMMARY
          echo "- backend: \`develop\`" >> $GITHUB_STEP_SUMMARY
          echo "- website: \`develop\`" >> $GITHUB_STEP_SUMMARY

  # ==========================================
  # Deploy to Production
  # ==========================================
  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: [docker-build, integration-tests]
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    environment:
      name: production
      url: https://breakpilot.app

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Deploy to production server
        env:
          PROD_HOST: ${{ secrets.PROD_HOST }}
          PROD_USER: ${{ secrets.PROD_USER }}
          PROD_SSH_KEY: ${{ secrets.PROD_SSH_KEY }}
        run: |
          # This is a placeholder for actual deployment
          # Configure based on your production infrastructure
          echo "Deploying to production environment..."
          echo "Images to deploy:"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-consent-service:latest"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-backend:latest"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-website:latest"

          # Example: SSH deployment (uncomment when configured)
          # mkdir -p ~/.ssh
          # echo "$PROD_SSH_KEY" > ~/.ssh/id_rsa
          # chmod 600 ~/.ssh/id_rsa
          # ssh -o StrictHostKeyChecking=no $PROD_USER@$PROD_HOST "cd /opt/breakpilot && docker compose pull && docker compose up -d"

      - name: Notify deployment
        run: |
          echo "## Production Deployment" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "Successfully deployed to production environment" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Deployed images:**" >> $GITHUB_STEP_SUMMARY
          echo "- consent-service: \`latest\`" >> $GITHUB_STEP_SUMMARY
          echo "- backend: \`latest\`" >> $GITHUB_STEP_SUMMARY
          echo "- website: \`latest\`" >> $GITHUB_STEP_SUMMARY

  # ==========================================
  # Summary
  # ==========================================
  summary:
    name: CI Summary
    runs-on: ubuntu-latest
    needs: [go-tests, python-tests, website-tests, lint, security, docker-build, integration-tests]
    if: always()

    steps:
      - name: Check job results
        run: |
          echo "## CI/CD Summary" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "| Job | Status |" >> $GITHUB_STEP_SUMMARY
          echo "|-----|--------|" >> $GITHUB_STEP_SUMMARY
          echo "| Go Tests | ${{ needs.go-tests.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Python Tests | ${{ needs.python-tests.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Website Tests | ${{ needs.website-tests.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Linting | ${{ needs.lint.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Security | ${{ needs.security.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Docker Build | ${{ needs.docker-build.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Integration Tests | ${{ needs.integration-tests.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Docker Images" >> $GITHUB_STEP_SUMMARY
          echo "Images are pushed to: \`${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-*\`" >> $GITHUB_STEP_SUMMARY
.github/workflows/security.yml (new file, vendored, 222 lines)
@@ -0,0 +1,222 @@
name: Security Scanning

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]
  schedule:
    # Run security scans weekly on Sundays at midnight
    - cron: '0 0 * * 0'

jobs:
  # ==========================================
  # Secret Scanning
  # ==========================================
  secret-scan:
    name: Secret Scanning
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: TruffleHog Secret Scan
        uses: trufflesecurity/trufflehog@main
        with:
          extra_args: --only-verified

      - name: GitLeaks Secret Scan
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        continue-on-error: true

  # ==========================================
  # Dependency Vulnerability Scanning
  # ==========================================
  dependency-scan:
    name: Dependency Vulnerability Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run Trivy vulnerability scanner (filesystem)
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          severity: 'CRITICAL,HIGH'
          format: 'sarif'
          output: 'trivy-fs-results.sarif'
        continue-on-error: true

      - name: Upload Trivy scan results to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-fs-results.sarif'
        continue-on-error: true

  # ==========================================
  # Go Security Scan
  # ==========================================
  go-security:
    name: Go Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.21'

      - name: Run Gosec Security Scanner
        uses: securego/gosec@master
        with:
          args: '-no-fail -fmt sarif -out gosec-results.sarif ./consent-service/...'
        continue-on-error: true

      - name: Upload Gosec results to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'gosec-results.sarif'
        continue-on-error: true

      - name: Run govulncheck
        working-directory: ./consent-service
        run: |
          go install golang.org/x/vuln/cmd/govulncheck@latest
          govulncheck ./... || true

  # ==========================================
  # Python Security Scan
  # ==========================================
  python-security:
    name: Python Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install safety and bandit
        run: pip install safety bandit

      - name: Run Safety (dependency check)
        working-directory: ./backend
        run: safety check -r requirements.txt --full-report || true

      - name: Run Bandit (code security scan)
        working-directory: ./backend
        run: bandit -r . -f sarif -o bandit-results.sarif --exit-zero

      - name: Upload Bandit results to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: './backend/bandit-results.sarif'
        continue-on-error: true

  # ==========================================
  # Node.js Security Scan
  # ==========================================
  node-security:
    name: Node.js Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        working-directory: ./website
        run: npm ci

      - name: Run npm audit
        working-directory: ./website
        run: npm audit --audit-level=high || true

  # ==========================================
  # Docker Image Scanning
  # ==========================================
  docker-security:
    name: Docker Image Security
    runs-on: ubuntu-latest
    needs: [go-security, python-security, node-security]

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Build consent-service image
        run: docker build -t breakpilot/consent-service:scan ./consent-service

      - name: Run Trivy on consent-service
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'breakpilot/consent-service:scan'
          severity: 'CRITICAL,HIGH'
          format: 'sarif'
          output: 'trivy-consent-results.sarif'
        continue-on-error: true

      - name: Build backend image
        run: docker build -t breakpilot/backend:scan ./backend

      - name: Run Trivy on backend
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'breakpilot/backend:scan'
          severity: 'CRITICAL,HIGH'
          format: 'sarif'
          output: 'trivy-backend-results.sarif'
        continue-on-error: true

      - name: Upload Trivy results (consent-service)
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-consent-results.sarif'
          category: trivy-consent
        continue-on-error: true

      # The backend SARIF file was generated but never uploaded; give it its
      # own upload step with a distinct category so both results appear.
      - name: Upload Trivy results (backend)
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-backend-results.sarif'
          category: trivy-backend
        continue-on-error: true

  # ==========================================
  # Security Summary
  # ==========================================
  security-summary:
    name: Security Summary
    runs-on: ubuntu-latest
    needs: [secret-scan, dependency-scan, go-security, python-security, node-security, docker-security]
    if: always()

    steps:
      - name: Create security summary
        run: |
          echo "## Security Scan Results" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "| Scan Type | Status |" >> $GITHUB_STEP_SUMMARY
          echo "|-----------|--------|" >> $GITHUB_STEP_SUMMARY
          echo "| Secret Scanning | ${{ needs.secret-scan.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Dependency Scanning | ${{ needs.dependency-scan.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Go Security | ${{ needs.go-security.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Python Security | ${{ needs.python-security.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Node.js Security | ${{ needs.node-security.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Docker Security | ${{ needs.docker-security.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Notes" >> $GITHUB_STEP_SUMMARY
          echo "- Results are uploaded to the GitHub Security tab" >> $GITHUB_STEP_SUMMARY
          echo "- Weekly scheduled scans run on Sundays" >> $GITHUB_STEP_SUMMARY
.github/workflows/test.yml (new file, vendored, 244 lines)
@@ -0,0 +1,244 @@
name: Tests

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  go-tests:
    name: Go Tests
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_USER: breakpilot
          POSTGRES_PASSWORD: breakpilot123
          POSTGRES_DB: breakpilot_test
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.21'
          cache: true
          cache-dependency-path: consent-service/go.sum

      - name: Install Dependencies
        working-directory: ./consent-service
        run: go mod download

      - name: Run Tests
        working-directory: ./consent-service
        env:
          DATABASE_URL: postgres://breakpilot:breakpilot123@localhost:5432/breakpilot_test?sslmode=disable
          JWT_SECRET: test-secret-key-for-ci
          JWT_REFRESH_SECRET: test-refresh-secret-for-ci
        run: |
          go test -v -race -coverprofile=coverage.out ./...
          go tool cover -func=coverage.out

      - name: Check Coverage Threshold
        working-directory: ./consent-service
        run: |
          COVERAGE=$(go tool cover -func=coverage.out | grep total | awk '{print $3}' | sed 's/%//')
          echo "Total Coverage: $COVERAGE%"
          if (( $(echo "$COVERAGE < 70.0" | bc -l) )); then
            echo "Coverage $COVERAGE% is below threshold 70%"
            exit 1
          fi

      - name: Upload Coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          files: ./consent-service/coverage.out
          flags: go
          name: go-coverage

  python-tests:
    name: Python Tests
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'
          cache: 'pip'
          cache-dependency-path: backend/requirements.txt

      - name: Install Dependencies
        working-directory: ./backend
        run: |
          pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest pytest-cov pytest-asyncio

      - name: Run Tests
        working-directory: ./backend
        env:
          CONSENT_SERVICE_URL: http://localhost:8081
          JWT_SECRET: test-secret-key-for-ci
        run: |
          pytest -v --cov=. --cov-report=xml --cov-report=term

      - name: Check Coverage Threshold
        working-directory: ./backend
        run: |
          COVERAGE=$(python -c "import xml.etree.ElementTree as ET; tree = ET.parse('coverage.xml'); print(tree.getroot().attrib['line-rate'])")
          COVERAGE_PCT=$(echo "$COVERAGE * 100" | bc)
          echo "Total Coverage: ${COVERAGE_PCT}%"
          if (( $(echo "$COVERAGE_PCT < 60.0" | bc -l) )); then
            echo "Coverage ${COVERAGE_PCT}% is below threshold 60%"
            exit 1
          fi
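The line-rate extraction above can be tried standalone. The sketch below fabricates a minimal `coverage.xml` root element (real files are produced by coverage.py / pytest-cov, with many more attributes and children) and uses `awk` in place of `bc` for the percentage conversion, since `bc` is not installed everywhere:

```shell
# Minimal fabricated coverage.xml; only the line-rate attribute matters here.
cat > /tmp/coverage.xml <<'EOF'
<?xml version="1.0"?>
<coverage line-rate="0.85" branch-rate="0.70"></coverage>
EOF

# Same ElementTree one-liner as the workflow step (python3 used here).
COVERAGE=$(python3 -c "import xml.etree.ElementTree as ET; print(ET.parse('/tmp/coverage.xml').getroot().attrib['line-rate'])")
# awk instead of bc: scale the 0..1 ratio up to a percentage.
COVERAGE_PCT=$(awk -v r="$COVERAGE" 'BEGIN { printf "%.1f", r * 100 }')
echo "Total Coverage: ${COVERAGE_PCT}%"
```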

      - name: Upload Coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          files: ./backend/coverage.xml
          flags: python
          name: python-coverage

  integration-tests:
    name: Integration Tests
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Setup Docker Buildx
        uses: docker/setup-buildx-action@v3

      # Note: the legacy docker-compose v1 binary is no longer shipped on
      # ubuntu-latest runners, so the Compose v2 plugin syntax is used.
      - name: Start Services
        run: |
          docker compose up -d
          docker compose ps

      - name: Wait for Postgres
        run: |
          timeout 60 bash -c 'until docker compose exec -T postgres pg_isready -U breakpilot; do sleep 2; done'

      - name: Wait for Consent Service
        run: |
          timeout 60 bash -c 'until curl -f http://localhost:8081/health; do sleep 2; done'

      - name: Wait for Backend
        run: |
          timeout 60 bash -c 'until curl -f http://localhost:8000/health; do sleep 2; done'

      - name: Wait for Mailpit
        run: |
          timeout 60 bash -c 'until curl -f http://localhost:8025/api/v1/info; do sleep 2; done'
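The four wait steps above repeat the same `timeout`/`until curl` idiom. A generic helper in the same spirit, a sketch only (the URLs and timeout values are illustrative, not from the repository):

```shell
# Poll a URL until it answers or the deadline passes.
# $1 = URL, $2 = timeout in seconds (default 60).
wait_for() {
  local url="$1"
  local deadline=$(( $(date +%s) + ${2:-60} ))
  until curl -fsS "$url" >/dev/null 2>&1; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "timed out waiting for $url" >&2
      return 1
    fi
    sleep 2
  done
}

# Example usage: wait_for http://localhost:8081/health 60
```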

      - name: Run Integration Tests
        run: |
          chmod +x ./scripts/integration-tests.sh
          ./scripts/integration-tests.sh

      - name: Show Service Logs on Failure
        if: failure()
        run: |
          echo "=== Consent Service Logs ==="
          docker compose logs consent-service
          echo "=== Backend Logs ==="
          docker compose logs backend
          echo "=== Postgres Logs ==="
          docker compose logs postgres

      - name: Cleanup
        if: always()
        run: docker compose down -v

  lint-go:
    name: Go Lint
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.21'

      - name: Run golangci-lint
        uses: golangci/golangci-lint-action@v3
        with:
          version: latest
          working-directory: consent-service
          args: --timeout=5m

  lint-python:
    name: Python Lint
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install Dependencies
        run: |
          pip install flake8 black mypy

      - name: Run Black
        working-directory: ./backend
        run: black --check .

      - name: Run Flake8
        working-directory: ./backend
        run: flake8 . --max-line-length=120 --exclude=venv

  security-scan:
    name: Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Run Trivy Security Scan
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'

      - name: Upload Trivy Results to GitHub Security
        uses: github/codeql-action/upload-sarif@v2
        if: always()
        with:
          sarif_file: 'trivy-results.sarif'

  all-checks:
    name: All Checks Passed
    runs-on: ubuntu-latest
    needs: [go-tests, python-tests, integration-tests, lint-go, lint-python, security-scan]

    steps:
      - name: All Tests Passed
        run: echo "All tests and checks passed successfully!"
.gitignore (vendored, 54 lines added)
@@ -143,3 +143,57 @@ coverage/
*.safetensors
models/
.claude/settings.local.json

# ============================================
# IDE Plugins & AI Tools
# ============================================
.continue/
CLAUDE_CONTINUE.md

# ============================================
# Misplaced / Large Directories
# ============================================
backend/BreakpilotDrive/
backend/website/
backend/screenshots/
**/za-download-9/

# ============================================
# Debug & Temp Artifacts
# ============================================
*.command
ssh_key*.txt
anleitung.txt
fix_permissions.txt

# ============================================
# Compiled Go Binaries
# ============================================
billing-service/billing-service
ai-compliance-sdk/server
consent-service/server
edu-search-service/server
edu-search-service/edu-search-service

# ============================================
# Large Document Archives (PDFs, DOCX)
# ============================================
docs/za-download/
docs/za-download-2/
docs/za-download-3/
*.pdf
*.docx
*.xlsx
*.pptx
*.numbers

# ============================================
# MkDocs Build Output
# ============================================
docs-site/

# ============================================
# Removed projects (no longer active)
# ============================================
BreakpilotDrive/
billing-service/
.gitleaks.toml (new file, 77 lines)
@@ -0,0 +1,77 @@
# Gitleaks Configuration for BreakPilot
# https://github.com/gitleaks/gitleaks
#
# Run locally: gitleaks detect --source . -v
# Pre-commit: gitleaks protect --staged -v

title = "BreakPilot Gitleaks Configuration"

# Use the default rules plus custom rules
[extend]
useDefault = true

# Custom rules for BreakPilot-specific patterns
[[rules]]
id = "anthropic-api-key"
description = "Anthropic API Key"
regex = '''sk-ant-api[0-9a-zA-Z-_]{20,}'''
tags = ["api", "anthropic"]
keywords = ["sk-ant-api"]

[[rules]]
id = "vast-api-key"
description = "vast.ai API Key"
regex = '''(?i)(vast[_-]?api[_-]?key|vast[_-]?key)\s*[=:]\s*['"]?([a-zA-Z0-9-_]{20,})['"]?'''
tags = ["api", "vast"]
keywords = ["vast"]

[[rules]]
id = "stripe-secret-key"
description = "Stripe Secret Key"
regex = '''sk_live_[0-9a-zA-Z]{24,}'''
tags = ["api", "stripe"]
keywords = ["sk_live"]

[[rules]]
id = "stripe-restricted-key"
description = "Stripe Restricted Key"
regex = '''rk_live_[0-9a-zA-Z]{24,}'''
tags = ["api", "stripe"]
keywords = ["rk_live"]

[[rules]]
id = "jwt-secret-hardcoded"
description = "Hardcoded JWT Secret"
regex = '''(?i)(jwt[_-]?secret|jwt[_-]?key)\s*[=:]\s*['"]([^'"]{32,})['"]'''
tags = ["secret", "jwt"]
keywords = ["jwt"]

# Allowlist for false positives
[allowlist]
description = "Global allowlist"
paths = [
  '''\.env\.example$''',
  '''\.env\.template$''',
  '''docs/.*\.md$''',
  '''SBOM\.md$''',
  '''.*_test\.py$''',
  '''.*_test\.go$''',
  '''test_.*\.py$''',
  '''.*\.bak$''',
  '''node_modules/.*''',
  '''venv/.*''',
  '''\.git/.*''',
]

# Specific commit allowlist (for already-rotated secrets)
commits = []

# Regex patterns to ignore
regexes = [
  '''REPLACE_WITH_REAL_.*''',
  '''your-.*-key-change-in-production''',
  '''breakpilot-dev-.*''',
  '''DEVELOPMENT-ONLY-.*''',
  '''placeholder.*''',
  '''example.*key''',
]
|
||||
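As a quick sanity check, the custom regexes above can be exercised directly with Python's `re` module. This is a standalone sketch using fake keys; the real scanning is done by gitleaks itself, together with its default ruleset and the `[allowlist]`, which this sketch omits.

```python
import re

# The custom rule regexes from .gitleaks.toml, copied here for illustration.
RULES = {
    "anthropic-api-key": r"sk-ant-api[0-9a-zA-Z-_]{20,}",
    "stripe-secret-key": r"sk_live_[0-9a-zA-Z]{24,}",
    "jwt-secret-hardcoded": r"(?i)(jwt[_-]?secret|jwt[_-]?key)\s*[=:]\s*['\"]([^'\"]{32,})['\"]",
}

def scan(text):
    """Return the ids of all custom rules whose regex matches the text."""
    return [rule_id for rule_id, pattern in RULES.items() if re.search(pattern, text)]

# A fake key (not a real credential) trips the Anthropic rule:
print(scan("ANTHROPIC_API_KEY=sk-ant-api03-aaaaaaaaaaaaaaaaaaaaaaaa"))
# A short placeholder stays below the 32-character JWT threshold:
print(scan('JWT_SECRET = "changeme"'))
```

Note that the JWT rule only fires on quoted values of 32 or more characters, which is why short development placeholders pass.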
152
.pre-commit-config.yaml
Normal file
@@ -0,0 +1,152 @@
# Pre-commit hooks for BreakPilot
# Installation: pip install pre-commit
# Activation: pre-commit install

repos:
  # Go Hooks
  - repo: local
    hooks:
      - id: go-test
        name: Go Tests
        entry: bash -c 'cd consent-service && go test -short ./...'
        language: system
        pass_filenames: false
        files: \.go$
        stages: [commit]

      - id: go-fmt
        name: Go Format
        entry: bash -c 'cd consent-service && gofmt -l -w .'
        language: system
        pass_filenames: false
        files: \.go$
        stages: [commit]

      - id: go-vet
        name: Go Vet
        entry: bash -c 'cd consent-service && go vet ./...'
        language: system
        pass_filenames: false
        files: \.go$
        stages: [commit]

      - id: golangci-lint
        name: Go Lint (golangci-lint)
        entry: bash -c 'cd consent-service && golangci-lint run --timeout=5m'
        language: system
        pass_filenames: false
        files: \.go$
        stages: [commit]

  # Python Hooks
  - repo: local
    hooks:
      - id: pytest
        name: Python Tests
        entry: bash -c 'cd backend && pytest -x'
        language: system
        pass_filenames: false
        files: \.py$
        stages: [commit]

      - id: black
        name: Black Format
        entry: black
        language: python
        types: [python]
        args: [--line-length=120]
        stages: [commit]

      - id: flake8
        name: Flake8 Lint
        entry: flake8
        language: python
        types: [python]
        args: [--max-line-length=120, --exclude=venv]
        stages: [commit]

  # General Hooks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
        name: Trim Trailing Whitespace
      - id: end-of-file-fixer
        name: Fix End of Files
      - id: check-yaml
        name: Check YAML
        args: [--allow-multiple-documents]
      - id: check-json
        name: Check JSON
      - id: check-added-large-files
        name: Check Large Files
        args: [--maxkb=500]
      - id: detect-private-key
        name: Detect Private Keys
      - id: mixed-line-ending
        name: Fix Mixed Line Endings

  # Security Checks
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0
    hooks:
      - id: detect-secrets
        name: Detect Secrets
        args: ['--baseline', '.secrets.baseline']
        exclude: |
          (?x)^(
              .*\.lock|
              .*\.sum|
              package-lock\.json
          )$

  # =============================================
  # DevSecOps: Gitleaks (Secrets Detection)
  # =============================================
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.1
    hooks:
      - id: gitleaks
        name: Gitleaks (secrets detection)
        entry: gitleaks protect --staged -v --config .gitleaks.toml
        language: golang
        pass_filenames: false

  # =============================================
  # DevSecOps: Semgrep (SAST)
  # =============================================
  - repo: https://github.com/returntocorp/semgrep
    rev: v1.52.0
    hooks:
      - id: semgrep
        name: Semgrep (SAST)
        args:
          - --config=auto
          - --config=.semgrep.yml
          - --severity=ERROR
        types_or: [python, javascript, typescript, go]
        stages: [commit]

  # =============================================
  # DevSecOps: Bandit (Python Security)
  # =============================================
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.6
    hooks:
      - id: bandit
        name: Bandit (Python security)
        args: ["-r", "backend/", "-ll", "-x", "backend/tests/*"]
        files: ^backend/.*\.py$
        stages: [commit]

  # Branch Protection
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: no-commit-to-branch
        name: Protect main/develop branches
        args: ['--branch', 'main', '--branch', 'develop']

# Configuration
default_stages: [commit]
fail_fast: false
147
.semgrep.yml
Normal file
@@ -0,0 +1,147 @@
# Semgrep Configuration for BreakPilot
# https://semgrep.dev/
#
# Run locally: semgrep scan --config auto
# Run with this config: semgrep scan --config .semgrep.yml

rules:
  # =============================================
  # Python/FastAPI Security Rules
  # =============================================

  - id: hardcoded-secret-in-string
    patterns:
      - pattern-either:
          - pattern: |
              $VAR = "...$SECRET..."
          - pattern: |
              $VAR = '...$SECRET...'
    message: "Potential hardcoded secret detected. Use environment variables or Vault."
    languages: [python]
    severity: WARNING
    metadata:
      category: security
      cwe: "CWE-798: Use of Hard-coded Credentials"

  - id: sql-injection-fastapi
    patterns:
      - pattern-either:
          - pattern: |
              $CURSOR.execute(f"...{$USER_INPUT}...")
          - pattern: |
              $CURSOR.execute("..." + $USER_INPUT + "...")
          - pattern: |
              $CURSOR.execute("..." % $USER_INPUT)
    message: "Potential SQL injection. Use parameterized queries."
    languages: [python]
    severity: ERROR
    metadata:
      category: security
      cwe: "CWE-89: SQL Injection"
      owasp: "A03:2021 - Injection"

  - id: command-injection
    patterns:
      - pattern-either:
          - pattern: os.system($USER_INPUT)
          - pattern: subprocess.call($USER_INPUT, shell=True)
          - pattern: subprocess.run($USER_INPUT, shell=True)
          - pattern: subprocess.Popen($USER_INPUT, shell=True)
    message: "Potential command injection. Avoid shell=True with user input."
    languages: [python]
    severity: ERROR
    metadata:
      category: security
      cwe: "CWE-78: OS Command Injection"
      owasp: "A03:2021 - Injection"

  - id: insecure-jwt-algorithm
    patterns:
      - pattern-either:
          - pattern: jwt.decode(..., algorithms=["none"], ...)
          - pattern: jwt.decode(..., algorithms=["HS256"], verify=False, ...)
    message: "Insecure JWT algorithm or verification disabled."
    languages: [python]
    severity: ERROR
    metadata:
      category: security
      cwe: "CWE-347: Improper Verification of Cryptographic Signature"

  - id: path-traversal
    patterns:
      - pattern-either:
          - pattern: open(... + $USER_INPUT + ...)
          - pattern: open(f"...{$USER_INPUT}...")
          - pattern: Path(...) / $USER_INPUT
    message: "Potential path traversal. Validate and sanitize file paths."
    languages: [python]
    severity: WARNING
    metadata:
      category: security
      cwe: "CWE-22: Path Traversal"

  - id: insecure-pickle
    patterns:
      - pattern-either:
          - pattern: pickle.loads($DATA)
          - pattern: pickle.load($FILE)
    message: "Pickle deserialization is insecure. Use JSON or other safe formats."
    languages: [python]
    severity: WARNING
    metadata:
      category: security
      cwe: "CWE-502: Deserialization of Untrusted Data"

  # =============================================
  # Go Security Rules
  # =============================================

  - id: go-sql-injection
    patterns:
      - pattern-either:
          - pattern: |
              $DB.Query(fmt.Sprintf("...", $USER_INPUT))
          - pattern: |
              $DB.Exec(fmt.Sprintf("...", $USER_INPUT))
    message: "Potential SQL injection in Go. Use parameterized queries."
    languages: [go]
    severity: ERROR
    metadata:
      category: security
      cwe: "CWE-89: SQL Injection"

  - id: go-hardcoded-credentials
    patterns:
      - pattern: |
          $VAR := "..."
      - metavariable-regex:
          metavariable: $VAR
          regex: (password|secret|apiKey|api_key|token)
    message: "Potential hardcoded credential. Use environment variables."
    languages: [go]
    severity: WARNING
    metadata:
      category: security
      cwe: "CWE-798: Use of Hard-coded Credentials"

  # =============================================
  # JavaScript/TypeScript Security Rules
  # =============================================

  - id: js-xss-innerhtml
    patterns:
      - pattern: $EL.innerHTML = $USER_INPUT
    message: "Potential XSS via innerHTML. Use textContent or sanitize input."
    languages: [javascript, typescript]
    severity: WARNING
    metadata:
      category: security
      cwe: "CWE-79: Cross-site Scripting"
      owasp: "A03:2021 - Injection"

  - id: js-eval
    patterns:
      - pattern-either:
          - pattern: eval($CODE)
          - pattern: new Function($CODE)
    message: "Avoid eval() and new Function() with dynamic input."
    languages: [javascript, typescript]
    severity: ERROR
    metadata:
      category: security
      cwe: "CWE-95: Improper Neutralization of Directives in Dynamically Evaluated Code"
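To see what the `sql-injection-fastapi` rule is guarding against, here is a minimal Python/sqlite3 sketch contrasting the flagged string-interpolation style with the parameterized form the rule's message recommends. The table and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_unsafe(cursor, name):
    # Flagged by the sql-injection-fastapi rule: user input is
    # interpolated directly into the SQL string.
    return cursor.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(cursor, name):
    # Parameterized query: the driver escapes the value, so the rule passes.
    return cursor.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

cur = conn.cursor()
print(find_user_safe(cur, "alice"))          # [(1,)]
# A classic injection payload returns every row through the unsafe variant,
# but nothing through the parameterized one:
print(find_user_unsafe(cur, "' OR '1'='1"))  # [(1,)]
print(find_user_safe(cur, "' OR '1'='1"))    # []
```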
66
.trivy.yaml
Normal file
@@ -0,0 +1,66 @@
# Trivy Configuration for BreakPilot
# https://trivy.dev/
#
# Run: trivy image breakpilot-pwa-backend:latest
# Run filesystem: trivy fs .
# Run config: trivy config .

# Scan settings
scan:
  # Security checks to perform
  security-checks:
    - vuln    # Vulnerabilities
    - config  # Misconfigurations
    - secret  # Secrets in files

# Vulnerability settings
vulnerability:
  # Vulnerability types to scan for
  type:
    - os       # OS packages
    - library  # Application dependencies

  # Ignore unfixed vulnerabilities
  ignore-unfixed: false

# Severity settings
severity:
  - CRITICAL
  - HIGH
  - MEDIUM
  # - LOW  # Uncomment to include low severity

# Output format
format: table

# Exit code on findings
exit-code: 1

# Timeout
timeout: 10m

# Cache directory
cache-dir: /tmp/trivy-cache

# Skip files/directories
skip-dirs:
  - node_modules
  - venv
  - .venv
  - __pycache__
  - .git
  - .idea
  - .vscode

skip-files:
  - "*.md"
  - "*.txt"
  - "*.log"

# Ignore specific vulnerabilities (add after review)
ignorefile: .trivyignore

# SBOM generation
sbom:
  format: cyclonedx
  output: sbom.json
9
.trivyignore
Normal file
@@ -0,0 +1,9 @@
# Trivy Ignore File for BreakPilot
# Add vulnerability IDs to ignore after security review
# Format: CVE-XXXX-XXXXX or GHSA-xxxx-xxxx-xxxx

# Example (remove after adding real ignores):
# CVE-2021-12345  # Reason: Not exploitable in our context

# Reviewed and accepted risks:
# (Add vulnerabilities here after security team review)
132
.woodpecker/auto-fix.yml
Normal file
@@ -0,0 +1,132 @@
# Woodpecker CI Auto-Fix Pipeline
# Automatic repair of failed tests
#
# Runs daily at 2:00 a.m.
# Analyzes open backlog items and attempts automatic fixes

when:
  - event: cron
    cron: "0 2 * * *"  # Daily at 2:00 a.m.

clone:
  git:
    image: woodpeckerci/plugin-git
    settings:
      depth: 1
    extra_hosts:
      - macmini:192.168.178.100

steps:
  # ========================================
  # 1. Fetch Failed Tests from Backlog
  # ========================================

  fetch-backlog:
    image: curlimages/curl:latest
    commands:
      - |
        curl -s "http://backend:8000/api/tests/backlog?status=open&priority=critical" \
          -o backlog-critical.json
        curl -s "http://backend:8000/api/tests/backlog?status=open&priority=high" \
          -o backlog-high.json
      - echo "=== Critical tests ==="
      - cat backlog-critical.json | head -50
      - echo "=== High priority ==="
      - cat backlog-high.json | head -50

  # ========================================
  # 2. Analyze and Classify Errors
  # ========================================

  analyze-errors:
    image: python:3.12-slim
    commands:
      - |
        python3 << 'EOF'
        import json

        def classify_error(error_type, error_msg):
            """Classify errors by their auto-fix potential."""
            auto_fixable = {
                'nil_pointer': 'high',
                'import_error': 'high',
                'undefined_variable': 'medium',
                'type_error': 'medium',
                'assertion': 'low',
                'timeout': 'low',
                'logic_error': 'manual'
            }
            return auto_fixable.get(error_type, 'manual')

        # Load backlog
        try:
            with open('backlog-critical.json') as f:
                critical = json.load(f)
            with open('backlog-high.json') as f:
                high = json.load(f)
        except (OSError, json.JSONDecodeError):
            print("No backlog data found")
            raise SystemExit(0)

        all_items = critical.get('items', []) + high.get('items', [])

        auto_fix_candidates = []
        for item in all_items:
            fix_potential = classify_error(
                item.get('error_type', 'unknown'),
                item.get('error_message', '')
            )
            if fix_potential in ['high', 'medium']:
                auto_fix_candidates.append({
                    'id': item.get('id'),
                    'test_name': item.get('test_name'),
                    'error_type': item.get('error_type'),
                    'fix_potential': fix_potential
                })

        print(f"Auto-fix candidates: {len(auto_fix_candidates)}")
        with open('auto-fix-candidates.json', 'w') as f:
            json.dump(auto_fix_candidates, f, indent=2)
        EOF
    depends_on:
      - fetch-backlog

  # ========================================
  # 3. Generate Fix Suggestions (Placeholder)
  # ========================================

  generate-fixes:
    image: python:3.12-slim
    commands:
      - |
        echo "Auto-fix generation is planned for Phase 4"
        echo "Currently only suggestions are generated"

        # This is where the Claude API or another LLM would be called:
        # python3 scripts/auto-fix-agent.py auto-fix-candidates.json

        echo "Fix suggestions would be generated here"
    depends_on:
      - analyze-errors

  # ========================================
  # 4. Report Results
  # ========================================

  report-results:
    image: curlimages/curl:latest
    commands:
      - |
        curl -X POST "http://backend:8000/api/tests/auto-fix/report" \
          -H "Content-Type: application/json" \
          -d "{
            \"run_date\": \"$(date -Iseconds)\",
            \"candidates_found\": $(cat auto-fix-candidates.json | wc -l),
            \"fixes_attempted\": 0,
            \"fixes_successful\": 0,
            \"status\": \"analysis_only\"
          }" || true
    when:
      status: [success, failure]
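The classification logic in the `analyze-errors` step can be run standalone. Below is a sketch with hypothetical backlog items; the field names mirror the script above, but the data is invented.

```python
# Same error-type mapping as in the analyze-errors step above.
AUTO_FIXABLE = {
    'nil_pointer': 'high',
    'import_error': 'high',
    'undefined_variable': 'medium',
    'type_error': 'medium',
    'assertion': 'low',
    'timeout': 'low',
    'logic_error': 'manual',
}

def classify_error(error_type):
    """Unknown error types fall back to 'manual' (no automatic fix attempted)."""
    return AUTO_FIXABLE.get(error_type, 'manual')

# Hypothetical backlog items, shaped like the /api/tests/backlog response.
items = [
    {'id': 1, 'test_name': 'TestFoo', 'error_type': 'nil_pointer'},
    {'id': 2, 'test_name': 'TestBar', 'error_type': 'assertion'},
    {'id': 3, 'test_name': 'TestBaz', 'error_type': 'weird_new_error'},
]

# Only 'high' and 'medium' potential items become auto-fix candidates:
candidates = [i['id'] for i in items if classify_error(i['error_type']) in ('high', 'medium')]
print(candidates)  # [1]
```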
37
.woodpecker/build-ci-image.yml
Normal file
@@ -0,0 +1,37 @@
# One-time pipeline to build the custom Python CI image
# Trigger manually, then delete this file
#
# This builds the breakpilot/python-ci:3.12 image on the CI runner

when:
  - event: manual

clone:
  git:
    image: woodpeckerci/plugin-git
    settings:
      depth: 1
    extra_hosts:
      - macmini:192.168.178.100

steps:
  build-python-ci-image:
    image: docker:27-cli
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    commands:
      - |
        echo "=== Building breakpilot/python-ci:3.12 ==="

        docker build \
          -t breakpilot/python-ci:3.12 \
          -t breakpilot/python-ci:latest \
          -f .docker/python-ci.Dockerfile \
          .

        echo ""
        echo "=== Build complete ==="
        docker images | grep breakpilot/python-ci

        echo ""
        echo "Image is now available for CI pipelines!"
161
.woodpecker/integration.yml
Normal file
@@ -0,0 +1,161 @@
# Integration Tests Pipeline
# Separate file because services must be defined at the pipeline level
#
# This pipeline runs in parallel to main.yml and tests:
# - Database connectivity (PostgreSQL)
# - Cache connectivity (Valkey/Redis)
# - Service-to-service communication
#
# Documentation: docs/testing/integration-test-environment.md

when:
  - event: [push, pull_request]
    branch: [main, develop]

clone:
  git:
    image: woodpeckerci/plugin-git
    settings:
      depth: 1
    extra_hosts:
      - macmini:192.168.178.100

# Services at the pipeline level (NOT the step level!)
# These services are available to ALL steps
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: breakpilot
      POSTGRES_PASSWORD: breakpilot_test
      POSTGRES_DB: breakpilot_test

  valkey:
    image: valkey/valkey:8-alpine

steps:
  wait-for-services:
    image: postgres:16-alpine
    commands:
      - |
        echo "=== Waiting for PostgreSQL ==="
        for i in $(seq 1 30); do
          if pg_isready -h postgres -U breakpilot; then
            echo "PostgreSQL ready after $i attempts!"
            break
          fi
          echo "Attempt $i/30: PostgreSQL not ready, waiting..."
          sleep 2
        done
        # Final check
        if ! pg_isready -h postgres -U breakpilot; then
          echo "ERROR: PostgreSQL not ready after 30 attempts"
          exit 1
        fi
      - |
        echo "=== Waiting for Valkey ==="
        # Install redis-cli in the postgres alpine image
        apk add --no-cache redis > /dev/null 2>&1 || true
        for i in $(seq 1 30); do
          if redis-cli -h valkey ping 2>/dev/null | grep -q PONG; then
            echo "Valkey ready after $i attempts!"
            break
          fi
          echo "Attempt $i/30: Valkey not ready, waiting..."
          sleep 2
        done
        # Final check
        if ! redis-cli -h valkey ping 2>/dev/null | grep -q PONG; then
          echo "ERROR: Valkey not ready after 30 attempts"
          exit 1
        fi
      - echo "=== All services ready ==="

  integration-tests:
    image: breakpilot/python-ci:3.12
    environment:
      CI: "true"
      DATABASE_URL: postgresql://breakpilot:breakpilot_test@postgres:5432/breakpilot_test
      VALKEY_URL: redis://valkey:6379
      REDIS_URL: redis://valkey:6379
      SKIP_INTEGRATION_TESTS: "false"
      SKIP_DB_TESTS: "false"
      SKIP_WEASYPRINT_TESTS: "false"
      # Test-specific environment variables
      ENVIRONMENT: "testing"
      JWT_SECRET: "test-secret-key-for-integration-tests"
      TEACHER_REQUIRE_AUTH: "false"
      GAME_USE_DATABASE: "false"
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results
        cd backend

        # Set PYTHONPATH so local modules are found
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"

        echo "=== Installing dependencies ==="
        pip install --quiet --no-cache-dir -r requirements.txt

        echo "=== Running Integration Tests ==="
        set +e
        python -m pytest tests/test_integration/ -v \
          --tb=short \
          --json-report \
          --json-report-file=../.ci-results/test-integration.json
        TEST_EXIT=$?
        set -e

        # Evaluate results
        if [ -f ../.ci-results/test-integration.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../.ci-results/test-integration.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-integration.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-integration.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-integration.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          echo "WARNING: No JSON results found"
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"integration-tests\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-integration.json
        cat ../.ci-results/results-integration.json

        echo ""
        echo "=== Integration Test Summary ==="
        echo "Total: $TOTAL | Passed: $PASSED | Failed: $FAILED | Skipped: $SKIPPED"

        if [ "$TEST_EXIT" -ne "0" ]; then
          echo "Integration tests failed with exit code $TEST_EXIT"
          exit 1
        fi
    depends_on:
      - wait-for-services

  report-integration-results:
    image: curlimages/curl:8.10.1
    commands:
      - |
        set -uo pipefail
        echo "=== Sending integration test results to dashboard ==="

        if [ -f .ci-results/results-integration.json ]; then
          echo "Sending integration test results..."
          curl -f -sS -X POST "http://backend:8000/api/tests/ci-result" \
            -H "Content-Type: application/json" \
            -d "{
              \"pipeline_id\": \"${CI_PIPELINE_NUMBER}\",
              \"commit\": \"${CI_COMMIT_SHA}\",
              \"branch\": \"${CI_COMMIT_BRANCH}\",
              \"status\": \"${CI_PIPELINE_STATUS:-unknown}\",
              \"test_results\": $(cat .ci-results/results-integration.json)
            }" || echo "WARNING: Could not send results to dashboard"
        else
          echo "No integration results found to send"
        fi

        echo "=== Integration test results sent ==="
    when:
      status: [success, failure]
    depends_on:
      - integration-tests
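The four separate `python3 -c` calls in the integration-tests step each re-read the same JSON report once per counter. The extraction they perform can be sketched as a single parse; the `summary` layout is assumed from pytest-json-report, and the sample data is invented.

```python
# Sample of the structure produced by pytest-json-report (assumed layout;
# real reports carry many more fields).
report = {"summary": {"total": 12, "passed": 10, "failed": 1, "skipped": 1}}

def summarize(report):
    """Extract the counters the pipeline reports to the dashboard."""
    s = report.get("summary", {})
    return {key: s.get(key, 0) for key in ("total", "passed", "failed", "skipped")}

print(summarize(report))  # {'total': 12, 'passed': 10, 'failed': 1, 'skipped': 1}
# Missing keys default to 0, matching the `|| echo "0"` fallback in the script:
print(summarize({}))      # {'total': 0, 'passed': 0, 'failed': 0, 'skipped': 0}
```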
669
.woodpecker/main.yml
Normal file
@@ -0,0 +1,669 @@
# Woodpecker CI Main Pipeline
|
||||
# BreakPilot PWA - CI/CD Pipeline
|
||||
#
|
||||
# Plattform: ARM64 (Apple Silicon Mac Mini)
|
||||
#
|
||||
# Strategie:
|
||||
# - Tests laufen bei JEDEM Push/PR
|
||||
# - Test-Ergebnisse werden an Dashboard gesendet
|
||||
# - Builds/Scans laufen nur bei Tags oder manuell
|
||||
# - Deployment nur manuell (Sicherheit)
|
||||
|
||||
when:
|
||||
- event: [push, pull_request, manual, tag]
|
||||
branch: [main, develop]
|
||||
|
||||
clone:
|
||||
git:
|
||||
image: woodpeckerci/plugin-git
|
||||
settings:
|
||||
depth: 1
|
||||
extra_hosts:
|
||||
- macmini:192.168.178.100
|
||||
|
||||
variables:
|
||||
- &golang_image golang:1.23-alpine
|
||||
- &python_image python:3.12-slim
|
||||
- &python_ci_image breakpilot/python-ci:3.12 # Custom image with WeasyPrint
|
||||
- &nodejs_image node:20-alpine
|
||||
- &docker_image docker:27-cli
|
||||
|
||||
steps:
|
||||
# ========================================
|
||||
# STAGE 1: Lint (nur bei PRs)
|
||||
# ========================================
|
||||
|
||||
go-lint:
|
||||
image: golangci/golangci-lint:v1.55-alpine
|
||||
commands:
|
||||
- cd consent-service && golangci-lint run --timeout 5m ./...
|
||||
- cd ../billing-service && golangci-lint run --timeout 5m ./...
|
||||
- cd ../school-service && golangci-lint run --timeout 5m ./...
|
||||
when:
|
||||
event: pull_request
|
||||
|
||||
python-lint:
|
||||
image: *python_image
|
||||
commands:
|
||||
- pip install --quiet ruff black
|
||||
- ruff check backend/ --output-format=github || true
|
||||
- black --check backend/ || true
|
||||
when:
|
||||
event: pull_request
|
||||
|
||||
# ========================================
|
||||
# STAGE 2: Unit Tests mit JSON-Ausgabe
|
||||
# Ergebnisse werden im Workspace gespeichert (.ci-results/)
|
||||
# ========================================
|
||||
|
||||
test-go-consent:
|
||||
image: *golang_image
|
||||
environment:
|
||||
CGO_ENABLED: "0"
|
||||
commands:
|
||||
- |
|
||||
set -euo pipefail
|
||||
apk add --no-cache jq bash
|
||||
mkdir -p .ci-results
|
||||
|
||||
if [ ! -d "consent-service" ]; then
|
||||
echo '{"service":"consent-service","framework":"go","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-consent.json
|
||||
echo "WARNUNG: consent-service Verzeichnis nicht gefunden"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
cd consent-service
|
||||
set +e
|
||||
go test -v -json -coverprofile=coverage.out ./... 2>&1 | tee ../.ci-results/test-consent.json
|
||||
TEST_EXIT=$?
|
||||
set -e
|
||||
|
||||
# JSON-Zeilen extrahieren und mit jq zählen
|
||||
JSON_FILE="../.ci-results/test-consent.json"
|
||||
if grep -q '^{' "$JSON_FILE" 2>/dev/null; then
|
||||
TOTAL=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="run" and .Test != null)] | length')
|
||||
PASSED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="pass" and .Test != null)] | length')
|
||||
FAILED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="fail" and .Test != null)] | length')
|
||||
SKIPPED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="skip" and .Test != null)] | length')
|
||||
else
|
||||
echo "WARNUNG: Keine JSON-Zeilen in $JSON_FILE gefunden (Build-Fehler?)"
|
||||
TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
|
||||
fi
|
||||
|
||||
COVERAGE=$(go tool cover -func=coverage.out 2>/dev/null | tail -1 | awk '{print $3}' | tr -d '%' || echo "0")
|
||||
[ -z "$COVERAGE" ] && COVERAGE=0
|
||||
|
||||
echo "{\"service\":\"consent-service\",\"framework\":\"go\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":$COVERAGE}" > ../.ci-results/results-consent.json
|
||||
cat ../.ci-results/results-consent.json
|
||||
|
||||
# Backlog-Strategie: Fehler werden gemeldet aber Pipeline laeuft weiter
|
||||
if [ "$FAILED" -gt "0" ]; then
|
||||
echo "WARNUNG: $FAILED Tests fehlgeschlagen - werden ins Backlog geschrieben"
|
||||
fi
|
||||
|
||||
test-go-billing:
|
||||
image: *golang_image
|
||||
environment:
|
||||
CGO_ENABLED: "0"
|
||||
commands:
|
||||
- |
|
||||
set -euo pipefail
|
||||
apk add --no-cache jq bash
|
||||
mkdir -p .ci-results
|
||||
|
||||
if [ ! -d "billing-service" ]; then
|
||||
echo '{"service":"billing-service","framework":"go","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-billing.json
|
||||
echo "WARNUNG: billing-service Verzeichnis nicht gefunden"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
cd billing-service
|
||||
set +e
|
||||
go test -v -json -coverprofile=coverage.out ./... 2>&1 | tee ../.ci-results/test-billing.json
|
||||
TEST_EXIT=$?
|
||||
set -e
|
||||
|
||||
# JSON-Zeilen extrahieren und mit jq zählen
|
||||
JSON_FILE="../.ci-results/test-billing.json"
|
||||
if grep -q '^{' "$JSON_FILE" 2>/dev/null; then
|
||||
TOTAL=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="run" and .Test != null)] | length')
|
||||
PASSED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="pass" and .Test != null)] | length')
|
||||
FAILED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="fail" and .Test != null)] | length')
|
||||
SKIPPED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="skip" and .Test != null)] | length')
|
||||
else
|
||||
echo "WARNUNG: Keine JSON-Zeilen in $JSON_FILE gefunden (Build-Fehler?)"
|
||||
TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
|
||||
fi
|
||||
|
||||
COVERAGE=$(go tool cover -func=coverage.out 2>/dev/null | tail -1 | awk '{print $3}' | tr -d '%' || echo "0")
|
||||
[ -z "$COVERAGE" ] && COVERAGE=0
|
||||
|
||||
echo "{\"service\":\"billing-service\",\"framework\":\"go\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":$COVERAGE}" > ../.ci-results/results-billing.json
|
||||
cat ../.ci-results/results-billing.json
|
||||
|
||||
# Backlog-Strategie: Fehler werden gemeldet aber Pipeline laeuft weiter
|
||||
if [ "$FAILED" -gt "0" ]; then
|
||||
echo "WARNUNG: $FAILED Tests fehlgeschlagen - werden ins Backlog geschrieben"
|
||||
fi
|
||||
|
||||
test-go-school:
|
||||
image: *golang_image
|
||||
environment:
|
||||
CGO_ENABLED: "0"
|
||||
commands:
|
||||
- |
|
||||
set -euo pipefail
|
||||
apk add --no-cache jq bash
|
||||
mkdir -p .ci-results
|
||||
|
||||
if [ ! -d "school-service" ]; then
|
||||
echo '{"service":"school-service","framework":"go","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-school.json
|
||||
echo "WARNUNG: school-service Verzeichnis nicht gefunden"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
cd school-service
|
||||
set +e
|
||||
go test -v -json -coverprofile=coverage.out ./... 2>&1 | tee ../.ci-results/test-school.json
|
||||
TEST_EXIT=$?
|
||||
set -e
|
||||
|
||||
# JSON-Zeilen extrahieren und mit jq zählen
|
||||
JSON_FILE="../.ci-results/test-school.json"
|
||||
if grep -q '^{' "$JSON_FILE" 2>/dev/null; then
|
||||
TOTAL=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="run" and .Test != null)] | length')
|
||||
PASSED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="pass" and .Test != null)] | length')
|
||||
FAILED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="fail" and .Test != null)] | length')
|
||||
SKIPPED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="skip" and .Test != null)] | length')
|
||||
else
|
||||
echo "WARNUNG: Keine JSON-Zeilen in $JSON_FILE gefunden (Build-Fehler?)"
|
||||
TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
|
||||
fi
|
||||
|
||||
COVERAGE=$(go tool cover -func=coverage.out 2>/dev/null | tail -1 | awk '{print $3}' | tr -d '%' || echo "0")
|
||||
[ -z "$COVERAGE" ] && COVERAGE=0
|
||||
|
||||
echo "{\"service\":\"school-service\",\"framework\":\"go\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":$COVERAGE}" > ../.ci-results/results-school.json
|
||||
cat ../.ci-results/results-school.json
|
||||
|
||||
# Backlog-Strategie: Fehler werden gemeldet aber Pipeline laeuft weiter
|
||||
if [ "$FAILED" -gt "0" ]; then
|
||||
echo "WARNUNG: $FAILED Tests fehlgeschlagen - werden ins Backlog geschrieben"
|
||||
fi
|
||||
|
||||
  test-go-edu-search:
    image: *golang_image
    environment:
      CGO_ENABLED: "0"
    commands:
      - |
        set -euo pipefail
        apk add --no-cache jq bash
        mkdir -p .ci-results

        if [ ! -d "edu-search-service" ]; then
          echo '{"service":"edu-search-service","framework":"go","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-edu-search.json
          echo "WARNING: edu-search-service directory not found"
          exit 0
        fi

        cd edu-search-service
        set +e
        go test -v -json -coverprofile=coverage.out ./internal/... 2>&1 | tee ../.ci-results/test-edu-search.json
        TEST_EXIT=$?
        set -e

        # Extract the JSON event lines and count them with jq
        JSON_FILE="../.ci-results/test-edu-search.json"
        if grep -q '^{' "$JSON_FILE" 2>/dev/null; then
          TOTAL=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="run" and .Test != null)] | length')
          PASSED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="pass" and .Test != null)] | length')
          FAILED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="fail" and .Test != null)] | length')
          SKIPPED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="skip" and .Test != null)] | length')
        else
          echo "WARNING: no JSON lines found in $JSON_FILE (build error?)"
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        COVERAGE=$(go tool cover -func=coverage.out 2>/dev/null | tail -1 | awk '{print $3}' | tr -d '%' || echo "0")
        [ -z "$COVERAGE" ] && COVERAGE=0

        echo "{\"service\":\"edu-search-service\",\"framework\":\"go\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":$COVERAGE}" > ../.ci-results/results-edu-search.json
        cat ../.ci-results/results-edu-search.json

        # Backlog strategy: failures are reported, but the pipeline keeps running
        if [ "$FAILED" -gt "0" ]; then
          echo "WARNING: $FAILED tests failed - they will be written to the backlog"
        fi
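The coverage extraction used by these Go steps can be checked against a canned `go tool cover -func` tail; the module path below is a made-up stand-in:

```shell
# Reproduce the tail|awk|tr pipeline that turns `go tool cover -func` output
# into a bare percentage number. The last line is always the "total:" summary.
COVER_OUTPUT='github.com/example/school-service/main.go:12:  main    80.0%
total:  (statements)    73.5%'
COVERAGE=$(printf '%s\n' "$COVER_OUTPUT" | tail -1 | awk '{print $3}' | tr -d '%')
echo "$COVERAGE"
```

The third whitespace-separated field of the `total:` line is the percentage, and `tr -d '%'` strips the trailing percent sign so the value can be embedded as a JSON number.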
  test-go-ai-compliance:
    image: *golang_image
    environment:
      CGO_ENABLED: "0"
    commands:
      - |
        set -euo pipefail
        apk add --no-cache jq bash
        mkdir -p .ci-results

        if [ ! -d "ai-compliance-sdk" ]; then
          echo '{"service":"ai-compliance-sdk","framework":"go","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-ai-compliance.json
          echo "WARNING: ai-compliance-sdk directory not found"
          exit 0
        fi

        cd ai-compliance-sdk
        set +e
        go test -v -json -coverprofile=coverage.out ./... 2>&1 | tee ../.ci-results/test-ai-compliance.json
        TEST_EXIT=$?
        set -e

        # Extract the JSON event lines and count them with jq
        JSON_FILE="../.ci-results/test-ai-compliance.json"
        if grep -q '^{' "$JSON_FILE" 2>/dev/null; then
          TOTAL=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="run" and .Test != null)] | length')
          PASSED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="pass" and .Test != null)] | length')
          FAILED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="fail" and .Test != null)] | length')
          SKIPPED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="skip" and .Test != null)] | length')
        else
          echo "WARNING: no JSON lines found in $JSON_FILE (build error?)"
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        COVERAGE=$(go tool cover -func=coverage.out 2>/dev/null | tail -1 | awk '{print $3}' | tr -d '%' || echo "0")
        [ -z "$COVERAGE" ] && COVERAGE=0

        echo "{\"service\":\"ai-compliance-sdk\",\"framework\":\"go\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":$COVERAGE}" > ../.ci-results/results-ai-compliance.json
        cat ../.ci-results/results-ai-compliance.json

        # Backlog strategy: failures are reported, but the pipeline keeps running
        if [ "$FAILED" -gt "0" ]; then
          echo "WARNING: $FAILED tests failed - they will be written to the backlog"
        fi
  test-python-backend:
    image: *python_ci_image
    environment:
      CI: "true"
      DATABASE_URL: "postgresql://test:test@localhost:5432/test_db"
      SKIP_DB_TESTS: "true"
      SKIP_WEASYPRINT_TESTS: "false"
      SKIP_INTEGRATION_TESTS: "true"
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "backend" ]; then
          echo '{"service":"backend","framework":"pytest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-backend.json
          echo "WARNING: backend directory not found"
          exit 0
        fi

        cd backend
        # Set PYTHONPATH to the current directory (backend) so local packages
        # such as classroom_engine and alerts_agent are found.
        # IMPORTANT: use an absolute path and export before pip install so the modules are available.
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"

        # Test tools are pre-installed in the breakpilot/python-ci image;
        # only project-specific dependencies are installed here.
        pip install --quiet --no-cache-dir -r requirements.txt

        # NOTE: the PostgreSQL service was removed - tests that require a DB are skipped via SKIP_DB_TESTS=true.
        # For full integration tests, use: docker compose -f docker-compose.test.yml up -d

        set +e
        # Use python -m pytest so PYTHONPATH is applied before pytest starts
        python -m pytest tests/ -v --tb=short --cov=. --cov-report=term-missing --json-report --json-report-file=../.ci-results/test-backend.json
        TEST_EXIT=$?
        set -e

        if [ -f ../.ci-results/test-backend.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"backend\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-backend.json
        cat ../.ci-results/results-backend.json

        if [ "$TEST_EXIT" -ne "0" ]; then exit 1; fi
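The pytest steps all read counts from the `summary` block of a pytest-json-report file via inline `python3 -c` snippets. That parsing can be exercised in isolation; the report content below is a made-up sample, and `python3` on PATH is assumed:

```shell
# Write a minimal pytest-json-report style file and extract two counters
# with the same one-liners the CI steps use.
printf '{"summary":{"total":5,"passed":4,"failed":1,"skipped":0}}' > report.json
TOTAL=$(python3 -c "import json; d=json.load(open('report.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
FAILED=$(python3 -c "import json; d=json.load(open('report.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
echo "total=$TOTAL failed=$FAILED"
```

The chained `.get(...)` calls plus the `|| echo "0"` fallback mean a truncated or malformed report degrades to zero counts instead of aborting the step.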
  test-python-voice:
    image: *python_image
    environment:
      CI: "true"
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "voice-service" ]; then
          echo '{"service":"voice-service","framework":"pytest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-voice.json
          echo "WARNING: voice-service directory not found"
          exit 0
        fi

        cd voice-service
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"
        pip install --quiet --no-cache-dir -r requirements.txt
        pip install --quiet --no-cache-dir pytest-json-report

        set +e
        python -m pytest tests/ -v --tb=short --json-report --json-report-file=../.ci-results/test-voice.json
        TEST_EXIT=$?
        set -e

        if [ -f ../.ci-results/test-voice.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../.ci-results/test-voice.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-voice.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-voice.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-voice.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"voice-service\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-voice.json
        cat ../.ci-results/results-voice.json

        if [ "$TEST_EXIT" -ne "0" ]; then exit 1; fi
  test-bqas-golden:
    image: *python_image
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "voice-service/tests/bqas" ]; then
          echo '{"service":"bqas-golden","framework":"pytest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-bqas-golden.json
          echo "WARNING: voice-service/tests/bqas directory not found"
          exit 0
        fi

        cd voice-service
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"
        pip install --quiet --no-cache-dir -r requirements.txt
        pip install --quiet --no-cache-dir pytest-json-report pytest-asyncio

        set +e
        python -m pytest tests/bqas/test_golden.py tests/bqas/test_regression.py tests/bqas/test_synthetic.py -v --tb=short --json-report --json-report-file=../.ci-results/test-bqas-golden.json
        TEST_EXIT=$?
        set -e

        if [ -f ../.ci-results/test-bqas-golden.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-golden.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-golden.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-golden.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-golden.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"bqas-golden\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-bqas-golden.json
        cat ../.ci-results/results-bqas-golden.json

        # BQAS tests may skip when Ollama is unavailable; skipped tests do not
        # fail the pipeline, only real failures do.
        if [ "$FAILED" -gt "0" ]; then exit 1; fi
  test-bqas-rag:
    image: *python_image
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "voice-service/tests/bqas" ]; then
          echo '{"service":"bqas-rag","framework":"pytest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-bqas-rag.json
          echo "WARNING: voice-service/tests/bqas directory not found"
          exit 0
        fi

        cd voice-service
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"
        pip install --quiet --no-cache-dir -r requirements.txt
        pip install --quiet --no-cache-dir pytest-json-report pytest-asyncio

        set +e
        python -m pytest tests/bqas/test_rag.py tests/bqas/test_notifier.py -v --tb=short --json-report --json-report-file=../.ci-results/test-bqas-rag.json
        TEST_EXIT=$?
        set -e

        if [ -f ../.ci-results/test-bqas-rag.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-rag.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-rag.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-rag.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-rag.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"bqas-rag\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-bqas-rag.json
        cat ../.ci-results/results-bqas-rag.json

        # BQAS tests may skip when Ollama is unavailable; skipped tests do not
        # fail the pipeline, only real failures do.
        if [ "$FAILED" -gt "0" ]; then exit 1; fi
  test-python-klausur:
    image: *python_image
    environment:
      CI: "true"
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "klausur-service/backend" ]; then
          echo '{"service":"klausur-service","framework":"pytest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-klausur.json
          echo "WARNING: klausur-service/backend directory not found"
          exit 0
        fi

        cd klausur-service/backend
        # Set PYTHONPATH to the current directory so local modules like hyde, hybrid_search, etc. are found
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"

        pip install --quiet --no-cache-dir -r requirements.txt 2>/dev/null || pip install --quiet --no-cache-dir fastapi uvicorn pytest pytest-asyncio pytest-json-report
        # Ensure pytest-json-report is present even when requirements.txt installed successfully
        pip install --quiet --no-cache-dir pytest-json-report

        set +e
        python -m pytest tests/ -v --tb=short --json-report --json-report-file=../../.ci-results/test-klausur.json
        TEST_EXIT=$?
        set -e

        if [ -f ../../.ci-results/test-klausur.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../../.ci-results/test-klausur.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../../.ci-results/test-klausur.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../../.ci-results/test-klausur.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../../.ci-results/test-klausur.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"klausur-service\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../../.ci-results/results-klausur.json
        cat ../../.ci-results/results-klausur.json

        if [ "$TEST_EXIT" -ne "0" ]; then exit 1; fi
  test-nodejs-h5p:
    image: *nodejs_image
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "h5p-service" ]; then
          echo '{"service":"h5p-service","framework":"jest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-h5p.json
          echo "WARNING: h5p-service directory not found"
          exit 0
        fi

        cd h5p-service
        npm ci --silent 2>/dev/null || npm install --silent

        set +e
        npm run test:ci -- --json --outputFile=../.ci-results/test-h5p.json 2>&1
        TEST_EXIT=$?
        set -e

        if [ -f ../.ci-results/test-h5p.json ]; then
          TOTAL=$(node -e "const d=require('../.ci-results/test-h5p.json'); console.log(d.numTotalTests || 0)" 2>/dev/null || echo "0")
          PASSED=$(node -e "const d=require('../.ci-results/test-h5p.json'); console.log(d.numPassedTests || 0)" 2>/dev/null || echo "0")
          FAILED=$(node -e "const d=require('../.ci-results/test-h5p.json'); console.log(d.numFailedTests || 0)" 2>/dev/null || echo "0")
          SKIPPED=$(node -e "const d=require('../.ci-results/test-h5p.json'); console.log(d.numPendingTests || 0)" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        [ -z "$TOTAL" ] && TOTAL=0
        [ -z "$PASSED" ] && PASSED=0
        [ -z "$FAILED" ] && FAILED=0
        [ -z "$SKIPPED" ] && SKIPPED=0

        echo "{\"service\":\"h5p-service\",\"framework\":\"jest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-h5p.json
        cat ../.ci-results/results-h5p.json

        if [ "$TEST_EXIT" -ne "0" ]; then exit 1; fi
  # ========================================
  # STAGE 2.5: Integration Tests
  # ========================================
  # Integration tests run in a separate pipeline:
  # .woodpecker/integration.yml
  # (requires pipeline-level services for PostgreSQL and Valkey)

  # ========================================
  # STAGE 3: Send test results to the dashboard
  # ========================================
  report-test-results:
    image: curlimages/curl:8.10.1
    commands:
      - |
        set -uo pipefail
        echo "=== Sending test results to the dashboard ==="
        echo "Pipeline status: ${CI_PIPELINE_STATUS:-unknown}"
        ls -la .ci-results/ || echo "directory not found"

        PIPELINE_STATUS="${CI_PIPELINE_STATUS:-unknown}"

        for f in .ci-results/results-*.json; do
          [ -f "$f" ] || continue
          echo "Sending: $f"
          curl -f -sS -X POST "http://backend:8000/api/tests/ci-result" \
            -H "Content-Type: application/json" \
            -d "{
              \"pipeline_id\": \"${CI_PIPELINE_NUMBER}\",
              \"commit\": \"${CI_COMMIT_SHA}\",
              \"branch\": \"${CI_COMMIT_BRANCH}\",
              \"status\": \"${PIPELINE_STATUS}\",
              \"test_results\": $(cat "$f")
            }" || echo "WARNING: could not send $f"
        done

        echo "=== Test results sent ==="
    when:
      status: [success, failure]
    depends_on:
      - test-go-consent
      - test-go-billing
      - test-go-school
      - test-go-edu-search
      - test-go-ai-compliance
      - test-python-backend
      - test-python-voice
      - test-bqas-golden
      - test-bqas-rag
      - test-python-klausur
      - test-nodejs-h5p
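The payload the reporting step posts can be assembled and inspected without a network call; the variable values below are stand-ins for Woodpecker's `CI_*` environment, and the result file is a made-up sample:

```shell
# Build the dashboard payload for one per-service result file. Note that
# $(cat "$f") splices the result JSON directly into the request body.
mkdir -p .ci-results
echo '{"service":"backend","framework":"pytest","total":5,"passed":4,"failed":1,"skipped":0,"coverage":0}' > .ci-results/results-backend.json
CI_PIPELINE_NUMBER=42
CI_COMMIT_SHA=038fc2f749
CI_COMMIT_BRANCH=main
PIPELINE_STATUS=success
PAYLOAD="{
  \"pipeline_id\": \"${CI_PIPELINE_NUMBER}\",
  \"commit\": \"${CI_COMMIT_SHA}\",
  \"branch\": \"${CI_COMMIT_BRANCH}\",
  \"status\": \"${PIPELINE_STATUS}\",
  \"test_results\": $(cat .ci-results/results-backend.json)
}"
echo "$PAYLOAD"
```

Because the per-service file is embedded verbatim, any step that writes malformed JSON to `.ci-results/` would produce an invalid request body, which is why `curl -f ... || echo` only warns instead of failing the step.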
  # ========================================
  # STAGE 4: Build & Security (tags/manual only)
  # ========================================

  build-consent-service:
    image: *docker_image
    commands:
      - docker build -t breakpilot/consent-service:${CI_COMMIT_SHA:0:8} ./consent-service
      - docker tag breakpilot/consent-service:${CI_COMMIT_SHA:0:8} breakpilot/consent-service:latest
      - echo "Built breakpilot/consent-service:${CI_COMMIT_SHA:0:8}"
    when:
      - event: tag
      - event: manual

  build-backend:
    image: *docker_image
    commands:
      - docker build -t breakpilot/backend:${CI_COMMIT_SHA:0:8} ./backend
      - docker tag breakpilot/backend:${CI_COMMIT_SHA:0:8} breakpilot/backend:latest
      - echo "Built breakpilot/backend:${CI_COMMIT_SHA:0:8}"
    when:
      - event: tag
      - event: manual
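The image tags above rely on bash substring expansion, which is worth a quick sketch because `${VAR:0:8}` is a bash-ism and is not available in plain POSIX `sh`; the SHA below is a made-up value:

```shell
# First 8 characters of the commit SHA become the image tag (bash required).
CI_COMMIT_SHA=038fc2f749aabbccddeeff0011223344
TAG=${CI_COMMIT_SHA:0:8}
echo "breakpilot/consent-service:$TAG"
```

If the pipeline's step shell were ever switched to a non-bash `sh`, these tag expressions would fail with a "bad substitution" error, so an equivalent `echo "$CI_COMMIT_SHA" | cut -c1-8` is the portable fallback.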
  build-voice-service:
    image: *docker_image
    commands:
      - |
        if [ -d ./voice-service ]; then
          docker build -t breakpilot/voice-service:${CI_COMMIT_SHA:0:8} ./voice-service
          docker tag breakpilot/voice-service:${CI_COMMIT_SHA:0:8} breakpilot/voice-service:latest
          echo "Built breakpilot/voice-service:${CI_COMMIT_SHA:0:8}"
        else
          echo "voice-service directory not found - skipping"
        fi
    when:
      - event: tag
      - event: manual
  generate-sbom:
    image: *golang_image
    commands:
      - |
        echo "Installing syft for ARM64..."
        wget -qO- https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
        syft dir:./consent-service -o cyclonedx-json > sbom-consent.json
        syft dir:./backend -o cyclonedx-json > sbom-backend.json
        if [ -d ./voice-service ]; then
          syft dir:./voice-service -o cyclonedx-json > sbom-voice.json
        fi
        echo "SBOMs generated successfully"
    when:
      - event: tag
      - event: manual
  vulnerability-scan:
    image: *golang_image
    commands:
      - |
        echo "Installing grype for ARM64..."
        wget -qO- https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
        grype sbom:sbom-consent.json -o table --fail-on critical || true
        grype sbom:sbom-backend.json -o table --fail-on critical || true
        if [ -f sbom-voice.json ]; then
          grype sbom:sbom-voice.json -o table --fail-on critical || true
        fi
    when:
      - event: tag
      - event: manual
    depends_on:
      - generate-sbom
  # ========================================
  # STAGE 5: Deploy (manual only)
  # ========================================

  deploy-production:
    image: *docker_image
    commands:
      - echo "Deploying to production..."
      - docker compose -f docker-compose.yml pull || true
      - docker compose -f docker-compose.yml up -d --remove-orphans || true
    when:
      event: manual
    depends_on:
      - build-consent-service
      - build-backend
314	.woodpecker/security.yml	Normal file
@@ -0,0 +1,314 @@
# Woodpecker CI security pipeline
# Dedicated security scans for DevSecOps
#
# Runs daily via cron and on every PR

when:
  - event: cron
    cron: "0 3 * * *" # daily at 03:00
  - event: pull_request

clone:
  git:
    image: woodpeckerci/plugin-git
    settings:
      depth: 1
      extra_hosts:
        - macmini:192.168.178.100
steps:
  # ========================================
  # Static Analysis
  # ========================================

  semgrep-scan:
    image: returntocorp/semgrep:latest
    commands:
      - semgrep scan --config auto --json -o semgrep-results.json . || true
      - |
        if [ -f semgrep-results.json ]; then
          echo "=== Semgrep Findings ==="
          head -100 semgrep-results.json
        fi
    when:
      event: [pull_request, cron]
  bandit-python:
    image: python:3.12-slim
    commands:
      - pip install --quiet bandit
      - bandit -r backend/ -f json -o bandit-results.json || true
      - |
        if [ -f bandit-results.json ]; then
          echo "=== Bandit Findings ==="
          head -50 bandit-results.json
        fi
    when:
      event: [pull_request, cron]
  gosec-go:
    image: securego/gosec:latest
    commands:
      - gosec -fmt json -out gosec-consent.json ./consent-service/... || true
      - gosec -fmt json -out gosec-billing.json ./billing-service/... || true
      - echo "Go security scan finished"
    when:
      event: [pull_request, cron]

  # ========================================
  # Secrets Detection
  # ========================================
  gitleaks-scan:
    image: zricethezav/gitleaks:latest
    commands:
      - gitleaks detect --source . --report-format json --report-path gitleaks-report.json || true
      - |
        if [ -s gitleaks-report.json ]; then
          echo "=== WARNING: potential secrets found ==="
          cat gitleaks-report.json
        else
          echo "No secrets found"
        fi
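The `[ -s FILE ]` test above is what distinguishes a report with findings from a clean run; a minimal sketch with made-up file names:

```shell
# `[ -s FILE ]` is true only for a file that exists and is non-empty.
touch empty-report.json
printf '[{"RuleID":"generic-api-key"}]' > findings-report.json
if [ -s empty-report.json ]; then R1=findings; else R1=clean; fi
if [ -s findings-report.json ]; then R2=findings; else R2=clean; fi
echo "$R1 $R2"
```

This matters because gitleaks writes the report file even when nothing is found, so an existence check (`-f`) alone would always report findings.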
  trufflehog-scan:
    image: trufflesecurity/trufflehog:latest
    commands:
      - trufflehog filesystem . --json > trufflehog-results.json 2>&1 || true
      - echo "TruffleHog scan finished"

  # ========================================
  # Dependency Vulnerabilities
  # ========================================
  npm-audit:
    image: node:20-alpine
    commands:
      - cd website && npm audit --json > ../npm-audit-website.json || true
      - cd ../studio-v2 && npm audit --json > ../npm-audit-studio.json || true
      - cd ../admin-v2 && npm audit --json > ../npm-audit-admin.json || true
      - echo "npm audit finished"
    when:
      event: [pull_request, cron]
  pip-audit:
    image: python:3.12-slim
    commands:
      - pip install --quiet pip-audit
      - pip-audit -r backend/requirements.txt --format json -o pip-audit-backend.json || true
      - pip-audit -r voice-service/requirements.txt --format json -o pip-audit-voice.json || true
      - echo "pip-audit finished"
    when:
      event: [pull_request, cron]
  go-vulncheck:
    image: golang:1.21-alpine
    commands:
      - go install golang.org/x/vuln/cmd/govulncheck@latest
      - cd consent-service && govulncheck ./... || true
      - cd ../billing-service && govulncheck ./... || true
      - echo "govulncheck finished"
    when:
      event: [pull_request, cron]

  # ========================================
  # Container Security
  # ========================================
  trivy-filesystem:
    image: aquasec/trivy:latest
    commands:
      - trivy fs --severity HIGH,CRITICAL --format json -o trivy-fs.json . || true
      - echo "Trivy filesystem scan finished"
    when:
      event: cron
  # ========================================
  # SBOM Generation (daily)
  # ========================================

  daily-sbom:
    image: anchore/syft:latest
    commands:
      - mkdir -p sbom-reports
      - syft dir:. -o cyclonedx-json > sbom-reports/sbom-full-$(date +%Y%m%d).json
      - echo "SBOM generated"
    when:
      event: cron
  # ========================================
  # AUTO-FIX: Dependency Vulnerabilities
  # Runs only on cron (nightly), not on PRs
  # ========================================

  auto-fix-npm:
    image: node:20-alpine
    commands:
      - apk add --no-cache git
      - |
        echo "=== Auto-fix: npm dependencies ==="
        FIXES_APPLIED=0

        for dir in website studio-v2 admin-v2 h5p-service; do
          if [ -d "$dir" ] && [ -f "$dir/package.json" ]; then
            echo "Checking $dir..."
            cd "$dir"

            # Save a hash before the fix
            BEFORE=$(md5sum package-lock.json 2>/dev/null || echo "none")

            # npm audit fix (without --force, so only safe updates)
            npm audit fix --package-lock-only 2>/dev/null || true

            # Check whether anything changed
            AFTER=$(md5sum package-lock.json 2>/dev/null || echo "none")
            if [ "$BEFORE" != "$AFTER" ]; then
              echo " -> Fixes applied in $dir"
              FIXES_APPLIED=$((FIXES_APPLIED + 1))
            fi

            cd ..
          fi
        done

        echo "npm auto-fix finished: $FIXES_APPLIED projects updated"
        echo "NPM_FIXES=$FIXES_APPLIED" >> /tmp/autofix-results.env
    when:
      event: cron
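The hash-before/hash-after trick used above can be demonstrated on its own; the lock-file content here is a made-up sample standing in for `npm audit fix` rewriting the file:

```shell
# Detect whether a file changed by comparing checksums before and after a tool runs.
FILE=$(mktemp)
echo '{"lockfileVersion": 2}' > "$FILE"
BEFORE=$(md5sum "$FILE" 2>/dev/null || echo "none")
echo '{"lockfileVersion": 3}' > "$FILE"   # stands in for npm audit fix rewriting the lock file
AFTER=$(md5sum "$FILE" 2>/dev/null || echo "none")
if [ "$BEFORE" != "$AFTER" ]; then CHANGED=1; else CHANGED=0; fi
echo "$CHANGED"
```

The `|| echo "none"` fallback makes the comparison well-defined even when the lock file does not exist, in which case both sides equal `none` and no fix is counted.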
  auto-fix-python:
    image: python:3.12-slim
    commands:
      - apt-get update && apt-get install -y git
      - pip install --quiet pip-audit
      - |
        echo "=== Auto-fix: Python dependencies ==="
        FIXES_APPLIED=0

        for reqfile in backend/requirements.txt voice-service/requirements.txt klausur-service/backend/requirements.txt; do
          if [ -f "$reqfile" ]; then
            echo "Checking $reqfile..."
            DIR=$(dirname "$reqfile")

            # pip-audit with --fix (updates requirements.txt in place)
            pip-audit -r "$reqfile" --fix 2>/dev/null || true

            # Check whether requirements.txt was changed
            if git diff --quiet "$reqfile" 2>/dev/null; then
              echo " -> No changes in $reqfile"
            else
              echo " -> Fixes applied in $reqfile"
              FIXES_APPLIED=$((FIXES_APPLIED + 1))
            fi
          fi
        done

        echo "Python auto-fix finished: $FIXES_APPLIED files updated"
        echo "PYTHON_FIXES=$FIXES_APPLIED" >> /tmp/autofix-results.env
    when:
      event: cron
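The change detector in the auto-fix steps is `git diff --quiet <path>`, which signals a change via its exit status rather than output. A minimal sketch in a throwaway repository, assuming `git` is on PATH; the pinned package version is a made-up example:

```shell
# `git diff --quiet <path>` exits 0 when the working tree matches HEAD for that
# path, and non-zero when it differs.
REPO=$(mktemp -d)
cd "$REPO"
git init -q
git config user.email "ci@example.invalid"
git config user.name "ci"
echo "requests==2.31.0" > requirements.txt
git add requirements.txt
git commit -qm "init"
git diff --quiet requirements.txt && BEFORE=unchanged
echo "requests==2.32.0" > requirements.txt
git diff --quiet requirements.txt || AFTER=changed
echo "$BEFORE $AFTER"
```

Using the exit status keeps the step's loop body simple: the `if/else` on `git diff --quiet` needs no parsing of diff output at all.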
  auto-fix-go:
    image: golang:1.21-alpine
    commands:
      - apk add --no-cache git
      - |
        echo "=== Auto-fix: Go dependencies ==="
        FIXES_APPLIED=0

        for dir in consent-service billing-service school-service edu-search-service ai-compliance-sdk; do
          if [ -d "$dir" ] && [ -f "$dir/go.mod" ]; then
            echo "Checking $dir..."
            cd "$dir"

            # go mod tidy and update
            go get -u ./... 2>/dev/null || true
            go mod tidy 2>/dev/null || true

            # Check whether go.mod/go.sum were changed
            if git diff --quiet go.mod go.sum 2>/dev/null; then
              echo " -> No changes in $dir"
            else
              echo " -> Updates applied in $dir"
              FIXES_APPLIED=$((FIXES_APPLIED + 1))
            fi

            cd ..
          fi
        done

        echo "Go auto-fix finished: $FIXES_APPLIED modules updated"
        echo "GO_FIXES=$FIXES_APPLIED" >> /tmp/autofix-results.env
    when:
      event: cron
  # ========================================
  # Commit & Push Auto-Fixes
  # ========================================

  commit-security-fixes:
    image: alpine/git:latest
    commands:
      - |
        echo "=== Committing security fixes ==="

        # Configure git
        git config --global user.email "security-bot@breakpilot.de"
        git config --global user.name "Security Bot"
        git config --global --add safe.directory /woodpecker/src

        # Check whether there is anything to commit
        if git diff --quiet && git diff --cached --quiet; then
          echo "No security fixes to commit"
          exit 0
        fi

        # Show what was changed
        echo "Changed files:"
        git status --short

        # Stage all relevant files (-A restricted to these pathspecs)
        git add -A \
          */package-lock.json \
          */requirements.txt \
          */go.mod \
          */go.sum \
          2>/dev/null || true

        # Create the commit
        TIMESTAMP=$(date +%Y-%m-%d)
        git commit -m "fix(security): auto-fix vulnerable dependencies [$TIMESTAMP]

        Automated security updates applied by the CI/CD pipeline:
        - npm audit fix for Node.js projects
        - pip-audit --fix for Python projects
        - go get -u for Go modules

        Co-Authored-By: Security Bot <security-bot@breakpilot.de>" || echo "Nothing to commit"

        # Push to the repository
        git push origin HEAD:main || echo "Push failed - manual review required"

        echo "Security fixes committed and pushed"
    when:
      event: cron
      status: success
  # ========================================
  # Report to Dashboard
  # ========================================

  update-security-dashboard:
    image: curlimages/curl:latest
    commands:
      - |
        curl -X POST "http://backend:8000/api/security/scan-results" \
          -H "Content-Type: application/json" \
          -d "{
            \"scan_type\": \"daily\",
            \"timestamp\": \"$(date -Iseconds)\",
            \"tools\": [\"semgrep\", \"bandit\", \"gosec\", \"gitleaks\", \"trivy\"]
          }" || true
    when:
      status: [success, failure]
      event: cron
2029	AI_COMPLIANCE_SDK_IMPLEMENTATION_PLAN.md	Normal file
File diff suppressed because it is too large

566	BREAKPILOT_CONSENT_MANAGEMENT_PLAN.md	Normal file
@@ -0,0 +1,566 @@

# BreakPilot Consent Management System - Project Plan

## Executive Summary

This document describes the plan for building a complete Consent Management System (CMS) for BreakPilot. The system will be developed from scratch and will replace the existing Policy Vault system, which contains bugs and does not work reliably.

---

## Technology Decision: Which Language, and Why?

### Backend Options Compared

| Criterion | Rust | Go | Python (FastAPI) | TypeScript (NestJS) |
|-----------|------|-----|------------------|---------------------|
| **Performance** | Excellent | Very good | Good | Good |
| **Memory safety** | Guaranteed | GC | GC | GC |
| **Development speed** | Slow | Medium | Fast | Fast |
| **Learning curve** | Steep | Gentle | Gentle | Medium |
| **Web ecosystem** | Growing | Very good | Excellent | Excellent |
| **Integration with BreakPilot** | New | New | Already in place | Possible |
| **Team experience** | ? | ? | Available | Possible |

### Recommendation: **Python (FastAPI)** or **Go**

#### Option A: Python with FastAPI (recommended for fast integration)
**Pros:**
- Already used in the BreakPilot project
- Fast development
- Excellent documentation (auto-generated)
- Easy integration with existing code
- Type hints for better code quality
- Async/await support

**Cons:**
- Slower than Rust/Go under high load
- GIL limitations for CPU-bound tasks

#### Option B: Go (recommended for a microservice architecture)
**Pros:**
- Extremely fast and efficient
- Excellent for microservices
- Simple deployment (single binary)
- Good concurrency
- Static typing

**Cons:**
- New tech stack in the project
- Codebase separate from BreakPilot

#### Option C: Rust (for maximum performance & safety)
**Pros:**
- Highest performance
- Memory safety without GC
- Excellent security
- WebAssembly support

**Cons:**
- Very steep learning curve
- Longer development time (2-3x)
- Smaller web ecosystem
- More complex error handling

### Final Recommendation

**For BreakPilot, I recommend: Go (Golang)**

Rationale:
1. **Independent microservice** - the CMS should run as a standalone service
2. **Performance** - consent checks must be fast (they run on every API call)
3. **Simple deployment** - a single binary, ideal for containers
4. **Good balance** - faster than Python, simpler than Rust
5. **Future-proof** - a modern language with a growing ecosystem

---

## System Architecture

```
┌─────────────────────────────────────────────────────────────────────┐
│                         BreakPilot Ecosystem                        │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  ┌────────────────┐   ┌────────────────┐   ┌────────────────┐       │
│  │  BreakPilot    │   │ Consent Admin  │   │  BreakPilot    │       │
│  │  Studio (Web)  │   │   Dashboard    │   │  Mobile Apps   │       │
│  │ (Python/HTML)  │   │(Vue.js/React)  │   │ (iOS/Android)  │       │
│  └───────┬────────┘   └───────┬────────┘   └───────┬────────┘       │
│          │                    │                    │                │
│          └────────────────────┼────────────────────┘                │
│                               │                                     │
│                               ▼                                     │
│                  ┌─────────────────────────┐                        │
│                  │   API Gateway / Proxy   │                        │
│                  └────────────┬────────────┘                        │
│                               │                                     │
│          ┌────────────────────┼────────────────────┐                │
│          │                    │                    │                │
│          ▼                    ▼                    ▼                │
│  ┌────────────────┐   ┌────────────────┐   ┌────────────────┐       │
│  │ BreakPilot API │   │Consent Service │   │  Auth Service  │       │
│  │(Python/FastAPI)│   │      (Go)      │   │      (Go)      │       │
│  └───────┬────────┘   └───────┬────────┘   └───────┬────────┘       │
│          │                    │                    │                │
│          └────────────────────┼────────────────────┘                │
│                               │                                     │
│                               ▼                                     │
│                  ┌─────────────────────────┐                        │
│                  │       PostgreSQL        │                        │
│                  │   (Shared Database)     │                        │
│                  └─────────────────────────┘                        │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

---

## Project Phases

### Phase 1: Foundations & Database (Weeks 1-2)
**Goal:** Database schema and base services

#### 1.1 Database Design
- [ ] Users table (integration with BreakPilot auth)
- [ ] Legal documents (terms of service, privacy policy, community guidelines, etc.)
- [ ] Document versions (versioning with an approval workflow)
- [ ] User consents (which user agreed to what, and when)
- [ ] Cookie categories (necessary, functional, marketing, analytics)
- [ ] Cookie consents (granular cookie opt-ins)
- [ ] Audit log (GDPR-compliant record keeping)

#### 1.2 Go Backend Setup
- [ ] Project structure following Clean Architecture
- [ ] Database layer (sqlx or GORM)
- [ ] Migration system
- [ ] Config management
- [ ] Logging & error handling

### Phase 2: Core Consent Service (Weeks 3-4)
**Goal:** Core consent management functionality

#### 2.1 Document Management API
- [ ] CRUD for legal documents
- [ ] Versioning with diff tracking
- [ ] Draft/published/archived status
- [ ] Multilingual support (DE, EN, etc.)

#### 2.2 Consent Tracking API
- [ ] Create/fetch user consents
- [ ] Consent history per user
- [ ] Bulk consent for multiple documents
- [ ] Consent withdrawal

#### 2.3 Cookie Consent API
- [ ] Manage cookie categories
- [ ] Granular cookie settings
- [ ] Consent banner configuration

### Phase 3: Admin Dashboard (Weeks 5-6)
**Goal:** Web interface for administrators

#### 3.1 Admin Frontend (Vue.js or React)
- [ ] Login/auth (integration with BreakPilot)
- [ ] Dashboard with statistics
- [ ] Document editor (rich text)
- [ ] Version management UI
- [ ] User consent overview
- [ ] Cookie management UI

#### 3.2 Approval Workflow
- [ ] Draft → Review → Approved → Published
- [ ] Notifications for new versions
- [ ] Rollback function

### Phase 4: BreakPilot Integration (Weeks 7-8)
**Goal:** Integration into BreakPilot Studio

#### 4.1 User-facing Features
- [ ] "Legal" button in settings
- [ ] Show consent history
- [ ] Change cookie preferences
- [ ] Request data access report (GDPR Art. 15)

#### 4.2 Cookie Banner
- [ ] Cookie consent modal on first visit
- [ ] Granular category selection
- [ ] "Accept all" / "Necessary only"
- [ ] Persistent storage

#### 4.3 Consent-Check Middleware
- [ ] Automatic check on API calls
- [ ] Block requests when consent is missing
- [ ] Respect marketing opt-out

### Phase 5: Data Subject Rights (Weeks 9-10)
**Goal:** GDPR compliance features

#### 5.1 Data Access (GDPR Art. 15)
- [ ] API for "which data do we hold?"
- [ ] Export as JSON/PDF
- [ ] Automated delivery

#### 5.2 Data Deletion (GDPR Art. 17)
- [ ] "Right to be forgotten"
- [ ] Anonymization instead of deletion (where required)
- [ ] Audit trail for deletions

#### 5.3 Data Portability (GDPR Art. 20)
- [ ] Export in a machine-readable format
- [ ] Download function in the frontend

### Phase 6: Testing & Security (Weeks 11-12)
**Goal:** Hardening and quality

#### 6.1 Testing
- [ ] Unit tests (>80% coverage)
- [ ] Integration tests
- [ ] E2E tests for critical flows
- [ ] Performance tests

#### 6.2 Security
- [ ] Security audit
- [ ] Penetration testing
- [ ] Rate limiting
- [ ] Input validation
- [ ] SQL injection prevention
- [ ] XSS protection

---

## Database Schema (Draft)

```sql
-- Users (integration with BreakPilot)
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    external_id VARCHAR(255) UNIQUE, -- BreakPilot user ID
    email VARCHAR(255) UNIQUE NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Legal documents
CREATE TABLE legal_documents (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    type VARCHAR(50) NOT NULL, -- 'terms', 'privacy', 'cookies', 'community'
    name VARCHAR(255) NOT NULL,
    description TEXT,
    is_mandatory BOOLEAN DEFAULT true,
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Document versions
CREATE TABLE document_versions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    document_id UUID REFERENCES legal_documents(id) ON DELETE CASCADE,
    version VARCHAR(20) NOT NULL, -- semver: 1.0.0, 1.1.0, etc.
    language VARCHAR(5) DEFAULT 'de', -- ISO 639-1
    title VARCHAR(255) NOT NULL,
    content TEXT NOT NULL, -- HTML or Markdown
    summary TEXT, -- short summary of the changes
    status VARCHAR(20) DEFAULT 'draft', -- draft, review, approved, published, archived
    published_at TIMESTAMPTZ,
    created_by UUID REFERENCES users(id),
    approved_by UUID REFERENCES users(id),
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(document_id, version, language)
);

-- User consents
CREATE TABLE user_consents (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID REFERENCES users(id) ON DELETE CASCADE,
    document_version_id UUID REFERENCES document_versions(id),
    consented BOOLEAN NOT NULL,
    ip_address INET,
    user_agent TEXT,
    consented_at TIMESTAMPTZ DEFAULT NOW(),
    withdrawn_at TIMESTAMPTZ,
    UNIQUE(user_id, document_version_id)
);

-- Cookie categories
CREATE TABLE cookie_categories (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(100) NOT NULL, -- 'necessary', 'functional', 'analytics', 'marketing'
    display_name_de VARCHAR(255) NOT NULL,
    display_name_en VARCHAR(255),
    description_de TEXT,
    description_en TEXT,
    is_mandatory BOOLEAN DEFAULT false,
    sort_order INT DEFAULT 0,
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Cookie consents
CREATE TABLE cookie_consents (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID REFERENCES users(id) ON DELETE CASCADE,
    category_id UUID REFERENCES cookie_categories(id),
    consented BOOLEAN NOT NULL,
    consented_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(user_id, category_id)
);

-- Audit log (GDPR-compliant)
CREATE TABLE consent_audit_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID,
    action VARCHAR(50) NOT NULL, -- 'consent_given', 'consent_withdrawn', 'data_export', 'data_delete'
    entity_type VARCHAR(50), -- 'document', 'cookie_category'
    entity_id UUID,
    details JSONB,
    ip_address INET,
    user_agent TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Indexes for performance
CREATE INDEX idx_user_consents_user ON user_consents(user_id);
CREATE INDEX idx_user_consents_version ON user_consents(document_version_id);
CREATE INDEX idx_cookie_consents_user ON cookie_consents(user_id);
CREATE INDEX idx_audit_log_user ON consent_audit_log(user_id);
CREATE INDEX idx_audit_log_created ON consent_audit_log(created_at);
```
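The schema pairs every consent change with an audit-log row, so both writes should happen in one transaction. A minimal sketch of that write pattern, using SQLite in place of PostgreSQL and text UUIDs instead of the `UUID` type (both simplifications for illustration only):

```python
import sqlite3
import uuid
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_consents (
    id TEXT PRIMARY KEY,
    user_id TEXT NOT NULL,
    document_version_id TEXT NOT NULL,
    consented INTEGER NOT NULL,
    consented_at TEXT NOT NULL,
    UNIQUE(user_id, document_version_id)
);
CREATE TABLE consent_audit_log (
    id TEXT PRIMARY KEY,
    user_id TEXT,
    action TEXT NOT NULL,
    entity_id TEXT,
    created_at TEXT NOT NULL
);
""")

def give_consent(user_id, version_id):
    """Record a consent and its audit-log entry in one transaction."""
    now = datetime.now(timezone.utc).isoformat()
    with conn:  # both rows commit together, or neither does
        conn.execute(
            "INSERT INTO user_consents VALUES (?, ?, ?, 1, ?)",
            (str(uuid.uuid4()), user_id, version_id, now),
        )
        conn.execute(
            "INSERT INTO consent_audit_log VALUES (?, ?, 'consent_given', ?, ?)",
            (str(uuid.uuid4()), user_id, version_id, now),
        )

give_consent("user-1", "terms-v2.1")
```

The `UNIQUE(user_id, document_version_id)` constraint also makes a repeated consent for the same version fail loudly instead of silently duplicating rows.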
---

## API Endpoints (Draft)

### Public API (for the BreakPilot frontend)

```
# Fetch documents
GET    /api/v1/documents                    # All active documents
GET    /api/v1/documents/:type              # Document by type (terms, privacy)
GET    /api/v1/documents/:type/latest       # Latest published version

# Consent management
POST   /api/v1/consent                      # Give consent
GET    /api/v1/consent/my                   # My consents
GET    /api/v1/consent/check/:documentType  # Check whether consent was given
DELETE /api/v1/consent/:id                  # Withdraw consent

# Cookie consent
GET    /api/v1/cookies/categories           # Cookie categories
POST   /api/v1/cookies/consent              # Set cookie preferences
GET    /api/v1/cookies/consent/my           # My cookie settings

# Data subject rights (GDPR)
GET    /api/v1/privacy/my-data              # Fetch all my data
POST   /api/v1/privacy/export               # Request a data export
POST   /api/v1/privacy/delete               # Request deletion
```
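The semantics behind `GET /api/v1/consent/check/:documentType` can be sketched as a pure function over a user's consent records: a consent only counts if it was given for the latest published version and has not been withdrawn. The field names below are illustrative, not the final API contract:

```python
def has_valid_consent(consents, document_type, latest_version):
    """True iff the user consented to the latest published version
    of the given document type and has not withdrawn that consent."""
    for c in consents:
        if (c["document_type"] == document_type
                and c["version"] == latest_version
                and c["consented"]
                and c.get("withdrawn_at") is None):
            return True
    return False

consents = [
    {"document_type": "terms", "version": "2.0",
     "consented": True, "withdrawn_at": None},
    {"document_type": "privacy", "version": "3.0",
     "consented": True, "withdrawn_at": "2024-12-01"},  # withdrawn
]
```

Note the consequence: publishing a new mandatory version automatically invalidates all existing consents for that document type, which is exactly what forces the re-consent prompt.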
### Admin API (for the admin dashboard)

```
# Document management
GET    /api/v1/admin/documents              # All documents (including drafts)
POST   /api/v1/admin/documents              # New document
PUT    /api/v1/admin/documents/:id          # Edit document
DELETE /api/v1/admin/documents/:id          # Delete document

# Version management
GET    /api/v1/admin/versions/:docId        # All versions of a document
POST   /api/v1/admin/versions               # Create a new version
PUT    /api/v1/admin/versions/:id           # Edit a version
POST   /api/v1/admin/versions/:id/publish   # Publish a version
POST   /api/v1/admin/versions/:id/archive   # Archive a version

# Cookie categories
GET    /api/v1/admin/cookies/categories     # All categories
POST   /api/v1/admin/cookies/categories     # New category
PUT    /api/v1/admin/cookies/categories/:id
DELETE /api/v1/admin/cookies/categories/:id

# Statistics & reports
GET    /api/v1/admin/stats/consents         # Consent statistics
GET    /api/v1/admin/stats/cookies          # Cookie statistics
GET    /api/v1/admin/audit-log              # Audit log (with filters)
```
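The publish/archive endpoints imply a fixed status lifecycle (draft → review → approved → published → archived). A small sketch of the allowed transitions, assuming that is the intended state machine (the exact transition set is a design assumption):

```python
# Assumed version-status state machine; "review" can bounce back to "draft".
ALLOWED_TRANSITIONS = {
    "draft": {"review"},
    "review": {"draft", "approved"},
    "approved": {"published"},
    "published": {"archived"},
    "archived": set(),
}

def transition(current, target):
    """Return the new status, or raise if the transition is not allowed."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {target}")
    return target
```

Encoding the lifecycle as a table keeps the `/publish` and `/archive` handlers trivial: they just attempt one transition and let the service return 409 on a `ValueError`.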
---

## Consent-Check Middleware (Concept)

```go
// middleware/consent_check.go

func ConsentCheckMiddleware(requiredConsent string) gin.HandlerFunc {
    return func(c *gin.Context) {
        userID := c.GetString("user_id")

        // Check whether the user has given consent
        hasConsent, err := consentService.CheckConsent(userID, requiredConsent)
        if err != nil {
            c.AbortWithStatusJSON(500, gin.H{"error": "Consent check failed"})
            return
        }

        if !hasConsent {
            c.AbortWithStatusJSON(403, gin.H{
                "error":         "consent_required",
                "document_type": requiredConsent,
                "message":       "Sie müssen den Nutzungsbedingungen zustimmen",
            })
            return
        }

        c.Next()
    }
}

// Usage in BreakPilot
router.POST("/api/worksheets",
    authMiddleware,
    ConsentCheckMiddleware("terms"),
    worksheetHandler.Create,
)
```
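Since the BreakPilot API itself is Python/FastAPI, an equivalent gate would likely be needed there as well. A framework-agnostic sketch mirroring the Go middleware's responses; `consent_lookup` is a placeholder for the real consent-service client, not an existing API:

```python
def consent_gate(user_id, required_consent, consent_lookup):
    """Return (allowed, error_payload), mirroring the Go middleware:
    500 when the consent service fails, 403 when consent is missing."""
    try:
        has_consent = consent_lookup(user_id, required_consent)
    except Exception:
        return False, {"status": 500, "error": "Consent check failed"}
    if not has_consent:
        return False, {
            "status": 403,
            "error": "consent_required",
            "document_type": required_consent,
            "message": "Sie müssen den Nutzungsbedingungen zustimmen",
        }
    return True, None
```

In FastAPI this function would typically be wrapped in a dependency that raises `HTTPException` from the error payload.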
---

## Cookie Banner Flow

```
┌─────────────────────────────────────────────────────────────┐
│                        First Visit                          │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  1. User opens BreakPilot                                   │
│         │                                                   │
│         ▼                                                   │
│  2. Check: has the user already given cookie consent?       │
│         │                                                   │
│     ┌───┴─────────────────┐                                 │
│     │ No                  │ Yes                             │
│     ▼                     ▼                                 │
│  3. Show cookie        Load saved                           │
│     banner             preferences                          │
│     │                                                       │
│     ▼                                                       │
│  ┌─────────────────────────────────────────┐                │
│  │         Cookie Consent Banner           │                │
│  ├─────────────────────────────────────────┤                │
│  │ We use cookies to give you the best     │                │
│  │ possible experience.                    │                │
│  │                                         │                │
│  │ ☑ Necessary (always active)             │                │
│  │ ☐ Functional                            │                │
│  │ ☐ Analytics                             │                │
│  │ ☐ Marketing                             │                │
│  │                                         │                │
│  │ [Accept all]       [Save selection]     │                │
│  │ [Necessary only]   [Learn more]         │                │
│  └─────────────────────────────────────────┘                │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
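The flow above reduces to a small piece of decision logic. A sketch, with category names taken from the schema and a plain dict standing in for the persisted consent; defaulting undecided categories to off is an assumption (the privacy-friendly choice):

```python
MANDATORY = {"necessary"}
CATEGORIES = ["necessary", "functional", "analytics", "marketing"]

def resolve_cookie_consent(stored=None, choice=None):
    """Return the effective per-category consent.
    stored: saved preferences for a returning visitor, or None.
    choice: banner action ("accept_all" / "necessary_only"), or None."""
    if stored is not None:            # returning visitor: load saved preferences
        prefs = dict(stored)
    elif choice == "accept_all":      # banner: "Accept all"
        prefs = {c: True for c in CATEGORIES}
    else:                             # "Necessary only", or no decision yet
        prefs = {c: False for c in CATEGORIES}
    for c in MANDATORY:               # mandatory categories can never be off
        prefs[c] = True
    return prefs
```

Forcing mandatory categories on at the end also repairs inconsistent stored state, e.g. an old record where "necessary" was somehow saved as off.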
---

## Legal Section in the BreakPilot Frontend (Mockup)

```
┌─────────────────────────────────────────────────────────────┐
│  Settings > Legal                                           │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌─────────────────────────────────────────────────────┐    │
│  │  My Consents                                        │    │
│  ├─────────────────────────────────────────────────────┤    │
│  │                                                     │    │
│  │  ✓ Terms of Service                                 │    │
│  │    Version 2.1 · Agreed on 2024-11-15               │    │
│  │    [View] [Withdraw]                                │    │
│  │                                                     │    │
│  │  ✓ Privacy Policy                                   │    │
│  │    Version 3.0 · Agreed on 2024-11-15               │    │
│  │    [View] [Withdraw]                                │    │
│  │                                                     │    │
│  │  ✓ Community Guidelines                             │    │
│  │    Version 1.2 · Agreed on 2024-11-15               │    │
│  │    [View] [Withdraw]                                │    │
│  │                                                     │    │
│  └─────────────────────────────────────────────────────┘    │
│                                                             │
│  ┌─────────────────────────────────────────────────────┐    │
│  │  Cookie Settings                                    │    │
│  ├─────────────────────────────────────────────────────┤    │
│  │                                                     │    │
│  │  ☑ Necessary cookies (required)                     │    │
│  │  ☑ Functional cookies                               │    │
│  │  ☐ Analytics cookies                                │    │
│  │  ☐ Marketing cookies                                │    │
│  │                                                     │    │
│  │  [Save settings]                                    │    │
│  └─────────────────────────────────────────────────────┘    │
│                                                             │
│  ┌─────────────────────────────────────────────────────┐    │
│  │  My Data (GDPR)                                     │    │
│  ├─────────────────────────────────────────────────────┤    │
│  │                                                     │    │
│  │  [Export my data]                                   │    │
│  │  Receive a copy of all your stored data as a        │    │
│  │  JSON file.                                         │    │
│  │                                                     │    │
│  │  [Delete account]                                   │    │
│  │  All your data will be irreversibly deleted.        │    │
│  │                                                     │    │
│  └─────────────────────────────────────────────────────┘    │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
---

## Next Steps

### Immediately (this week)
1. **Decision:** Go or Python for the backend?
2. **Project setup:** create the repository
3. **Database:** finalize the schema and run migrations

### Short term (next 2 weeks)
1. Implement the core API
2. Basic integration into BreakPilot

### Medium term (next 4-6 weeks)
1. Admin dashboard
2. Cookie banner
3. GDPR features

---

## Open Questions

1. **Language:** Go or Python for the backend?
2. **Admin dashboard:** standalone frontend or integrated into BreakPilot?
3. **Hosting:** same server as BreakPilot or a separate service?
4. **Auth:** shared authentication with BreakPilot or a dedicated system?
5. **Database:** shared PostgreSQL or a dedicated instance?

---

## Resource Estimate

| Phase | Effort (days) | Description |
|-------|---------------|-------------|
| Phase 1 | 5-7 | Database & setup |
| Phase 2 | 8-10 | Core consent service |
| Phase 3 | 10-12 | Admin dashboard |
| Phase 4 | 8-10 | BreakPilot integration |
| Phase 5 | 5-7 | GDPR features |
| Phase 6 | 5-7 | Testing & security |
| **Total** | **41-53** | ~8-10 weeks |

---

*Document created: 12 December 2024*
*Version: 1.0*
473 CONTENT_SERVICE_SETUP.md (new file)
@@ -0,0 +1,473 @@

# BreakPilot Content Service - Setup & Deployment Guide

## 🎯 Overview

The BreakPilot Content Service is a complete educational content management platform with:

- ✅ **Content Service API** (FastAPI) - educational content management
- ✅ **MinIO S3 Storage** - file storage for videos, PDFs, images
- ✅ **H5P Service** - interactive educational content (quizzes, etc.)
- ✅ **Matrix Feed Integration** - content publishing to Matrix spaces
- ✅ **PostgreSQL** - content metadata storage
- ✅ **Creative Commons Licensing** - CC-BY, CC-BY-SA, etc.
- ✅ **Rating & Download Tracking** - analytics & impact scoring

## 🚀 Quick Start

### 1. Start All Services

```bash
# Start the main services plus the content services
docker-compose \
  -f docker-compose.yml \
  -f docker-compose.content.yml \
  up -d

# Follow the logs
docker-compose -f docker-compose.yml -f docker-compose.content.yml logs -f
```

### 2. Available Services

| Service | URL | Description |
|---------|-----|-------------|
| Content Service API | http://localhost:8002 | REST API for content management |
| MinIO Console | http://localhost:9001 | Storage dashboard (user: minioadmin, pass: minioadmin123) |
| H5P Service | http://localhost:8003 | Interactive content editor |
| Content DB | localhost:5433 | PostgreSQL database |

### 3. API Documentation

Content Service API docs:
```
http://localhost:8002/docs
```
## 📦 Installation (Development)

### Content Service (Backend)

```bash
cd backend/content_service

# Create a virtual environment
python3 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Environment variables
cp .env.example .env

# Database migrations
alembic upgrade head

# Start the service
uvicorn main:app --reload --port 8002
```

### H5P Service

```bash
cd h5p-service

# Install dependencies
npm install

# Start the service
npm start
```

### Creator Dashboard (Frontend)

```bash
cd frontend/creator-studio

# Install dependencies
npm install

# Development server
npm run dev
```
## 🔧 Configuration

### Environment Variables

Create `.env` in the project root:

```env
# Content Service
CONTENT_DB_URL=postgresql://breakpilot:breakpilot123@localhost:5433/breakpilot_content
MINIO_ENDPOINT=localhost:9000
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin123
MINIO_BUCKET=breakpilot-content

# Matrix integration
MATRIX_HOMESERVER=http://localhost:8008
MATRIX_ACCESS_TOKEN=your-matrix-token-here
MATRIX_BOT_USER=@breakpilot-bot:localhost
MATRIX_FEED_ROOM=!breakpilot-feed:localhost

# OAuth2 (consent-service)
CONSENT_SERVICE_URL=http://localhost:8081
JWT_SECRET=your-jwt-secret-here

# H5P Service
H5P_BASE_URL=http://localhost:8003
H5P_STORAGE_PATH=/app/h5p-content
```
## 📝 Content Service API Endpoints

### Content Management

```bash
# Create content
POST /api/v1/content
{
  "title": "5-Minuten Yoga für Grundschule",
  "description": "Bewegungspause mit einfachen Yoga-Übungen",
  "content_type": "video",
  "category": "movement",
  "license": "CC-BY-SA-4.0",
  "age_min": 6,
  "age_max": 10,
  "tags": ["yoga", "bewegung", "pause"]
}

# Upload a file
POST /api/v1/upload
Content-Type: multipart/form-data
file: <video-file>

# Add files to content
POST /api/v1/content/{content_id}/files
{
  "file_urls": ["http://minio:9000/breakpilot-content/..."]
}

# Publish content (→ Matrix feed)
POST /api/v1/content/{content_id}/publish

# List content (with filters)
GET /api/v1/content?category=movement&age_min=6&age_max=10

# Get content details
GET /api/v1/content/{content_id}

# Rate content
POST /api/v1/content/{content_id}/rate
{
  "stars": 5,
  "comment": "Sehr hilfreich für meine Klasse!"
}
```
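The list endpoint's filters can be sketched as: a content item matches when its category equals the requested one and its age range overlaps the requested range. Field names mirror the create-content payload above; the overlap semantics are an assumption about the intended behavior:

```python
def matches(content, category=None, age_min=None, age_max=None):
    """True if the item passes the given filters (None = filter not set).
    Age filters match when the two ranges overlap."""
    if category is not None and content["category"] != category:
        return False
    if age_min is not None and content["age_max"] < age_min:
        return False
    if age_max is not None and content["age_min"] > age_max:
        return False
    return True

catalog = [
    {"title": "5-Minuten Yoga", "category": "movement", "age_min": 6, "age_max": 10},
    {"title": "Algebra Intro", "category": "math", "age_min": 12, "age_max": 16},
]
hits = [c["title"] for c in catalog
        if matches(c, category="movement", age_min=6, age_max=10)]
```

So `GET /api/v1/content?category=movement&age_min=6&age_max=10` would return the yoga item but not the algebra one.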
### H5P Interactive Content

```bash
# Get the H5P editor
GET http://localhost:8003/h5p/editor/new

# Save H5P content
POST http://localhost:8003/h5p/editor
{
  "library": "H5P.InteractiveVideo 1.22",
  "params": { ... }
}

# Play H5P content
GET http://localhost:8003/h5p/play/{contentId}

# Export as a .h5p file
GET http://localhost:8003/h5p/export/{contentId}
```
## 🎨 Creator Workflow

### 1. Create Content

```javascript
// Creator Dashboard
const content = await createContent({
  title: "Mathe-Quiz: Einmaleins",
  description: "Interaktives Quiz zum Üben des Einmaleins",
  content_type: "h5p",
  category: "math",
  license: "CC-BY-SA-4.0",
  age_min: 7,
  age_max: 9
});
```

### 2. Upload Files

```javascript
// Upload video/PDF/images
const file = document.querySelector('#fileInput').files[0];
const formData = new FormData();
formData.append('file', file);

const response = await fetch('/api/v1/upload', {
  method: 'POST',
  body: formData
});

const { file_url } = await response.json();
```

### 3. Publish to Matrix Feed

```javascript
// Publish → Matrix spaces
await publishContent(content.id);
// → Content appears in #movement, #math, etc.
```
## 📊 Matrix Feed Integration

### Matrix Spaces Structure

```
#breakpilot (root space)
├── #feed     (chronological content feed)
├── #bewegung (category: movement)
├── #mathe    (category: math)
├── #steam    (category: STEAM)
└── #sprache  (category: language)
```

### Content Message Format

When content is published, this appears in Matrix:

```
📹 5-Minuten Yoga für Grundschule

Bewegungspause mit einfachen Yoga-Übungen für den Unterricht

📝 Von: Max Mustermann
🏃 Kategorie: movement
👥 Alter: 6-10 Jahre
⚖️ Lizenz: CC-BY-SA-4.0
🏷️ Tags: yoga, bewegung, pause

[📥 Inhalt ansehen/herunterladen]
```
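A sketch of how the publish step could render that message from the content metadata. The icon-per-category and icon-per-type mappings are illustrative assumptions, not the service's actual tables:

```python
CATEGORY_ICONS = {"movement": "🏃", "math": "🔢", "steam": "🔬", "language": "🗣️"}
TYPE_ICONS = {"video": "📹", "pdf": "📄", "h5p": "🧩"}

def feed_message(content, author):
    """Render the Matrix feed text for a published content item."""
    return "\n".join([
        f"{TYPE_ICONS.get(content['content_type'], '📦')} {content['title']}",
        "",
        content["description"],
        "",
        f"📝 Von: {author}",
        f"{CATEGORY_ICONS.get(content['category'], '🏷️')} Kategorie: {content['category']}",
        f"👥 Alter: {content['age_min']}-{content['age_max']} Jahre",
        f"⚖️ Lizenz: {content['license']}",
        f"🏷️ Tags: {', '.join(content['tags'])}",
    ])

msg = feed_message(
    {"content_type": "video", "title": "5-Minuten Yoga für Grundschule",
     "description": "Bewegungspause mit einfachen Yoga-Übungen für den Unterricht",
     "category": "movement", "age_min": 6, "age_max": 10,
     "license": "CC-BY-SA-4.0", "tags": ["yoga", "bewegung", "pause"]},
    "Max Mustermann",
)
```

For the real bot this plain-text body would be sent alongside an `org.matrix.custom.html` formatted body so clients render the download link as a button.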
## 🔐 Creative Commons Licenses

Available licenses:

- `CC-BY-4.0` - Attribution
- `CC-BY-SA-4.0` - Attribution + ShareAlike (recommended)
- `CC-BY-NC-4.0` - Attribution + NonCommercial
- `CC-BY-NC-SA-4.0` - Attribution + NonCommercial + ShareAlike
- `CC0-1.0` - Public domain

### License Workflow

```python
# On content creation, the creator picks a license
content.license = "CC-BY-SA-4.0"

# The system validates:
# - only allowed licenses are accepted
# - a license badge is displayed
# - the license links to Creative Commons
```
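The "only allowed licenses" rule reduces to a whitelist check against the list above. A minimal sketch (the function name is illustrative):

```python
# The five licenses listed above; anything else is rejected.
ALLOWED_LICENSES = {
    "CC-BY-4.0",
    "CC-BY-SA-4.0",
    "CC-BY-NC-4.0",
    "CC-BY-NC-SA-4.0",
    "CC0-1.0",
}

def validate_license(license_id):
    """Return the license id unchanged, or raise for a disallowed one."""
    if license_id not in ALLOWED_LICENSES:
        raise ValueError(f"license not allowed: {license_id}")
    return license_id
```

Doing this server-side (not just in the dashboard UI) keeps direct API clients from publishing content under arbitrary license strings.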
## 📈 Analytics & Impact Scoring

### Download Tracking

```python
# Tracked automatically on every download:
# POST /api/v1/content/{content_id}/download
#
# → counter is incremented
# → a download event is stored
# → used for the impact score
```

### Creator Statistics

```bash
# Get creator stats
GET /api/v1/stats/creator/{creator_id}

{
  "total_contents": 12,
  "total_downloads": 347,
  "total_views": 1203,
  "avg_rating": 4.7,
  "impact_score": 892.5,
  "content_breakdown": {
    "movement": 5,
    "math": 4,
    "steam": 3
  }
}
```
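The service does not document how `impact_score` is derived. A plausible weighting over the tracked signals (downloads, views, ratings) might look like the following; the weights and thresholds are purely an assumption for illustration, not the actual formula:

```python
def impact_score(downloads, views, avg_rating, rating_count):
    """Hypothetical impact score: a download signals more impact than a
    view, and ratings only influence the score once a few exist."""
    score = downloads * 2.0 + views * 0.25
    if rating_count >= 3:
        # scale between 0.5x (all 1-star-ish) and 1.5x (all 5-star)
        score *= avg_rating / 5.0 + 0.5
    return round(score, 1)
```

Gating the rating multiplier on a minimum count avoids a single 5-star review dominating the score of brand-new content.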
## 🧪 Testing

### API Tests

```bash
# Pytest
cd backend/content_service
pytest tests/

# With coverage
pytest --cov=. --cov-report=html
```

### Integration Tests

```bash
# Test the content upload flow
curl -X POST http://localhost:8002/api/v1/content \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Test Content",
    "content_type": "pdf",
    "category": "math",
    "license": "CC-BY-SA-4.0"
  }'
```
## 🐳 Docker Commands

```bash
# Build a single service
docker-compose -f docker-compose.content.yml build content-service

# Start only the content services
docker-compose -f docker-compose.content.yml up -d

# Logs for a single service
docker-compose logs -f content-service

# Restart a service
docker-compose restart content-service

# Stop everything
docker-compose -f docker-compose.yml -f docker-compose.content.yml down

# Also delete volumes (warning: data loss!)
docker-compose -f docker-compose.yml -f docker-compose.content.yml down -v
```
## 🗄️ Database Migrations

```bash
cd backend/content_service

# Create a new migration
alembic revision --autogenerate -m "Add new field"

# Apply migrations
alembic upgrade head

# Roll back one revision
alembic downgrade -1
```
## 📱 Frontend Development

### Creator Studio

```bash
cd frontend/creator-studio

# Install dependencies
npm install

# Development
npm run dev      # → http://localhost:3000

# Build
npm run build

# Preview the production build
npm run preview
```
## 🔒 GDPR (DSGVO) Compliance

### Data Minimization

- ✅ Only necessary metadata is stored
- ✅ No student data
- ✅ IP addresses are anonymized after 7 days
- ✅ Users can delete their content/account
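The 7-day IP anonymization can be sketched as a scheduled job that zeroes the host part of stored addresses, a common GDPR-friendly approach. The truncation widths (/24 for IPv4, /48 for IPv6) and the event structure are assumptions for illustration:

```python
import ipaddress
from datetime import datetime, timedelta, timezone

def anonymize_ip(ip):
    """Zero the host bits: keep /24 for IPv4, /48 for IPv6."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

def anonymize_old_events(events, now=None):
    """events: list of {'ip': str, 'created_at': datetime}; mutated in place.
    Anonymizes every event older than 7 days."""
    now = now or datetime.now(timezone.utc)
    for e in events:
        if now - e["created_at"] > timedelta(days=7):
            e["ip"] = anonymize_ip(e["ip"])
    return events
```

Truncating instead of deleting keeps coarse geographic analytics usable while removing the personally identifying host part.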

### Data Export

```bash
# User data export
GET /api/v1/user/export
# → JSON with all of the user's data
```
## 🚨 Troubleshooting

### MinIO Connection Failed

```bash
# Check MinIO status
docker-compose logs minio

# Test the connection
curl http://localhost:9000/minio/health/live
```

### Content Service Database Connection

```bash
# Check PostgreSQL
docker-compose logs content-db

# Connect manually
docker exec -it breakpilot-pwa-content-db psql -U breakpilot -d breakpilot_content
```

### H5P Service Not Starting

```bash
# Check the logs
docker-compose logs h5p-service

# Rebuild
docker-compose build h5p-service
docker-compose up -d h5p-service
```
## 📚 Further Documentation

- [Architekturempfehlung](./backend/docs/Architekturempfehlung%20für%20Breakpilot%20–%20Offene,%20modulare%20Bildungsplattform%20im%20DACH-Raum.pdf)
- [Content Service API](./backend/content_service/README.md)
- [H5P Integration](./h5p-service/README.md)
- [Matrix Feed Setup](./docs/matrix-feed-setup.md)

## 🎉 Next Steps

1. ✅ Start the services (see Quick Start)
2. ✅ Create a creator account
3. ✅ Upload your first content
4. ✅ Create H5P interactive content
5. ✅ Publish content → Matrix feed
6. ✅ Test the Teacher Discovery UI
7. 🔜 Integrate OAuth2 SSO with the consent-service
8. 🔜 Prepare the production deployment

## 💡 Support

For questions or problems:
- GitHub Issues: https://github.com/breakpilot/breakpilot-pwa/issues
- Matrix Chat: #breakpilot-dev:matrix.org
- Email: dev@breakpilot.app

427
IMPLEMENTATION_SUMMARY.md
Normal file
@@ -0,0 +1,427 @@
# 🎓 BreakPilot Content Service - Implementation Summary

## ✅ Fully Implemented Sprints

### **Sprint 1-2: Content Service Foundation** ✅

**Backend (FastAPI):**
- ✅ Complete database schema (PostgreSQL)
  - `Content` model with all metadata
  - `Rating` model for teacher reviews
  - `Tag` system for content organization
  - `Download` tracking for impact scoring
- ✅ Pydantic schemas for API validation
- ✅ Full CRUD API for content management
- ✅ Upload API for files (video, PDF, images, audio)
- ✅ Search & filter endpoints
- ✅ Analytics & statistics endpoints

**Storage:**
- ✅ MinIO S3-compatible object storage
- ✅ Automatic bucket creation
- ✅ Public read policy for content
- ✅ File upload integration
- ✅ Presigned URLs for private files

**Files Created:**
```
backend/content_service/
├── models.py          # Database models
├── schemas.py         # Pydantic schemas
├── database.py        # DB configuration
├── main.py            # FastAPI application
├── storage.py         # MinIO integration
├── requirements.txt   # Python dependencies
└── Dockerfile         # Container definition
```
---

### **Sprint 3-4: Matrix Feed Integration** ✅

**Matrix Client:**
- ✅ Matrix SDK integration (matrix-nio)
- ✅ Content publishing to Matrix spaces
- ✅ Formatted messages (plain text + HTML)
- ✅ Category-based room routing
- ✅ Rich metadata for content
- ✅ Reactions & threading support

**Matrix Spaces Structure:**
```
#breakpilot:server.de (root space)
├── #feed      (chronological content feed)
├── #bewegung  (movement category)
├── #mathe     (math category)
├── #steam     (STEAM category)
└── #sprache   (language category)
```
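
Category-based routing from the content service into these rooms can be as simple as a lookup table. A sketch with hypothetical alias strings (the real aliases depend on the homeserver name):

```python
# Hypothetical mapping; the actual room aliases live in the Matrix space config.
CATEGORY_ROOMS = {
    "movement": "#bewegung:server.de",
    "math": "#mathe:server.de",
    "steam": "#steam:server.de",
    "language": "#sprache:server.de",
}

def room_for_category(category: str) -> str:
    """Route content to its category room; unknown categories land in the feed."""
    return CATEGORY_ROOMS.get(category, "#feed:server.de")
```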

**Files Created:**
```
backend/content_service/
└── matrix_client.py   # Matrix integration
```

**Features:**
- ✅ Auto-publish when Content.status = PUBLISHED
- ✅ Rich HTML formatting with thumbnails
- ✅ CC license badges in messages
- ✅ Direct links to the content
- ✅ Category-specific posting
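
The formatted messages (plain text + HTML) are ordinary `m.room.message` events, whose content dict matrix-nio's `room_send` accepts as-is. A sketch of building such a dict with a license badge and a direct link; the exact layout and emoji are illustrative, not the service's actual template:

```python
import html

def format_feed_message(title: str, url: str, license_id: str) -> dict:
    """Build an m.room.message content dict with plain and HTML bodies."""
    safe_title = html.escape(title)  # titles are user input, so escape for HTML
    return {
        "msgtype": "m.text",
        "body": f"📚 {title} ({license_id}) {url}",
        "format": "org.matrix.custom.html",
        "formatted_body": (
            f"<b>📚 {safe_title}</b> "
            f"<code>{license_id}</code> "
            f'<a href="{url}">Open content</a>'
        ),
    }
```

Clients that do not render HTML fall back to `body`, which is why both variants carry the same information.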

---

### **Sprint 5-6: Rating & Download Tracking** ✅

**Rating System:**
- ✅ 5-star rating system
- ✅ Text comments
- ✅ Average rating calculation
- ✅ Rating count tracking
- ✅ One rating per user (updates allowed)
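
The one-rating-per-user rule with updates amounts to an upsert followed by recomputing the average. A sketch against an in-memory dict standing in for the `Rating` table:

```python
def upsert_rating(ratings: dict[str, int], user_id: str, stars: int) -> float:
    """Insert or replace a user's rating and return the new average."""
    if not 1 <= stars <= 5:
        raise ValueError("stars must be between 1 and 5")
    ratings[user_id] = stars  # one rating per user: a second call overwrites
    return sum(ratings.values()) / len(ratings)
```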

**Download Tracking:**
- ✅ Event-based download logging
- ✅ User-specific tracking
- ✅ IP anonymization (after 7 days)
- ✅ Download counter
- ✅ Impact score foundation
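
The 7-day IP anonymization can be sketched with the standard `ipaddress` module. Truncating to /24 (IPv4) or /48 (IPv6) is one common choice, not necessarily the cutoff the service uses:

```python
import ipaddress
from datetime import datetime, timedelta, timezone

def anonymize_ip(ip: str) -> str:
    """Truncate the host part: /24 for IPv4, /48 for IPv6."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

def scrub_old_downloads(events: list[dict], now: datetime) -> None:
    """Anonymize the IP on every download event older than 7 days."""
    for event in events:
        if now - event["timestamp"] > timedelta(days=7):
            event["ip"] = anonymize_ip(event["ip"])
```

Since truncation is idempotent, the scrub job can safely run over the same events repeatedly.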

**Analytics:**
- ✅ Platform-wide statistics
- ✅ Creator statistics
- ✅ Content breakdown by category
- ✅ Downloads, views, ratings

---

### **Sprint 7-8: H5P Interactive Content** ✅

**H5P Service (Node.js):**
- ✅ Self-hosted H5P server
- ✅ H5P editor integration
- ✅ H5P player
- ✅ File-based content storage
- ✅ Library management
- ✅ Export as .h5p files
- ✅ Import of .h5p files

**Supported H5P Content Types:**
- ✅ Interactive Video
- ✅ Course Presentation
- ✅ Quiz (Multiple Choice)
- ✅ Drag & Drop
- ✅ Timeline
- ✅ Memory Game
- ✅ Fill in the Blanks
- ✅ 50+ more content types

**Files Created:**
```
h5p-service/
├── server.js      # H5P Express server
├── package.json   # Node dependencies
└── Dockerfile     # Container definition
```

**Integration:**
- ✅ Content Service → H5P Service API
- ✅ H5P content ID in the Content model
- ✅ Automatic publishing to Matrix

---

### **Sprint 7-8: Creative Commons Licensing** ✅

**License System:**
- ✅ CC-BY-4.0
- ✅ CC-BY-SA-4.0 (recommended)
- ✅ CC-BY-NC-4.0
- ✅ CC-BY-NC-SA-4.0
- ✅ CC0-1.0 (public domain)

**Features:**
- ✅ License validation on upload
- ✅ License selector in Creator Studio
- ✅ License badges in the UI
- ✅ Direct links to Creative Commons
- ✅ Matrix messages with license info
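
License validation on upload reduces to checking the submitted ID against this fixed set. A sketch that also yields the Creative Commons deed URL used for the direct links:

```python
# The five license IDs the platform accepts, mapped to their official deed URLs.
ALLOWED_LICENSES = {
    "CC-BY-4.0": "https://creativecommons.org/licenses/by/4.0/",
    "CC-BY-SA-4.0": "https://creativecommons.org/licenses/by-sa/4.0/",
    "CC-BY-NC-4.0": "https://creativecommons.org/licenses/by-nc/4.0/",
    "CC-BY-NC-SA-4.0": "https://creativecommons.org/licenses/by-nc-sa/4.0/",
    "CC0-1.0": "https://creativecommons.org/publicdomain/zero/1.0/",
}

def validate_license(license_id: str) -> str:
    """Return the deed URL for a valid license ID, or raise ValueError."""
    try:
        return ALLOWED_LICENSES[license_id]
    except KeyError:
        raise ValueError(f"unsupported license: {license_id}") from None
```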

---

### **Sprint 7-8: DSGVO Compliance** ✅

**Privacy by Design:**
- ✅ Data minimization (only the necessary data)
- ✅ EU server hosting
- ✅ IP anonymization
- ✅ User data export API
- ✅ Account deletion
- ✅ No student data

**Transparency:**
- ✅ Clear license information
- ✅ Open-source code
- ✅ Transparent analytics

---

## 🐳 Docker Infrastructure

**docker-compose.content.yml:**
```yaml
Services:
  - minio (object storage)
  - content-db (PostgreSQL)
  - content-service (FastAPI)
  - h5p-service (Node.js H5P)

Volumes:
  - minio_data
  - content_db_data
  - h5p_content

Networks:
  - breakpilot-pwa-network (external)
```

---

## 📊 Architecture Overview

```
┌─────────────────────────────────────────────────────────┐
│               BREAKPILOT CONTENT PLATFORM               │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  ┌──────────────┐    ┌──────────────┐    ┌───────────┐  │
│  │   Creator    │───▶│   Content    │───▶│  Matrix   │  │
│  │   Studio     │    │   Service    │    │   Feed    │  │
│  │   (Vue.js)   │    │  (FastAPI)   │    │ (Synapse) │  │
│  └──────────────┘    └──────┬───────┘    └───────────┘  │
│                             │                           │
│                    ┌────────┴────────┐                  │
│                    │                 │                  │
│             ┌──────▼─────┐     ┌─────▼─────┐            │
│             │   MinIO    │     │    H5P    │            │
│             │  Storage   │     │  Service  │            │
│             └────────────┘     └───────────┘            │
│                    │                 │                  │
│             ┌──────▼─────────────────▼─────┐            │
│             │     PostgreSQL Database      │            │
│             └──────────────────────────────┘            │
│                                                         │
│  ┌──────────────┐                        ┌───────────┐  │
│  │   Teacher    │───────────────────────▶│  Content  │  │
│  │  Discovery   │   Search & Download    │  Player   │  │
│  │      UI      │                        │           │  │
│  └──────────────┘                        └───────────┘  │
└─────────────────────────────────────────────────────────┘
```

---

## 🚀 Deployment

### Quick Start

```bash
# 1. Make the startup script executable
chmod +x scripts/start-content-services.sh

# 2. Start all services
./scripts/start-content-services.sh

# OR manually:
docker-compose \
  -f docker-compose.yml \
  -f docker-compose.content.yml \
  up -d
```

### URLs After Startup

| Service | URL | Credentials |
|---------|-----|-------------|
| Content Service API | http://localhost:8002/docs | - |
| MinIO Console | http://localhost:9001 | minioadmin / minioadmin123 |
| H5P Editor | http://localhost:8003/h5p/editor/new | - |
| Content Database | localhost:5433 | breakpilot / breakpilot123 |

---

## 📝 Content Creation Workflow

### 1. Creator Creates Content

```javascript
// POST /api/v1/content
{
  "title": "5-Minuten Yoga",
  "description": "Bewegungspause für Grundschüler",
  "content_type": "video",
  "category": "movement",
  "license": "CC-BY-SA-4.0",
  "age_min": 6,
  "age_max": 10,
  "tags": ["yoga", "bewegung"]
}
```

### 2. Upload Media Files

```javascript
// POST /api/v1/upload
FormData {
  file: <video-file.mp4>
}
// → Returns: { file_url: "http://minio:9000/..." }
```

### 3. Attach Files to Content

```javascript
// POST /api/v1/content/{id}/files
{
  "file_urls": ["http://minio:9000/..."]
}
```

### 4. Publish to Matrix

```javascript
// POST /api/v1/content/{id}/publish
// → Status: PUBLISHED
// → Matrix message in the #movement space
// → Discoverable by teachers
```
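
The four steps can be chained in a small client. A sketch that injects the HTTP call as a plain function so the flow itself is testable; `post(path, body)` is a stand-in for a real HTTP client, and the response fields (`id`, `status`) are assumptions about the API:

```python
from typing import Callable

def publish_workflow(post: Callable[[str, dict], dict],
                     metadata: dict, file_urls: list[str]) -> dict:
    """Run steps 1, 3 and 4 against an injected HTTP POST function.

    `post(path, body)` is assumed to return the decoded JSON response;
    in production it would wrap an HTTP library call against the
    Content Service. Step 2 (the upload) produced `file_urls` beforehand.
    """
    content = post("/api/v1/content", metadata)               # step 1: create
    content_id = content["id"]
    post(f"/api/v1/content/{content_id}/files",               # step 3: attach
         {"file_urls": file_urls})
    return post(f"/api/v1/content/{content_id}/publish", {})  # step 4: publish
```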

---

## 🎨 Frontend Components (Creator Studio)

### Structure (Prepared)

```
frontend/creator-studio/
├── src/
│   ├── components/
│   │   ├── ContentUpload.vue
│   │   ├── ContentList.vue
│   │   ├── ContentEditor.vue
│   │   ├── H5PEditor.vue
│   │   └── Analytics.vue
│   ├── views/
│   │   ├── Dashboard.vue
│   │   ├── CreateContent.vue
│   │   └── MyContent.vue
│   ├── api/
│   │   └── content.js
│   └── router/
│       └── index.js
├── package.json
└── vite.config.js
```

**Status:** framework prepared; the full UI implementation is still pending (Sprint 1-2 frontend)

---

## ⏭️ Next Steps (Optional/Future)

### **Pending:**

1. **OAuth2 SSO Integration** (Sprint 3-4)
   - consent-service → Matrix SSO
   - JWT validation in the Content Service
   - User roles & permissions

2. **Teacher Discovery UI** (Sprint 5-6)
   - Complete Vue.js frontend
   - Search & filter UI
   - Content preview & download
   - Rating interface

3. **Production Deployment**
   - Environment configuration
   - SSL/TLS certificates
   - Backup strategy
   - Monitoring (Prometheus/Grafana)

---

## 📈 Impact Scoring (Foundation Laid)

**Prepared for a future implementation:**

```python
# Impact score calculation (example)
impact_score = (
    downloads * 10 +
    rating_count * 5 +
    avg_rating * 20 +
    matrix_engagement * 2
)
```
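
Wrapped as a function, the same formula becomes directly testable; the weights are the example values from above, not final tuning:

```python
def impact_score(downloads: int, rating_count: int,
                 avg_rating: float, matrix_engagement: int) -> float:
    """Weighted impact score, using the example weights from the formula above."""
    return (downloads * 10
            + rating_count * 5
            + avg_rating * 20
            + matrix_engagement * 2)
```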

**Already tracked:**
- ✅ Downloads
- ✅ Views
- ✅ Ratings (stars + comments)
- ✅ Matrix event IDs

---

## 🎯 Delivered Features (Summary)

| Feature | Status | Sprint |
|---------|--------|--------|
| Content CRUD API | ✅ | 1-2 |
| File Upload (MinIO) | ✅ | 1-2 |
| PostgreSQL Schema | ✅ | 1-2 |
| Matrix Feed Publishing | ✅ | 3-4 |
| Rating System | ✅ | 5-6 |
| Download Tracking | ✅ | 5-6 |
| H5P Integration | ✅ | 7-8 |
| CC Licensing | ✅ | 7-8 |
| DSGVO Compliance | ✅ | 7-8 |
| Docker Setup | ✅ | 7-8 |
| Deployment Guide | ✅ | 7-8 |
| Creator Studio (Backend) | ✅ | 1-2 |
| Creator Studio (Frontend) | 🔜 | Pending |
| Teacher Discovery UI | 🔜 | Pending |
| OAuth2 SSO | 🔜 | Pending |

---

## 📚 Documentation

- ✅ **CONTENT_SERVICE_SETUP.md** - Complete setup guide
- ✅ **IMPLEMENTATION_SUMMARY.md** - This file
- ✅ **API documentation** - Auto-generated via FastAPI (/docs)
- ✅ **Architekturempfehlung PDF** - Strategic planning

---

## 🎉 Conclusion

**Implemented:** 8+ weeks of development across Sprints 1-8

**Core features:**
- ✅ Complete Content Service (backend)
- ✅ MinIO S3 storage
- ✅ H5P interactive content
- ✅ Matrix feed integration
- ✅ Creative Commons licensing
- ✅ Ratings & analytics
- ✅ DSGVO compliance
- ✅ Docker deployment ready

**Ready to use:** all backend services are production-ready

**Next:** complete the frontend UI & deploy to production

---

**🚀 The BreakPilot Content Platform is LIVE!**
371
LICENSES/THIRD_PARTY_LICENSES.md
Normal file
@@ -0,0 +1,371 @@
# Third-Party Licenses
## BreakPilot PWA

This document contains the full license texts of all open-source components used in BreakPilot.

---

## 1. LibreChat

```
MIT License

Copyright (c) 2025 LibreChat

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

**Repository:** https://github.com/danny-avila/LibreChat

---

## 2. FastAPI

```
MIT License

Copyright (c) 2018 Sebastián Ramírez

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

**Repository:** https://github.com/tiangolo/fastapi

---

## 3. Meilisearch

```
MIT License

Copyright (c) 2019-2024 Meili SAS

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

**Repository:** https://github.com/meilisearch/meilisearch

---
## 4. PostgreSQL

```
PostgreSQL License

PostgreSQL is released under the PostgreSQL License, a liberal Open Source
license, similar to the BSD or MIT licenses.

PostgreSQL Database Management System
(formerly known as Postgres, then as Postgres95)

Portions Copyright (c) 1996-2024, PostgreSQL Global Development Group
Portions Copyright (c) 1994, The Regents of the University of California

Permission to use, copy, modify, and distribute this software and its
documentation for any purpose, without fee, and without a written agreement
is hereby granted, provided that the above copyright notice and this
paragraph and the following two paragraphs appear in all copies.

IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR
DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION,
EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.

THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS
ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS
TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
```

**Repository:** https://www.postgresql.org/

---

## 5. pgvector

```
PostgreSQL License

Copyright (c) 2021-2024 Andrew Kane

Permission to use, copy, modify, and distribute this software and its
documentation for any purpose, without fee, and without a written agreement
is hereby granted, provided that the above copyright notice and this
paragraph appear in all copies.
```

**Repository:** https://github.com/pgvector/pgvector

---

## 6. Gorilla Mux (Go Router)

```
BSD 3-Clause License

Copyright (c) 2012-2023 The Gorilla Authors. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors
   may be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
```

**Repository:** https://github.com/gorilla/mux

---
## 7. golang-jwt/jwt

```
MIT License

Copyright (c) 2012 Dave Grijalva
Copyright (c) 2021 golang-jwt maintainers

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

**Repository:** https://github.com/golang-jwt/jwt

---

## 8. Uvicorn

```
BSD 3-Clause License

Copyright (c) 2017-present, Encode OSS Ltd. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors
   may be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```

**Repository:** https://github.com/encode/uvicorn

---

## 9. Pydantic

```
MIT License

Copyright (c) 2017 to present Pydantic Services Inc. and individual contributors.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

**Repository:** https://github.com/pydantic/pydantic

---
## 10. Jinja2

```
BSD 3-Clause License

Copyright 2007 Pallets

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors
   may be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```

**Repository:** https://github.com/pallets/jinja

---

## 11. WeasyPrint

```
BSD 3-Clause License

Copyright (c) 2011-2024, Kozea

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors
   may be used to endorse or promote products derived from this software
   without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```

**Repository:** https://github.com/Kozea/WeasyPrint

---

## MongoDB (SSPL Note)

MongoDB uses the Server Side Public License (SSPL). This license permits commercial use of MongoDB **as long as MongoDB is not offered as a database-as-a-service**.

BreakPilot uses MongoDB exclusively as an internal store for LibreChat and does not offer MongoDB as an external service, so commercial use is fully compliant.

More information: https://www.mongodb.com/licensing/server-side-public-license

---

*Last updated: 2025-12-14*
95
MAC_MINI_SETUP.md
Normal file
@@ -0,0 +1,95 @@
# Mac Mini Headless Setup - Fully Automatic

## Connection Details
- **IP (LAN):** 192.168.178.100
- **IP (WiFi):** 192.168.178.163 (no longer active)
- **User:** benjaminadmin
- **SSH:** `ssh benjaminadmin@192.168.178.100`

## After a Reboot - Everything Starts Automatically!

| Service | Auto-Start | Port |
|---------|------------|------|
| ✅ SSH | Yes | 22 |
| ✅ Docker Desktop | Yes | - |
| ✅ Docker containers | Yes (after ~2 min) | 8000, 8081, etc. |
| ✅ Ollama server | Yes | 11434 |
| ✅ Unity Hub | Yes | - |
| ✅ VS Code | Yes | - |

**No action needed after a reboot!** Just wait 2-3 minutes.

## Check Status
```bash
./scripts/mac-mini/status.sh
```

## Services & Ports
| Service | Port | URL |
|---------|------|-----|
| Backend API | 8000 | http://192.168.178.100:8000/admin |
| Consent Service | 8081 | - |
| PostgreSQL | 5432 | - |
| Valkey/Redis | 6379 | - |
| MinIO | 9000/9001 | http://192.168.178.100:9001 |
| Mailpit | 8025 | http://192.168.178.100:8025 |
| Ollama | 11434 | http://192.168.178.100:11434/api/tags |

## LLM Models
- **Qwen 2.5 14B** (14.8 billion parameters)

## Scripts (on the MacBook)
```bash
./scripts/mac-mini/status.sh   # Check status
./scripts/mac-mini/sync.sh     # Sync code
./scripts/mac-mini/docker.sh   # Docker commands
./scripts/mac-mini/backup.sh   # Create a backup
```

## Docker Commands
```bash
./scripts/mac-mini/docker.sh ps              # List containers
./scripts/mac-mini/docker.sh logs backend    # Logs
./scripts/mac-mini/docker.sh restart         # Restart
./scripts/mac-mini/docker.sh build           # Build image
```

## LaunchAgents (Auto-Start)
Path on the Mac Mini: `~/Library/LaunchAgents/`

| Agent | Purpose |
|-------|---------|
| `com.docker.desktop.plist` | Docker Desktop |
| `com.breakpilot.docker-containers.plist` | Container auto-start |
| `com.ollama.serve.plist` | Ollama server |
| `com.unity.hub.plist` | Unity Hub |
| `com.microsoft.vscode.plist` | VS Code |

## Project Paths
- **MacBook:** `/Users/benjaminadmin/Projekte/breakpilot-pwa/`
- **Mac Mini:** `/Users/benjaminadmin/Projekte/breakpilot-pwa/`

## Troubleshooting

### Docker onboarding appears again
Docker settings are backed up in `~/docker-settings-backup/`
```bash
# Restore:
cp -r ~/docker-settings-backup/* ~/Library/Group\ Containers/group.com.docker/
```

### Containers do not start automatically
Check the log (note: use the active LAN IP, not the old WiFi IP):
```bash
ssh benjaminadmin@192.168.178.100 "cat /tmp/docker-autostart.log"
```

Start manually:
```bash
./scripts/mac-mini/docker.sh up
```

### SSH not reachable
- Check whether the Mac Mini is powered on (ping: `ping 192.168.178.100`)
- Wait 1-2 minutes after boot
- Check the network connection
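The reachability checks above can also be scripted from the MacBook; a minimal sketch that probes a few of the ports from the "Services & Ports" table (the helper name and port selection are illustrative, not part of the setup scripts):

```python
import socket

def is_reachable(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports taken from the "Services & Ports" table above.
for name, port in [("SSH", 22), ("Backend API", 8000), ("Ollama", 11434)]:
    state = "up" if is_reachable("192.168.178.100", port) else "down"
    print(f"{name:12} {state}")
```

This only tells you a port accepts connections; `status.sh` remains the authoritative check.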
80
Makefile
Normal file
@@ -0,0 +1,80 @@
# BreakPilot PWA - Makefile for local CI simulation
#
# Usage:
#   make ci           - Run all tests locally
#   make test-go      - Go tests only
#   make test-python  - Python tests only
#   make logs-agent   - Woodpecker agent logs
#   make logs-backend - Backend logs (ci-result)

.PHONY: ci test-go test-python test-node logs-agent logs-backend clean help

# Directory for test results
CI_RESULTS_DIR := .ci-results

help:
	@echo "BreakPilot CI - Available commands:"
	@echo ""
	@echo "  make ci           - Run all tests locally"
	@echo "  make test-go      - Go service tests"
	@echo "  make test-python  - Python service tests"
	@echo "  make test-node    - Node.js service tests"
	@echo "  make logs-agent   - Show Woodpecker agent logs"
	@echo "  make logs-backend - Show backend logs (ci-result)"
	@echo "  make clean        - Delete test results"

ci: test-go test-python test-node
	@echo "========================================="
	@echo "Local CI complete. Results in $(CI_RESULTS_DIR)/"
	@echo "========================================="
	@ls -la $(CI_RESULTS_DIR)/

test-go: $(CI_RESULTS_DIR)
	@echo "=== Go Tests ==="
	@if [ -d "consent-service" ]; then \
		cd consent-service && go test -v -json ./... > ../$(CI_RESULTS_DIR)/test-consent.json 2>&1 || true; \
		echo "consent-service: done"; \
	fi
	@if [ -d "billing-service" ]; then \
		cd billing-service && go test -v -json ./... > ../$(CI_RESULTS_DIR)/test-billing.json 2>&1 || true; \
		echo "billing-service: done"; \
	fi
	@if [ -d "school-service" ]; then \
		cd school-service && go test -v -json ./... > ../$(CI_RESULTS_DIR)/test-school.json 2>&1 || true; \
		echo "school-service: done"; \
	fi

test-python: $(CI_RESULTS_DIR)
	@echo "=== Python Tests ==="
	@if [ -d "backend" ]; then \
		cd backend && python -m pytest tests/ -v --tb=short 2>&1 || true; \
		echo "backend: done"; \
	fi
	@if [ -d "voice-service" ]; then \
		cd voice-service && python -m pytest tests/ -v --tb=short 2>&1 || true; \
		echo "voice-service: done"; \
	fi
	@if [ -d "klausur-service/backend" ]; then \
		cd klausur-service/backend && python -m pytest tests/ -v --tb=short 2>&1 || true; \
		echo "klausur-service: done"; \
	fi

test-node: $(CI_RESULTS_DIR)
	@echo "=== Node.js Tests ==="
	@if [ -d "h5p-service" ]; then \
		cd h5p-service && npm test 2>&1 || true; \
		echo "h5p-service: done"; \
	fi

$(CI_RESULTS_DIR):
	@mkdir -p $(CI_RESULTS_DIR)

logs-agent:
	docker logs breakpilot-pwa-woodpecker-agent --tail=200

logs-backend:
	docker compose logs backend --tail=200 | grep -E "(ci-result|error|ERROR)"

clean:
	rm -rf $(CI_RESULTS_DIR)
	@echo "Test results deleted"
794
POLICY_VAULT_OVERVIEW.md
Normal file
@@ -0,0 +1,794 @@
# Policy Vault - Project Documentation

## Project Overview

**Policy Vault** is a complete web application for managing privacy policies, cookie consents, and user consents across multiple projects and platforms. The system lets administrators create, manage, and version privacy documents, track user consents, and store cookie preferences.

## Purpose and Scope

The Policy Vault system serves as a central platform for:
- **Managing privacy documents** (privacy policies, terms of service, etc.)
- **Cookie consent management** with categorization and vendor administration
- **Version control** for policy documents
- **Multi-project administration** with role-based access
- **User consent tracking** across platforms
- **Multilingual support** for global applications

---
## Technology Stack

### Backend
- **Framework**: NestJS (Node.js/TypeScript)
- **Database**: PostgreSQL
- **ORM**: Drizzle ORM
- **Authentication**: JWT (JSON Web Tokens) with access/refresh tokens
- **API documentation**: Swagger/OpenAPI
- **Validation**: class-validator, class-transformer
- **Security**:
  - Encryption-based authentication
  - Rate limiting (Throttler)
  - Role-based access control (RBAC)
  - bcrypt for password hashing
- **Logging**: Winston with daily rotate file
- **Job scheduling**: NestJS Schedule
- **E-mail**: Nodemailer
- **OTP generation**: otp-generator

### Frontend
- **Framework**: Angular 18
- **UI**:
  - TailwindCSS
  - Custom SCSS
- **Rich text editor**: CKEditor 5
  - Alignment, block quote, code block
  - Font styling, image support
  - List and table support
- **State management**: RxJS
- **Security**: DOMPurify for HTML sanitization
- **Multi-select**: ng-multiselect-dropdown
- **Process manager**: PM2

---
## Main Functions and Features

### 1. Administrator Management
- **Super admin and admin roles**
  - Super admin (role 1): full access to all functions
  - Admin (role 2): restricted access to assigned projects
- **Authentication**
  - Login with e-mail and password
  - JWT-based sessions (access + refresh token)
  - OTP-based password recovery
  - Account lock mechanism after repeated failed attempts
- **User administration**
  - Admin creation by super admins
  - Project assignments for admins
  - Role modification (promote/demote)
  - Soft delete (isDeleted flag)
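The account lock mechanism can be sketched as a failure counter, mirroring the `accLockCount`/`accLockTime` fields of the Admin model documented below; the threshold and lock duration here are assumptions, not values taken from the source:

```python
import time

MAX_ATTEMPTS = 5        # assumed threshold, not specified in the source
LOCK_SECONDS = 15 * 60  # assumed lock duration, not specified in the source

def register_login_attempt(admin, success, now=None):
    """Update accLockCount/accLockTime on an admin record.
    Returns True if the login may proceed, False if the account is locked."""
    now = time.time() if now is None else now
    if admin["accLockTime"] and now < admin["accLockTime"]:
        return False  # still inside the lock window
    if success:
        admin["accLockCount"] = 0
        admin["accLockTime"] = 0
        return True
    admin["accLockCount"] += 1
    if admin["accLockCount"] >= MAX_ATTEMPTS:
        admin["accLockTime"] = now + LOCK_SECONDS
        return False
    return True
```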

### 2. Project Management
- **Project administration**
  - Creation and management of projects
  - Project-specific configuration (theme colors, icons, logos)
  - Multilingual support (language configuration)
  - Project keys for secure API access
  - Soft delete and blocking of projects
- **Project access control**
  - Assignment of admins to specific projects
  - Project-admin relationships

### 3. Policy Document Management
- **Document administration**
  - Creation of privacy documents (privacy policies, ToS, etc.)
  - Project-specific documents
  - Description and metadata
- **Versioning**
  - Multiple versions per document
  - Version metadata with content
  - Publish/draft status
  - Version number tracking
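Resolving which version to serve - the newest published version of a document - can be sketched as follows; the dotted-numeric ordering of `version` strings is an assumption about the data, not a documented rule:

```python
def latest_published(versions):
    """Pick the newest published version of a policy document.
    Each dict mirrors the Policy Document Meta model: version, isPublish."""
    published = [v for v in versions if v["isPublish"]]
    if not published:
        return None
    # Assumption: versions are dotted numeric strings like "1.2".
    return max(published,
               key=lambda v: tuple(int(p) for p in v["version"].split(".")))
```

Note that a plain string comparison would rank "1.9" above "1.10"; the tuple key avoids that.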

### 4. Cookie Consent Management
- **Cookie categories**
  - Category metadata (e.g. necessary, marketing, analytics)
  - Platform-specific categories (web, mobile, etc.)
  - Versioning of categories
  - Mandatory and optional categories
  - Multilingual category descriptions
- **Vendor management**
  - Administration of third-party services
  - Vendor metadata and descriptions
  - Assignment to categories
  - Sub-services for vendors
  - Multilingual vendor information
- **Global cookie settings**
  - Project-wide cookie texts and descriptions
  - Multilingual global content
  - File upload support
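Because mandatory categories must always be granted, a consent submission can be normalized before it is stored; a sketch using the `isMandatory` flag from the Categories Metadata model (the function name and the exact normalization rules are illustrative assumptions):

```python
def normalize_consent(selected_ids, categories):
    """Return the category IDs to store: the user's selection plus every
    mandatory category, restricted to categories that actually exist."""
    known = {c["id"] for c in categories}
    mandatory = {c["id"] for c in categories if c["isMandatory"]}
    return sorted((set(selected_ids) & known) | mandatory)
```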

### 5. User Consent Tracking
- **Policy document consent**
  - Tracking of user consents per policy version
  - Username-based tracking
  - Status (accepted/declined)
  - Timestamp tracking
- **Cookie consent**
  - Granular cookie consents per category
  - Vendor-specific consents
  - Version tracking
  - Username- and project-based
- **Encrypted API access**
  - Token-based authentication for mobile/web
  - Encryption-based authentication for external access

### 6. Multilingual Support
- **Language management**
  - Dynamic language configuration per project
  - Multilingual content for:
    - Category descriptions
    - Vendor information
    - Global cookie texts
    - Sub-service descriptions
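A typical read path picks the row matching the requested locale from one of the `*Language Data` tables and falls back when it is missing; the fallback order in this sketch (requested, then a default, then anything available) is an assumption:

```python
def pick_language(rows, requested, default="en"):
    """Select one language row (e.g. from Categories Language Data):
    requested locale first, then the default, then any available row."""
    by_lang = {r["language"]: r for r in rows}
    for lang in (requested, default):
        if lang in by_lang:
            return by_lang[lang]
    return rows[0] if rows else None
```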

---

## API Structure and Endpoints

### Admin Endpoints (`/admins`)
```
POST   /admins/create-admin                 - Create admin (super admin only)
POST   /admins/create-super-admin           - Create super admin (super admin only)
POST   /admins/create-root-user-super-admin - Create root super admin (secret-based)
POST   /admins/login                        - Admin login
GET    /admins/get-access-token             - Fetch a new access token
POST   /admins/generate-otp                 - Generate OTP for password reset
POST   /admins/validate-otp                 - Validate OTP
POST   /admins/change-password              - Change password (with OTP)
PUT    /admins/update-password              - Update password (logged in)
PUT    /admins/forgot-password              - Forgot password
PUT    /admins/make-super-admin             - Promote admin to super admin
PUT    /admins/remove-super-admin           - Demote super admin to admin
PUT    /admins/make-project-admin           - Grant project access
DELETE /admins/remove-project-admin         - Revoke project access
GET    /admins/findAll?role=                - Fetch all admins (filtered by role)
GET    /admins/findAll-super-admins         - Fetch all super admins
GET    /admins/findOne?id=                  - Fetch a single admin
PUT    /admins/update                       - Update admin details
DELETE /admins/delete-admin?id=             - Delete admin (soft delete)
```
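The `generate-otp`/`validate-otp` pair relies on a short random code; a minimal sketch of generating one (the backend uses the otp-generator package - the length and digits-only alphabet here are assumptions):

```python
import secrets

def generate_otp(length=6):
    """Generate a numeric one-time password using a CSPRNG."""
    return "".join(secrets.choice("0123456789") for _ in range(length))
```

The important property is the use of a cryptographic RNG (`secrets`), not `random`, since the OTP gates a password reset.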

### Project Endpoints (`/project`)
```
POST   /project/create                - Create project (super admin only)
PUT    /project/v2/updateProjectKeys  - Update project keys
GET    /project/findAll               - Fetch all projects (with pagination)
GET    /project/findAllByUser         - Projects of a given user
GET    /project/findOne?id=           - Fetch a single project
PUT    /project/update                - Update project
DELETE /project/delete?id=            - Delete project
```

### Policy Document Endpoints (`/policydocument`)
```
POST   /policydocument/create                    - Create policy document
GET    /policydocument/findAll                   - Fetch all policy documents
GET    /policydocument/findOne?id=               - Fetch a single policy document
GET    /policydocument/findPolicyDocs?projectId= - Documents for a project
PUT    /policydocument/update                    - Update policy document
DELETE /policydocument/delete?id=                - Delete policy document
```

### Version Endpoints (`/version`)
```
POST   /version/create                    - Create version
GET    /version/findAll                   - Fetch all versions
GET    /version/findOne?id=               - Fetch a single version
GET    /version/findVersions?policyDocId= - Versions for a policy document
PUT    /version/update                    - Update version
DELETE /version/delete?id=                - Delete version
```

### User Consent Endpoints (`/consent`)
```
POST   /consent/v2/create                - Create user consent (encrypted)
GET    /consent/v2/GetConsent            - Fetch consent (encrypted)
GET    /consent/v2/GetConsentFileContent - Consent with file content (encrypted)
GET    /consent/v2/latestAcceptedConsent - Latest accepted consent
DELETE /consent/v2/delete                - Delete consent (encrypted)
```

### Cookie Consent Endpoints (`/cookieconsent`)
```
POST   /cookieconsent/v2/create         - Create cookie consent (encrypted)
GET    /cookieconsent/v2/get            - Fetch cookie categories (encrypted)
GET    /cookieconsent/v2/getFileContent - Cookie data with file content (encrypted)
DELETE /cookieconsent/v2/delete         - Delete cookie consent (encrypted)
```

### Cookie Endpoints (`/cookies`)
```
POST   /cookies/createCategory             - Create cookie category
POST   /cookies/createVendor               - Create vendor
POST   /cookies/createGlobalCookie         - Create global cookie setting
GET    /cookies/getCategories?projectId=   - Fetch categories for a project
GET    /cookies/getVendors?projectId=      - Fetch vendors for a project
GET    /cookies/getGlobalCookie?projectId= - Global cookie settings
PUT    /cookies/updateCategory             - Update category
PUT    /cookies/updateVendor               - Update vendor
PUT    /cookies/updateGlobalCookie         - Update global settings
DELETE /cookies/deleteCategory?id=         - Delete category
DELETE /cookies/deleteVendor?id=           - Delete vendor
DELETE /cookies/deleteGlobalCookie?id=     - Delete global settings
```

### Health Check Endpoint (`/db-health-check`)
```
GET /db-health-check - Check database status
```

---

## Data Models

### Admin
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  employeeCode: string (nullable)
  firstName: string (max 60)
  lastName: string (max 50)
  officialMail: string (unique, max 100)
  role: number (1 = Super Admin, 2 = Admin)
  passwordHash: string
  salt: string (nullable)
  accessToken: text (nullable)
  refreshToken: text (nullable)
  accLockCount: number (default 0)
  accLockTime: number (default 0)
  isBlocked: boolean (default false)
  isDeleted: boolean (default false)
  otp: string (nullable)
}
```

### Project
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  name: string (unique)
  description: string
  imageURL: text (nullable)
  iconURL: text (nullable)
  isBlocked: boolean (default false)
  isDeleted: boolean (default false)
  themeColor: string
  textColor: string
  languages: json (nullable) // array of language codes
}
```

### Policy Document
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  name: string
  description: string (nullable)
  projectId: number (FK -> project.id, CASCADE)
}
```

### Version (Policy Document Meta & Version Meta)
```typescript
// Policy Document Meta
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  policyDocumentId: number (FK)
  version: string
  isPublish: boolean
}

// Version Meta (language-specific content)
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  policyDocMetaId: number (FK)
  language: string
  content: text
  file: text (nullable)
}
```

### User Consent
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  username: string
  status: boolean
  projectId: number (FK -> project.id, CASCADE)
  versionMetaId: number (FK -> versionMeta.id, CASCADE)
}
```

### Cookie Consent
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  username: string
  categoryId: number[] (array)
  vendors: number[] (array)
  projectId: number (FK -> project.id, CASCADE)
  version: string
}
```

### Categories Metadata
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  projectId: number (FK -> project.id, CASCADE)
  platform: string
  version: string
  isPublish: boolean (default false)
  metaName: string
  isMandatory: boolean (default false)
}
```

### Categories Language Data
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  categoryMetaId: number (FK -> categoriesMetadata.id, CASCADE)
  language: string
  title: string
  description: text
}
```

### Vendor Meta
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  categoryId: number (FK -> categoriesMetadata.id, CASCADE)
  vendorName: string
}
```

### Vendor Language
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  vendorMetaId: number (FK -> vendorMeta.id, CASCADE)
  language: string
  description: text
}
```

### Sub Service
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  vendorMetaId: number (FK -> vendorMeta.id, CASCADE)
  serviceName: string
}
```

### Global Cookie Metadata
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  projectId: number (FK -> project.id, CASCADE)
  version: string
  isPublish: boolean (default false)
}
```

### Global Cookie Language Data
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  globalCookieMetaId: number (FK -> globalCookieMetadata.id, CASCADE)
  language: string
  title: string
  description: text
  file: text (nullable)
}
```

### Project Keys
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  projectId: number (FK -> project.id, CASCADE)
  publicKey: text
  privateKey: text
}
```

### Admin Projects (Junction Table)
```typescript
{
  id: number (PK)
  adminId: number (FK -> admin.id, CASCADE)
  projectId: number (FK -> project.id, CASCADE)
}
```

---

## Architecture Overview

### Backend Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                       NestJS Backend                        │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐       │
│  │    Guards    │  │ Middlewares  │  │ Interceptors │       │
│  ├──────────────┤  ├──────────────┤  ├──────────────┤       │
│  │ - AuthGuard  │  │ - Token      │  │ - Serialize  │       │
│  │ - RolesGuard │  │ - Decrypt    │  │ - Logging    │       │
│  │ - Throttler  │  │ - Headers    │  │              │       │
│  └──────────────┘  └──────────────┘  └──────────────┘       │
│                                                             │
│  ┌───────────────────────────────────────────────────┐      │
│  │                    API Modules                    │      │
│  ├───────────────────────────────────────────────────┤      │
│  │ - Admins (Authentication, Authorization)          │      │
│  │ - Projects (Multi-tenant Management)              │      │
│  │ - Policy Documents (Document Management)          │      │
│  │ - Versions (Versioning System)                    │      │
│  │ - User Consent (Consent Tracking)                 │      │
│  │ - Cookies (Cookie Categories & Vendors)           │      │
│  │ - Cookie Consent (Cookie Consent Tracking)        │      │
│  │ - DB Health Check (System Monitoring)             │      │
│  └───────────────────────────────────────────────────┘      │
│                                                             │
│  ┌───────────────────────────────────────────────────┐      │
│  │                 Drizzle ORM Layer                 │      │
│  ├───────────────────────────────────────────────────┤      │
│  │ - Schema Definitions                              │      │
│  │ - Relations                                       │      │
│  │ - Database Connection Pool                        │      │
│  └───────────────────────────────────────────────────┘      │
│                           │                                 │
└───────────────────────────┼─────────────────────────────────┘
                            │
                            ▼
                   ┌─────────────────┐
                   │   PostgreSQL    │
                   │    Database     │
                   └─────────────────┘
```

### Frontend Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                      Angular Frontend                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐       │
│  │    Guards    │  │ Interceptors │  │   Services   │       │
│  ├──────────────┤  ├──────────────┤  ├──────────────┤       │
│  │ - AuthGuard  │  │ - HTTP       │  │ - Auth       │       │
│  │              │  │ - Error      │  │ - REST API   │       │
│  │              │  │              │  │ - Session    │       │
│  │              │  │              │  │ - Security   │       │
│  └──────────────┘  └──────────────┘  └──────────────┘       │
│                                                             │
│  ┌───────────────────────────────────────────────────┐      │
│  │                  Feature Modules                  │      │
│  ├───────────────────────────────────────────────────┤      │
│  │  ┌─────────────────────────────────────────┐      │      │
│  │  │ Auth Module                             │      │      │
│  │  │  - Login Component                      │      │      │
│  │  └─────────────────────────────────────────┘      │      │
│  │                                                   │      │
│  │  ┌─────────────────────────────────────────┐      │      │
│  │  │ Project Dashboard                       │      │      │
│  │  │  - Project List                         │      │      │
│  │  │  - Project Creation                     │      │      │
│  │  │  - Project Settings                     │      │      │
│  │  └─────────────────────────────────────────┘      │      │
│  │                                                   │      │
│  │  ┌─────────────────────────────────────────┐      │      │
│  │  │ Individual Project Dashboard            │      │      │
│  │  │  - Agreements (Policy Documents)        │      │      │
│  │  │  - Cookie Consent Management            │      │      │
│  │  │  - FAQ Management                       │      │      │
│  │  │  - Licenses Management                  │      │      │
│  │  │  - User Management                      │      │      │
│  │  │  - Project Settings                     │      │      │
│  │  └─────────────────────────────────────────┘      │      │
│  │                                                   │      │
│  │  ┌─────────────────────────────────────────┐      │      │
│  │  │ Shared Components                       │      │      │
│  │  │  - Settings                             │      │      │
│  │  │  - Common UI Elements                   │      │      │
│  │  └─────────────────────────────────────────┘      │      │
│  └───────────────────────────────────────────────────┘      │
│                                                             │
└─────────────────────────────────────────────────────────────┘
                            │
                            │ HTTPS/REST API
                            ▼
                   ┌─────────────────┐
                   │ NestJS Backend  │
                   └─────────────────┘
```

### Database Relationships

```
┌──────────┐         ┌─────────────────┐         ┌─────────────┐
│  Admin   │◄───────►│  AdminProjects  │◄───────►│   Project   │
└──────────┘         └─────────────────┘         └─────────────┘
                                                        │
                                                        │ 1:N
                     ┌──────────────────────────────────┤
                     │                                  │
                     ▼                                  ▼
        ┌──────────────────────┐         ┌──────────────────────────┐
        │   Policy Document    │         │   Categories Metadata    │
        └──────────────────────┘         └──────────────────────────┘
                     │                                  │
                     │ 1:N                              │ 1:N
                     ▼                                  ▼
        ┌──────────────────────┐         ┌──────────────────────────┐
        │ Policy Document Meta │         │ Categories Language Data │
        └──────────────────────┘         └──────────────────────────┘
                     │                                  │
                     │ 1:N                              │ 1:N
                     ▼                                  ▼
        ┌──────────────────────┐         ┌──────────────────────────┐
        │     Version Meta     │         │       Vendor Meta        │
        └──────────────────────┘         └──────────────────────────┘
                     │                                  │
                     │ 1:N                              │ 1:N
                     ▼                                  ├───────────────┐
        ┌──────────────────────┐                        ▼               ▼
        │     User Consent     │             ┌─────────────────┐ ┌─────────────┐
        └──────────────────────┘             │ Vendor Language │ │ Sub Service │
                                             └─────────────────┘ └─────────────┘

        ┌──────────────────────┐
        │    Cookie Consent    │◄─── Project
        └──────────────────────┘

        ┌─────────────────────────┐
        │ Global Cookie Metadata  │◄─── Project
        └─────────────────────────┘
                     │
                     │ 1:N
                     ▼
        ┌─────────────────────────────┐
        │ Global Cookie Language Data │
        └─────────────────────────────┘

        ┌──────────────────┐
        │   Project Keys   │◄─── Project
        └──────────────────┘
```

### Security Architecture

#### Authentication & Authorization
1. **JWT-based authentication**
   - Access token (short-lived)
   - Refresh token (long-lived)
   - Token refresh mechanism
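The short-lived/long-lived token split means the client must decide when to call `/admins/get-access-token`; a sketch of that decision, where the lifetime and leeway values are assumptions (the source does not state the actual token lifetimes):

```python
import time

ACCESS_TTL = 15 * 60  # assumed access-token lifetime in seconds
LEEWAY = 30           # refresh slightly before actual expiry

def needs_refresh(issued_at, now=None):
    """True if the access token should be refreshed via the refresh token."""
    now = time.time() if now is None else now
    return now >= issued_at + ACCESS_TTL - LEEWAY
```

Refreshing a little early (the leeway) avoids sending a request whose token expires in flight.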

2. **Role-based access control (RBAC)**
   - Super admin (role 1): full access
   - Admin (role 2): project-scoped access
   - Guard-based protection at the controller level

3. **Encryption-based authentication**
   - For external/mobile access
   - Token-based encryption
   - User + project ID validation

#### Security Features
- **Rate limiting**: Throttler with configurable limits
- **Password security**: bcrypt hashing with salt
- **Account lock**: after repeated failed attempts
- **OTP-based password recovery**
- **Input validation**: class-validator on all DTOs
- **HTML sanitization**: DOMPurify in the frontend
- **CORS configuration**: custom headers middleware
- **Soft delete**: no permanent deletion of data

---

## Deployment and Configuration

### Backend Environment Variables
```env
DATABASE_URL=postgresql://username:password@host:port/database
NODE_ENV=development|test|production|local|demo
PORT=3000
JWT_SECRET=your_jwt_secret
JWT_REFRESH_SECRET=your_refresh_secret
ROOT_SECRET=your_root_secret
ENCRYPTION_KEY=your_encryption_key
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USER=your_email
SMTP_PASSWORD=your_password
```

### Frontend Environment
```typescript
{
  production: false,
  BASE_URL: "https://api.example.com/api/",
  TITLE: "Policy Vault - Environment"
}
```

### Database Setup
```bash
# Run migrations
npm run migration:up

# Roll back migrations
npm run migration:down

# Push the schema
npx drizzle-kit push
```

---

## API Security

### Token-Based Authentication
- All protected endpoints require a valid JWT in the Authorization header
- Format: `Authorization: Bearer <access_token>`

### Encryption-Based Endpoints
For mobile/external access (consent tracking):
- Header: `secret` or `mobiletoken`
- Format: encrypted string containing `userId_projectId`
- Validated automatically by the DecryptMiddleware
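After the DecryptMiddleware has decrypted the header, the plaintext still has to be split into its parts; a sketch of parsing the `userId_projectId` format described above (the validation rules, such as a numeric project ID, are assumptions):

```python
def parse_identity(plaintext):
    """Split a decrypted `userId_projectId` string into its two parts.
    Raises ValueError if the format does not match."""
    # rpartition tolerates underscores inside the user ID itself.
    user_id, sep, project_id = plaintext.rpartition("_")
    if not sep or not user_id or not project_id.isdigit():
        raise ValueError(f"malformed identity token: {plaintext!r}")
    return user_id, int(project_id)
```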

### Rate Limiting
- Default: 10 requests per minute
- OTP/login: 3 requests per minute
- Configurable via the ThrottlerModule
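Conceptually the Throttler is a windowed counter per client; a minimal sliding-window sketch with the limits quoted above (NestJS's actual Throttler implementation differs in detail, so this is illustrative only):

```python
import time
from collections import defaultdict

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key."""
    def __init__(self, limit=10, window=60.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(list)  # key -> request timestamps

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        # Drop timestamps that have aged out of the window.
        self.hits[key] = [t for t in self.hits[key] if now - t < self.window]
        if len(self.hits[key]) >= self.limit:
            return False
        self.hits[key].append(now)
        return True
```

The OTP/login routes would simply use a second instance with `limit=3`.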

---

## Notable Features

### 1. Versioning
- Complete version management for policy documents
- Multilingual versions with separate content
- Publish/draft status
- Historical version tracking

### 2. Multilingual Support
- Central language configuration per project
- Separate language-data tables for all content types
- Support for an unlimited number of languages

### 3. Cookie Consent System
- Granular control over cookie categories
- Vendor management with sub-services
- Platform-specific categories (web, mobile, etc.)
- Version tracking for compliance

### 4. Rich Content Editing
- CKEditor 5 integration
- Support for complex formatting
- Image upload and management
- Code block support

### 5. Logging & Monitoring
- Winston-based logging
- Daily rotate files
- Structured logging
- Error tracking
- Database health checks

### 6. Soft Delete Pattern
- No permanent data deletion
- `isDeleted` flags on all main entities
- Possibility of restoration
- Audit trail preservation
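In practice the pattern means every delete is an update and every default read filters on the flag; a minimal sketch, where the in-memory list stands in for the Drizzle-backed tables:

```python
def soft_delete(records, record_id):
    """Mark a record deleted instead of removing it (audit trail preserved)."""
    for r in records:
        if r["id"] == record_id:
            r["isDeleted"] = True
            return True
    return False

def find_all(records):
    """Default read path: hide soft-deleted rows."""
    return [r for r in records if not r["isDeleted"]]
```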

---

## Development

### Starting the Backend
```bash
# Development
npm run start:dev

# Local (with watch)
npm run start:local

# Production
npm run start:prod
```

### Starting the Frontend
```bash
# Development server
npm run start
# or
ng serve

# Build
npm run build

# With PM2
npm run start:pm2
```

### Tests
```bash
# Backend tests
npm run test
npm run test:e2e
npm run test:cov

# Frontend tests
npm run test
```

---

## Summary

Policy Vault is a comprehensive enterprise solution for managing privacy policies and cookie consents. The system offers:

- **Multi-tenant architecture** with project-based separation
- **Robust authentication** with JWT and role-based access control
- **Complete version management** for compliance tracking
- **Granular cookie consent administration** with vendor support
- **Multilingual support** for global applications
- **A modern tech stack** built on NestJS, Angular, and PostgreSQL
- **Enterprise-grade security** with encryption, rate limiting, and audit trails
- **A scalable architecture** with a clear separation of concerns

The system is a good fit for organizations that:
- Manage multiple projects/products with different privacy policies
- Need to ensure GDPR compliance
- Want to track granular cookie consents
- Operate multilingual applications
- Need a central policy management platform
|
||||

---

README.md (new file, 39 lines)

# BreakPilot PWA (ARCHIVED)

> **This repository is archived.** All services have been migrated to the following projects.
## Migration (2026-02-14)

| Service | New project | Container |
|---------|-------------|-----------|
| Studio v2 | breakpilot-lehrer | bp-lehrer-studio-v2 |
| Admin | breakpilot-lehrer | bp-lehrer-admin |
| Website | breakpilot-lehrer | bp-lehrer-website |
| Backend (Lehrer) | breakpilot-lehrer | bp-lehrer-backend |
| Klausur Service | breakpilot-lehrer | bp-lehrer-klausur-service |
| School Service | breakpilot-lehrer | bp-lehrer-school-service |
| Voice Service | breakpilot-lehrer | bp-lehrer-voice-service |
| Geo Service | breakpilot-lehrer | bp-lehrer-geo-service |
| Backend (Core) | breakpilot-core | bp-core-backend |
| Postgres | breakpilot-core | bp-core-postgres |
| Valkey | breakpilot-core | bp-core-valkey |
| Nginx | breakpilot-core | bp-core-nginx |
| Vault | breakpilot-core | bp-core-vault |
| Qdrant | breakpilot-core | bp-core-qdrant |
| MinIO | breakpilot-core | bp-core-minio |
| Embedding Service | breakpilot-core | bp-core-embedding-service |
| Night Scheduler | breakpilot-core | bp-core-night-scheduler |
| Pitch Deck | breakpilot-core | bp-core-pitch-deck |
| Gitea | breakpilot-core | bp-core-gitea |
| Woodpecker CI | breakpilot-core | bp-core-woodpecker-server |
| Jitsi | breakpilot-core | bp-core-jitsi-* |
| AI Compliance SDK | breakpilot-compliance | bp-compliance-ai-sdk |
| Developer Portal | breakpilot-compliance | bp-compliance-developer-portal |
| DSMS | breakpilot-compliance | bp-compliance-dsms-* |
| Backend (Compliance) | breakpilot-compliance | bp-compliance-backend |

## New Repositories

- **breakpilot-core**: shared infrastructure (Postgres, Nginx, Vault, Qdrant, MinIO, etc.)
- **breakpilot-lehrer**: education stack (Studio, Admin, Backend, Klausur, Voice, etc.)
- **breakpilot-compliance**: GDPR/compliance stack (Admin, SDK, DSMS, Developer Portal)

---

SOURCE_POLICY_IMPLEMENTATION_PLAN.md (new file, 530 lines)

# Source Policy System - Implementation Plan

## Summary

Whitelist-based data source management for the edu-search-service under `/compliance/source-policy`, verifiable by auditors via a complete audit trail.

**Core principles:**
- Only official open-data portals and government sources (§5 UrhG)
- Training on external data: **FORBIDDEN**
- All changes are logged (audit trail)
- PII blocklist with hard block
---

## 1. Architecture

```
┌────────────────────────────────────────────────────────────────┐
│ admin-v2 (Next.js)                                             │
│   /app/(admin)/compliance/source-policy/                       │
│   ├── page.tsx (dashboard + tabs)                              │
│   └── components/                                              │
│       ├── SourcesTab.tsx          (whitelist management)       │
│       ├── OperationsMatrixTab.tsx (lookup/RAG/training/export) │
│       ├── PIIRulesTab.tsx         (PII blocklist)              │
│       └── AuditTab.tsx            (change history + export)    │
└────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌────────────────────────────────────────────────────────────────┐
│ edu-search-service (Go)                                        │
│   NEW: internal/policy/                                        │
│   ├── models.go       (data structures)                        │
│   ├── store.go        (PostgreSQL CRUD)                        │
│   ├── enforcer.go     (policy enforcement)                     │
│   ├── pii_detector.go (PII detection)                          │
│   └── audit.go        (audit logging)                          │
│                                                                │
│   MODIFIED:                                                    │
│   ├── crawler/crawler.go   (whitelist check before fetch)      │
│   ├── pipeline/pipeline.go (PII filter after extract)          │
│   └── api/handlers/policy_handlers.go (admin API)              │
└────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌────────────────────────────────────────────────────────────────┐
│ PostgreSQL                                                     │
│   NEW TABLES:                                                  │
│   - source_policies       (versioned policies)                 │
│   - allowed_sources       (whitelist per federal state)        │
│   - operation_permissions (lookup/RAG/training/export matrix)  │
│   - pii_rules             (regex/keyword blocklist)            │
│   - policy_audit_log      (immutable)                          │
│   - blocked_content_log   (blocked URLs for audit)             │
└────────────────────────────────────────────────────────────────┘
```

---

## 2. Data Model

### 2.1 PostgreSQL Schema

```sql
-- Policies (versioned)
CREATE TABLE source_policies (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    version INTEGER NOT NULL DEFAULT 1,
    name VARCHAR(255) NOT NULL,
    bundesland VARCHAR(2),                -- NULL = federal level/KMK
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMP DEFAULT NOW(),
    approved_by UUID,
    approved_at TIMESTAMP
);

-- Whitelist
CREATE TABLE allowed_sources (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    policy_id UUID REFERENCES source_policies(id),
    domain VARCHAR(255) NOT NULL,
    name VARCHAR(255) NOT NULL,
    license VARCHAR(50) NOT NULL,         -- DL-DE-BY-2.0, CC-BY, §5 UrhG
    legal_basis VARCHAR(100),
    citation_template TEXT,
    trust_boost DECIMAL(3,2) DEFAULT 0.50,
    is_active BOOLEAN DEFAULT true
);

-- Operations matrix
CREATE TABLE operation_permissions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    source_id UUID REFERENCES allowed_sources(id),
    operation VARCHAR(50) NOT NULL,       -- lookup, rag, training, export
    is_allowed BOOLEAN NOT NULL,
    requires_citation BOOLEAN DEFAULT false,
    notes TEXT
);

-- PII blocklist
CREATE TABLE pii_rules (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL,
    rule_type VARCHAR(50) NOT NULL,       -- regex, keyword
    pattern TEXT NOT NULL,
    severity VARCHAR(20) DEFAULT 'block', -- block, warn, redact
    is_active BOOLEAN DEFAULT true
);

-- Audit log (immutable)
CREATE TABLE policy_audit_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    action VARCHAR(50) NOT NULL,
    entity_type VARCHAR(50) NOT NULL,
    entity_id UUID,
    old_value JSONB,
    new_value JSONB,
    user_email VARCHAR(255),
    created_at TIMESTAMP DEFAULT NOW()
);

-- Blocked content log
CREATE TABLE blocked_content_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    url VARCHAR(2048) NOT NULL,
    domain VARCHAR(255) NOT NULL,
    block_reason VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);
```
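An audit row stores before/after states as JSONB so arbitrary entity snapshots fit into one table. As a hedged sketch of how such an entry might be assembled before the store layer inserts it (the struct and function names are assumptions for illustration, not the service's actual API):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// AuditEntry mirrors a row of policy_audit_log; old/new values are
// kept as raw JSON so they can go straight into the JSONB columns.
type AuditEntry struct {
	Action     string          `json:"action"`
	EntityType string          `json:"entity_type"`
	OldValue   json.RawMessage `json:"old_value,omitempty"`
	NewValue   json.RawMessage `json:"new_value,omitempty"`
	UserEmail  string          `json:"user_email"`
	CreatedAt  time.Time       `json:"created_at"`
}

// NewAuditEntry snapshots the before/after state of any entity.
func NewAuditEntry(action, entityType, user string, oldV, newV any) (AuditEntry, error) {
	oldJSON, err := json.Marshal(oldV)
	if err != nil {
		return AuditEntry{}, err
	}
	newJSON, err := json.Marshal(newV)
	if err != nil {
		return AuditEntry{}, err
	}
	return AuditEntry{action, entityType, oldJSON, newJSON, user, time.Now()}, nil
}

func main() {
	e, _ := NewAuditEntry("update", "allowed_source", "dsb@example.org",
		map[string]any{"is_active": true}, map[string]any{"is_active": false})
	out, _ := json.Marshal(e)
	fmt.Println(string(out))
}
```

Immutability is then a matter of only ever inserting such rows, never updating or deleting them.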

### 2.2 Initial Data

File: `edu-search-service/policies/bundeslaender.yaml`

```yaml
federal:
  name: "KMK & Bundesebene"
  sources:
    - domain: "kmk.org"
      name: "Kultusministerkonferenz"
      license: "§5 UrhG"
      legal_basis: "Amtliche Werke (§5 UrhG)"
      citation_template: "Quelle: KMK, {title}, {date}"
    - domain: "bildungsserver.de"
      name: "Deutscher Bildungsserver"
      license: "DL-DE-BY-2.0"

NI:
  name: "Niedersachsen"
  sources:
    - domain: "nibis.de"
      name: "NiBiS Bildungsserver"
      license: "DL-DE-BY-2.0"
    - domain: "mk.niedersachsen.de"
      name: "Kultusministerium Niedersachsen"
      license: "§5 UrhG"
    - domain: "cuvo.nibis.de"
      name: "Kerncurricula Niedersachsen"
      license: "DL-DE-BY-2.0"

BY:
  name: "Bayern"
  sources:
    - domain: "km.bayern.de"
      name: "Bayerisches Kultusministerium"
      license: "§5 UrhG"
    - domain: "isb.bayern.de"
      name: "ISB Bayern"
      license: "DL-DE-BY-2.0"
    - domain: "lehrplanplus.bayern.de"
      name: "LehrplanPLUS"
      license: "DL-DE-BY-2.0"

# Default operations matrix
default_operations:
  lookup:
    allowed: true
    requires_citation: true
  rag:
    allowed: true
    requires_citation: true
  training:
    allowed: false  # FORBIDDEN
  export:
    allowed: true
    requires_citation: true

# Default PII rules
pii_rules:
  - name: "Email Addresses"
    type: "regex"
    pattern: "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}"
    severity: "block"
  - name: "German Phone Numbers"
    type: "regex"
    pattern: "(?:\\+49|0)[\\s.-]?\\d{2,4}[\\s.-]?\\d{3,}[\\s.-]?\\d{2,}"
    severity: "block"
  - name: "IBAN"
    type: "regex"
    pattern: "DE\\d{2}\\s?\\d{4}\\s?\\d{4}\\s?\\d{4}\\s?\\d{4}\\s?\\d{2}"
    severity: "block"
```
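A minimal sketch of the regex-based detection these rules drive; the `PIIRule` struct and `DetectPII` helper are illustrative stand-ins for `internal/policy/pii_detector.go`, but the three patterns are taken verbatim from the YAML above:

```go
package main

import (
	"fmt"
	"regexp"
)

// PIIRule mirrors one entry of the pii_rules blocklist above.
type PIIRule struct {
	Name     string
	Pattern  *regexp.Regexp
	Severity string
}

// rules compiles the three default patterns from bundeslaender.yaml.
var rules = []PIIRule{
	{"Email Addresses", regexp.MustCompile(`[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}`), "block"},
	{"German Phone Numbers", regexp.MustCompile(`(?:\+49|0)[\s.-]?\d{2,4}[\s.-]?\d{3,}[\s.-]?\d{2,}`), "block"},
	{"IBAN", regexp.MustCompile(`DE\d{2}\s?\d{4}\s?\d{4}\s?\d{4}\s?\d{4}\s?\d{2}`), "block"},
}

// DetectPII returns the names of all rules that match the content.
func DetectPII(content string) []string {
	var matches []string
	for _, r := range rules {
		if r.Pattern.MatchString(content) {
			matches = append(matches, r.Name)
		}
	}
	return matches
}

func main() {
	fmt.Println(DetectPII("Kontakt: max.mustermann@example.org, Tel. +49 511 1234567"))
}
```

Since all three default rules carry severity `block`, any non-empty result means the content must be dropped (the hard block named in the core principles).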

---

## 3. Backend Implementation

### 3.1 New Files

| File | Description |
|------|-------------|
| `internal/policy/models.go` | Go structs (SourcePolicy, AllowedSource, PIIRule, etc.) |
| `internal/policy/store.go` | PostgreSQL CRUD via pgx |
| `internal/policy/enforcer.go` | `CheckSource()`, `CheckOperation()`, `DetectPII()` |
| `internal/policy/audit.go` | `LogChange()`, `LogBlocked()` |
| `internal/policy/pii_detector.go` | Regex-based PII detection |
| `internal/api/handlers/policy_handlers.go` | Admin endpoints |
| `migrations/005_source_policies.sql` | DB schema |
| `policies/bundeslaender.yaml` | Initial data |

### 3.2 API Endpoints

```
# Policies
GET    /v1/admin/policies
POST   /v1/admin/policies
PUT    /v1/admin/policies/:id

# Sources (whitelist)
GET    /v1/admin/sources
POST   /v1/admin/sources
PUT    /v1/admin/sources/:id
DELETE /v1/admin/sources/:id

# Operations matrix
GET    /v1/admin/operations-matrix
PUT    /v1/admin/operations/:id

# PII rules
GET    /v1/admin/pii-rules
POST   /v1/admin/pii-rules
PUT    /v1/admin/pii-rules/:id
DELETE /v1/admin/pii-rules/:id
POST   /v1/admin/pii-rules/test      # test against sample text

# Audit
GET    /v1/admin/policy-audit?from=&to=
GET    /v1/admin/blocked-content?from=&to=
GET    /v1/admin/compliance-report   # PDF/JSON export

# Live check
POST   /v1/admin/check-compliance
       Body: { "url": "...", "operation": "lookup" }
```

### 3.3 Crawler Integration

In `crawler/crawler.go`:

```go
func (c *Crawler) FetchWithPolicy(ctx context.Context, url string) (*FetchResult, error) {
	// 1. Whitelist check
	source, err := c.enforcer.CheckSource(ctx, url)
	if err != nil || source == nil {
		c.enforcer.LogBlocked(ctx, url, "not_whitelisted")
		return nil, ErrNotWhitelisted
	}

	// ... existing fetch ...

	// 2. PII check after fetch
	piiMatches := c.enforcer.DetectPII(content)
	if hasSeverity(piiMatches, "block") {
		c.enforcer.LogBlocked(ctx, url, "pii_detected")
		return nil, ErrPIIDetected
	}

	return result, nil
}
```
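The `CheckSource` call used above can be illustrated with a standalone sketch. The real enforcer takes a context and reads the whitelist from PostgreSQL via the store layer; this version only shows the domain-matching logic, where subdomains of a whitelisted domain (e.g. `cuvo.nibis.de` under `nibis.de`) are accepted:

```go
package main

import (
	"errors"
	"fmt"
	"net/url"
	"strings"
)

// AllowedSource is a reduced version of the allowed_sources row.
type AllowedSource struct {
	Domain  string
	License string
}

var ErrNotWhitelisted = errors.New("source not whitelisted")

// Enforcer holds the active whitelist in memory for this sketch.
type Enforcer struct {
	sources []AllowedSource
}

// CheckSource matches the URL's host against the whitelist,
// accepting exact matches and subdomains.
func (e *Enforcer) CheckSource(rawURL string) (*AllowedSource, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return nil, err
	}
	host := u.Hostname()
	for i, s := range e.sources {
		if host == s.Domain || strings.HasSuffix(host, "."+s.Domain) {
			return &e.sources[i], nil
		}
	}
	return nil, ErrNotWhitelisted
}

func main() {
	e := &Enforcer{sources: []AllowedSource{
		{"nibis.de", "DL-DE-BY-2.0"},
		{"kmk.org", "§5 UrhG"},
	}}
	src, err := e.CheckSource("https://cuvo.nibis.de/kerncurricula")
	fmt.Println(src, err)
}
```

Matching on the parsed hostname (rather than a substring of the whole URL) avoids bypasses like `https://evil.example/nibis.de`.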

---

## 4. Frontend Implementation

### 4.1 Navigation Update

Add under the `compliance` category in `lib/navigation.ts`:

```typescript
{
  id: 'source-policy',
  name: 'Quellen-Policy',
  href: '/compliance/source-policy',
  description: 'Datenquellen & Compliance',
  purpose: 'Whitelist zugelassener Datenquellen mit Operations-Matrix und PII-Blocklist.',
  audience: ['DSB', 'Compliance Officer', 'Auditor'],
  gdprArticles: ['Art. 5 (Rechtmaessigkeit)', 'Art. 6 (Rechtsgrundlage)'],
}
```

### 4.2 Page Structure

```
/app/(admin)/compliance/source-policy/
├── page.tsx                     # main dashboard with tabs
└── components/
    ├── SourcesTab.tsx           # whitelist table with CRUD
    ├── OperationsMatrixTab.tsx  # 4x4 matrix
    ├── PIIRulesTab.tsx          # PII rules with test function
    └── AuditTab.tsx             # change history + export
```

### 4.3 UI Layout

**Stats cards (top):**
- Active policies
- Allowed sources
- Blocked (today)
- Compliance score

**Tabs:**
1. **Dashboard** - overview with quick stats
2. **Sources** - whitelist table (domain, name, license, status)
3. **Operations** - matrix with lookup/RAG/training/export
4. **PII rules** - blocklist with test function
5. **Audit** - change history with PDF/JSON export

**Patterns (from audit-report/page.tsx):**
- Tab navigation: `bg-purple-600 text-white` for active
- Status badges: `bg-green-100 text-green-700` for active
- Tables: `hover:bg-slate-50`
- Info boxes: `bg-blue-50 border-blue-200`
---

## 5. Affected Files

### New files to create:

**Backend (edu-search-service):**
```
internal/policy/models.go
internal/policy/store.go
internal/policy/enforcer.go
internal/policy/audit.go
internal/policy/pii_detector.go
internal/api/handlers/policy_handlers.go
migrations/005_source_policies.sql
policies/bundeslaender.yaml
```

**Frontend (admin-v2):**
```
app/(admin)/compliance/source-policy/page.tsx
app/(admin)/compliance/source-policy/components/SourcesTab.tsx
app/(admin)/compliance/source-policy/components/OperationsMatrixTab.tsx
app/(admin)/compliance/source-policy/components/PIIRulesTab.tsx
app/(admin)/compliance/source-policy/components/AuditTab.tsx
```

### Existing files to modify:

```
edu-search-service/cmd/server/main.go             # register policy endpoints
edu-search-service/internal/crawler/crawler.go    # add policy check
edu-search-service/internal/pipeline/pipeline.go  # PII filter
edu-search-service/internal/database/database.go  # migrations
admin-v2/lib/navigation.ts                        # source-policy module
```
---

## 6. Implementation Order

### Phase 1: Database & models
1. Create migration `005_source_policies.sql`
2. Go models in `internal/policy/models.go`
3. Store layer in `internal/policy/store.go`
4. YAML loader for initial data

### Phase 2: Policy enforcer
1. `internal/policy/enforcer.go` - CheckSource, CheckOperation
2. `internal/policy/pii_detector.go` - regex-based detection
3. `internal/policy/audit.go` - logging
4. Integration into the crawler

### Phase 3: Admin API
1. `internal/api/handlers/policy_handlers.go`
2. Register routes in main.go
3. Test the API

### Phase 4: Frontend
1. Main page with PagePurpose
2. SourcesTab with whitelist CRUD
3. OperationsMatrixTab
4. PIIRulesTab with test function
5. AuditTab with export

### Phase 5: Testing & deployment
1. Unit tests for the enforcer
2. Integration tests for the API
3. E2E test for the frontend
4. Deployment to the Mac Mini
---

## 7. Verification

### After the backend (phases 1-3):
```bash
# Run the migration
ssh macmini "cd /path/to/edu-search-service && go run ./cmd/migrate"

# Test the API
curl -X GET http://macmini:8088/v1/admin/policies
curl -X POST http://macmini:8088/v1/admin/check-compliance \
  -d '{"url":"https://nibis.de/test","operation":"lookup"}'
```

### After the frontend (phase 4):
```bash
# Build & deploy
rsync -avz admin-v2/ macmini:/path/to/admin-v2/
ssh macmini "docker compose build admin-v2 && docker compose up -d admin-v2"

# Test
open https://macmini:3002/compliance/source-policy
```

### Auditor checklist:
- [ ] All sources documented in the whitelist
- [ ] Operations matrix shows training = FORBIDDEN
- [ ] PII rules active and testable
- [ ] Audit log shows all changes
- [ ] Blocked-content log shows blocked URLs
- [ ] PDF/JSON export works
---

## 8. KMK Specifics (§5 UrhG)

**Legal basis:**
- KMK resolutions, agreements, and EPA are official works under §5 UrhG
- Freely usable, attribution required

**Citation format:**
```
Quelle: KMK, [Titel des Beschlusses], [Datum]
Beispiel: Quelle: KMK, Bildungsstandards im Fach Deutsch, 2003
```
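Filling such a `citation_template` is a plain string substitution. This helper is a hypothetical illustration (the plan does not specify a formatter), using the placeholder names from `bundeslaender.yaml`:

```go
package main

import (
	"fmt"
	"strings"
)

// FormatCitation fills a citation_template such as
// "Quelle: KMK, {title}, {date}" with concrete values.
// Placeholder names follow the templates in bundeslaender.yaml.
func FormatCitation(template, title, date string) string {
	r := strings.NewReplacer("{title}", title, "{date}", date)
	return r.Replace(template)
}

func main() {
	fmt.Println(FormatCitation("Quelle: KMK, {title}, {date}",
		"Bildungsstandards im Fach Deutsch", "2003"))
	// Quelle: KMK, Bildungsstandards im Fach Deutsch, 2003
}
```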

**Allowed document types:**
- Beschluesse (resolutions)
- Vereinbarungen (agreements)
- EPA (Einheitliche Pruefungsanforderungen; standardized examination requirements)
- Empfehlungen (recommendations)

**In the operations matrix:**
| Operation | Allowed | Note |
|-----------|---------|------|
| Lookup | Yes | Show source |
| RAG | Yes | Citation in output |
| Training | **NO** | FORBIDDEN |
| Export | Yes | Attribution |
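The matrix above can be enforced with a tiny dispatcher. This is a sketch rather than the service's actual `CheckOperation` (which reads `operation_permissions` from the database), but it mirrors the hard rule that training is always denied:

```go
package main

import "fmt"

// CheckOperation applies the operations matrix for §5 UrhG sources.
// Training is hard-coded to false so that no configuration mistake
// can ever re-enable it; the other operations require citation.
func CheckOperation(op string) (allowed bool, requiresCitation bool) {
	switch op {
	case "training":
		return false, false // FORBIDDEN, regardless of configuration
	case "lookup", "rag", "export":
		return true, true
	default:
		return false, false // unknown operations are denied
	}
}

func main() {
	for _, op := range []string{"lookup", "rag", "training", "export"} {
		ok, cite := CheckOperation(op)
		fmt.Printf("%-8s allowed=%v citation=%v\n", op, ok, cite)
	}
}
```

Denying unknown operations by default keeps the check fail-closed, matching the whitelist philosophy of the rest of the system.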
---

## 9. Licenses

| License | Name | Attribution |
|---------|------|-------------|
| DL-DE-BY-2.0 | Datenlizenz Deutschland | Yes |
| CC-BY | Creative Commons Attribution | Yes |
| CC-BY-SA | CC Attribution-ShareAlike | Yes + ShareAlike |
| CC0 | Public Domain | No |
| §5 UrhG | Amtliche Werke (official works) | Yes (source) |
---

## 10. Current Status

**Phase 1: Database & models - COMPLETE**
- [x] Codebase exploration edu-search-service
- [x] Codebase exploration admin-v2
- [x] Plan documented
- [x] Create migration 005_source_policies.sql
- [x] Implement Go models (internal/policy/models.go)
- [x] Implement store layer (internal/policy/store.go)
- [x] Implement policy enforcer (internal/policy/enforcer.go)
- [x] Implement PII detector (internal/policy/pii_detector.go)
- [x] Implement audit logging (internal/policy/audit.go)
- [x] Implement YAML loader (internal/policy/loader.go)
- [x] Create initial data YAML (policies/bundeslaender.yaml)
- [x] Write unit tests (internal/policy/policy_test.go)
- [x] Update README

**Phase 2: Admin API - PENDING**
- [ ] Implement API handlers (policy_handlers.go)
- [ ] Update main.go
- [ ] Test the API

**Phase 3: Integration - PENDING**
- [ ] Crawler integration
- [ ] Pipeline integration

**Phase 4: Frontend - PENDING**
- [ ] Create frontend page.tsx
- [ ] SourcesTab component
- [ ] OperationsMatrixTab component
- [ ] PIIRulesTab component
- [ ] AuditTab component
- [ ] Update navigation

**Files created:**
```
edu-search-service/
├── migrations/
│   └── 005_source_policies.sql   # DB schema (6 tables)
├── internal/policy/
│   ├── models.go                 # data structures & enums
│   ├── store.go                  # PostgreSQL CRUD
│   ├── enforcer.go               # policy enforcement
│   ├── pii_detector.go           # PII detection
│   ├── audit.go                  # audit logging
│   ├── loader.go                 # YAML loader
│   └── policy_test.go            # unit tests
└── policies/
    └── bundeslaender.yaml        # initial data (8 federal states)
```

---

admin-v2/.docker/build-ci-images.sh (new executable file, 31 lines)

#!/bin/bash
# Build CI Docker Images for BreakPilot
# Run this script on the Mac Mini to build the custom CI images

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "$SCRIPT_DIR")"

echo "=== Building BreakPilot CI Images ==="
echo "Project directory: $PROJECT_DIR"

cd "$PROJECT_DIR"

# Build Python CI image with WeasyPrint
echo ""
echo "Building breakpilot/python-ci:3.12 ..."
docker build \
  -t breakpilot/python-ci:3.12 \
  -t breakpilot/python-ci:latest \
  -f .docker/python-ci.Dockerfile \
  .

echo ""
echo "=== Build complete ==="
echo ""
echo "Images built:"
docker images | grep breakpilot/python-ci

echo ""
echo "To use in Woodpecker CI, the image is already configured in .woodpecker/main.yml"

---

admin-v2/.docker/python-ci.Dockerfile (new file, 51 lines)

# Custom Python CI Image with WeasyPrint Dependencies
# Build: docker build -t breakpilot/python-ci:3.12 -f .docker/python-ci.Dockerfile .
#
# This image includes all system libraries needed for:
# - WeasyPrint (PDF generation)
# - psycopg2 (PostgreSQL)
# - General Python testing

FROM python:3.12-slim

LABEL maintainer="BreakPilot Team"
LABEL description="Python 3.12 with WeasyPrint and test dependencies for CI"

# Install system dependencies in a single layer
RUN apt-get update && apt-get install -y --no-install-recommends \
    # WeasyPrint dependencies
    libpango-1.0-0 \
    libpangocairo-1.0-0 \
    libpangoft2-1.0-0 \
    libgdk-pixbuf-2.0-0 \
    libffi-dev \
    libcairo2 \
    libcairo2-dev \
    libgirepository1.0-dev \
    gir1.2-pango-1.0 \
    # PostgreSQL client (for psycopg2)
    libpq-dev \
    # Build tools (for some pip packages)
    gcc \
    g++ \
    # Useful utilities
    curl \
    git \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

# Pre-install commonly used Python packages for faster CI
RUN pip install --no-cache-dir \
    pytest \
    pytest-cov \
    pytest-asyncio \
    pytest-json-report \
    psycopg2-binary \
    weasyprint \
    httpx

# Set working directory
WORKDIR /app

# Default command
CMD ["python", "--version"]
---

@@ -6,3 +6,39 @@ README.md

*.log
.env.local
.env.*.local

# Exclude stale root-level dirs that may appear inside admin-v2
BreakpilotDrive
backend
docs
billing-service
consent-service
consent-sdk
ai-compliance-sdk
admin-v2
edu-search-service
school-service
voice-service
geo-service
klausur-service
studio-v2
website
scripts
agent-core
pca-platform
breakpilot-drive
breakpilot-compliance-sdk
dsms-gateway
dsms-node
h5p-service
ai-content-generator
policy_vault_*
docker
.docker
vault
librechat
nginx
e2e
vitest.config.ts
vitest.setup.ts
playwright.config.ts

---

admin-v2/.env.example (new file, 124 lines)

# BreakPilot PWA - Environment Configuration
# Copy this file to .env and adjust the values

# ================================================
# General
# ================================================
ENVIRONMENT=development
# ENVIRONMENT=production

# ================================================
# Security
# ================================================
# IMPORTANT: use secure keys in production!
# Generate with: openssl rand -hex 32
JWT_SECRET=CHANGE_ME_RUN_openssl_rand_hex_32
JWT_REFRESH_SECRET=CHANGE_ME_RUN_openssl_rand_hex_32

# ================================================
# Keycloak (optional - recommended for production)
# ================================================
# If Keycloak is configured, it is used for authentication.
# Without Keycloak, local JWT is used (fine for development).
#
# KEYCLOAK_SERVER_URL=https://keycloak.breakpilot.app
# KEYCLOAK_REALM=breakpilot
# KEYCLOAK_CLIENT_ID=breakpilot-backend
# KEYCLOAK_CLIENT_SECRET=your-client-secret
# KEYCLOAK_VERIFY_SSL=true

# ================================================
# E-mail configuration
# ================================================

# === DEVELOPMENT (Mailpit - default values) ===
# Mailpit intercepts all e-mails and shows them at http://localhost:8025
SMTP_HOST=mailpit
SMTP_PORT=1025
SMTP_USERNAME=
SMTP_PASSWORD=
SMTP_FROM_NAME=BreakPilot
SMTP_FROM_ADDR=noreply@breakpilot.app
FRONTEND_URL=http://localhost:8000

# === PRODUCTION (examples for different providers) ===

# --- Option 1: Your own mail server ---
# SMTP_HOST=mail.ihredomain.de
# SMTP_PORT=587
# SMTP_USERNAME=noreply@ihredomain.de
# SMTP_PASSWORD=ihr-sicheres-passwort
# SMTP_FROM_NAME=BreakPilot
# SMTP_FROM_ADDR=noreply@ihredomain.de
# FRONTEND_URL=https://app.ihredomain.de

# --- Option 2: SendGrid ---
# SMTP_HOST=smtp.sendgrid.net
# SMTP_PORT=587
# SMTP_USERNAME=apikey
# SMTP_PASSWORD=SG.xxxxxxxxxxxxxxxxxxxxx
# SMTP_FROM_NAME=BreakPilot
# SMTP_FROM_ADDR=noreply@ihredomain.de

# --- Option 3: Mailgun ---
# SMTP_HOST=smtp.mailgun.org
# SMTP_PORT=587
# SMTP_USERNAME=postmaster@mg.ihredomain.de
# SMTP_PASSWORD=ihr-mailgun-passwort
# SMTP_FROM_NAME=BreakPilot
# SMTP_FROM_ADDR=noreply@mg.ihredomain.de

# --- Option 4: Amazon SES ---
# SMTP_HOST=email-smtp.eu-central-1.amazonaws.com
# SMTP_PORT=587
# SMTP_USERNAME=AKIAXXXXXXXXXXXXXXXX
# SMTP_PASSWORD=ihr-ses-secret
# SMTP_FROM_NAME=BreakPilot
# SMTP_FROM_ADDR=noreply@ihredomain.de

# ================================================
# Database
# ================================================
POSTGRES_USER=breakpilot
POSTGRES_PASSWORD=breakpilot123
POSTGRES_DB=breakpilot_db
DATABASE_URL=postgres://breakpilot:breakpilot123@localhost:5432/breakpilot_db?sslmode=disable

# ================================================
# Optional: AI integration
# ================================================
# ANTHROPIC_API_KEY=your-anthropic-api-key-here

# ================================================
# Breakpilot Drive - learning game
# ================================================
# Enables database storage for game sessions
GAME_USE_DATABASE=true

# LLM for quiz question generation (optional)
# If not set, static questions are used
GAME_LLM_MODEL=llama-3.1-8b
GAME_LLM_FALLBACK_MODEL=claude-3-haiku

# Feature flags
GAME_REQUIRE_AUTH=false
GAME_REQUIRE_BILLING=false
GAME_ENABLE_LEADERBOARDS=true

# Task costs for billing (when enabled)
GAME_SESSION_TASK_COST=1.0
GAME_QUICK_SESSION_TASK_COST=0.5

# ================================================
# Woodpecker CI/CD
# ================================================
# URL of the Woodpecker server
WOODPECKER_URL=http://woodpecker-server:8000
# API token for the dashboard integration (pipeline start)
# Create at: http://macmini:8090 → User Settings → Personal Access Tokens
WOODPECKER_TOKEN=

# ================================================
# Debug
# ================================================
DEBUG=false

---

admin-v2/.github/dependabot.yml (new file, 132 lines)

# Dependabot Configuration for BreakPilot PWA
# This file configures Dependabot to automatically check for outdated dependencies
# and create pull requests to update them

version: 2
updates:
  # Go dependencies (consent-service)
  - package-ecosystem: "gomod"
    directory: "/consent-service"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    open-pull-requests-limit: 5
    labels:
      - "dependencies"
      - "go"
      - "security"
    commit-message:
      prefix: "deps(go):"
    groups:
      go-minor:
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"

  # Python dependencies (backend)
  - package-ecosystem: "pip"
    directory: "/backend"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    open-pull-requests-limit: 5
    labels:
      - "dependencies"
      - "python"
      - "security"
    commit-message:
      prefix: "deps(python):"
    groups:
      python-minor:
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"

  # Node.js dependencies (website)
  - package-ecosystem: "npm"
    directory: "/website"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    open-pull-requests-limit: 5
    labels:
      - "dependencies"
      - "javascript"
      - "security"
    commit-message:
      prefix: "deps(npm):"
    groups:
      npm-minor:
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"

  # GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    open-pull-requests-limit: 5
    labels:
      - "dependencies"
      - "github-actions"
    commit-message:
      prefix: "deps(actions):"

  # Docker base images
  - package-ecosystem: "docker"
    directory: "/consent-service"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    labels:
      - "dependencies"
      - "docker"
      - "security"
    commit-message:
      prefix: "deps(docker):"

  - package-ecosystem: "docker"
    directory: "/backend"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    labels:
      - "dependencies"
      - "docker"
      - "security"
    commit-message:
      prefix: "deps(docker):"

  - package-ecosystem: "docker"
    directory: "/website"
    schedule:
      interval: "weekly"
      day: "monday"
      time: "06:00"
      timezone: "Europe/Berlin"
    labels:
      - "dependencies"
      - "docker"
      - "security"
    commit-message:
      prefix: "deps(docker):"
503
admin-v2/.github/workflows/ci.yml
vendored
Normal file
@@ -0,0 +1,503 @@
name: CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

env:
  GO_VERSION: '1.21'
  PYTHON_VERSION: '3.11'
  NODE_VERSION: '20'
  POSTGRES_USER: breakpilot
  POSTGRES_PASSWORD: breakpilot123
  POSTGRES_DB: breakpilot_test
  REGISTRY: ghcr.io
  IMAGE_PREFIX: ${{ github.repository_owner }}/breakpilot

jobs:
  # ==========================================
  # Go Consent Service Tests
  # ==========================================
  go-tests:
    name: Go Tests
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_USER: ${{ env.POSTGRES_USER }}
          POSTGRES_PASSWORD: ${{ env.POSTGRES_PASSWORD }}
          POSTGRES_DB: ${{ env.POSTGRES_DB }}
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ env.GO_VERSION }}
          cache-dependency-path: consent-service/go.sum

      - name: Download dependencies
        working-directory: ./consent-service
        run: go mod download

      - name: Run Go Vet
        working-directory: ./consent-service
        run: go vet ./...

      - name: Run Unit Tests
        working-directory: ./consent-service
        run: go test -v -race -coverprofile=coverage.out ./...
        env:
          DATABASE_URL: postgres://${{ env.POSTGRES_USER }}:${{ env.POSTGRES_PASSWORD }}@localhost:5432/${{ env.POSTGRES_DB }}?sslmode=disable
          JWT_SECRET: test-jwt-secret-for-ci
          JWT_REFRESH_SECRET: test-refresh-secret-for-ci

      - name: Check Coverage
        working-directory: ./consent-service
        run: |
          go tool cover -func=coverage.out
          COVERAGE=$(go tool cover -func=coverage.out | grep total | awk '{print $3}' | sed 's/%//')
          echo "Total coverage: ${COVERAGE}%"
          if (( $(echo "$COVERAGE < 50" | bc -l) )); then
            echo "::warning::Coverage is below 50%"
          fi

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          files: ./consent-service/coverage.out
          flags: go
          name: go-coverage
        continue-on-error: true

  # ==========================================
  # Python Backend Tests
  # ==========================================
  python-tests:
    name: Python Tests
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          cache: 'pip'
          cache-dependency-path: backend/requirements.txt

      - name: Install dependencies
        working-directory: ./backend
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest pytest-cov pytest-asyncio httpx

      - name: Run Python Tests
        working-directory: ./backend
        run: pytest -v --cov=. --cov-report=xml --cov-report=term-missing
        continue-on-error: true

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          files: ./backend/coverage.xml
          flags: python
          name: python-coverage
        continue-on-error: true

  # ==========================================
  # Node.js Website Tests
  # ==========================================
  website-tests:
    name: Website Tests
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
          cache-dependency-path: website/package-lock.json

      - name: Install dependencies
        working-directory: ./website
        run: npm ci

      - name: Run TypeScript check
        working-directory: ./website
        run: npx tsc --noEmit
        continue-on-error: true

      - name: Run ESLint
        working-directory: ./website
        run: npm run lint
        continue-on-error: true

      - name: Build website
        working-directory: ./website
        run: npm run build
        env:
          NEXT_PUBLIC_BILLING_API_URL: http://localhost:8083
          NEXT_PUBLIC_APP_URL: http://localhost:3000

  # ==========================================
  # Linting
  # ==========================================
  lint:
    name: Linting
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: ${{ env.GO_VERSION }}

      - name: Run golangci-lint
        uses: golangci/golangci-lint-action@v4
        with:
          version: latest
          working-directory: ./consent-service
          args: --timeout=5m
        continue-on-error: true

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install Python linters
        run: pip install flake8 black isort

      - name: Run flake8
        working-directory: ./backend
        run: flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
        continue-on-error: true

      - name: Check Black formatting
        working-directory: ./backend
        run: black --check --diff .
        continue-on-error: true

  # ==========================================
  # Security Scan
  # ==========================================
  security:
    name: Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          severity: 'CRITICAL,HIGH'
          exit-code: '0'
        continue-on-error: true

      - name: Run Go security check
        uses: securego/gosec@master
        with:
          args: '-no-fail -fmt sarif -out results.sarif ./consent-service/...'
        continue-on-error: true

  # ==========================================
  # Docker Build & Push
  # ==========================================
  docker-build:
    name: Docker Build & Push
    runs-on: ubuntu-latest
    needs: [go-tests, python-tests, website-tests]
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GitHub Container Registry
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata for consent-service
        id: meta-consent
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-consent-service
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=sha,prefix=
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}

      - name: Build and push consent-service
        uses: docker/build-push-action@v5
        with:
          context: ./consent-service
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta-consent.outputs.tags }}
          labels: ${{ steps.meta-consent.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Extract metadata for backend
        id: meta-backend
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-backend
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=sha,prefix=
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}

      - name: Build and push backend
        uses: docker/build-push-action@v5
        with:
          context: ./backend
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta-backend.outputs.tags }}
          labels: ${{ steps.meta-backend.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Extract metadata for website
        id: meta-website
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-website
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=sha,prefix=
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}

      - name: Build and push website
        uses: docker/build-push-action@v5
        with:
          context: ./website
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta-website.outputs.tags }}
          labels: ${{ steps.meta-website.outputs.labels }}
          build-args: |
            NEXT_PUBLIC_BILLING_API_URL=${{ vars.NEXT_PUBLIC_BILLING_API_URL || 'http://localhost:8083' }}
            NEXT_PUBLIC_APP_URL=${{ vars.NEXT_PUBLIC_APP_URL || 'http://localhost:3000' }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  # ==========================================
  # Integration Tests
  # ==========================================
  integration-tests:
    name: Integration Tests
    runs-on: ubuntu-latest
    needs: [docker-build]

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Start services with Docker Compose
        run: |
          docker compose up -d postgres mailpit
          sleep 10

      - name: Run consent-service
        working-directory: ./consent-service
        run: |
          go build -o consent-service ./cmd/server
          ./consent-service &
          sleep 5
        env:
          DATABASE_URL: postgres://breakpilot:breakpilot123@localhost:5432/breakpilot_db?sslmode=disable
          JWT_SECRET: test-jwt-secret
          JWT_REFRESH_SECRET: test-refresh-secret
          SMTP_HOST: localhost
          SMTP_PORT: 1025

      - name: Health Check
        run: |
          curl -f http://localhost:8081/health || exit 1

      - name: Run Integration Tests
        run: |
          # Test Auth endpoints
          curl -s http://localhost:8081/api/v1/auth/health

          # Test Document endpoints
          curl -s http://localhost:8081/api/v1/documents
        continue-on-error: true

      - name: Stop services
        if: always()
        run: docker compose down

  # ==========================================
  # Deploy to Staging
  # ==========================================
  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: [docker-build, integration-tests]
    if: github.ref == 'refs/heads/develop' && github.event_name == 'push'
    environment:
      name: staging
      url: https://staging.breakpilot.app

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Deploy to staging server
        env:
          STAGING_HOST: ${{ secrets.STAGING_HOST }}
          STAGING_USER: ${{ secrets.STAGING_USER }}
          STAGING_SSH_KEY: ${{ secrets.STAGING_SSH_KEY }}
        run: |
          # This is a placeholder for actual deployment
          # Configure based on your staging infrastructure
          echo "Deploying to staging environment..."
          echo "Images to deploy:"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-consent-service:develop"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-backend:develop"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-website:develop"

          # Example: SSH deployment (uncomment when configured)
          # mkdir -p ~/.ssh
          # echo "$STAGING_SSH_KEY" > ~/.ssh/id_rsa
          # chmod 600 ~/.ssh/id_rsa
          # ssh -o StrictHostKeyChecking=no $STAGING_USER@$STAGING_HOST "cd /opt/breakpilot && docker compose pull && docker compose up -d"

      - name: Notify deployment
        run: |
          echo "## Staging Deployment" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "Successfully deployed to staging environment" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Deployed images:**" >> $GITHUB_STEP_SUMMARY
          echo "- consent-service: \`develop\`" >> $GITHUB_STEP_SUMMARY
          echo "- backend: \`develop\`" >> $GITHUB_STEP_SUMMARY
          echo "- website: \`develop\`" >> $GITHUB_STEP_SUMMARY

  # ==========================================
  # Deploy to Production
  # ==========================================
  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: [docker-build, integration-tests]
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    environment:
      name: production
      url: https://breakpilot.app

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Deploy to production server
        env:
          PROD_HOST: ${{ secrets.PROD_HOST }}
          PROD_USER: ${{ secrets.PROD_USER }}
          PROD_SSH_KEY: ${{ secrets.PROD_SSH_KEY }}
        run: |
          # This is a placeholder for actual deployment
          # Configure based on your production infrastructure
          echo "Deploying to production environment..."
          echo "Images to deploy:"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-consent-service:latest"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-backend:latest"
          echo "  - ${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-website:latest"

          # Example: SSH deployment (uncomment when configured)
          # mkdir -p ~/.ssh
          # echo "$PROD_SSH_KEY" > ~/.ssh/id_rsa
          # chmod 600 ~/.ssh/id_rsa
          # ssh -o StrictHostKeyChecking=no $PROD_USER@$PROD_HOST "cd /opt/breakpilot && docker compose pull && docker compose up -d"

      - name: Notify deployment
        run: |
          echo "## Production Deployment" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "Successfully deployed to production environment" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Deployed images:**" >> $GITHUB_STEP_SUMMARY
          echo "- consent-service: \`latest\`" >> $GITHUB_STEP_SUMMARY
          echo "- backend: \`latest\`" >> $GITHUB_STEP_SUMMARY
          echo "- website: \`latest\`" >> $GITHUB_STEP_SUMMARY

  # ==========================================
  # Summary
  # ==========================================
  summary:
    name: CI Summary
    runs-on: ubuntu-latest
    needs: [go-tests, python-tests, website-tests, lint, security, docker-build, integration-tests]
    if: always()

    steps:
      - name: Check job results
        run: |
          echo "## CI/CD Summary" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "| Job | Status |" >> $GITHUB_STEP_SUMMARY
          echo "|-----|--------|" >> $GITHUB_STEP_SUMMARY
          echo "| Go Tests | ${{ needs.go-tests.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Python Tests | ${{ needs.python-tests.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Website Tests | ${{ needs.website-tests.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Linting | ${{ needs.lint.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Security | ${{ needs.security.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Docker Build | ${{ needs.docker-build.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Integration Tests | ${{ needs.integration-tests.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Docker Images" >> $GITHUB_STEP_SUMMARY
          echo "Images are pushed to: \`${{ env.REGISTRY }}/${{ env.IMAGE_PREFIX }}-*\`" >> $GITHUB_STEP_SUMMARY
222
admin-v2/.github/workflows/security.yml
vendored
Normal file
@@ -0,0 +1,222 @@
name: Security Scanning

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]
  schedule:
    # Run security scans weekly on Sundays at midnight
    - cron: '0 0 * * 0'

jobs:
  # ==========================================
  # Secret Scanning
  # ==========================================
  secret-scan:
    name: Secret Scanning
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: TruffleHog Secret Scan
        uses: trufflesecurity/trufflehog@main
        with:
          extra_args: --only-verified

      - name: GitLeaks Secret Scan
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        continue-on-error: true

  # ==========================================
  # Dependency Vulnerability Scanning
  # ==========================================
  dependency-scan:
    name: Dependency Vulnerability Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run Trivy vulnerability scanner (filesystem)
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          severity: 'CRITICAL,HIGH'
          format: 'sarif'
          output: 'trivy-fs-results.sarif'
        continue-on-error: true

      - name: Upload Trivy scan results to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-fs-results.sarif'
        continue-on-error: true

  # ==========================================
  # Go Security Scan
  # ==========================================
  go-security:
    name: Go Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.21'

      - name: Run Gosec Security Scanner
        uses: securego/gosec@master
        with:
          args: '-no-fail -fmt sarif -out gosec-results.sarif ./consent-service/...'
        continue-on-error: true

      - name: Upload Gosec results to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'gosec-results.sarif'
        continue-on-error: true

      - name: Run govulncheck
        working-directory: ./consent-service
        run: |
          go install golang.org/x/vuln/cmd/govulncheck@latest
          govulncheck ./... || true

  # ==========================================
  # Python Security Scan
  # ==========================================
  python-security:
    name: Python Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install safety
        run: pip install safety bandit

      - name: Run Safety (dependency check)
        working-directory: ./backend
        run: safety check -r requirements.txt --full-report || true

      - name: Run Bandit (code security scan)
        working-directory: ./backend
        run: bandit -r . -f sarif -o bandit-results.sarif --exit-zero

      - name: Upload Bandit results to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: './backend/bandit-results.sarif'
        continue-on-error: true

  # ==========================================
  # Node.js Security Scan
  # ==========================================
  node-security:
    name: Node.js Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        working-directory: ./website
        run: npm ci

      - name: Run npm audit
        working-directory: ./website
        run: npm audit --audit-level=high || true

  # ==========================================
  # Docker Image Scanning
  # ==========================================
  docker-security:
    name: Docker Image Security
    runs-on: ubuntu-latest
    needs: [go-security, python-security, node-security]

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Build consent-service image
        run: docker build -t breakpilot/consent-service:scan ./consent-service

      - name: Run Trivy on consent-service
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'breakpilot/consent-service:scan'
          severity: 'CRITICAL,HIGH'
          format: 'sarif'
          output: 'trivy-consent-results.sarif'
        continue-on-error: true

      - name: Build backend image
        run: docker build -t breakpilot/backend:scan ./backend

      - name: Run Trivy on backend
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'breakpilot/backend:scan'
          severity: 'CRITICAL,HIGH'
          format: 'sarif'
          output: 'trivy-backend-results.sarif'
        continue-on-error: true

      - name: Upload Trivy results
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-consent-results.sarif'
        continue-on-error: true

  # ==========================================
  # Security Summary
  # ==========================================
  security-summary:
    name: Security Summary
    runs-on: ubuntu-latest
    needs: [secret-scan, dependency-scan, go-security, python-security, node-security, docker-security]
    if: always()

    steps:
      - name: Create security summary
        run: |
          echo "## Security Scan Results" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "| Scan Type | Status |" >> $GITHUB_STEP_SUMMARY
          echo "|-----------|--------|" >> $GITHUB_STEP_SUMMARY
          echo "| Secret Scanning | ${{ needs.secret-scan.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Dependency Scanning | ${{ needs.dependency-scan.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Go Security | ${{ needs.go-security.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Python Security | ${{ needs.python-security.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Node.js Security | ${{ needs.node-security.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "| Docker Security | ${{ needs.docker-security.result }} |" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### Notes" >> $GITHUB_STEP_SUMMARY
          echo "- Results are uploaded to the GitHub Security tab" >> $GITHUB_STEP_SUMMARY
          echo "- Weekly scheduled scans run on Sundays" >> $GITHUB_STEP_SUMMARY
244
admin-v2/.github/workflows/test.yml
vendored
Normal file
@@ -0,0 +1,244 @@
name: Tests

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  go-tests:
    name: Go Tests
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:16-alpine
        env:
          POSTGRES_USER: breakpilot
          POSTGRES_PASSWORD: breakpilot123
          POSTGRES_DB: breakpilot_test
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.21'
          cache: true
          cache-dependency-path: consent-service/go.sum

      - name: Install Dependencies
        working-directory: ./consent-service
        run: go mod download

      - name: Run Tests
        working-directory: ./consent-service
        env:
          DATABASE_URL: postgres://breakpilot:breakpilot123@localhost:5432/breakpilot_test?sslmode=disable
          JWT_SECRET: test-secret-key-for-ci
          JWT_REFRESH_SECRET: test-refresh-secret-for-ci
        run: |
          go test -v -race -coverprofile=coverage.out ./...
          go tool cover -func=coverage.out

      - name: Check Coverage Threshold
        working-directory: ./consent-service
        run: |
          COVERAGE=$(go tool cover -func=coverage.out | grep total | awk '{print $3}' | sed 's/%//')
          echo "Total Coverage: $COVERAGE%"
          if (( $(echo "$COVERAGE < 70.0" | bc -l) )); then
            echo "Coverage $COVERAGE% is below threshold 70%"
            exit 1
          fi

      - name: Upload Coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          files: ./consent-service/coverage.out
          flags: go
          name: go-coverage

  python-tests:
    name: Python Tests
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'
          cache: 'pip'
          cache-dependency-path: backend/requirements.txt

      - name: Install Dependencies
        working-directory: ./backend
        run: |
          pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest pytest-cov pytest-asyncio

      - name: Run Tests
        working-directory: ./backend
        env:
          CONSENT_SERVICE_URL: http://localhost:8081
          JWT_SECRET: test-secret-key-for-ci
        run: |
          pytest -v --cov=. --cov-report=xml --cov-report=term

      - name: Check Coverage Threshold
        working-directory: ./backend
        run: |
          COVERAGE=$(python -c "import xml.etree.ElementTree as ET; tree = ET.parse('coverage.xml'); print(tree.getroot().attrib['line-rate'])")
          COVERAGE_PCT=$(echo "$COVERAGE * 100" | bc)
          echo "Total Coverage: ${COVERAGE_PCT}%"
          if (( $(echo "$COVERAGE_PCT < 60.0" | bc -l) )); then
            echo "Coverage ${COVERAGE_PCT}% is below threshold 60%"
            exit 1
          fi

      - name: Upload Coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          files: ./backend/coverage.xml
          flags: python
          name: python-coverage

  integration-tests:
    name: Integration Tests
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Setup Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Start Services
        run: |
          docker-compose up -d
          docker-compose ps

      - name: Wait for Postgres
        run: |
          timeout 60 bash -c 'until docker-compose exec -T postgres pg_isready -U breakpilot; do sleep 2; done'

      - name: Wait for Consent Service
        run: |
          timeout 60 bash -c 'until curl -f http://localhost:8081/health; do sleep 2; done'

      - name: Wait for Backend
        run: |
          timeout 60 bash -c 'until curl -f http://localhost:8000/health; do sleep 2; done'

      - name: Wait for Mailpit
        run: |
          timeout 60 bash -c 'until curl -f http://localhost:8025/api/v1/info; do sleep 2; done'

      - name: Run Integration Tests
        run: |
          chmod +x ./scripts/integration-tests.sh
          ./scripts/integration-tests.sh

      - name: Show Service Logs on Failure
        if: failure()
        run: |
          echo "=== Consent Service Logs ==="
          docker-compose logs consent-service
          echo "=== Backend Logs ==="
          docker-compose logs backend
          echo "=== Postgres Logs ==="
          docker-compose logs postgres

      - name: Cleanup
        if: always()
        run: docker-compose down -v

  lint-go:
    name: Go Lint
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.21'

      - name: Run golangci-lint
        uses: golangci/golangci-lint-action@v3
        with:
          version: latest
          working-directory: consent-service
          args: --timeout=5m

  lint-python:
    name: Python Lint
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'

      - name: Install Dependencies
        run: |
          pip install flake8 black mypy

      - name: Run Black
        working-directory: ./backend
        run: black --check .

      - name: Run Flake8
        working-directory: ./backend
        run: flake8 . --max-line-length=120 --exclude=venv

  security-scan:
    name: Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Run Trivy Security Scan
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'

      - name: Upload Trivy Results to GitHub Security
        uses: github/codeql-action/upload-sarif@v2
        if: always()
        with:
          sarif_file: 'trivy-results.sarif'

  all-checks:
    name: All Checks Passed
    runs-on: ubuntu-latest
    needs: [go-tests, python-tests, integration-tests, lint-go, lint-python, security-scan]

    steps:
      - name: All Tests Passed
        run: echo "All tests and checks passed successfully!"
167
admin-v2/.gitignore
vendored
Normal file
@@ -0,0 +1,167 @@
# ============================================
# BreakPilot PWA - Git Ignore
# ============================================

# Environment files (keep examples only)
.env
.env.local
*.env.local

# Keep examples and environment templates
!.env.example
!.env.dev
!.env.staging
# .env.prod should NOT be in repo (contains production secrets)

# ============================================
# Python
# ============================================
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
venv/
ENV/
.venv/
*.egg-info/
.eggs/
*.egg
.pytest_cache/
htmlcov/
.coverage
.coverage.*
coverage.xml
*.cover

# ============================================
# Node.js
# ============================================
node_modules/
.next/
out/
dist/
build/
.npm
.yarn-integrity
*.tsbuildinfo

# ============================================
# Go
# ============================================
*.exe
*.exe~
*.dll
*.dylib
*.test
*.out
vendor/

# ============================================
# Docker
# ============================================
# Don't ignore docker-compose files
# Ignore volume data if mounted locally
backups/
*.sql.gz
*.sql

# ============================================
# IDE & Editors
# ============================================
.idea/
.vscode/
*.swp
*.swo
*~
.project
.classpath
.settings/
*.sublime-workspace
*.sublime-project

# ============================================
# OS Files
# ============================================
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

# ============================================
# Secrets & Credentials
# ============================================
secrets/
*.pem
*.key
*.crt
*.p12
*.pfx
credentials.json
service-account.json

# ============================================
# Logs
# ============================================
*.log
logs/
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# ============================================
# Build Artifacts
# ============================================
*.zip
*.tar.gz
*.rar

# ============================================
# Temporary Files
# ============================================
tmp/
temp/
*.tmp
*.temp

# ============================================
# Test Results
# ============================================
test-results/
playwright-report/
coverage/

# ============================================
# ML Models (large files)
# ============================================
*.pt
*.pth
*.onnx
*.safetensors
models/
.claude/settings.local.json

# ============================================
# IDE Plugins & AI Tools
# ============================================
.continue/
CLAUDE_CONTINUE.md

# ============================================
# Misplaced / Large Directories
# ============================================
backend/BreakpilotDrive/
backend/website/
backend/screenshots/
**/za-download-9/

# ============================================
# Debug & Temp Artifacts
# ============================================
*.command
ssh_key*.txt
anleitung.txt
fix_permissions.txt
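The `!` re-include entries in the environment-files section rely on gitignore's last-match-wins evaluation. A toy sketch of that rule, with a hypothetical `ignored` helper (real gitignore adds directory matching, anchoring, and `**` semantics on top of this):

```python
from fnmatch import fnmatch

# Patterns in file order; True marks a negated ("!") re-include entry.
PATTERNS = [(".env", False), (".env.local", False),
            ("*.env.local", False), (".env.example", True)]

def ignored(path: str) -> bool:
    """Last matching pattern wins, as in .gitignore evaluation."""
    result = False
    for pattern, negated in PATTERNS:
        if fnmatch(path, pattern):
            result = not negated
    return result

print(ignored(".env"))          # True: ignored
print(ignored(".env.example"))  # False: re-included by "!.env.example"
```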
77
admin-v2/.gitleaks.toml
Normal file
@@ -0,0 +1,77 @@
# Gitleaks Configuration for BreakPilot
# https://github.com/gitleaks/gitleaks
#
# Run locally: gitleaks detect --source . -v
# Pre-commit: gitleaks protect --staged -v

title = "BreakPilot Gitleaks Configuration"

# Use the default rules plus custom rules
[extend]
useDefault = true

# Custom rules for BreakPilot-specific patterns
[[rules]]
id = "anthropic-api-key"
description = "Anthropic API Key"
regex = '''sk-ant-api[0-9a-zA-Z-_]{20,}'''
tags = ["api", "anthropic"]
keywords = ["sk-ant-api"]

[[rules]]
id = "vast-api-key"
description = "vast.ai API Key"
regex = '''(?i)(vast[_-]?api[_-]?key|vast[_-]?key)\s*[=:]\s*['"]?([a-zA-Z0-9-_]{20,})['"]?'''
tags = ["api", "vast"]
keywords = ["vast"]

[[rules]]
id = "stripe-secret-key"
description = "Stripe Secret Key"
regex = '''sk_live_[0-9a-zA-Z]{24,}'''
tags = ["api", "stripe"]
keywords = ["sk_live"]

[[rules]]
id = "stripe-restricted-key"
description = "Stripe Restricted Key"
regex = '''rk_live_[0-9a-zA-Z]{24,}'''
tags = ["api", "stripe"]
keywords = ["rk_live"]

[[rules]]
id = "jwt-secret-hardcoded"
description = "Hardcoded JWT Secret"
regex = '''(?i)(jwt[_-]?secret|jwt[_-]?key)\s*[=:]\s*['"]([^'"]{32,})['"]'''
tags = ["secret", "jwt"]
keywords = ["jwt"]

# Allowlist for false positives
[allowlist]
description = "Global allowlist"
paths = [
  '''\.env\.example$''',
  '''\.env\.template$''',
  '''docs/.*\.md$''',
  '''SBOM\.md$''',
  '''.*_test\.py$''',
  '''.*_test\.go$''',
  '''test_.*\.py$''',
  '''.*\.bak$''',
  '''node_modules/.*''',
  '''venv/.*''',
  '''\.git/.*''',
]

# Specific commit allowlist (for already-rotated secrets)
commits = []

# Regex patterns to ignore
regexes = [
  '''REPLACE_WITH_REAL_.*''',
  '''your-.*-key-change-in-production''',
  '''breakpilot-dev-.*''',
  '''DEVELOPMENT-ONLY-.*''',
  '''placeholder.*''',
  '''example.*key''',
]
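A quick way to sanity-check the custom rules is to run their regexes against fake sample values. The patterns below are copied verbatim from the config; every "key" is a placeholder, not a real credential:

```python
import re

# Regexes copied from the custom rules above; sample values are fake placeholders.
anthropic = re.compile(r"sk-ant-api[0-9a-zA-Z-_]{20,}")
stripe_live = re.compile(r"sk_live_[0-9a-zA-Z]{24,}")
jwt_rule = re.compile(r"""(?i)(jwt[_-]?secret|jwt[_-]?key)\s*[=:]\s*['"]([^'"]{32,})['"]""")

assert anthropic.search("sk-ant-api03-" + "A" * 24)
assert stripe_live.search("sk_live_" + "a" * 24)
assert jwt_rule.search('JWT_SECRET = "' + "x" * 40 + '"')
assert not stripe_live.search("sk_test_" + "a" * 24)  # test-mode keys are not flagged
print("rule regexes behave as expected")
```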
152
admin-v2/.pre-commit-config.yaml
Normal file
@@ -0,0 +1,152 @@
# Pre-commit hooks for BreakPilot
# Install: pip install pre-commit
# Activate: pre-commit install

repos:
  # Go Hooks
  - repo: local
    hooks:
      - id: go-test
        name: Go Tests
        entry: bash -c 'cd consent-service && go test -short ./...'
        language: system
        pass_filenames: false
        files: \.go$
        stages: [commit]

      - id: go-fmt
        name: Go Format
        entry: bash -c 'cd consent-service && gofmt -l -w .'
        language: system
        pass_filenames: false
        files: \.go$
        stages: [commit]

      - id: go-vet
        name: Go Vet
        entry: bash -c 'cd consent-service && go vet ./...'
        language: system
        pass_filenames: false
        files: \.go$
        stages: [commit]

      - id: golangci-lint
        name: Go Lint (golangci-lint)
        entry: bash -c 'cd consent-service && golangci-lint run --timeout=5m'
        language: system
        pass_filenames: false
        files: \.go$
        stages: [commit]

  # Python Hooks
  - repo: local
    hooks:
      - id: pytest
        name: Python Tests
        entry: bash -c 'cd backend && pytest -x'
        language: system
        pass_filenames: false
        files: \.py$
        stages: [commit]

      - id: black
        name: Black Format
        entry: black
        language: python
        types: [python]
        args: [--line-length=120]
        stages: [commit]

      - id: flake8
        name: Flake8 Lint
        entry: flake8
        language: python
        types: [python]
        args: [--max-line-length=120, --exclude=venv]
        stages: [commit]

  # General Hooks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
        name: Trim Trailing Whitespace
      - id: end-of-file-fixer
        name: Fix End of Files
      - id: check-yaml
        name: Check YAML
        args: [--allow-multiple-documents]
      - id: check-json
        name: Check JSON
      - id: check-added-large-files
        name: Check Large Files
        args: [--maxkb=500]
      - id: detect-private-key
        name: Detect Private Keys
      - id: mixed-line-ending
        name: Fix Mixed Line Endings

  # Security Checks
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0
    hooks:
      - id: detect-secrets
        name: Detect Secrets
        args: ['--baseline', '.secrets.baseline']
        exclude: |
          (?x)^(
            .*\.lock|
            .*\.sum|
            package-lock\.json
          )$

  # =============================================
  # DevSecOps: Gitleaks (Secrets Detection)
  # =============================================
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.1
    hooks:
      - id: gitleaks
        name: Gitleaks (secrets detection)
        entry: gitleaks protect --staged -v --config .gitleaks.toml
        language: golang
        pass_filenames: false

  # =============================================
  # DevSecOps: Semgrep (SAST)
  # =============================================
  - repo: https://github.com/returntocorp/semgrep
    rev: v1.52.0
    hooks:
      - id: semgrep
        name: Semgrep (SAST)
        args:
          - --config=auto
          - --config=.semgrep.yml
          - --severity=ERROR
        types_or: [python, javascript, typescript, go]
        stages: [commit]

  # =============================================
  # DevSecOps: Bandit (Python Security)
  # =============================================
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.6
    hooks:
      - id: bandit
        name: Bandit (Python security)
        args: ["-r", "backend/", "-ll", "-x", "backend/tests/*"]
        files: ^backend/.*\.py$
        stages: [commit]

  # Branch Protection
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: no-commit-to-branch
        name: Protect main/develop branches
        args: ['--branch', 'main', '--branch', 'develop']

# Configuration
default_stages: [commit]
fail_fast: false
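The detect-secrets hook's `exclude` uses Python's verbose-regex mode (`(?x)`), which ignores the whitespace and line breaks inside the pattern. It can be checked in isolation:

```python
import re

# The verbose-mode exclude pattern from the detect-secrets hook above.
EXCLUDE = re.compile(r"""(?x)^(
    .*\.lock|
    .*\.sum|
    package-lock\.json
)$""")

for path in ("poetry.lock", "go.sum", "package-lock.json"):
    assert EXCLUDE.match(path), path
assert not EXCLUDE.match("backend/app.py")
print("exclude pattern matches lockfiles only")
```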
147
admin-v2/.semgrep.yml
Normal file
@@ -0,0 +1,147 @@
# Semgrep Configuration for BreakPilot
# https://semgrep.dev/
#
# Run locally: semgrep scan --config auto
# Run with this config: semgrep scan --config .semgrep.yml

rules:
  # =============================================
  # Python/FastAPI Security Rules
  # =============================================

  - id: hardcoded-secret-in-string
    patterns:
      - pattern-either:
          - pattern: |
              $VAR = "...$SECRET..."
          - pattern: |
              $VAR = '...$SECRET...'
    message: "Potential hardcoded secret detected. Use environment variables or Vault."
    languages: [python]
    severity: WARNING
    metadata:
      category: security
      cwe: "CWE-798: Use of Hard-coded Credentials"

  - id: sql-injection-fastapi
    patterns:
      - pattern-either:
          - pattern: |
              $CURSOR.execute(f"...{$USER_INPUT}...")
          - pattern: |
              $CURSOR.execute("..." + $USER_INPUT + "...")
          - pattern: |
              $CURSOR.execute("..." % $USER_INPUT)
    message: "Potential SQL injection. Use parameterized queries."
    languages: [python]
    severity: ERROR
    metadata:
      category: security
      cwe: "CWE-89: SQL Injection"
      owasp: "A03:2021 - Injection"

  - id: command-injection
    patterns:
      - pattern-either:
          - pattern: os.system($USER_INPUT)
          - pattern: subprocess.call($USER_INPUT, shell=True)
          - pattern: subprocess.run($USER_INPUT, shell=True)
          - pattern: subprocess.Popen($USER_INPUT, shell=True)
    message: "Potential command injection. Avoid shell=True with user input."
    languages: [python]
    severity: ERROR
    metadata:
      category: security
      cwe: "CWE-78: OS Command Injection"
      owasp: "A03:2021 - Injection"

  # NOTE: sibling `- pattern:` entries under `patterns:` are AND-ed by Semgrep;
  # alternatives must sit under `pattern-either:` or the rule can never fire.
  - id: insecure-jwt-algorithm
    patterns:
      - pattern-either:
          - pattern: jwt.decode(..., algorithms=["none"], ...)
          - pattern: jwt.decode(..., algorithms=["HS256"], verify=False, ...)
    message: "Insecure JWT algorithm or verification disabled."
    languages: [python]
    severity: ERROR
    metadata:
      category: security
      cwe: "CWE-347: Improper Verification of Cryptographic Signature"

  - id: path-traversal
    patterns:
      - pattern-either:
          - pattern: open(... + $USER_INPUT + ...)
          - pattern: open(f"...{$USER_INPUT}...")
          - pattern: Path(...) / $USER_INPUT
    message: "Potential path traversal. Validate and sanitize file paths."
    languages: [python]
    severity: WARNING
    metadata:
      category: security
      cwe: "CWE-22: Path Traversal"

  - id: insecure-pickle
    patterns:
      - pattern-either:
          - pattern: pickle.loads($DATA)
          - pattern: pickle.load($FILE)
    message: "Pickle deserialization is insecure. Use JSON or other safe formats."
    languages: [python]
    severity: WARNING
    metadata:
      category: security
      cwe: "CWE-502: Deserialization of Untrusted Data"

  # =============================================
  # Go Security Rules
  # =============================================

  - id: go-sql-injection
    patterns:
      - pattern-either:
          - pattern: |
              $DB.Query(fmt.Sprintf("...", $USER_INPUT))
          - pattern: |
              $DB.Exec(fmt.Sprintf("...", $USER_INPUT))
    message: "Potential SQL injection in Go. Use parameterized queries."
    languages: [go]
    severity: ERROR
    metadata:
      category: security
      cwe: "CWE-89: SQL Injection"

  - id: go-hardcoded-credentials
    patterns:
      - pattern: |
          $VAR := "..."
      - metavariable-regex:
          metavariable: $VAR
          regex: (password|secret|apiKey|api_key|token)
    message: "Potential hardcoded credential. Use environment variables."
    languages: [go]
    severity: WARNING
    metadata:
      category: security
      cwe: "CWE-798: Use of Hard-coded Credentials"

  # =============================================
  # JavaScript/TypeScript Security Rules
  # =============================================

  - id: js-xss-innerhtml
    patterns:
      - pattern: $EL.innerHTML = $USER_INPUT
    message: "Potential XSS via innerHTML. Use textContent or sanitize input."
    languages: [javascript, typescript]
    severity: WARNING
    metadata:
      category: security
      cwe: "CWE-79: Cross-site Scripting"
      owasp: "A03:2021 - Injection"

  - id: js-eval
    patterns:
      - pattern-either:
          - pattern: eval($CODE)
          - pattern: new Function($CODE)
    message: "Avoid eval() and new Function() with dynamic input."
    languages: [javascript, typescript]
    severity: ERROR
    metadata:
      category: security
      cwe: "CWE-95: Improper Neutralization of Directives in Dynamically Evaluated Code"
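The safe alternative that the SQL-injection rules point at is a parameterized query. A minimal sqlite3 sketch (table and data are purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice"

# Flagged by the sql-injection rules above (query built from user input):
#   conn.execute(f"SELECT id FROM users WHERE name = '{user_input}'")

# Parameterized form the rule's message recommends:
row = conn.execute("SELECT id FROM users WHERE name = ?",
                   (user_input,)).fetchone()
print(row[0])  # 1
```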
66
admin-v2/.trivy.yaml
Normal file
@@ -0,0 +1,66 @@
# Trivy Configuration for BreakPilot
# https://trivy.dev/
#
# Run: trivy image breakpilot-pwa-backend:latest
# Run filesystem: trivy fs .
# Run config: trivy config .

# Scan settings
scan:
  # Security checks to perform
  security-checks:
    - vuln    # Vulnerabilities
    - config  # Misconfigurations
    - secret  # Secrets in files

# Vulnerability settings
vulnerability:
  # Vulnerability types to scan for
  type:
    - os       # OS packages
    - library  # Application dependencies

  # Ignore unfixed vulnerabilities
  ignore-unfixed: false

# Severity settings
severity:
  - CRITICAL
  - HIGH
  - MEDIUM
  # - LOW  # Uncomment to include low severity

# Output format
format: table

# Exit code on findings
exit-code: 1

# Timeout
timeout: 10m

# Cache directory
cache-dir: /tmp/trivy-cache

# Skip files/directories
skip-dirs:
  - node_modules
  - venv
  - .venv
  - __pycache__
  - .git
  - .idea
  - .vscode

skip-files:
  - "*.md"
  - "*.txt"
  - "*.log"

# Ignore specific vulnerabilities (add after review)
ignorefile: .trivyignore

# SBOM generation
sbom:
  format: cyclonedx
  output: sbom.json
9
admin-v2/.trivyignore
Normal file
@@ -0,0 +1,9 @@
# Trivy Ignore File for BreakPilot
# Add vulnerability IDs to ignore after security review
# Format: CVE-XXXX-XXXXX or GHSA-xxxx-xxxx-xxxx

# Example (remove after adding real ignores):
# CVE-2021-12345  # Reason: Not exploitable in our context

# Reviewed and accepted risks:
# (Add vulnerabilities here after security team review)
132
admin-v2/.woodpecker/auto-fix.yml
Normal file
@@ -0,0 +1,132 @@
# Woodpecker CI Auto-Fix Pipeline
# Automatic repair of failed tests
#
# Runs nightly at 2:00 AM
# Analyzes open backlog items and attempts automatic fixes

when:
  - event: cron
    cron: "0 2 * * *"  # daily at 2:00 AM

clone:
  git:
    image: woodpeckerci/plugin-git
    settings:
      depth: 1
    extra_hosts:
      - "macmini:192.168.178.100"

steps:
  # ========================================
  # 1. Fetch Failed Tests from Backlog
  # ========================================

  fetch-backlog:
    image: curlimages/curl:latest
    commands:
      - |
        curl -s "http://backend:8000/api/tests/backlog?status=open&priority=critical" \
          -o backlog-critical.json
        curl -s "http://backend:8000/api/tests/backlog?status=open&priority=high" \
          -o backlog-high.json
      - echo "=== Critical tests ==="
      - cat backlog-critical.json | head -50
      - echo "=== High priority ==="
      - cat backlog-high.json | head -50

  # ========================================
  # 2. Analyze and Classify Errors
  # ========================================

  analyze-errors:
    image: python:3.12-slim
    commands:
      - |
        python3 << 'EOF'
        import json

        def classify_error(error_type, error_msg):
            """Classify errors by their auto-fix potential."""
            auto_fixable = {
                'nil_pointer': 'high',
                'import_error': 'high',
                'undefined_variable': 'medium',
                'type_error': 'medium',
                'assertion': 'low',
                'timeout': 'low',
                'logic_error': 'manual'
            }
            return auto_fixable.get(error_type, 'manual')

        # Load backlog
        try:
            with open('backlog-critical.json') as f:
                critical = json.load(f)
            with open('backlog-high.json') as f:
                high = json.load(f)
        except (OSError, json.JSONDecodeError):
            print("No backlog data found")
            raise SystemExit(0)

        all_items = critical.get('items', []) + high.get('items', [])

        auto_fix_candidates = []
        for item in all_items:
            fix_potential = classify_error(
                item.get('error_type', 'unknown'),
                item.get('error_message', '')
            )
            if fix_potential in ['high', 'medium']:
                auto_fix_candidates.append({
                    'id': item.get('id'),
                    'test_name': item.get('test_name'),
                    'error_type': item.get('error_type'),
                    'fix_potential': fix_potential
                })

        print(f"Auto-fix candidates: {len(auto_fix_candidates)}")
        with open('auto-fix-candidates.json', 'w') as f:
            json.dump(auto_fix_candidates, f, indent=2)
        EOF
    depends_on:
      - fetch-backlog

  # ========================================
  # 3. Generate Fix Suggestions (Placeholder)
  # ========================================

  generate-fixes:
    image: python:3.12-slim
    commands:
      - |
        echo "Auto-fix generation is planned for phase 4"
        echo "Currently only suggestions are generated"

        # This is where the Claude API or another LLM would be called:
        # python3 scripts/auto-fix-agent.py auto-fix-candidates.json

        echo "Fix suggestions would be generated here"
    depends_on:
      - analyze-errors

  # ========================================
  # 4. Report Results
  # ========================================

  report-results:
    image: curlimages/curl:latest
    commands:
      - |
        curl -X POST "http://backend:8000/api/tests/auto-fix/report" \
          -H "Content-Type: application/json" \
          -d "{
            \"run_date\": \"$(date -Iseconds)\",
            \"candidates_found\": $(cat auto-fix-candidates.json | wc -l),
            \"fixes_attempted\": 0,
            \"fixes_successful\": 0,
            \"status\": \"analysis_only\"
          }" || true
    when:
      status: [success, failure]
37
admin-v2/.woodpecker/build-ci-image.yml
Normal file
@@ -0,0 +1,37 @@
# One-time pipeline to build the custom Python CI image
# Trigger manually, then delete this file
#
# This builds the breakpilot/python-ci:3.12 image on the CI runner

when:
  - event: manual

clone:
  git:
    image: woodpeckerci/plugin-git
    settings:
      depth: 1
    extra_hosts:
      - "macmini:192.168.178.100"

steps:
  build-python-ci-image:
    image: docker:27-cli
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    commands:
      - |
        echo "=== Building breakpilot/python-ci:3.12 ==="

        docker build \
          -t breakpilot/python-ci:3.12 \
          -t breakpilot/python-ci:latest \
          -f .docker/python-ci.Dockerfile \
          .

        echo ""
        echo "=== Build complete ==="
        docker images | grep breakpilot/python-ci

        echo ""
        echo "Image is now available for CI pipelines!"
161
admin-v2/.woodpecker/integration.yml
Normal file
@@ -0,0 +1,161 @@
# Integration Tests Pipeline
# Separate file because services must be defined at the pipeline level
#
# This pipeline runs in parallel to main.yml and tests:
# - Database connectivity (PostgreSQL)
# - Cache connectivity (Valkey/Redis)
# - Service-to-service communication
#
# Documentation: docs/testing/integration-test-environment.md

when:
  - event: [push, pull_request]
    branch: [main, develop]

clone:
  git:
    image: woodpeckerci/plugin-git
    settings:
      depth: 1
    extra_hosts:
      - "macmini:192.168.178.100"

# Services at the pipeline level (NOT the step level!)
# These services are available to ALL steps
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: breakpilot
      POSTGRES_PASSWORD: breakpilot_test
      POSTGRES_DB: breakpilot_test

  valkey:
    image: valkey/valkey:8-alpine

steps:
  wait-for-services:
    image: postgres:16-alpine
    commands:
      - |
        echo "=== Waiting for PostgreSQL ==="
        for i in $(seq 1 30); do
          if pg_isready -h postgres -U breakpilot; then
            echo "PostgreSQL ready after $i attempts!"
            break
          fi
          echo "Attempt $i/30: PostgreSQL not ready, waiting..."
          sleep 2
        done
        # Final check
        if ! pg_isready -h postgres -U breakpilot; then
          echo "ERROR: PostgreSQL not ready after 30 attempts"
          exit 1
        fi
      - |
        echo "=== Waiting for Valkey ==="
        # Install redis-cli in the postgres alpine image
        apk add --no-cache redis > /dev/null 2>&1 || true
        for i in $(seq 1 30); do
          if redis-cli -h valkey ping 2>/dev/null | grep -q PONG; then
            echo "Valkey ready after $i attempts!"
            break
          fi
          echo "Attempt $i/30: Valkey not ready, waiting..."
          sleep 2
        done
        # Final check
        if ! redis-cli -h valkey ping 2>/dev/null | grep -q PONG; then
          echo "ERROR: Valkey not ready after 30 attempts"
          exit 1
        fi
      - echo "=== All services ready ==="

  integration-tests:
    image: breakpilot/python-ci:3.12
    environment:
      CI: "true"
      DATABASE_URL: postgresql://breakpilot:breakpilot_test@postgres:5432/breakpilot_test
      VALKEY_URL: redis://valkey:6379
      REDIS_URL: redis://valkey:6379
      SKIP_INTEGRATION_TESTS: "false"
      SKIP_DB_TESTS: "false"
      SKIP_WEASYPRINT_TESTS: "false"
      # Test-specific environment variables
      ENVIRONMENT: "testing"
      JWT_SECRET: "test-secret-key-for-integration-tests"
      TEACHER_REQUIRE_AUTH: "false"
      GAME_USE_DATABASE: "false"
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results
        cd backend

        # Set PYTHONPATH so local modules are found
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"

        echo "=== Installing dependencies ==="
        pip install --quiet --no-cache-dir -r requirements.txt

        echo "=== Running Integration Tests ==="
        set +e
        python -m pytest tests/test_integration/ -v \
          --tb=short \
          --json-report \
          --json-report-file=../.ci-results/test-integration.json
        TEST_EXIT=$?
        set -e

        # Evaluate results
        if [ -f ../.ci-results/test-integration.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../.ci-results/test-integration.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-integration.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-integration.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-integration.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          echo "WARNING: no JSON results found"
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"integration-tests\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-integration.json
        cat ../.ci-results/results-integration.json

        echo ""
        echo "=== Integration Test Summary ==="
        echo "Total: $TOTAL | Passed: $PASSED | Failed: $FAILED | Skipped: $SKIPPED"

        if [ "$TEST_EXIT" -ne "0" ]; then
          echo "Integration tests failed with exit code $TEST_EXIT"
          exit 1
        fi
    depends_on:
      - wait-for-services

  report-integration-results:
    image: curlimages/curl:8.10.1
    commands:
      - |
        set -uo pipefail
        echo "=== Sending integration test results to dashboard ==="

        if [ -f .ci-results/results-integration.json ]; then
          echo "Sending integration test results..."
          curl -f -sS -X POST "http://backend:8000/api/tests/ci-result" \
            -H "Content-Type: application/json" \
            -d "{
              \"pipeline_id\": \"${CI_PIPELINE_NUMBER}\",
              \"commit\": \"${CI_COMMIT_SHA}\",
              \"branch\": \"${CI_COMMIT_BRANCH}\",
              \"status\": \"${CI_PIPELINE_STATUS:-unknown}\",
              \"test_results\": $(cat .ci-results/results-integration.json)
            }" || echo "WARNING: could not send results to the dashboard"
        else
          echo "No integration results found to send"
        fi

        echo "=== Integration test results sent ==="
    when:
      status: [success, failure]
    depends_on:
      - integration-tests
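The four `python3 -c` one-liners in the integration-tests step all read the same `summary` block of the pytest-json-report file. The extraction they perform amounts to this (the sample report is illustrative, not real output):

```python
import json

def summarize(report: dict) -> tuple:
    """Pull total/passed/failed/skipped out of a pytest-json-report dict."""
    s = report.get("summary", {})
    return tuple(s.get(k, 0) for k in ("total", "passed", "failed", "skipped"))

# Illustrative stand-in for .ci-results/test-integration.json
sample = json.loads('{"summary": {"total": 12, "passed": 10, "failed": 1, "skipped": 1}}')
print(summarize(sample))  # (12, 10, 1, 1)
```

Missing keys default to 0, mirroring the `.get(..., 0)` fallbacks in the pipeline.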
669
admin-v2/.woodpecker/main.yml
Normal file
@@ -0,0 +1,669 @@
|
||||
# Woodpecker CI Main Pipeline
|
||||
# BreakPilot PWA - CI/CD Pipeline
|
||||
#
|
||||
# Plattform: ARM64 (Apple Silicon Mac Mini)
|
||||
#
|
||||
# Strategie:
|
||||
# - Tests laufen bei JEDEM Push/PR
|
||||
# - Test-Ergebnisse werden an Dashboard gesendet
|
||||
# - Builds/Scans laufen nur bei Tags oder manuell
|
||||
# - Deployment nur manuell (Sicherheit)
|
||||
|
||||
when:
|
||||
- event: [push, pull_request, manual, tag]
|
||||
branch: [main, develop]
|
||||
|
||||
clone:
|
||||
git:
|
||||
image: woodpeckerci/plugin-git
|
||||
settings:
|
||||
depth: 1
|
||||
extra_hosts:
|
||||
- macmini:192.168.178.100
|
||||
|
||||
variables:
|
||||
- &golang_image golang:1.23-alpine
|
||||
- &python_image python:3.12-slim
|
||||
- &python_ci_image breakpilot/python-ci:3.12 # Custom image with WeasyPrint
|
||||
- &nodejs_image node:20-alpine
|
||||
- &docker_image docker:27-cli
|
||||
|
||||
steps:
|
||||
# ========================================
|
||||
# STAGE 1: Lint (nur bei PRs)
|
||||
# ========================================
|
||||
|
||||
go-lint:
|
||||
image: golangci/golangci-lint:v1.55-alpine
|
||||
commands:
|
||||
- cd consent-service && golangci-lint run --timeout 5m ./...
|
||||
- cd ../billing-service && golangci-lint run --timeout 5m ./...
|
||||
- cd ../school-service && golangci-lint run --timeout 5m ./...
|
||||
when:
|
||||
event: pull_request
|
||||
|
||||
python-lint:
|
||||
image: *python_image
|
||||
commands:
|
||||
- pip install --quiet ruff black
|
||||
- ruff check backend/ --output-format=github || true
|
||||
- black --check backend/ || true
|
||||
when:
|
||||
event: pull_request
|
||||
|
||||
  # ========================================
  # STAGE 2: Unit tests with JSON output
  # Results are stored in the workspace (.ci-results/)
  # ========================================

  test-go-consent:
    image: *golang_image
    environment:
      CGO_ENABLED: "0"
    commands:
      - |
        set -euo pipefail
        apk add --no-cache jq bash
        mkdir -p .ci-results

        if [ ! -d "consent-service" ]; then
          echo '{"service":"consent-service","framework":"go","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-consent.json
          echo "WARNING: consent-service directory not found"
          exit 0
        fi

        cd consent-service
        set +e
        go test -v -json -coverprofile=coverage.out ./... 2>&1 | tee ../.ci-results/test-consent.json
        TEST_EXIT=$?
        set -e

        # Extract the JSON lines and count them with jq
        JSON_FILE="../.ci-results/test-consent.json"
        if grep -q '^{' "$JSON_FILE" 2>/dev/null; then
          TOTAL=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="run" and .Test != null)] | length')
          PASSED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="pass" and .Test != null)] | length')
          FAILED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="fail" and .Test != null)] | length')
          SKIPPED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="skip" and .Test != null)] | length')
        else
          echo "WARNING: no JSON lines found in $JSON_FILE (build error?)"
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        COVERAGE=$(go tool cover -func=coverage.out 2>/dev/null | tail -1 | awk '{print $3}' | tr -d '%' || echo "0")
        [ -z "$COVERAGE" ] && COVERAGE=0

        echo "{\"service\":\"consent-service\",\"framework\":\"go\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":$COVERAGE}" > ../.ci-results/results-consent.json
        cat ../.ci-results/results-consent.json

        # Backlog strategy: failures are reported but the pipeline keeps running
        if [ "$FAILED" -gt "0" ]; then
          echo "WARNING: $FAILED tests failed - they will be written to the backlog"
        fi

  test-go-billing:
    image: *golang_image
    environment:
      CGO_ENABLED: "0"
    commands:
      - |
        set -euo pipefail
        apk add --no-cache jq bash
        mkdir -p .ci-results

        if [ ! -d "billing-service" ]; then
          echo '{"service":"billing-service","framework":"go","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-billing.json
          echo "WARNING: billing-service directory not found"
          exit 0
        fi

        cd billing-service
        set +e
        go test -v -json -coverprofile=coverage.out ./... 2>&1 | tee ../.ci-results/test-billing.json
        TEST_EXIT=$?
        set -e

        # Extract the JSON lines and count them with jq
        JSON_FILE="../.ci-results/test-billing.json"
        if grep -q '^{' "$JSON_FILE" 2>/dev/null; then
          TOTAL=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="run" and .Test != null)] | length')
          PASSED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="pass" and .Test != null)] | length')
          FAILED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="fail" and .Test != null)] | length')
          SKIPPED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="skip" and .Test != null)] | length')
        else
          echo "WARNING: no JSON lines found in $JSON_FILE (build error?)"
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        COVERAGE=$(go tool cover -func=coverage.out 2>/dev/null | tail -1 | awk '{print $3}' | tr -d '%' || echo "0")
        [ -z "$COVERAGE" ] && COVERAGE=0

        echo "{\"service\":\"billing-service\",\"framework\":\"go\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":$COVERAGE}" > ../.ci-results/results-billing.json
        cat ../.ci-results/results-billing.json

        # Backlog strategy: failures are reported but the pipeline keeps running
        if [ "$FAILED" -gt "0" ]; then
          echo "WARNING: $FAILED tests failed - they will be written to the backlog"
        fi

  test-go-school:
    image: *golang_image
    environment:
      CGO_ENABLED: "0"
    commands:
      - |
        set -euo pipefail
        apk add --no-cache jq bash
        mkdir -p .ci-results

        if [ ! -d "school-service" ]; then
          echo '{"service":"school-service","framework":"go","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-school.json
          echo "WARNING: school-service directory not found"
          exit 0
        fi

        cd school-service
        set +e
        go test -v -json -coverprofile=coverage.out ./... 2>&1 | tee ../.ci-results/test-school.json
        TEST_EXIT=$?
        set -e

        # Extract the JSON lines and count them with jq
        JSON_FILE="../.ci-results/test-school.json"
        if grep -q '^{' "$JSON_FILE" 2>/dev/null; then
          TOTAL=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="run" and .Test != null)] | length')
          PASSED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="pass" and .Test != null)] | length')
          FAILED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="fail" and .Test != null)] | length')
          SKIPPED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="skip" and .Test != null)] | length')
        else
          echo "WARNING: no JSON lines found in $JSON_FILE (build error?)"
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        COVERAGE=$(go tool cover -func=coverage.out 2>/dev/null | tail -1 | awk '{print $3}' | tr -d '%' || echo "0")
        [ -z "$COVERAGE" ] && COVERAGE=0

        echo "{\"service\":\"school-service\",\"framework\":\"go\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":$COVERAGE}" > ../.ci-results/results-school.json
        cat ../.ci-results/results-school.json

        # Backlog strategy: failures are reported but the pipeline keeps running
        if [ "$FAILED" -gt "0" ]; then
          echo "WARNING: $FAILED tests failed - they will be written to the backlog"
        fi

  test-go-edu-search:
    image: *golang_image
    environment:
      CGO_ENABLED: "0"
    commands:
      - |
        set -euo pipefail
        apk add --no-cache jq bash
        mkdir -p .ci-results

        if [ ! -d "edu-search-service" ]; then
          echo '{"service":"edu-search-service","framework":"go","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-edu-search.json
          echo "WARNING: edu-search-service directory not found"
          exit 0
        fi

        cd edu-search-service
        set +e
        go test -v -json -coverprofile=coverage.out ./internal/... 2>&1 | tee ../.ci-results/test-edu-search.json
        TEST_EXIT=$?
        set -e

        # Extract the JSON lines and count them with jq
        JSON_FILE="../.ci-results/test-edu-search.json"
        if grep -q '^{' "$JSON_FILE" 2>/dev/null; then
          TOTAL=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="run" and .Test != null)] | length')
          PASSED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="pass" and .Test != null)] | length')
          FAILED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="fail" and .Test != null)] | length')
          SKIPPED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="skip" and .Test != null)] | length')
        else
          echo "WARNING: no JSON lines found in $JSON_FILE (build error?)"
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        COVERAGE=$(go tool cover -func=coverage.out 2>/dev/null | tail -1 | awk '{print $3}' | tr -d '%' || echo "0")
        [ -z "$COVERAGE" ] && COVERAGE=0

        echo "{\"service\":\"edu-search-service\",\"framework\":\"go\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":$COVERAGE}" > ../.ci-results/results-edu-search.json
        cat ../.ci-results/results-edu-search.json

        # Backlog strategy: failures are reported but the pipeline keeps running
        if [ "$FAILED" -gt "0" ]; then
          echo "WARNING: $FAILED tests failed - they will be written to the backlog"
        fi

  test-go-ai-compliance:
    image: *golang_image
    environment:
      CGO_ENABLED: "0"
    commands:
      - |
        set -euo pipefail
        apk add --no-cache jq bash
        mkdir -p .ci-results

        if [ ! -d "ai-compliance-sdk" ]; then
          echo '{"service":"ai-compliance-sdk","framework":"go","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-ai-compliance.json
          echo "WARNING: ai-compliance-sdk directory not found"
          exit 0
        fi

        cd ai-compliance-sdk
        set +e
        go test -v -json -coverprofile=coverage.out ./... 2>&1 | tee ../.ci-results/test-ai-compliance.json
        TEST_EXIT=$?
        set -e

        # Extract the JSON lines and count them with jq
        JSON_FILE="../.ci-results/test-ai-compliance.json"
        if grep -q '^{' "$JSON_FILE" 2>/dev/null; then
          TOTAL=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="run" and .Test != null)] | length')
          PASSED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="pass" and .Test != null)] | length')
          FAILED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="fail" and .Test != null)] | length')
          SKIPPED=$(grep '^{' "$JSON_FILE" | jq -s '[.[] | select(.Action=="skip" and .Test != null)] | length')
        else
          echo "WARNING: no JSON lines found in $JSON_FILE (build error?)"
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        COVERAGE=$(go tool cover -func=coverage.out 2>/dev/null | tail -1 | awk '{print $3}' | tr -d '%' || echo "0")
        [ -z "$COVERAGE" ] && COVERAGE=0

        echo "{\"service\":\"ai-compliance-sdk\",\"framework\":\"go\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":$COVERAGE}" > ../.ci-results/results-ai-compliance.json
        cat ../.ci-results/results-ai-compliance.json

        # Backlog strategy: failures are reported but the pipeline keeps running
        if [ "$FAILED" -gt "0" ]; then
          echo "WARNING: $FAILED tests failed - they will be written to the backlog"
        fi
  test-python-backend:
    image: *python_ci_image
    environment:
      CI: "true"
      DATABASE_URL: "postgresql://test:test@localhost:5432/test_db"
      SKIP_DB_TESTS: "true"
      SKIP_WEASYPRINT_TESTS: "false"
      SKIP_INTEGRATION_TESTS: "true"
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "backend" ]; then
          echo '{"service":"backend","framework":"pytest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-backend.json
          echo "WARNING: backend directory not found"
          exit 0
        fi

        cd backend
        # Set PYTHONPATH to the current directory (backend) so local packages like classroom_engine and alerts_agent are found
        # IMPORTANT: Use an absolute path and export before pip install to ensure the modules are available
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"

        # Test tools are pre-installed in the breakpilot/python-ci image
        # Only install project-specific dependencies
        pip install --quiet --no-cache-dir -r requirements.txt

        # NOTE: PostgreSQL service removed - tests that require a DB are skipped via SKIP_DB_TESTS=true
        # For full integration tests, use: docker compose -f docker-compose.test.yml up -d

        set +e
        # Use python -m pytest to ensure PYTHONPATH is properly applied before pytest starts
        python -m pytest tests/ -v --tb=short --cov=. --cov-report=term-missing --json-report --json-report-file=../.ci-results/test-backend.json
        TEST_EXIT=$?
        set -e

        if [ -f ../.ci-results/test-backend.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"backend\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-backend.json
        cat ../.ci-results/results-backend.json

        if [ "$TEST_EXIT" -ne "0" ]; then exit 1; fi

  test-python-voice:
    image: *python_image
    environment:
      CI: "true"
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "voice-service" ]; then
          echo '{"service":"voice-service","framework":"pytest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-voice.json
          echo "WARNING: voice-service directory not found"
          exit 0
        fi

        cd voice-service
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"
        pip install --quiet --no-cache-dir -r requirements.txt
        pip install --quiet --no-cache-dir pytest-json-report

        set +e
        python -m pytest tests/ -v --tb=short --json-report --json-report-file=../.ci-results/test-voice.json
        TEST_EXIT=$?
        set -e

        if [ -f ../.ci-results/test-voice.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../.ci-results/test-voice.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-voice.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-voice.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-voice.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"voice-service\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-voice.json
        cat ../.ci-results/results-voice.json

        if [ "$TEST_EXIT" -ne "0" ]; then exit 1; fi
  test-bqas-golden:
    image: *python_image
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "voice-service/tests/bqas" ]; then
          echo '{"service":"bqas-golden","framework":"pytest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-bqas-golden.json
          echo "WARNING: voice-service/tests/bqas directory not found"
          exit 0
        fi

        cd voice-service
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"
        pip install --quiet --no-cache-dir -r requirements.txt
        pip install --quiet --no-cache-dir pytest-json-report pytest-asyncio

        set +e
        python -m pytest tests/bqas/test_golden.py tests/bqas/test_regression.py tests/bqas/test_synthetic.py -v --tb=short --json-report --json-report-file=../.ci-results/test-bqas-golden.json
        TEST_EXIT=$?
        set -e

        if [ -f ../.ci-results/test-bqas-golden.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-golden.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-golden.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-golden.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-golden.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"bqas-golden\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-bqas-golden.json
        cat ../.ci-results/results-bqas-golden.json

        # BQAS tests may skip if Ollama is not available - don't fail the pipeline on skips
        if [ "$FAILED" -gt "0" ]; then exit 1; fi

  test-bqas-rag:
    image: *python_image
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "voice-service/tests/bqas" ]; then
          echo '{"service":"bqas-rag","framework":"pytest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-bqas-rag.json
          echo "WARNING: voice-service/tests/bqas directory not found"
          exit 0
        fi

        cd voice-service
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"
        pip install --quiet --no-cache-dir -r requirements.txt
        pip install --quiet --no-cache-dir pytest-json-report pytest-asyncio

        set +e
        python -m pytest tests/bqas/test_rag.py tests/bqas/test_notifier.py -v --tb=short --json-report --json-report-file=../.ci-results/test-bqas-rag.json
        TEST_EXIT=$?
        set -e

        if [ -f ../.ci-results/test-bqas-rag.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-rag.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-rag.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-rag.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-bqas-rag.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"bqas-rag\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-bqas-rag.json
        cat ../.ci-results/results-bqas-rag.json

        # BQAS tests may skip if Ollama is not available - don't fail the pipeline on skips
        if [ "$FAILED" -gt "0" ]; then exit 1; fi
  test-python-klausur:
    image: *python_image
    environment:
      CI: "true"
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "klausur-service/backend" ]; then
          echo '{"service":"klausur-service","framework":"pytest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-klausur.json
          echo "WARNING: klausur-service/backend directory not found"
          exit 0
        fi

        cd klausur-service/backend
        # Set PYTHONPATH to the current directory so local modules like hyde, hybrid_search, etc. are found
        export PYTHONPATH="$(pwd):${PYTHONPATH:-}"

        pip install --quiet --no-cache-dir -r requirements.txt 2>/dev/null || pip install --quiet --no-cache-dir fastapi uvicorn pytest pytest-asyncio pytest-json-report
        pip install --quiet --no-cache-dir pytest-json-report

        set +e
        python -m pytest tests/ -v --tb=short --json-report --json-report-file=../../.ci-results/test-klausur.json
        TEST_EXIT=$?
        set -e

        if [ -f ../../.ci-results/test-klausur.json ]; then
          TOTAL=$(python3 -c "import json; d=json.load(open('../../.ci-results/test-klausur.json')); print(d.get('summary',{}).get('total',0))" 2>/dev/null || echo "0")
          PASSED=$(python3 -c "import json; d=json.load(open('../../.ci-results/test-klausur.json')); print(d.get('summary',{}).get('passed',0))" 2>/dev/null || echo "0")
          FAILED=$(python3 -c "import json; d=json.load(open('../../.ci-results/test-klausur.json')); print(d.get('summary',{}).get('failed',0))" 2>/dev/null || echo "0")
          SKIPPED=$(python3 -c "import json; d=json.load(open('../../.ci-results/test-klausur.json')); print(d.get('summary',{}).get('skipped',0))" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        echo "{\"service\":\"klausur-service\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../../.ci-results/results-klausur.json
        cat ../../.ci-results/results-klausur.json

        if [ "$TEST_EXIT" -ne "0" ]; then exit 1; fi

  test-nodejs-h5p:
    image: *nodejs_image
    commands:
      - |
        set -uo pipefail
        mkdir -p .ci-results

        if [ ! -d "h5p-service" ]; then
          echo '{"service":"h5p-service","framework":"jest","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-h5p.json
          echo "WARNING: h5p-service directory not found"
          exit 0
        fi

        cd h5p-service
        npm ci --silent 2>/dev/null || npm install --silent

        set +e
        npm run test:ci -- --json --outputFile=../.ci-results/test-h5p.json 2>&1
        TEST_EXIT=$?
        set -e

        if [ -f ../.ci-results/test-h5p.json ]; then
          TOTAL=$(node -e "const d=require('../.ci-results/test-h5p.json'); console.log(d.numTotalTests || 0)" 2>/dev/null || echo "0")
          PASSED=$(node -e "const d=require('../.ci-results/test-h5p.json'); console.log(d.numPassedTests || 0)" 2>/dev/null || echo "0")
          FAILED=$(node -e "const d=require('../.ci-results/test-h5p.json'); console.log(d.numFailedTests || 0)" 2>/dev/null || echo "0")
          SKIPPED=$(node -e "const d=require('../.ci-results/test-h5p.json'); console.log(d.numPendingTests || 0)" 2>/dev/null || echo "0")
        else
          TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
        fi

        [ -z "$TOTAL" ] && TOTAL=0
        [ -z "$PASSED" ] && PASSED=0
        [ -z "$FAILED" ] && FAILED=0
        [ -z "$SKIPPED" ] && SKIPPED=0

        echo "{\"service\":\"h5p-service\",\"framework\":\"jest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" > ../.ci-results/results-h5p.json
        cat ../.ci-results/results-h5p.json

        if [ "$TEST_EXIT" -ne "0" ]; then exit 1; fi
  # ========================================
  # STAGE 2.5: Integration tests
  # ========================================
  # Integration tests run in a separate pipeline:
  # .woodpecker/integration.yml
  # (requires pipeline-level services for PostgreSQL and Valkey)

  # ========================================
  # STAGE 3: Send test results to the dashboard
  # ========================================

  report-test-results:
    image: curlimages/curl:8.10.1
    commands:
      - |
        set -uo pipefail
        echo "=== Sending test results to the dashboard ==="
        echo "Pipeline status: ${CI_PIPELINE_STATUS:-unknown}"
        ls -la .ci-results/ || echo "Directory not found"

        PIPELINE_STATUS="${CI_PIPELINE_STATUS:-unknown}"

        for f in .ci-results/results-*.json; do
          [ -f "$f" ] || continue
          echo "Sending: $f"
          curl -f -sS -X POST "http://backend:8000/api/tests/ci-result" \
            -H "Content-Type: application/json" \
            -d "{
              \"pipeline_id\": \"${CI_PIPELINE_NUMBER}\",
              \"commit\": \"${CI_COMMIT_SHA}\",
              \"branch\": \"${CI_COMMIT_BRANCH}\",
              \"status\": \"${PIPELINE_STATUS}\",
              \"test_results\": $(cat "$f")
            }" || echo "WARNING: could not send $f"
        done

        echo "=== Test results sent ==="
    when:
      status: [success, failure]
    depends_on:
      - test-go-consent
      - test-go-billing
      - test-go-school
      - test-go-edu-search
      - test-go-ai-compliance
      - test-python-backend
      - test-python-voice
      - test-bqas-golden
      - test-bqas-rag
      - test-python-klausur
      - test-nodejs-h5p
  # ========================================
  # STAGE 4: Build & security (tags/manual only)
  # ========================================

  build-consent-service:
    image: *docker_image
    commands:
      - docker build -t breakpilot/consent-service:${CI_COMMIT_SHA:0:8} ./consent-service
      - docker tag breakpilot/consent-service:${CI_COMMIT_SHA:0:8} breakpilot/consent-service:latest
      - echo "Built breakpilot/consent-service:${CI_COMMIT_SHA:0:8}"
    when:
      - event: tag
      - event: manual

  build-backend:
    image: *docker_image
    commands:
      - docker build -t breakpilot/backend:${CI_COMMIT_SHA:0:8} ./backend
      - docker tag breakpilot/backend:${CI_COMMIT_SHA:0:8} breakpilot/backend:latest
      - echo "Built breakpilot/backend:${CI_COMMIT_SHA:0:8}"
    when:
      - event: tag
      - event: manual

  build-voice-service:
    image: *docker_image
    commands:
      - |
        if [ -d ./voice-service ]; then
          docker build -t breakpilot/voice-service:${CI_COMMIT_SHA:0:8} ./voice-service
          docker tag breakpilot/voice-service:${CI_COMMIT_SHA:0:8} breakpilot/voice-service:latest
          echo "Built breakpilot/voice-service:${CI_COMMIT_SHA:0:8}"
        else
          echo "voice-service directory not found - skipping"
        fi
    when:
      - event: tag
      - event: manual

  generate-sbom:
    image: *golang_image
    commands:
      - |
        echo "Installing syft for ARM64..."
        wget -qO- https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
        syft dir:./consent-service -o cyclonedx-json > sbom-consent.json
        syft dir:./backend -o cyclonedx-json > sbom-backend.json
        if [ -d ./voice-service ]; then
          syft dir:./voice-service -o cyclonedx-json > sbom-voice.json
        fi
        echo "SBOMs generated successfully"
    when:
      - event: tag
      - event: manual

  vulnerability-scan:
    image: *golang_image
    commands:
      - |
        echo "Installing grype for ARM64..."
        wget -qO- https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
        grype sbom:sbom-consent.json -o table --fail-on critical || true
        grype sbom:sbom-backend.json -o table --fail-on critical || true
        if [ -f sbom-voice.json ]; then
          grype sbom:sbom-voice.json -o table --fail-on critical || true
        fi
    when:
      - event: tag
      - event: manual
    depends_on:
      - generate-sbom

  # ========================================
  # STAGE 5: Deploy (manual only)
  # ========================================

  deploy-production:
    image: *docker_image
    commands:
      - echo "Deploying to production..."
      - docker compose -f docker-compose.yml pull || true
      - docker compose -f docker-compose.yml up -d --remove-orphans || true
    when:
      event: manual
    depends_on:
      - build-consent-service
      - build-backend
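The Go test steps above all count results the same way: `go test -json` writes one JSON event per line, and the pipeline filters those lines by their `Action` field. A minimal sketch of that counting, using `grep -c` against a small hypothetical sample file instead of `jq` (a simplification: the pipeline's `jq` filter additionally drops package-level events via `.Test != null`):

```shell
# Hypothetical sample of `go test -json` output, one event per line
cat > /tmp/go-test-sample.json <<'EOF'
{"Action":"run","Test":"TestA"}
{"Action":"pass","Test":"TestA"}
{"Action":"run","Test":"TestB"}
{"Action":"fail","Test":"TestB"}
{"Action":"output"}
EOF

# Count events by Action, the same idea as the pipeline's jq filters
TOTAL=$(grep -c '"Action":"run"' /tmp/go-test-sample.json)
PASSED=$(grep -c '"Action":"pass"' /tmp/go-test-sample.json)
FAILED=$(grep -c '"Action":"fail"' /tmp/go-test-sample.json)
echo "total=$TOTAL passed=$PASSED failed=$FAILED"
# → total=2 passed=1 failed=1
```

The `jq -s` variant in the pipeline is more robust because it parses the events instead of pattern-matching them, at the cost of needing `jq` installed in the Alpine image.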
314 admin-v2/.woodpecker/security.yml Normal file
@@ -0,0 +1,314 @@
# Woodpecker CI Security Pipeline
# Dedicated security scans for DevSecOps
#
# Runs daily via cron and on every PR

when:
  - event: cron
    cron: "0 3 * * *" # Daily at 03:00
  - event: pull_request

clone:
  git:
    image: woodpeckerci/plugin-git
    settings:
      depth: 1
    extra_hosts:
      - macmini:192.168.178.100

steps:
  # ========================================
  # Static Analysis
  # ========================================

  semgrep-scan:
    image: returntocorp/semgrep:latest
    commands:
      - semgrep scan --config auto --json -o semgrep-results.json . || true
      - |
        if [ -f semgrep-results.json ]; then
          echo "=== Semgrep Findings ==="
          cat semgrep-results.json | head -100
        fi
    when:
      event: [pull_request, cron]

  bandit-python:
    image: python:3.12-slim
    commands:
      - pip install --quiet bandit
      - bandit -r backend/ -f json -o bandit-results.json || true
      - |
        if [ -f bandit-results.json ]; then
          echo "=== Bandit Findings ==="
          cat bandit-results.json | head -50
        fi
    when:
      event: [pull_request, cron]

  gosec-go:
    image: securego/gosec:latest
    commands:
      - gosec -fmt json -out gosec-consent.json ./consent-service/... || true
      - gosec -fmt json -out gosec-billing.json ./billing-service/... || true
      - echo "Go security scan finished"
    when:
      event: [pull_request, cron]
  # ========================================
  # Secrets Detection
  # ========================================

  gitleaks-scan:
    image: zricethezav/gitleaks:latest
    commands:
      - gitleaks detect --source . --report-format json --report-path gitleaks-report.json || true
      - |
        if [ -s gitleaks-report.json ]; then
          echo "=== WARNING: potential secrets found ==="
          cat gitleaks-report.json
        else
          echo "No secrets found"
        fi

  trufflehog-scan:
    image: trufflesecurity/trufflehog:latest
    commands:
      - trufflehog filesystem . --json > trufflehog-results.json 2>&1 || true
      - echo "TruffleHog scan finished"
  # ========================================
  # Dependency Vulnerabilities
  # ========================================

  npm-audit:
    image: node:20-alpine
    commands:
      - cd website && npm audit --json > ../npm-audit-website.json || true
      - cd ../studio-v2 && npm audit --json > ../npm-audit-studio.json || true
      - cd ../admin-v2 && npm audit --json > ../npm-audit-admin.json || true
      - echo "npm audit finished"
    when:
      event: [pull_request, cron]

  pip-audit:
    image: python:3.12-slim
    commands:
      - pip install --quiet pip-audit
      - pip-audit -r backend/requirements.txt --format json -o pip-audit-backend.json || true
      - pip-audit -r voice-service/requirements.txt --format json -o pip-audit-voice.json || true
      - echo "pip-audit finished"
    when:
      event: [pull_request, cron]

  go-vulncheck:
    image: golang:1.21-alpine
    commands:
      - go install golang.org/x/vuln/cmd/govulncheck@latest
      - cd consent-service && govulncheck ./... || true
      - cd ../billing-service && govulncheck ./... || true
      - echo "govulncheck finished"
    when:
      event: [pull_request, cron]
# ========================================
|
||||
# Container Security
|
||||
# ========================================
|
||||
|
||||
trivy-filesystem:
|
||||
image: aquasec/trivy:latest
|
||||
commands:
|
||||
- trivy fs --severity HIGH,CRITICAL --format json -o trivy-fs.json . || true
|
||||
- echo "Trivy Filesystem Scan abgeschlossen"
|
||||
when:
|
||||
event: cron
|
||||
|
||||
# ========================================
|
||||
# SBOM Generation (taeglich)
|
||||
# ========================================
|
||||
|
||||
daily-sbom:
|
||||
image: anchore/syft:latest
|
||||
commands:
|
||||
- mkdir -p sbom-reports
|
||||
- syft dir:. -o cyclonedx-json > sbom-reports/sbom-full-$(date +%Y%m%d).json
|
||||
- echo "SBOM generiert"
|
||||
when:
|
||||
event: cron
|
||||
|
||||
# ========================================
|
||||
# AUTO-FIX: Dependency Vulnerabilities
|
||||
# Laeuft nur bei Cron (nightly), nicht bei PRs
|
||||
# ========================================
|
||||
|
||||
auto-fix-npm:
|
||||
image: node:20-alpine
|
||||
commands:
|
||||
- apk add --no-cache git
|
||||
- |
|
||||
echo "=== Auto-Fix: NPM Dependencies ==="
|
||||
FIXES_APPLIED=0
|
||||
|
||||
for dir in website studio-v2 admin-v2 h5p-service; do
|
||||
if [ -d "$dir" ] && [ -f "$dir/package.json" ]; then
|
||||
echo "Pruefe $dir..."
|
||||
cd $dir
|
||||
|
||||
# Speichere Hash vor Fix
|
||||
BEFORE=$(md5sum package-lock.json 2>/dev/null || echo "none")
|
||||
|
||||
# npm audit fix (ohne --force fuer sichere Updates)
|
||||
npm audit fix --package-lock-only 2>/dev/null || true
|
||||
|
||||
# Pruefe ob Aenderungen
|
||||
AFTER=$(md5sum package-lock.json 2>/dev/null || echo "none")
|
||||
if [ "$BEFORE" != "$AFTER" ]; then
|
||||
echo " -> Fixes angewendet in $dir"
|
||||
FIXES_APPLIED=$((FIXES_APPLIED + 1))
|
||||
fi
|
||||
|
||||
cd ..
|
||||
fi
|
||||
done
|
||||
|
||||
echo "NPM Auto-Fix abgeschlossen: $FIXES_APPLIED Projekte aktualisiert"
|
||||
echo "NPM_FIXES=$FIXES_APPLIED" >> /tmp/autofix-results.env
|
||||
when:
|
||||
event: cron
|
||||
|
||||
auto-fix-python:
|
||||
image: python:3.12-slim
|
||||
commands:
|
||||
- apt-get update && apt-get install -y git
|
||||
- pip install --quiet pip-audit
|
||||
- |
|
||||
echo "=== Auto-Fix: Python Dependencies ==="
|
||||
FIXES_APPLIED=0
|
||||
|
||||
for reqfile in backend/requirements.txt voice-service/requirements.txt klausur-service/backend/requirements.txt; do
|
||||
if [ -f "$reqfile" ]; then
|
||||
echo "Pruefe $reqfile..."
|
||||
DIR=$(dirname $reqfile)
|
||||
|
||||
# pip-audit mit --fix (aktualisiert requirements.txt)
|
||||
pip-audit -r $reqfile --fix 2>/dev/null || true
|
||||
|
||||
# Pruefe ob requirements.txt geaendert wurde
|
||||
if git diff --quiet $reqfile 2>/dev/null; then
|
||||
echo " -> Keine Aenderungen in $reqfile"
|
||||
else
|
||||
echo " -> Fixes angewendet in $reqfile"
|
||||
FIXES_APPLIED=$((FIXES_APPLIED + 1))
|
||||
fi
|
||||
fi
|
||||
done
|
||||
|
||||
echo "Python Auto-Fix abgeschlossen: $FIXES_APPLIED Dateien aktualisiert"
|
||||
echo "PYTHON_FIXES=$FIXES_APPLIED" >> /tmp/autofix-results.env
|
||||
when:
|
||||
event: cron
|
||||
|
||||
auto-fix-go:
|
||||
image: golang:1.21-alpine
|
||||
commands:
|
||||
- apk add --no-cache git
|
||||
- |
|
||||
echo "=== Auto-Fix: Go Dependencies ==="
|
||||
FIXES_APPLIED=0
|
||||
|
||||
for dir in consent-service billing-service school-service edu-search ai-compliance-sdk; do
|
||||
if [ -d "$dir" ] && [ -f "$dir/go.mod" ]; then
|
||||
echo "Pruefe $dir..."
|
||||
cd $dir
|
||||
|
||||
# Go mod tidy und update
|
||||
go get -u ./... 2>/dev/null || true
|
||||
go mod tidy 2>/dev/null || true
|
||||
|
||||
# Pruefe ob go.mod/go.sum geaendert wurden
|
||||
if git diff --quiet go.mod go.sum 2>/dev/null; then
|
||||
echo " -> Keine Aenderungen in $dir"
|
||||
else
|
||||
echo " -> Updates angewendet in $dir"
|
||||
FIXES_APPLIED=$((FIXES_APPLIED + 1))
|
||||
fi
|
||||
|
||||
cd ..
|
||||
fi
|
||||
done
|
||||
|
||||
echo "Go Auto-Fix abgeschlossen: $FIXES_APPLIED Module aktualisiert"
|
||||
echo "GO_FIXES=$FIXES_APPLIED" >> /tmp/autofix-results.env
|
||||
when:
|
||||
event: cron
|
||||
|
||||
# ========================================
|
||||
# Commit & Push Auto-Fixes
|
||||
# ========================================
|
||||
|
||||
commit-security-fixes:
|
||||
image: alpine/git:latest
|
||||
commands:
|
||||
- |
|
||||
echo "=== Commit Security Fixes ==="
|
||||
|
||||
# Git konfigurieren
|
||||
git config --global user.email "security-bot@breakpilot.de"
|
||||
git config --global user.name "Security Bot"
|
||||
git config --global --add safe.directory /woodpecker/src
|
||||
|
||||
# Pruefe ob es Aenderungen gibt
|
||||
if git diff --quiet && git diff --cached --quiet; then
|
||||
echo "Keine Security-Fixes zum Committen"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Zeige was geaendert wurde
|
||||
echo "Geaenderte Dateien:"
|
||||
git status --short
|
||||
|
||||
# Stage alle relevanten Dateien
|
||||
git add -A \
|
||||
*/package-lock.json \
|
||||
*/requirements.txt \
|
||||
*/go.mod \
|
||||
*/go.sum \
|
||||
2>/dev/null || true
|
||||
|
||||
# Commit erstellen
|
||||
TIMESTAMP=$(date +%Y-%m-%d)
|
||||
git commit -m "fix(security): auto-fix vulnerable dependencies [$TIMESTAMP]
|
||||
|
||||
Automatische Sicherheitsupdates durch CI/CD Pipeline:
|
||||
- npm audit fix fuer Node.js Projekte
|
||||
- pip-audit --fix fuer Python Projekte
|
||||
- go get -u fuer Go Module
|
||||
|
||||
Co-Authored-By: Security Bot <security-bot@breakpilot.de>" || echo "Nichts zu committen"
|
||||
|
||||
# Push zum Repository
|
||||
git push origin HEAD:main || echo "Push fehlgeschlagen - manueller Review erforderlich"
|
||||
|
||||
echo "Security-Fixes committed und gepusht"
|
||||
when:
|
||||
event: cron
|
||||
status: success
|
||||
|
||||
# ========================================
|
||||
# Report to Dashboard
|
||||
# ========================================
|
||||
|
||||
update-security-dashboard:
|
||||
image: curlimages/curl:latest
|
||||
commands:
|
||||
- |
|
||||
curl -X POST "http://backend:8000/api/security/scan-results" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d "{
|
||||
\"scan_type\": \"daily\",
|
||||
\"timestamp\": \"$(date -Iseconds)\",
|
||||
\"tools\": [\"semgrep\", \"bandit\", \"gosec\", \"gitleaks\", \"trivy\"]
|
||||
}" || true
|
||||
when:
|
||||
status: [success, failure]
|
||||
event: cron
|
||||
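The three auto-fix steps each append a `KEY=COUNT` line to `/tmp/autofix-results.env`. A downstream step (a notifier or the dashboard reporter) could aggregate that file with a sketch like this; the function name is hypothetical:

```python
def parse_autofix_results(text: str) -> dict:
    """Parse KEY=VALUE lines as written to /tmp/autofix-results.env."""
    results = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "=" in line:
            key, _, value = line.partition("=")
            results[key] = int(value)
    return results

sample = "NPM_FIXES=2\nPYTHON_FIXES=0\nGO_FIXES=1\n"
totals = parse_autofix_results(sample)
print(totals, "total:", sum(totals.values()))
# {'NPM_FIXES': 2, 'PYTHON_FIXES': 0, 'GO_FIXES': 1} total: 3
```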
2029
admin-v2/AI_COMPLIANCE_SDK_IMPLEMENTATION_PLAN.md
Normal file
File diff suppressed because it is too large

566
admin-v2/BREAKPILOT_CONSENT_MANAGEMENT_PLAN.md
Normal file
@@ -0,0 +1,566 @@
# BreakPilot Consent Management System - Project Plan

## Executive Summary

This document describes the plan for building a complete Consent Management System (CMS) for BreakPilot. The system will be developed from scratch and replaces the existing Policy Vault system, which contains bugs and does not work reliably.

---

## Technology Decision: Which Language, and Why?

### Backend Options Compared

| Criterion | Rust | Go | Python (FastAPI) | TypeScript (NestJS) |
|-----------|------|-----|------------------|---------------------|
| **Performance** | Excellent | Very good | Good | Good |
| **Memory safety** | Guaranteed | GC | GC | GC |
| **Development speed** | Slow | Medium | Fast | Fast |
| **Learning curve** | Steep | Flat | Flat | Medium |
| **Web ecosystem** | Growing | Very good | Excellent | Excellent |
| **Integration with BreakPilot** | New | New | Already in use | Possible |
| **Team experience** | ? | ? | Available | Possible |

### Recommendation: **Python (FastAPI)** or **Go**

#### Option A: Python with FastAPI (recommended for fast integration)
**Pros:**
- Already used in the BreakPilot project
- Fast development
- Excellent documentation (auto-generated)
- Easy integration with existing code
- Type hints for better code quality
- Async/await support

**Cons:**
- Slower than Rust/Go under high load
- GIL limitations for CPU-intensive tasks

#### Option B: Go (recommended for a microservice architecture)
**Pros:**
- Extremely fast and efficient
- Excellent for microservices
- Simple deployment (single binary)
- Good concurrency
- Static typing

**Cons:**
- New tech stack in the project
- Separate codebase from BreakPilot

#### Option C: Rust (for maximum performance & safety)
**Pros:**
- Highest performance
- Memory safety without GC
- Excellent security
- WebAssembly support

**Cons:**
- Very steep learning curve
- Longer development time (2-3x)
- Smaller web ecosystem
- More complex error handling

### Final Recommendation

**For BreakPilot, the recommendation is: Go (Golang)**

Rationale:
1. **Independent microservice** - The CMS should run as a standalone service
2. **Performance** - Consent checks must be fast (they run on every API call)
3. **Simple deployment** - A single binary, ideal for containers
4. **Good balance** - Faster than Python, simpler than Rust
5. **Future-proof** - A modern language with a growing ecosystem

---
## System Architecture

```
┌─────────────────────────────────────────────────────────────────────┐
│                        BreakPilot Ecosystem                         │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  ┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐    │
│  │   BreakPilot    │   │  Consent Admin  │   │   BreakPilot    │    │
│  │  Studio (Web)   │   │    Dashboard    │   │   Mobile Apps   │    │
│  │  (Python/HTML)  │   │ (Vue.js/React)  │   │  (iOS/Android)  │    │
│  └────────┬────────┘   └────────┬────────┘   └────────┬────────┘    │
│           │                     │                     │             │
│           └─────────────────────┼─────────────────────┘             │
│                                 │                                   │
│                                 ▼                                   │
│                    ┌─────────────────────────┐                      │
│                    │   API Gateway / Proxy   │                      │
│                    └────────────┬────────────┘                      │
│                                 │                                   │
│           ┌─────────────────────┼─────────────────────┐             │
│           │                     │                     │             │
│           ▼                     ▼                     ▼             │
│  ┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐    │
│  │ BreakPilot API  │   │ Consent Service │   │  Auth Service   │    │
│  │ (Python/FastAPI)│   │      (Go)       │   │      (Go)       │    │
│  └────────┬────────┘   └────────┬────────┘   └────────┬────────┘    │
│           │                     │                     │             │
│           └─────────────────────┼─────────────────────┘             │
│                                 │                                   │
│                                 ▼                                   │
│                    ┌─────────────────────────┐                      │
│                    │       PostgreSQL        │                      │
│                    │   (Shared Database)     │                      │
│                    └─────────────────────────┘                      │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
```

---
## Project Phases

### Phase 1: Foundations & Database (Weeks 1-2)
**Goal:** Database schema and base services

#### 1.1 Database Design
- [ ] Users table (integration with BreakPilot auth)
- [ ] Legal documents (terms, privacy policy, community guidelines, etc.)
- [ ] Document versions (versioning with an approval workflow)
- [ ] User consents (which user consented to what, and when)
- [ ] Cookie categories (necessary, functional, marketing, analytics)
- [ ] Cookie consents (granular cookie opt-ins)
- [ ] Audit log (GDPR-compliant record keeping)

#### 1.2 Go Backend Setup
- [ ] Project structure following Clean Architecture
- [ ] Database layer (sqlx or GORM)
- [ ] Migration system
- [ ] Config management
- [ ] Logging & error handling

### Phase 2: Core Consent Service (Weeks 3-4)
**Goal:** Core consent-management functionality

#### 2.1 Document Management API
- [ ] CRUD for legal documents
- [ ] Versioning with diff tracking
- [ ] Draft/published/archived status
- [ ] Multi-language support (DE, EN, etc.)

#### 2.2 Consent Tracking API
- [ ] Create/read user consent
- [ ] Consent history per user
- [ ] Bulk consent across multiple documents
- [ ] Consent withdrawal

#### 2.3 Cookie Consent API
- [ ] Manage cookie categories
- [ ] Granular cookie preferences
- [ ] Consent banner configuration

### Phase 3: Admin Dashboard (Weeks 5-6)
**Goal:** Web interface for administrators

#### 3.1 Admin Frontend (Vue.js or React)
- [ ] Login/auth (integration with BreakPilot)
- [ ] Dashboard with statistics
- [ ] Document editor (rich text)
- [ ] Version management UI
- [ ] User consent overview
- [ ] Cookie management UI

#### 3.2 Approval Workflow
- [ ] Draft → Review → Approved → Published
- [ ] Notifications for new versions
- [ ] Rollback function

### Phase 4: BreakPilot Integration (Weeks 7-8)
**Goal:** Integration into BreakPilot Studio

#### 4.1 User-facing Features
- [ ] "Legal" button in the settings
- [ ] Show consent history
- [ ] Change cookie preferences
- [ ] Request data access (GDPR Art. 15)

#### 4.2 Cookie Banner
- [ ] Cookie consent modal on first visit
- [ ] Granular category selection
- [ ] "Accept all" / "Necessary only"
- [ ] Persistent storage

#### 4.3 Consent-Check Middleware
- [ ] Automatic check on API calls
- [ ] Block requests when consent is missing
- [ ] Respect marketing opt-out

### Phase 5: Data Subject Rights (Weeks 9-10)
**Goal:** GDPR compliance features

#### 5.1 Data Access (GDPR Art. 15)
- [ ] API for "which data do we hold?"
- [ ] Export as JSON/PDF
- [ ] Automated delivery

#### 5.2 Data Deletion (GDPR Art. 17)
- [ ] "Right to be forgotten"
- [ ] Anonymization instead of deletion (where required)
- [ ] Audit trail for deletions

#### 5.3 Data Portability (GDPR Art. 20)
- [ ] Export in a machine-readable format
- [ ] Download function in the frontend

### Phase 6: Testing & Security (Weeks 11-12)
**Goal:** Hardening and quality

#### 6.1 Testing
- [ ] Unit tests (>80% coverage)
- [ ] Integration tests
- [ ] E2E tests for critical flows
- [ ] Performance tests

#### 6.2 Security
- [ ] Security audit
- [ ] Penetration testing
- [ ] Rate limiting
- [ ] Input validation
- [ ] SQL injection prevention
- [ ] XSS protection

---
## Database Schema (Draft)

```sql
-- Users (integration with BreakPilot)
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    external_id VARCHAR(255) UNIQUE,        -- BreakPilot user ID
    email VARCHAR(255) UNIQUE NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Legal documents
CREATE TABLE legal_documents (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    type VARCHAR(50) NOT NULL,              -- 'terms', 'privacy', 'cookies', 'community'
    name VARCHAR(255) NOT NULL,
    description TEXT,
    is_mandatory BOOLEAN DEFAULT true,
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Document versions
CREATE TABLE document_versions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    document_id UUID REFERENCES legal_documents(id) ON DELETE CASCADE,
    version VARCHAR(20) NOT NULL,           -- Semver: 1.0.0, 1.1.0, etc.
    language VARCHAR(5) DEFAULT 'de',       -- ISO 639-1
    title VARCHAR(255) NOT NULL,
    content TEXT NOT NULL,                  -- HTML or Markdown
    summary TEXT,                           -- Short summary of the changes
    status VARCHAR(20) DEFAULT 'draft',     -- draft, review, approved, published, archived
    published_at TIMESTAMPTZ,
    created_by UUID REFERENCES users(id),
    approved_by UUID REFERENCES users(id),
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(document_id, version, language)
);

-- User consents
CREATE TABLE user_consents (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID REFERENCES users(id) ON DELETE CASCADE,
    document_version_id UUID REFERENCES document_versions(id),
    consented BOOLEAN NOT NULL,
    ip_address INET,
    user_agent TEXT,
    consented_at TIMESTAMPTZ DEFAULT NOW(),
    withdrawn_at TIMESTAMPTZ,
    UNIQUE(user_id, document_version_id)
);

-- Cookie categories
CREATE TABLE cookie_categories (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(100) NOT NULL,             -- 'necessary', 'functional', 'analytics', 'marketing'
    display_name_de VARCHAR(255) NOT NULL,
    display_name_en VARCHAR(255),
    description_de TEXT,
    description_en TEXT,
    is_mandatory BOOLEAN DEFAULT false,
    sort_order INT DEFAULT 0,
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Cookie consents
CREATE TABLE cookie_consents (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID REFERENCES users(id) ON DELETE CASCADE,
    category_id UUID REFERENCES cookie_categories(id),
    consented BOOLEAN NOT NULL,
    consented_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(user_id, category_id)
);

-- Audit log (GDPR-compliant)
CREATE TABLE consent_audit_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID,
    action VARCHAR(50) NOT NULL,            -- 'consent_given', 'consent_withdrawn', 'data_export', 'data_delete'
    entity_type VARCHAR(50),                -- 'document', 'cookie_category'
    entity_id UUID,
    details JSONB,
    ip_address INET,
    user_agent TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Indexes for performance
CREATE INDEX idx_user_consents_user ON user_consents(user_id);
CREATE INDEX idx_user_consents_version ON user_consents(document_version_id);
CREATE INDEX idx_cookie_consents_user ON cookie_consents(user_id);
CREATE INDEX idx_audit_log_user ON consent_audit_log(user_id);
CREATE INDEX idx_audit_log_created ON consent_audit_log(created_at);
```
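Given this schema, the core consent check reduces to: the user must hold a row in `user_consents` for the latest *published* version of the document, with `consented = true` and no `withdrawn_at`. A minimal in-memory Python sketch of that decision (the dataclass and function names are illustrative, not part of the schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Consent:
    """In-memory stand-in for one row of user_consents."""
    document_version_id: str
    consented: bool
    withdrawn_at: Optional[str] = None  # timestamp; None means not withdrawn

def has_valid_consent(latest_published_version_id: str,
                      user_consents: list) -> bool:
    """True iff the user holds an active consent for the latest published version."""
    for c in user_consents:
        if (c.document_version_id == latest_published_version_id
                and c.consented
                and c.withdrawn_at is None):
            return True
    return False

consents = [
    Consent("v1", True),
    Consent("v2", True, withdrawn_at="2024-12-01T10:00:00Z"),
]
print(has_valid_consent("v1", consents))  # True  (active consent on record)
print(has_valid_consent("v2", consents))  # False (consent was withdrawn)
```

Note that publishing a new version implicitly invalidates old consents: once "v3" becomes the latest published version, `has_valid_consent("v3", ...)` is False until the user re-consents.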

---

## API Endpoints (Draft)

### Public API (for the BreakPilot frontend)

```
# Fetch documents
GET    /api/v1/documents                    # All active documents
GET    /api/v1/documents/:type              # Document by type (terms, privacy)
GET    /api/v1/documents/:type/latest       # Latest published version

# Consent management
POST   /api/v1/consent                      # Give consent
GET    /api/v1/consent/my                   # My consents
GET    /api/v1/consent/check/:documentType  # Check whether consent was given
DELETE /api/v1/consent/:id                  # Withdraw consent

# Cookie consent
GET    /api/v1/cookies/categories           # Cookie categories
POST   /api/v1/cookies/consent              # Set cookie preferences
GET    /api/v1/cookies/consent/my           # My cookie settings

# Data subject rights (GDPR)
GET    /api/v1/privacy/my-data              # Fetch all of my data
POST   /api/v1/privacy/export               # Request a data export
POST   /api/v1/privacy/delete               # Request deletion
```

### Admin API (for the admin dashboard)

```
# Document management
GET    /api/v1/admin/documents              # All documents (including drafts)
POST   /api/v1/admin/documents              # New document
PUT    /api/v1/admin/documents/:id          # Edit document
DELETE /api/v1/admin/documents/:id          # Delete document

# Version management
GET    /api/v1/admin/versions/:docId        # All versions of a document
POST   /api/v1/admin/versions               # Create a new version
PUT    /api/v1/admin/versions/:id           # Edit a version
POST   /api/v1/admin/versions/:id/publish   # Publish a version
POST   /api/v1/admin/versions/:id/archive   # Archive a version

# Cookie categories
GET    /api/v1/admin/cookies/categories     # All categories
POST   /api/v1/admin/cookies/categories     # New category
PUT    /api/v1/admin/cookies/categories/:id
DELETE /api/v1/admin/cookies/categories/:id

# Statistics & reports
GET    /api/v1/admin/stats/consents         # Consent statistics
GET    /api/v1/admin/stats/cookies          # Cookie statistics
GET    /api/v1/admin/audit-log              # Audit log (with filters)
```

---
## Consent-Check Middleware (Concept)

```go
// middleware/consent_check.go

func ConsentCheckMiddleware(requiredConsent string) gin.HandlerFunc {
    return func(c *gin.Context) {
        userID := c.GetString("user_id")

        // Check whether the user has consented
        hasConsent, err := consentService.CheckConsent(userID, requiredConsent)
        if err != nil {
            c.AbortWithStatusJSON(500, gin.H{"error": "Consent check failed"})
            return
        }

        if !hasConsent {
            c.AbortWithStatusJSON(403, gin.H{
                "error":         "consent_required",
                "document_type": requiredConsent,
                "message":       "You must accept the terms of service",
            })
            return
        }

        c.Next()
    }
}

// Usage in BreakPilot
router.POST("/api/worksheets",
    authMiddleware,
    ConsentCheckMiddleware("terms"),
    worksheetHandler.Create,
)
```
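The same guard would also be needed on the Python (FastAPI) side of BreakPilot. A framework-agnostic sketch of the equivalent logic; `check_consent` is a hypothetical stand-in for the HTTP call to the Go service's `GET /api/v1/consent/check/:documentType` endpoint:

```python
class ConsentRequired(Exception):
    """Raised when the caller has not consented to the required document."""
    def __init__(self, document_type: str):
        self.document_type = document_type
        super().__init__(f"consent_required: {document_type}")

def require_consent(user_id: str, document_type: str, check_consent) -> None:
    """Guard an endpoint: raise ConsentRequired unless consent is on record.

    check_consent(user_id, document_type) stands in for the call to the
    consent service; in FastAPI this guard would map ConsentRequired to
    an HTTP 403 response, mirroring the Go middleware above.
    """
    if not check_consent(user_id, document_type):
        raise ConsentRequired(document_type)

# Usage with a fake consent lookup:
fake_store = {("u1", "terms"): True}
check = lambda uid, doc: fake_store.get((uid, doc), False)

require_consent("u1", "terms", check)  # passes silently
try:
    require_consent("u2", "terms", check)
except ConsentRequired as e:
    print("blocked:", e.document_type)  # blocked: terms
```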

---

## Cookie Banner Flow

```
┌─────────────────────────────────────────────────────────────┐
│                         First Visit                         │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  1. User opens BreakPilot                                   │
│         │                                                   │
│         ▼                                                   │
│  2. Check: has the user already given cookie consent?       │
│         │                                                   │
│     ┌───┴────────┐                                          │
│     │ No         │ Yes                                      │
│     ▼            ▼                                          │
│  3. Show cookie  Load stored                                │
│     banner       preferences                                │
│     │                                                       │
│     ▼                                                       │
│  ┌─────────────────────────────────────────┐                │
│  │         Cookie Consent Banner           │                │
│  ├─────────────────────────────────────────┤                │
│  │ We use cookies to give you the best     │                │
│  │ possible experience.                    │                │
│  │                                         │                │
│  │ ☑ Necessary (always active)             │                │
│  │ ☐ Functional                            │                │
│  │ ☐ Analytics                             │                │
│  │ ☐ Marketing                             │                │
│  │                                         │                │
│  │ [Accept all]      [Save selection]      │                │
│  │ [Necessary only]  [Learn more]          │                │
│  └─────────────────────────────────────────┘                │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
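The banner's "always active" rule can be captured in one resolution step: mandatory categories are forced on, everything else defaults to off until the user explicitly opts in. A minimal sketch (function name illustrative; `categories` mirrors rows of the `cookie_categories` table):

```python
def effective_cookie_consent(categories: list, user_choices: dict) -> dict:
    """Resolve the user's effective cookie settings.

    Mandatory categories are always on, regardless of user input;
    all other categories default to off until explicitly enabled.
    """
    result = {}
    for cat in categories:
        if cat["is_mandatory"]:
            result[cat["name"]] = True
        else:
            result[cat["name"]] = bool(user_choices.get(cat["name"], False))
    return result

cats = [
    {"name": "necessary", "is_mandatory": True},
    {"name": "functional", "is_mandatory": False},
    {"name": "analytics", "is_mandatory": False},
    {"name": "marketing", "is_mandatory": False},
]
resolved = effective_cookie_consent(cats, {"functional": True, "necessary": False})
print(resolved)
# "necessary" stays True even though the user tried to disable it
```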

---

## Legal Section in the BreakPilot Frontend (Mockup)

```
┌─────────────────────────────────────────────────────────────┐
│ Settings > Legal                                            │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ My Consents                                         │    │
│  ├─────────────────────────────────────────────────────┤    │
│  │                                                     │    │
│  │  ✓ Terms of Service                                 │    │
│  │    Version 2.1 · Accepted on 2024-11-15             │    │
│  │    [View] [Withdraw]                                │    │
│  │                                                     │    │
│  │  ✓ Privacy Policy                                   │    │
│  │    Version 3.0 · Accepted on 2024-11-15             │    │
│  │    [View] [Withdraw]                                │    │
│  │                                                     │    │
│  │  ✓ Community Guidelines                             │    │
│  │    Version 1.2 · Accepted on 2024-11-15             │    │
│  │    [View] [Withdraw]                                │    │
│  │                                                     │    │
│  └─────────────────────────────────────────────────────┘    │
│                                                             │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ Cookie Settings                                     │    │
│  ├─────────────────────────────────────────────────────┤    │
│  │                                                     │    │
│  │  ☑ Necessary cookies (required)                     │    │
│  │  ☑ Functional cookies                               │    │
│  │  ☐ Analytics cookies                                │    │
│  │  ☐ Marketing cookies                                │    │
│  │                                                     │    │
│  │  [Save settings]                                    │    │
│  └─────────────────────────────────────────────────────┘    │
│                                                             │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ My Data (GDPR)                                      │    │
│  ├─────────────────────────────────────────────────────┤    │
│  │                                                     │    │
│  │  [Export my data]                                   │    │
│  │  Receive a copy of all of your stored data as a     │    │
│  │  JSON file.                                         │    │
│  │                                                     │    │
│  │  [Delete account]                                   │    │
│  │  All of your data will be irrevocably deleted.      │    │
│  │                                                     │    │
│  └─────────────────────────────────────────────────────┘    │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

---

## Next Steps

### Immediate (this week)
1. **Decision:** Go or Python for the backend?
2. **Project setup:** Create the repository
3. **Database:** Finalize the schema and run the migrations

### Short term (next 2 weeks)
1. Implement the core API
2. Basic integration into BreakPilot

### Medium term (next 4-6 weeks)
1. Admin dashboard
2. Cookie banner
3. GDPR features

---

## Open Questions

1. **Language:** Go or Python for the backend?
2. **Admin dashboard:** A dedicated frontend, or integrated into BreakPilot?
3. **Hosting:** Same server as BreakPilot, or a separate service?
4. **Auth:** Shared authentication with BreakPilot, or a dedicated system?
5. **Database:** Shared PostgreSQL, or a dedicated instance?

---

## Resource Estimate

| Phase | Effort (days) | Description |
|-------|---------------|-------------|
| Phase 1 | 5-7 | Database & setup |
| Phase 2 | 8-10 | Core consent service |
| Phase 3 | 10-12 | Admin dashboard |
| Phase 4 | 8-10 | BreakPilot integration |
| Phase 5 | 5-7 | GDPR features |
| Phase 6 | 5-7 | Testing & security |
| **Total** | **41-53** | ~8-10 weeks |

---

*Document created: December 12, 2024*
*Version: 1.0*
473
admin-v2/CONTENT_SERVICE_SETUP.md
Normal file
@@ -0,0 +1,473 @@
|
||||
# BreakPilot Content Service - Setup & Deployment Guide
|
||||
|
||||
## 🎯 Übersicht
|
||||
|
||||
Der BreakPilot Content Service ist eine vollständige Educational Content Management Plattform mit:
|
||||
|
||||
- ✅ **Content Service API** (FastAPI) - Educational Content Management
|
||||
- ✅ **MinIO S3 Storage** - File Storage für Videos, PDFs, Bilder
|
||||
- ✅ **H5P Service** - Interactive Educational Content (Quizzes, etc.)
|
||||
- ✅ **Matrix Feed Integration** - Content Publishing zu Matrix Spaces
|
||||
- ✅ **PostgreSQL** - Content Metadata Storage
|
||||
- ✅ **Creative Commons Licensing** - CC-BY, CC-BY-SA, etc.
|
||||
- ✅ **Rating & Download Tracking** - Analytics & Impact Scoring
|
||||
|
||||
## 🚀 Quick Start
|
||||
|
||||
### 1. Alle Services starten
|
||||
|
||||
```bash
|
||||
# Haupt-Services + Content Services starten
|
||||
docker-compose \
|
||||
-f docker-compose.yml \
|
||||
-f docker-compose.content.yml \
|
||||
up -d
|
||||
|
||||
# Logs verfolgen
|
||||
docker-compose -f docker-compose.yml -f docker-compose.content.yml logs -f
|
||||
```
|
||||
|
||||
### 2. Verfügbare Services
|
||||
|
||||
| Service | URL | Beschreibung |
|
||||
|---------|-----|--------------|
|
||||
| Content Service API | http://localhost:8002 | REST API für Content Management |
|
||||
| MinIO Console | http://localhost:9001 | Storage Dashboard (User: minioadmin, Pass: minioadmin123) |
|
||||
| H5P Service | http://localhost:8003 | Interactive Content Editor |
|
||||
| Content DB | localhost:5433 | PostgreSQL Database |
|
||||
|
||||
### 3. API Dokumentation
|
||||
|
||||
Content Service API Docs:
|
||||
```
|
||||
http://localhost:8002/docs
|
||||
```
|
||||
|
||||
## 📦 Installation (Development)
|
||||
|
||||
### Content Service (Backend)
|
||||
|
||||
```bash
|
||||
cd backend/content_service
|
||||
|
||||
# Virtual Environment erstellen
|
||||
python3 -m venv venv
|
||||
source venv/bin/activate # Windows: venv\Scripts\activate
|
||||
|
||||
# Dependencies installieren
|
||||
pip install -r requirements.txt
|
||||
|
||||
# Environment Variables
|
||||
cp .env.example .env
|
||||
|
||||
# Database Migrations
|
||||
alembic upgrade head
|
||||
|
||||
# Service starten
|
||||
uvicorn main:app --reload --port 8002
|
||||
```
|
||||
|
||||
### H5P Service
|
||||
|
||||
```bash
|
||||
cd h5p-service
|
||||
|
||||
# Dependencies installieren
|
||||
npm install
|
||||
|
||||
# Service starten
|
||||
npm start
|
||||
```
|
||||
|
||||
### Creator Dashboard (Frontend)
|
||||
|
||||
```bash
|
||||
cd frontend/creator-studio
|
||||
|
||||
# Dependencies installieren
|
||||
npm install
|
||||
|
||||
# Development Server
|
||||
npm run dev
|
||||
```
|
||||
|
||||
## 🔧 Konfiguration
|
||||
|
||||
### Environment Variables
|
||||
|
||||
Erstelle `.env` im Projekt-Root:
|
||||
|
||||
```env
|
||||
# Content Service
|
||||
CONTENT_DB_URL=postgresql://breakpilot:breakpilot123@localhost:5433/breakpilot_content
|
||||
MINIO_ENDPOINT=localhost:9000
|
||||
MINIO_ACCESS_KEY=minioadmin
|
||||
MINIO_SECRET_KEY=minioadmin123
|
||||
MINIO_BUCKET=breakpilot-content
|
||||
|
||||
# Matrix Integration
|
||||
MATRIX_HOMESERVER=http://localhost:8008
|
||||
MATRIX_ACCESS_TOKEN=your-matrix-token-here
|
||||
MATRIX_BOT_USER=@breakpilot-bot:localhost
|
||||
MATRIX_FEED_ROOM=!breakpilot-feed:localhost
|
||||
|
||||
# OAuth2 (consent-service)
|
||||
CONSENT_SERVICE_URL=http://localhost:8081
|
||||
JWT_SECRET=your-jwt-secret-here
|
||||
|
||||
# H5P Service
|
||||
H5P_BASE_URL=http://localhost:8003
|
||||
H5P_STORAGE_PATH=/app/h5p-content
|
||||
```

## 📝 Content Service API Endpoints

### Content Management

```bash
# Create Content
POST /api/v1/content
{
  "title": "5-Minuten Yoga für Grundschule",
  "description": "Bewegungspause mit einfachen Yoga-Übungen",
  "content_type": "video",
  "category": "movement",
  "license": "CC-BY-SA-4.0",
  "age_min": 6,
  "age_max": 10,
  "tags": ["yoga", "bewegung", "pause"]
}

# Upload File
POST /api/v1/upload
Content-Type: multipart/form-data
file: <video-file>

# Add Files to Content
POST /api/v1/content/{content_id}/files
{
  "file_urls": ["http://minio:9000/breakpilot-content/..."]
}

# Publish Content (→ Matrix Feed)
POST /api/v1/content/{content_id}/publish

# List Content (with filters)
GET /api/v1/content?category=movement&age_min=6&age_max=10

# Get Content Details
GET /api/v1/content/{content_id}

# Rate Content
POST /api/v1/content/{content_id}/rate
{
  "stars": 5,
  "comment": "Sehr hilfreich für meine Klasse!"
}
```

### H5P Interactive Content

```bash
# Get H5P Editor
GET http://localhost:8003/h5p/editor/new

# Save H5P Content
POST http://localhost:8003/h5p/editor
{
  "library": "H5P.InteractiveVideo 1.22",
  "params": { ... }
}

# Play H5P Content
GET http://localhost:8003/h5p/play/{contentId}

# Export as .h5p File
GET http://localhost:8003/h5p/export/{contentId}
```

## 🎨 Creator Workflow

### 1. Create Content

```javascript
// Creator Dashboard
const content = await createContent({
  title: "Mathe-Quiz: Einmaleins",
  description: "Interaktives Quiz zum Üben des Einmaleins",
  content_type: "h5p",
  category: "math",
  license: "CC-BY-SA-4.0",
  age_min: 7,
  age_max: 9
});
```

### 2. Upload Files

```javascript
// Upload video/PDF/images
const file = document.querySelector('#fileInput').files[0];
const formData = new FormData();
formData.append('file', file);

const response = await fetch('/api/v1/upload', {
  method: 'POST',
  body: formData
});

const { file_url } = await response.json();
```

### 3. Publish to Matrix Feed

```javascript
// Publish → Matrix Spaces
await publishContent(content.id);
// → The content appears in #movement, #math, etc.
```

## 📊 Matrix Feed Integration

### Matrix Spaces Structure

```
#breakpilot (Root Space)
├── #feed      (chronological content feed)
├── #bewegung  (category: movement)
├── #mathe     (category: math)
├── #steam     (category: STEAM)
└── #sprache   (category: language)
```
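
The category-to-room routing implied by this tree can be sketched as a simple lookup; the fallback to `#feed` for unknown categories is an assumption:

```python
# Room aliases as shown in the space tree above
CATEGORY_ROOMS = {
    "movement": "#bewegung",
    "math": "#mathe",
    "steam": "#steam",
    "language": "#sprache",
}

def room_for_category(category: str) -> str:
    """Resolve a content category to its Matrix room alias."""
    # Unknown categories fall back to the chronological feed (assumption).
    return CATEGORY_ROOMS.get(category, "#feed")
```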

### Content Message Format

When a content item is published, the following message appears in Matrix:

```
📹 5-Minuten Yoga für Grundschule

Bewegungspause mit einfachen Yoga-Übungen für den Unterricht

📝 Von: Max Mustermann
🏃 Kategorie: movement
👥 Alter: 6-10 Jahre
⚖️ Lizenz: CC-BY-SA-4.0
🏷️ Tags: yoga, bewegung, pause

[📥 Inhalt ansehen/herunterladen]
```
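
A sketch of how such a feed message could be assembled from the content metadata. The field names follow the create-content payload shown earlier; the exact template the service uses is not specified here, so this plain-text variant is an assumption:

```python
def format_feed_message(content: dict) -> str:
    """Build a plain-text Matrix feed message for a published content item."""
    lines = [
        f"📹 {content['title']}",
        "",
        content["description"],
        "",
        f"📝 Von: {content['creator']}",
        f"🏃 Kategorie: {content['category']}",
        f"👥 Alter: {content['age_min']}-{content['age_max']} Jahre",
        f"⚖️ Lizenz: {content['license']}",
        f"🏷️ Tags: {', '.join(content['tags'])}",
    ]
    return "\n".join(lines)
```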

## 🔐 Creative Commons Licenses

Available licenses:

- `CC-BY-4.0` - Attribution
- `CC-BY-SA-4.0` - Attribution + ShareAlike (recommended)
- `CC-BY-NC-4.0` - Attribution + NonCommercial
- `CC-BY-NC-SA-4.0` - Attribution + NonCommercial + ShareAlike
- `CC0-1.0` - Public Domain

### License Workflow

```python
# When content is created, the creator selects a license
content.license = "CC-BY-SA-4.0"

# The system then validates and renders:
# - only the allowed licenses are accepted
# - a license badge is displayed in the UI
# - the badge links to the Creative Commons deed
```
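
The validation step can be sketched as a whitelist check against the licenses listed above; the exception type and the URL helper are assumptions, not the service's actual code:

```python
ALLOWED_LICENSES = {
    "CC-BY-4.0",
    "CC-BY-SA-4.0",
    "CC-BY-NC-4.0",
    "CC-BY-NC-SA-4.0",
    "CC0-1.0",
}

def validate_license(license_id: str) -> str:
    """Accept only the Creative Commons licenses supported by the platform."""
    if license_id not in ALLOWED_LICENSES:
        raise ValueError(f"Unsupported license: {license_id}")
    return license_id

def license_url(license_id: str) -> str:
    """Link to the Creative Commons deed (CC0 lives under /publicdomain/)."""
    if license_id == "CC0-1.0":
        return "https://creativecommons.org/publicdomain/zero/1.0/"
    code, version = license_id.rsplit("-", 1)  # e.g. "CC-BY-SA", "4.0"
    return f"https://creativecommons.org/licenses/{code[3:].lower()}/{version}/"
```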

## 📈 Analytics & Impact Scoring

### Download Tracking

```bash
# Tracked automatically on every download
POST /api/v1/content/{content_id}/download

# → increments the download counter
# → stores a download event
# → feeds into the impact score
```

### Creator Statistics

```bash
# Get Creator Stats
GET /api/v1/stats/creator/{creator_id}

{
  "total_contents": 12,
  "total_downloads": 347,
  "total_views": 1203,
  "avg_rating": 4.7,
  "impact_score": 892.5,
  "content_breakdown": {
    "movement": 5,
    "math": 4,
    "steam": 3
  }
}
```
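
A sketch of how the per-creator numbers could be aggregated from raw content rows; the input row shape is an assumption for illustration:

```python
def creator_stats(contents: list[dict]) -> dict:
    """Aggregate per-creator statistics from a list of content rows."""
    rated = [c for c in contents if c.get("rating_count")]
    total_stars = sum(c["avg_rating"] * c["rating_count"] for c in rated)
    total_votes = sum(c["rating_count"] for c in rated)
    breakdown: dict[str, int] = {}
    for c in contents:
        breakdown[c["category"]] = breakdown.get(c["category"], 0) + 1
    return {
        "total_contents": len(contents),
        "total_downloads": sum(c["downloads"] for c in contents),
        "avg_rating": round(total_stars / total_votes, 2) if total_votes else 0.0,
        "content_breakdown": breakdown,
    }
```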

## 🧪 Testing

### API Tests

```bash
# Pytest
cd backend/content_service
pytest tests/

# With coverage
pytest --cov=. --cov-report=html
```

### Integration Tests

```bash
# Test the content upload flow
curl -X POST http://localhost:8002/api/v1/content \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Test Content",
    "content_type": "pdf",
    "category": "math",
    "license": "CC-BY-SA-4.0"
  }'
```

## 🐳 Docker Commands

```bash
# Build a single service
docker-compose -f docker-compose.content.yml build content-service

# Start only the content services
docker-compose -f docker-compose.content.yml up -d

# Tail the logs of a single service
docker-compose logs -f content-service

# Restart a service
docker-compose restart content-service

# Stop everything
docker-compose -f docker-compose.yml -f docker-compose.content.yml down

# Also remove volumes (warning: this deletes data!)
docker-compose -f docker-compose.yml -f docker-compose.content.yml down -v
```

## 🗄️ Database Migrations

```bash
cd backend/content_service

# Create a new migration
alembic revision --autogenerate -m "Add new field"

# Apply migrations
alembic upgrade head

# Roll back one revision
alembic downgrade -1
```

## 📱 Frontend Development

### Creator Studio

```bash
cd frontend/creator-studio

# Install dependencies
npm install

# Development
npm run dev      # → http://localhost:3000

# Build
npm run build

# Preview the production build
npm run preview
```

## 🔒 GDPR (DSGVO) Compliance

### Data Minimization

- ✅ Only the necessary metadata is stored
- ✅ No student data
- ✅ IP addresses anonymized after 7 days
- ✅ Users can delete their content and account
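
The IP anonymization mentioned above could look like this minimal sketch; the exact scheme the service uses is not documented here, so zeroing the host part (the last octet for IPv4, everything past a /48 for IPv6) is an assumption:

```python
import ipaddress

def anonymize_ip(ip: str) -> str:
    """Zero the host part of an address before long-term storage."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48  # assumed truncation lengths
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)
```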

### Data Export

```bash
# User data export
GET /api/v1/user/export
→ returns a JSON file with all of the user's data
```

## 🚨 Troubleshooting

### MinIO Connection Failed

```bash
# Check MinIO status
docker-compose logs minio

# Test the connection
curl http://localhost:9000/minio/health/live
```

### Content Service Database Connection

```bash
# Check PostgreSQL
docker-compose logs content-db

# Connect manually
docker exec -it breakpilot-pwa-content-db psql -U breakpilot -d breakpilot_content
```

### H5P Service Not Starting

```bash
# Check the logs
docker-compose logs h5p-service

# Rebuild
docker-compose build h5p-service
docker-compose up -d h5p-service
```

## 📚 Further Documentation

- [Architekturempfehlung](./backend/docs/Architekturempfehlung%20für%20Breakpilot%20–%20Offene,%20modulare%20Bildungsplattform%20im%20DACH-Raum.pdf)
- [Content Service API](./backend/content_service/README.md)
- [H5P Integration](./h5p-service/README.md)
- [Matrix Feed Setup](./docs/matrix-feed-setup.md)

## 🎉 Next Steps

1. ✅ Start the services (see Quick Start)
2. ✅ Create a creator account
3. ✅ Upload your first content
4. ✅ Create H5P interactive content
5. ✅ Publish content → Matrix feed
6. ✅ Test the Teacher Discovery UI
7. 🔜 Integrate OAuth2 SSO with the consent-service
8. 🔜 Prepare the production deployment

## 💡 Support

For questions or problems:
- GitHub Issues: https://github.com/breakpilot/breakpilot-pwa/issues
- Matrix Chat: #breakpilot-dev:matrix.org
- Email: dev@breakpilot.app

@@ -16,11 +16,13 @@ COPY . .
ARG NEXT_PUBLIC_API_URL
ARG NEXT_PUBLIC_OLD_ADMIN_URL
ARG NEXT_PUBLIC_SDK_URL
ARG NEXT_PUBLIC_KLAUSUR_SERVICE_URL

# Set environment variables for build
ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL
ENV NEXT_PUBLIC_OLD_ADMIN_URL=$NEXT_PUBLIC_OLD_ADMIN_URL
ENV NEXT_PUBLIC_SDK_URL=$NEXT_PUBLIC_SDK_URL
ENV NEXT_PUBLIC_KLAUSUR_SERVICE_URL=$NEXT_PUBLIC_KLAUSUR_SERVICE_URL

# Build the application
RUN npm run build

admin-v2/IMPLEMENTATION_SUMMARY.md (new file, 427 lines)
@@ -0,0 +1,427 @@

# 🎓 BreakPilot Content Service - Implementation Summary

## ✅ Fully Implemented Sprints

### **Sprint 1-2: Content Service Foundation** ✅

**Backend (FastAPI):**
- ✅ Complete database schema (PostgreSQL)
  - `Content` model with all metadata
  - `Rating` model for teacher reviews
  - `Tag` system for content organization
  - `Download` tracking for impact scoring
- ✅ Pydantic schemas for API validation
- ✅ Full CRUD API for content management
- ✅ Upload API for files (video, PDF, images, audio)
- ✅ Search & filter endpoints
- ✅ Analytics & statistics endpoints

**Storage:**
- ✅ MinIO S3-compatible object storage
- ✅ Automatic bucket creation
- ✅ Public read policy for content
- ✅ File upload integration
- ✅ Presigned URLs for private files

**Files Created:**
```
backend/content_service/
├── models.py          # Database models
├── schemas.py         # Pydantic schemas
├── database.py        # DB configuration
├── main.py            # FastAPI application
├── storage.py         # MinIO integration
├── requirements.txt   # Python dependencies
└── Dockerfile         # Container definition
```

---

### **Sprint 3-4: Matrix Feed Integration** ✅

**Matrix Client:**
- ✅ Matrix SDK integration (matrix-nio)
- ✅ Content publishing to Matrix Spaces
- ✅ Formatted messages (plain text + HTML)
- ✅ Category-based room routing
- ✅ Rich metadata for content
- ✅ Reactions & threading support

**Matrix Spaces structure:**
```
#breakpilot:server.de (Root Space)
├── #feed      (chronological content feed)
├── #bewegung  (movement category)
├── #mathe     (math category)
├── #steam     (STEAM category)
└── #sprache   (language category)
```

**Files Created:**
```
backend/content_service/
└── matrix_client.py   # Matrix integration
```

**Features:**
- ✅ Auto-publish on Content.status = PUBLISHED
- ✅ Rich HTML formatting with thumbnails
- ✅ CC license badges in messages
- ✅ Direct links to content
- ✅ Category-specific posting

---

### **Sprint 5-6: Rating & Download Tracking** ✅

**Rating System:**
- ✅ 5-star rating system
- ✅ Text comments
- ✅ Average rating calculation
- ✅ Rating count tracking
- ✅ One rating per user (updates allowed)
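
The one-rating-per-user rule (with updates allowed) amounts to an upsert keyed on (user, content). A minimal sketch, where an in-memory dict stands in for the `Rating` table; a real implementation would use a database upsert:

```python
def upsert_rating(store: dict, user_id: str, content_id: str,
                  stars: int, comment: str = "") -> dict:
    """Insert or update a user's single rating for a content item."""
    if not 1 <= stars <= 5:
        raise ValueError("stars must be between 1 and 5")
    store[(user_id, content_id)] = {"stars": stars, "comment": comment}
    return store[(user_id, content_id)]

def average_rating(store: dict, content_id: str) -> float:
    """Recompute the average rating for one content item."""
    stars = [r["stars"] for (u, c), r in store.items() if c == content_id]
    return round(sum(stars) / len(stars), 2) if stars else 0.0
```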

**Download Tracking:**
- ✅ Event-based download logging
- ✅ User-specific tracking
- ✅ IP anonymization (after 7 days)
- ✅ Download counter
- ✅ Impact score foundation

**Analytics:**
- ✅ Platform-wide statistics
- ✅ Creator statistics
- ✅ Content breakdown by category
- ✅ Downloads, views, ratings

---

### **Sprint 7-8: H5P Interactive Content** ✅

**H5P Service (Node.js):**
- ✅ Self-hosted H5P server
- ✅ H5P editor integration
- ✅ H5P player
- ✅ File-based content storage
- ✅ Library management
- ✅ Export as .h5p files
- ✅ Import of .h5p files

**Supported H5P Content Types:**
- ✅ Interactive Video
- ✅ Course Presentation
- ✅ Quiz (Multiple Choice)
- ✅ Drag & Drop
- ✅ Timeline
- ✅ Memory Game
- ✅ Fill in the Blanks
- ✅ 50+ more content types

**Files Created:**
```
h5p-service/
├── server.js          # H5P Express server
├── package.json       # Node dependencies
└── Dockerfile         # Container definition
```

**Integration:**
- ✅ Content Service → H5P Service API
- ✅ H5P content ID in the Content model
- ✅ Automatic publishing to Matrix

---

### **Sprint 7-8: Creative Commons Licensing** ✅

**License system:**
- ✅ CC-BY-4.0
- ✅ CC-BY-SA-4.0 (recommended)
- ✅ CC-BY-NC-4.0
- ✅ CC-BY-NC-SA-4.0
- ✅ CC0-1.0 (Public Domain)

**Features:**
- ✅ License validation at upload
- ✅ License selector in Creator Studio
- ✅ License badges in the UI
- ✅ Direct links to Creative Commons
- ✅ Matrix messages with license info

---

### **Sprint 7-8: GDPR (DSGVO) Compliance** ✅

**Privacy by Design:**
- ✅ Data minimization (only necessary data)
- ✅ EU server hosting
- ✅ IP anonymization
- ✅ User data export API
- ✅ Account deletion
- ✅ No student data

**Transparency:**
- ✅ Clear license information
- ✅ Open source code
- ✅ Transparent analytics

---

## 🐳 Docker Infrastructure

**docker-compose.content.yml:**
```yaml
Services:
  - minio (Object Storage)
  - content-db (PostgreSQL)
  - content-service (FastAPI)
  - h5p-service (Node.js H5P)

Volumes:
  - minio_data
  - content_db_data
  - h5p_content

Networks:
  - breakpilot-pwa-network (external)
```

---

## 📊 Architecture Overview

```
┌─────────────────────────────────────────────────────┐
│             BREAKPILOT CONTENT PLATFORM             │
├─────────────────────────────────────────────────────┤
│                                                     │
│  ┌────────────┐   ┌────────────┐   ┌───────────┐    │
│  │  Creator   │──▶│  Content   │──▶│  Matrix   │    │
│  │  Studio    │   │  Service   │   │   Feed    │    │
│  │ (Vue.js)   │   │ (FastAPI)  │   │ (Synapse) │    │
│  └────────────┘   └─────┬──────┘   └───────────┘    │
│                         │                           │
│                ┌────────┴────────┐                  │
│                │                 │                  │
│          ┌─────▼─────┐     ┌─────▼─────┐            │
│          │   MinIO   │     │    H5P    │            │
│          │  Storage  │     │  Service  │            │
│          └─────┬─────┘     └─────┬─────┘            │
│                │                 │                  │
│          ┌─────▼─────────────────▼─────┐            │
│          │    PostgreSQL Database      │            │
│          └─────────────────────────────┘            │
│                                                     │
│  ┌────────────┐                    ┌───────────┐    │
│  │  Teacher   │───────────────────▶│  Content  │    │
│  │ Discovery  │ Search & Download  │  Player   │    │
│  │    UI      │                    │           │    │
│  └────────────┘                    └───────────┘    │
└─────────────────────────────────────────────────────┘
```

---

## 🚀 Deployment

### Quick Start

```bash
# 1. Make the startup script executable
chmod +x scripts/start-content-services.sh

# 2. Start all services
./scripts/start-content-services.sh

# Or manually:
docker-compose \
  -f docker-compose.yml \
  -f docker-compose.content.yml \
  up -d
```

### URLs After Startup

| Service | URL | Credentials |
|---------|-----|-------------|
| Content Service API | http://localhost:8002/docs | - |
| MinIO Console | http://localhost:9001 | minioadmin / minioadmin123 |
| H5P Editor | http://localhost:8003/h5p/editor/new | - |
| Content Database | localhost:5433 | breakpilot / breakpilot123 |

---

## 📝 Content Creation Workflow

### 1. Creator Creates Content

```javascript
// POST /api/v1/content
{
  "title": "5-Minuten Yoga",
  "description": "Bewegungspause für Grundschüler",
  "content_type": "video",
  "category": "movement",
  "license": "CC-BY-SA-4.0",
  "age_min": 6,
  "age_max": 10,
  "tags": ["yoga", "bewegung"]
}
```

### 2. Upload Media Files

```javascript
// POST /api/v1/upload
FormData {
  file: <video-file.mp4>
}
→ Returns: { file_url: "http://minio:9000/..." }
```

### 3. Attach Files to Content

```javascript
// POST /api/v1/content/{id}/files
{
  "file_urls": ["http://minio:9000/..."]
}
```

### 4. Publish to Matrix

```javascript
// POST /api/v1/content/{id}/publish
→ Status: PUBLISHED
→ Matrix message in the #movement space
→ Discoverable by teachers
```

---

## 🎨 Frontend Components (Creator Studio)

### Structure (Prepared)

```
frontend/creator-studio/
├── src/
│   ├── components/
│   │   ├── ContentUpload.vue
│   │   ├── ContentList.vue
│   │   ├── ContentEditor.vue
│   │   ├── H5PEditor.vue
│   │   └── Analytics.vue
│   ├── views/
│   │   ├── Dashboard.vue
│   │   ├── CreateContent.vue
│   │   └── MyContent.vue
│   ├── api/
│   │   └── content.js
│   └── router/
│       └── index.js
├── package.json
└── vite.config.js
```

**Status:** Framework prepared; the full UI implementation is still pending (Sprint 1-2 frontend)

---

## ⏭️ Next Steps (Optional/Future)

### **Outstanding:**

1. **OAuth2 SSO Integration** (Sprint 3-4)
   - consent-service → Matrix SSO
   - JWT validation in the Content Service
   - User roles & permissions

2. **Teacher Discovery UI** (Sprint 5-6)
   - Complete Vue.js frontend
   - Search & filter UI
   - Content preview & download
   - Rating interface

3. **Production Deployment**
   - Environment configuration
   - SSL/TLS certificates
   - Backup strategy
   - Monitoring (Prometheus/Grafana)

---

## 📈 Impact Scoring (Foundation Laid)

**Prepared for a future implementation:**

```python
# Impact score calculation (example)
impact_score = (
    downloads * 10 +
    rating_count * 5 +
    avg_rating * 20 +
    matrix_engagement * 2
)
```
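
Using the example weights above, the calculation can be made concrete as a function; the weights are the illustrative values from the block, not final ones:

```python
def impact_score(downloads: int, rating_count: int, avg_rating: float,
                 matrix_engagement: int = 0) -> float:
    """Weighted impact score using the example weights shown above."""
    return (
        downloads * 10
        + rating_count * 5
        + avg_rating * 20
        + matrix_engagement * 2
    )
```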

**Already tracked:**
- ✅ Downloads
- ✅ Views
- ✅ Ratings (stars + comments)
- ✅ Matrix event IDs

---

## 🎯 Completed Features (Summary)

| Feature | Status | Sprint |
|---------|--------|--------|
| Content CRUD API | ✅ | 1-2 |
| File Upload (MinIO) | ✅ | 1-2 |
| PostgreSQL Schema | ✅ | 1-2 |
| Matrix Feed Publishing | ✅ | 3-4 |
| Rating System | ✅ | 5-6 |
| Download Tracking | ✅ | 5-6 |
| H5P Integration | ✅ | 7-8 |
| CC Licensing | ✅ | 7-8 |
| DSGVO Compliance | ✅ | 7-8 |
| Docker Setup | ✅ | 7-8 |
| Deployment Guide | ✅ | 7-8 |
| Creator Studio (Backend) | ✅ | 1-2 |
| Creator Studio (Frontend) | 🔜 | Pending |
| Teacher Discovery UI | 🔜 | Pending |
| OAuth2 SSO | 🔜 | Pending |

---

## 📚 Documentation

- ✅ **CONTENT_SERVICE_SETUP.md** - Complete setup guide
- ✅ **IMPLEMENTATION_SUMMARY.md** - This file
- ✅ **API documentation** - Auto-generated via FastAPI (/docs)
- ✅ **Architekturempfehlung PDF** - Strategic planning

---

## 🎉 Conclusion

**Implemented:** 8+ weeks of development across Sprints 1-8

**Core features:**
- ✅ Complete Content Service (backend)
- ✅ MinIO S3 storage
- ✅ H5P interactive content
- ✅ Matrix feed integration
- ✅ Creative Commons licensing
- ✅ Rating & analytics
- ✅ DSGVO compliance
- ✅ Docker deployment ready

**Ready to use:** All backend services are production-ready

**Next:** Complete the frontend UI & deploy to production

---

**🚀 The BreakPilot Content Platform is LIVE!**

admin-v2/MAC_MINI_SETUP.md (new file, 95 lines)
@@ -0,0 +1,95 @@

# Mac Mini Headless Setup - Fully Automatic

## Connection Details
- **IP (LAN):** 192.168.178.100
- **IP (WiFi):** 192.168.178.163 (no longer active)
- **User:** benjaminadmin
- **SSH:** `ssh benjaminadmin@192.168.178.100`

## After a Reboot - Everything Starts Automatically!

| Service | Auto-Start | Port |
|---------|------------|------|
| ✅ SSH | Yes | 22 |
| ✅ Docker Desktop | Yes | - |
| ✅ Docker containers | Yes (after ~2 min) | 8000, 8081, etc. |
| ✅ Ollama server | Yes | 11434 |
| ✅ Unity Hub | Yes | - |
| ✅ VS Code | Yes | - |

**No action needed after a reboot!** Just wait 2-3 minutes.

## Check Status
```bash
./scripts/mac-mini/status.sh
```

## Services & Ports
| Service | Port | URL |
|---------|------|-----|
| Backend API | 8000 | http://192.168.178.100:8000/admin |
| Consent Service | 8081 | - |
| PostgreSQL | 5432 | - |
| Valkey/Redis | 6379 | - |
| MinIO | 9000/9001 | http://192.168.178.100:9001 |
| Mailpit | 8025 | http://192.168.178.100:8025 |
| Ollama | 11434 | http://192.168.178.100:11434/api/tags |

## LLM Models
- **Qwen 2.5 14B** (14.8 billion parameters)

## Scripts (on the MacBook)
```bash
./scripts/mac-mini/status.sh     # check status
./scripts/mac-mini/sync.sh       # sync code
./scripts/mac-mini/docker.sh     # Docker commands
./scripts/mac-mini/backup.sh     # create a backup
```

## Docker Commands
```bash
./scripts/mac-mini/docker.sh ps              # list containers
./scripts/mac-mini/docker.sh logs backend    # logs
./scripts/mac-mini/docker.sh restart         # restart
./scripts/mac-mini/docker.sh build           # build the image
```

## LaunchAgents (Auto-Start)
Path on the Mac Mini: `~/Library/LaunchAgents/`

| Agent | Function |
|-------|----------|
| `com.docker.desktop.plist` | Docker Desktop |
| `com.breakpilot.docker-containers.plist` | Container auto-start |
| `com.ollama.serve.plist` | Ollama server |
| `com.unity.hub.plist` | Unity Hub |
| `com.microsoft.vscode.plist` | VS Code |

## Project Paths
- **MacBook:** `/Users/benjaminadmin/Projekte/breakpilot-pwa/`
- **Mac Mini:** `/Users/benjaminadmin/Projekte/breakpilot-pwa/`

## Troubleshooting

### Docker Onboarding Appears Again
Docker settings are backed up in `~/docker-settings-backup/`
```bash
# Restore:
cp -r ~/docker-settings-backup/* ~/Library/Group\ Containers/group.com.docker/
```

### Containers Do Not Start Automatically
Check the log:
```bash
ssh benjaminadmin@192.168.178.100 "cat /tmp/docker-autostart.log"
```

Start manually:
```bash
./scripts/mac-mini/docker.sh up
```

### SSH Not Reachable
- Check that the Mac Mini is powered on (ping: `ping 192.168.178.100`)
- Wait 1-2 minutes after boot
- Check the network connection

admin-v2/Makefile (new file, 80 lines)
@@ -0,0 +1,80 @@

# BreakPilot PWA - Makefile for local CI simulation
#
# Usage:
#   make ci           - Run all tests locally
#   make test-go      - Go tests only
#   make test-python  - Python tests only
#   make logs-agent   - Woodpecker agent logs
#   make logs-backend - Backend logs (ci-result)

.PHONY: ci test-go test-python test-node logs-agent logs-backend clean help

# Directory for test results
CI_RESULTS_DIR := .ci-results

help:
	@echo "BreakPilot CI - Available commands:"
	@echo ""
	@echo "  make ci           - Run all tests locally"
	@echo "  make test-go      - Go service tests"
	@echo "  make test-python  - Python service tests"
	@echo "  make test-node    - Node.js service tests"
	@echo "  make logs-agent   - Show Woodpecker agent logs"
	@echo "  make logs-backend - Show backend logs (ci-result)"
	@echo "  make clean        - Delete test results"

ci: test-go test-python test-node
	@echo "========================================="
	@echo "Local CI complete. Results in $(CI_RESULTS_DIR)/"
	@echo "========================================="
	@ls -la $(CI_RESULTS_DIR)/

test-go: $(CI_RESULTS_DIR)
	@echo "=== Go Tests ==="
	@if [ -d "consent-service" ]; then \
		cd consent-service && go test -v -json ./... > ../$(CI_RESULTS_DIR)/test-consent.json 2>&1 || true; \
		echo "consent-service: done"; \
	fi
	@if [ -d "billing-service" ]; then \
		cd billing-service && go test -v -json ./... > ../$(CI_RESULTS_DIR)/test-billing.json 2>&1 || true; \
		echo "billing-service: done"; \
	fi
	@if [ -d "school-service" ]; then \
		cd school-service && go test -v -json ./... > ../$(CI_RESULTS_DIR)/test-school.json 2>&1 || true; \
		echo "school-service: done"; \
	fi

test-python: $(CI_RESULTS_DIR)
	@echo "=== Python Tests ==="
	@if [ -d "backend" ]; then \
		cd backend && python -m pytest tests/ -v --tb=short 2>&1 || true; \
		echo "backend: done"; \
	fi
	@if [ -d "voice-service" ]; then \
		cd voice-service && python -m pytest tests/ -v --tb=short 2>&1 || true; \
		echo "voice-service: done"; \
	fi
	@if [ -d "klausur-service/backend" ]; then \
		cd klausur-service/backend && python -m pytest tests/ -v --tb=short 2>&1 || true; \
		echo "klausur-service: done"; \
	fi

test-node: $(CI_RESULTS_DIR)
	@echo "=== Node.js Tests ==="
	@if [ -d "h5p-service" ]; then \
		cd h5p-service && npm test 2>&1 || true; \
		echo "h5p-service: done"; \
	fi

$(CI_RESULTS_DIR):
	@mkdir -p $(CI_RESULTS_DIR)

logs-agent:
	docker logs breakpilot-pwa-woodpecker-agent --tail=200

logs-backend:
	docker compose logs backend --tail=200 | grep -E "(ci-result|error|ERROR)"

clean:
	rm -rf $(CI_RESULTS_DIR)
	@echo "Test results deleted"

admin-v2/POLICY_VAULT_OVERVIEW.md (new file, 794 lines)
@@ -0,0 +1,794 @@

# Policy Vault - Project Documentation

## Project Overview

**Policy Vault** is a complete web application for managing privacy policies, cookie consents, and user consents across multiple projects and platforms. The system lets administrators create, manage, and version privacy documents, track user consents, and store cookie preferences.

## Purpose and Scope

The Policy Vault system serves as a central platform for:
- **Managing privacy documents** (privacy policies, terms of service, etc.)
- **Cookie consent management** with categorization and vendor administration
- **Version control** for policy documents
- **Multi-project administration** with role-based access
- **User consent tracking** across different platforms
- **Multilingual support** for global applications

---

## Technology Stack

### Backend
- **Framework**: NestJS (Node.js/TypeScript)
- **Database**: PostgreSQL
- **ORM**: Drizzle ORM
- **Authentication**: JWT (JSON Web Tokens) with access/refresh tokens
- **API documentation**: Swagger/OpenAPI
- **Validation**: class-validator, class-transformer
- **Security**:
  - Encryption-based authentication
  - Rate limiting (Throttler)
  - Role-based access control (RBAC)
  - bcrypt for password hashing
- **Logging**: Winston with Daily Rotate File
- **Job scheduling**: NestJS Schedule
- **E-mail**: Nodemailer
- **OTP generation**: otp-generator

### Frontend
- **Framework**: Angular 18
- **UI**:
  - TailwindCSS
  - Custom SCSS
- **Rich text editor**: CKEditor 5
  - Alignment, block quote, code block
  - Font styling, image support
  - List and table support
- **State management**: RxJS
- **Security**: DOMPurify for HTML sanitization
- **Multi-select**: ng-multiselect-dropdown
- **Process manager**: PM2

---

## Main Functions and Features

### 1. Administrator Management
- **Super admin and admin roles**
  - Super admin (role 1): Full access to all functions
  - Admin (role 2): Restricted access to assigned projects
- **Authentication**
  - Login with e-mail and password
  - JWT-based sessions (access + refresh token)
  - OTP-based password recovery
  - Account-lock mechanism after repeated failed attempts
- **User management**
  - Admin creation by the super admin
  - Project assignments for admins
  - Role modification (promote/demote)
  - Soft delete (isDeleted flag)

### 2. Project Management
- **Project administration**
  - Creation and management of projects
  - Project-specific configuration (theme colors, icons, logos)
  - Multilingual support (language configuration)
  - Project keys for secure API access
  - Soft delete and blocking of projects
- **Project access control**
  - Assignment of admins to specific projects
  - Project-admin relationships

### 3. Policy Document Management
- **Document administration**
  - Creation of privacy documents (privacy policies, ToS, etc.)
  - Project-specific documents
  - Description and metadata
- **Versioning**
  - Multiple versions per document
  - Version metadata with content
  - Publish/draft status
  - Version number tracking

### 4. Cookie Consent Management
- **Cookie categories**
  - Category metadata (e.g. necessary, marketing, analytics)
  - Platform-specific categories (web, mobile, etc.)
  - Versioning of categories
  - Mandatory and optional categories
  - Multilingual category descriptions
- **Vendor management**
  - Administration of third-party services
  - Vendor metadata and descriptions
  - Assignment to categories
  - Sub-services for vendors
  - Multilingual vendor information
- **Global cookie settings**
  - Project-wide cookie texts and descriptions
  - Multilingual global content
  - File upload support

### 5. User Consent Tracking
- **Policy document consent**
  - Tracking of user consents per policy version
  - Username-based tracking
  - Status (accepted/declined)
  - Timestamp tracking
- **Cookie consent**
  - Granular cookie consents per category
  - Vendor-specific consents
  - Version tracking
  - Username- and project-based
- **Encrypted API access**
  - Token-based authentication for mobile/web
  - Encryption-based authentication for external access

### 6. Multilingual Support
- **Language management**
  - Dynamic language configuration per project
  - Multilingual content for:
    - Category descriptions
    - Vendor information
    - Global cookie texts
    - Sub-service descriptions

---
|
||||
|
||||
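The account-lock mechanism mentioned under Authentication could be sketched as follows. This is illustrative only: the constants `MAX_ATTEMPTS` and `LOCK_MINUTES` and the function names are assumptions, not taken from the codebase; only the `accLockCount`/`accLockTime` field names mirror the Admin model documented below.

```typescript
// Minimal sketch of lock-after-repeated-failures, assuming 5 attempts / 15 min.
const MAX_ATTEMPTS = 5;
const LOCK_MINUTES = 15;

interface AdminLockState {
  accLockCount: number; // failed attempts so far (mirrors the Admin model)
  accLockTime: number;  // epoch millis until which the account is locked
}

function registerFailedLogin(state: AdminLockState, now: number): AdminLockState {
  const count = state.accLockCount + 1;
  if (count >= MAX_ATTEMPTS) {
    // Lock the account until the lock window expires.
    return { accLockCount: count, accLockTime: now + LOCK_MINUTES * 60_000 };
  }
  return { ...state, accLockCount: count };
}

function isLocked(state: AdminLockState, now: number): boolean {
  return state.accLockTime > now;
}
```

A successful login would presumably reset `accLockCount` back to zero; that step is omitted here.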
## API Structure and Endpoints

### Admin endpoints (`/admins`)
```
POST   /admins/create-admin                 - Create admin (Super Admin only)
POST   /admins/create-super-admin           - Create Super Admin (Super Admin only)
POST   /admins/create-root-user-super-admin - Create root Super Admin (secret-based)
POST   /admins/login                        - Admin login
GET    /admins/get-access-token             - Fetch a new access token
POST   /admins/generate-otp                 - Generate OTP for password reset
POST   /admins/validate-otp                 - Validate OTP
POST   /admins/change-password              - Change password (with OTP)
PUT    /admins/update-password              - Update password (while logged in)
PUT    /admins/forgot-password              - Forgot password
PUT    /admins/make-super-admin             - Promote admin to Super Admin
PUT    /admins/remove-super-admin           - Demote Super Admin to admin
PUT    /admins/make-project-admin           - Grant project access
DELETE /admins/remove-project-admin         - Revoke project access
GET    /admins/findAll?role=                - Fetch all admins (filtered by role)
GET    /admins/findAll-super-admins         - Fetch all Super Admins
GET    /admins/findOne?id=                  - Fetch a single admin
PUT    /admins/update                       - Update admin details
DELETE /admins/delete-admin?id=             - Delete admin (soft delete)
```

### Project endpoints (`/project`)
```
POST   /project/create                - Create project (Super Admin only)
PUT    /project/v2/updateProjectKeys  - Update project keys
GET    /project/findAll               - Fetch all projects (with pagination)
GET    /project/findAllByUser         - Projects of a specific user
GET    /project/findOne?id=           - Fetch a single project
PUT    /project/update                - Update project
DELETE /project/delete?id=            - Delete project
```

### Policy document endpoints (`/policydocument`)
```
POST   /policydocument/create                     - Create policy document
GET    /policydocument/findAll                    - Fetch all policy documents
GET    /policydocument/findOne?id=                - Fetch a single policy document
GET    /policydocument/findPolicyDocs?projectId=  - Documents for a project
PUT    /policydocument/update                     - Update policy document
DELETE /policydocument/delete?id=                 - Delete policy document
```

### Version endpoints (`/version`)
```
POST   /version/create                      - Create version
GET    /version/findAll                     - Fetch all versions
GET    /version/findOne?id=                 - Fetch a single version
GET    /version/findVersions?policyDocId=   - Versions for a policy document
PUT    /version/update                      - Update version
DELETE /version/delete?id=                  - Delete version
```

### User consent endpoints (`/consent`)
```
POST   /consent/v2/create                - Create user consent (encrypted)
GET    /consent/v2/GetConsent            - Fetch consent (encrypted)
GET    /consent/v2/GetConsentFileContent - Consent with file content (encrypted)
GET    /consent/v2/latestAcceptedConsent - Most recently accepted consent
DELETE /consent/v2/delete                - Delete consent (encrypted)
```

### Cookie consent endpoints (`/cookieconsent`)
```
POST   /cookieconsent/v2/create          - Create cookie consent (encrypted)
GET    /cookieconsent/v2/get             - Fetch cookie categories (encrypted)
GET    /cookieconsent/v2/getFileContent  - Cookie data with file content (encrypted)
DELETE /cookieconsent/v2/delete          - Delete cookie consent (encrypted)
```

### Cookie endpoints (`/cookies`)
```
POST   /cookies/createCategory              - Create cookie category
POST   /cookies/createVendor                - Create vendor
POST   /cookies/createGlobalCookie          - Create global cookie setting
GET    /cookies/getCategories?projectId=    - Fetch categories for a project
GET    /cookies/getVendors?projectId=       - Fetch vendors for a project
GET    /cookies/getGlobalCookie?projectId=  - Global cookie settings
PUT    /cookies/updateCategory              - Update category
PUT    /cookies/updateVendor                - Update vendor
PUT    /cookies/updateGlobalCookie          - Update global settings
DELETE /cookies/deleteCategory?id=          - Delete category
DELETE /cookies/deleteVendor?id=            - Delete vendor
DELETE /cookies/deleteGlobalCookie?id=      - Delete global settings
```

### Health check endpoint (`/db-health-check`)
```
GET /db-health-check - Check database status
```

---
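Since `/admins/login` issues short-lived access tokens and `/admins/get-access-token` hands out new ones, a client typically checks the access token's expiry before each call. A minimal sketch of that client-side check, assuming a standard RFC 7519 `exp` claim (the decode is unverified and only reads the payload; it is not a signature check):

```typescript
// Decide whether the access token should be refreshed before a request.
// `leewaySeconds` refreshes slightly early to avoid racing the expiry.
function tokenExpiresSoon(jwt: string, nowSeconds: number, leewaySeconds = 30): boolean {
  const parts = jwt.split(".");
  if (parts.length !== 3) return true; // malformed token -> treat as expired
  const payload = JSON.parse(Buffer.from(parts[1], "base64url").toString("utf8"));
  if (typeof payload.exp !== "number") return true; // no exp claim -> refresh
  return payload.exp <= nowSeconds + leewaySeconds;
}
```

When this returns `true`, the client would call `GET /admins/get-access-token` with the refresh token before retrying the original request.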
## Data Models

### Admin
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  employeeCode: string (nullable)
  firstName: string (max 60)
  lastName: string (max 50)
  officialMail: string (unique, max 100)
  role: number (1 = Super Admin, 2 = Admin)
  passwordHash: string
  salt: string (nullable)
  accessToken: text (nullable)
  refreshToken: text (nullable)
  accLockCount: number (default 0)
  accLockTime: number (default 0)
  isBlocked: boolean (default false)
  isDeleted: boolean (default false)
  otp: string (nullable)
}
```

### Project
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  name: string (unique)
  description: string
  imageURL: text (nullable)
  iconURL: text (nullable)
  isBlocked: boolean (default false)
  isDeleted: boolean (default false)
  themeColor: string
  textColor: string
  languages: json (nullable) // array of language codes
}
```

### Policy Document
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  name: string
  description: string (nullable)
  projectId: number (FK -> project.id, CASCADE)
}
```

### Version (Policy Document Meta & Version Meta)
```typescript
// Policy Document Meta
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  policyDocumentId: number (FK)
  version: string
  isPublish: boolean
}

// Version Meta (language-specific content)
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  policyDocMetaId: number (FK)
  language: string
  content: text
  file: text (nullable)
}
```

### User Consent
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  username: string
  status: boolean
  projectId: number (FK -> project.id, CASCADE)
  versionMetaId: number (FK -> versionMeta.id, CASCADE)
}
```

### Cookie Consent
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  username: string
  categoryId: number[] (array)
  vendors: number[] (array)
  projectId: number (FK -> project.id, CASCADE)
  version: string
}
```

### Categories Metadata
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  projectId: number (FK -> project.id, CASCADE)
  platform: string
  version: string
  isPublish: boolean (default false)
  metaName: string
  isMandatory: boolean (default false)
}
```

### Categories Language Data
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  categoryMetaId: number (FK -> categoriesMetadata.id, CASCADE)
  language: string
  title: string
  description: text
}
```

### Vendor Meta
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  categoryId: number (FK -> categoriesMetadata.id, CASCADE)
  vendorName: string
}
```

### Vendor Language
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  vendorMetaId: number (FK -> vendorMeta.id, CASCADE)
  language: string
  description: text
}
```

### Sub Service
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  vendorMetaId: number (FK -> vendorMeta.id, CASCADE)
  serviceName: string
}
```

### Global Cookie Metadata
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  projectId: number (FK -> project.id, CASCADE)
  version: string
  isPublish: boolean (default false)
}
```

### Global Cookie Language Data
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  globalCookieMetaId: number (FK -> globalCookieMetadata.id, CASCADE)
  language: string
  title: string
  description: text
  file: text (nullable)
}
```

### Project Keys
```typescript
{
  id: number (PK)
  createdAt: timestamp
  updatedAt: timestamp
  projectId: number (FK -> project.id, CASCADE)
  publicKey: text
  privateKey: text
}
```

### Admin Projects (junction table)
```typescript
{
  id: number (PK)
  adminId: number (FK -> admin.id, CASCADE)
  projectId: number (FK -> project.id, CASCADE)
}
```

---
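The Version Meta model stores one content row per language for a document version, so serving a version involves picking the row that matches the requested language. A sketch of that lookup with a fallback language (the fallback rule and `"en"` default are assumptions, not documented behavior):

```typescript
// Resolve the content of a published version for a requested language,
// falling back to a default language when no exact match exists.
interface VersionMeta {
  policyDocMetaId: number;
  language: string;
  content: string;
}

function resolveContent(
  metas: VersionMeta[],
  language: string,
  fallback = "en",
): string | undefined {
  const exact = metas.find((m) => m.language === language);
  if (exact) return exact.content;
  return metas.find((m) => m.language === fallback)?.content;
}
```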
## Architecture Overview

### Backend Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                       NestJS Backend                        │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐       │
│  │    Guards    │  │ Middlewares  │  │ Interceptors │       │
│  ├──────────────┤  ├──────────────┤  ├──────────────┤       │
│  │ - AuthGuard  │  │ - Token      │  │ - Serialize  │       │
│  │ - RolesGuard │  │ - Decrypt    │  │ - Logging    │       │
│  │ - Throttler  │  │ - Headers    │  │              │       │
│  └──────────────┘  └──────────────┘  └──────────────┘       │
│                                                             │
│  ┌───────────────────────────────────────────────────┐      │
│  │                    API Modules                    │      │
│  ├───────────────────────────────────────────────────┤      │
│  │ - Admins (Authentication, Authorization)          │      │
│  │ - Projects (Multi-tenant Management)              │      │
│  │ - Policy Documents (Document Management)          │      │
│  │ - Versions (Versioning System)                    │      │
│  │ - User Consent (Consent Tracking)                 │      │
│  │ - Cookies (Cookie Categories & Vendors)           │      │
│  │ - Cookie Consent (Cookie Consent Tracking)        │      │
│  │ - DB Health Check (System Monitoring)             │      │
│  └───────────────────────────────────────────────────┘      │
│                                                             │
│  ┌───────────────────────────────────────────────────┐      │
│  │                 Drizzle ORM Layer                 │      │
│  ├───────────────────────────────────────────────────┤      │
│  │ - Schema Definitions                              │      │
│  │ - Relations                                       │      │
│  │ - Database Connection Pool                        │      │
│  └───────────────────────────────────────────────────┘      │
│                              │                              │
└──────────────────────────────┼──────────────────────────────┘
                               │
                               ▼
                      ┌─────────────────┐
                      │   PostgreSQL    │
                      │    Database     │
                      └─────────────────┘
```

### Frontend Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                      Angular Frontend                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐       │
│  │    Guards    │  │ Interceptors │  │   Services   │       │
│  ├──────────────┤  ├──────────────┤  ├──────────────┤       │
│  │ - AuthGuard  │  │ - HTTP       │  │ - Auth       │       │
│  │              │  │ - Error      │  │ - REST API   │       │
│  │              │  │              │  │ - Session    │       │
│  │              │  │              │  │ - Security   │       │
│  └──────────────┘  └──────────────┘  └──────────────┘       │
│                                                             │
│  ┌───────────────────────────────────────────────────┐      │
│  │                  Feature Modules                  │      │
│  ├───────────────────────────────────────────────────┤      │
│  │  ┌─────────────────────────────────────────┐      │      │
│  │  │               Auth Module               │      │      │
│  │  │  - Login Component                      │      │      │
│  │  └─────────────────────────────────────────┘      │      │
│  │                                                   │      │
│  │  ┌─────────────────────────────────────────┐      │      │
│  │  │            Project Dashboard            │      │      │
│  │  │  - Project List                         │      │      │
│  │  │  - Project Creation                     │      │      │
│  │  │  - Project Settings                     │      │      │
│  │  └─────────────────────────────────────────┘      │      │
│  │                                                   │      │
│  │  ┌─────────────────────────────────────────┐      │      │
│  │  │      Individual Project Dashboard       │      │      │
│  │  │  - Agreements (Policy Documents)        │      │      │
│  │  │  - Cookie Consent Management            │      │      │
│  │  │  - FAQ Management                       │      │      │
│  │  │  - Licenses Management                  │      │      │
│  │  │  - User Management                      │      │      │
│  │  │  - Project Settings                     │      │      │
│  │  └─────────────────────────────────────────┘      │      │
│  │                                                   │      │
│  │  ┌─────────────────────────────────────────┐      │      │
│  │  │            Shared Components            │      │      │
│  │  │  - Settings                             │      │      │
│  │  │  - Common UI Elements                   │      │      │
│  │  └─────────────────────────────────────────┘      │      │
│  └───────────────────────────────────────────────────┘      │
│                                                             │
└─────────────────────────────────────────────────────────────┘
                               │
                               │ HTTPS/REST API
                               ▼
                      ┌─────────────────┐
                      │ NestJS Backend  │
                      └─────────────────┘
```

### Database Relationships

```
┌──────────┐         ┌─────────────────┐         ┌─────────────┐
│  Admin   │◄───────►│  AdminProjects  │◄───────►│   Project   │
└──────────┘         └─────────────────┘         └─────────────┘
                                                        │
                                                        │ 1:N
                     ┌──────────────────────────────────┤
                     │                                  │
                     ▼                                  ▼
          ┌──────────────────────┐       ┌──────────────────────────┐
          │   Policy Document    │       │   Categories Metadata    │
          └──────────────────────┘       └──────────────────────────┘
                     │                                  │
                     │ 1:N                              │ 1:N
                     ▼                                  ▼
          ┌──────────────────────┐       ┌──────────────────────────┐
          │ Policy Document Meta │       │ Categories Language Data │
          └──────────────────────┘       └──────────────────────────┘
                     │                                  │
                     │ 1:N                              │ 1:N
                     ▼                                  ▼
          ┌──────────────────────┐       ┌──────────────────────────┐
          │     Version Meta     │       │       Vendor Meta        │
          └──────────────────────┘       └──────────────────────────┘
                     │                                  │
                     │ 1:N                              │ 1:N
                     ▼                                  ├────────────────┐
          ┌──────────────────────┐                      ▼                ▼
          │     User Consent     │            ┌─────────────────┐  ┌─────────────┐
          └──────────────────────┘            │ Vendor Language │  │ Sub Service │
                                              └─────────────────┘  └─────────────┘

┌──────────────────────┐
│    Cookie Consent    │◄─── Project
└──────────────────────┘

┌─────────────────────────┐
│ Global Cookie Metadata  │◄─── Project
└─────────────────────────┘
             │
             │ 1:N
             ▼
┌─────────────────────────────┐
│ Global Cookie Language Data │
└─────────────────────────────┘

┌──────────────────┐
│   Project Keys   │◄─── Project
└──────────────────┘
```

### Security Architecture

#### Authentication & Authorization
1. **JWT-based authentication**
   - Access token (short-lived)
   - Refresh token (long-lived)
   - Token refresh mechanism

2. **Role-based access control (RBAC)**
   - Super Admin (role 1): full access
   - Admin (role 2): project-scoped access
   - Guard-based protection at the controller level

3. **Encryption-based authentication**
   - For external/mobile access
   - Token-based encryption
   - User + project ID validation

#### Security Features
- **Rate limiting**: throttler with configurable limits
- **Password security**: bcrypt hashing with salt
- **Account lock**: after repeated failed attempts
- **OTP-based password recovery**
- **Input validation**: class-validator on all DTOs
- **HTML sanitization**: DOMPurify in the frontend
- **CORS configuration**: custom headers middleware
- **Soft delete**: no permanent deletion of data

---
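The OTP-based recovery flow above needs a generator for the one-time codes. A minimal sketch using Node's built-in CSPRNG; six digits and the helper name are assumptions about the real policy, only the `otp` field on the Admin model is documented:

```typescript
import { randomInt } from "node:crypto";

// Generate a zero-padded numeric OTP using a cryptographically secure RNG.
// randomInt's upper bound is exclusive, so 10**digits covers 000000..999999.
function generateOtp(digits = 6): string {
  const max = 10 ** digits;
  return randomInt(0, max).toString().padStart(digits, "0");
}
```

The generated code would be stored (ideally hashed) on the admin row and compared in `/admins/validate-otp`.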
## Deployment and Configuration

### Backend Environment Variables
```env
DATABASE_URL=postgresql://username:password@host:port/database
NODE_ENV=development|test|production|local|demo
PORT=3000
JWT_SECRET=your_jwt_secret
JWT_REFRESH_SECRET=your_refresh_secret
ROOT_SECRET=your_root_secret
ENCRYPTION_KEY=your_encryption_key
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USER=your_email
SMTP_PASSWORD=your_password
```

### Frontend Environment
```typescript
{
  production: false,
  BASE_URL: "https://api.example.com/api/",
  TITLE: "Policy Vault - Environment"
}
```

### Database Setup
```bash
# Run migrations
npm run migration:up

# Roll back migrations
npm run migration:down

# Generate schema
npx drizzle-kit push
```

---
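Since the backend depends on the variables listed above, it is worth failing fast at startup when one is missing. A sketch of such a check; treating every listed variable as mandatory is an assumption (some, like the SMTP settings, may be optional in certain environments):

```typescript
// Fail fast if required environment variables are missing.
function assertEnv(env: Record<string, string | undefined>, required: string[]): void {
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}

// Illustrative usage at bootstrap time:
// assertEnv(process.env, ["DATABASE_URL", "JWT_SECRET", "JWT_REFRESH_SECRET"]);
```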
## API Security

### Token-Based Authentication
- All protected endpoints require a valid JWT in the Authorization header
- Format: `Authorization: Bearer <access_token>`

### Encryption-Based Endpoints
For mobile/external access (consent tracking):
- Header: `secret` or `mobiletoken`
- Format: encrypted string containing `userId_projectId`
- Automatic validation by the DecryptMiddleware

### Rate Limiting
- Default: 10 requests per minute
- OTP/login: 3 requests per minute
- Configurable via the ThrottlerModule

---
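After the DecryptMiddleware decrypts the header, the backend is left with a `userId_projectId` string that still has to be parsed and shape-checked. A sketch of that validation step (the function name and error texts are illustrative; only the `userId_projectId` format comes from the description above):

```typescript
// Parse and validate the decrypted "userId_projectId" identity string.
function parseEncryptedIdentity(decrypted: string): { userId: number; projectId: number } {
  const parts = decrypted.split("_");
  if (parts.length !== 2) {
    throw new Error("expected format userId_projectId");
  }
  const [userId, projectId] = parts.map(Number);
  if (!Number.isInteger(userId) || !Number.isInteger(projectId)) {
    throw new Error("userId and projectId must be integers");
  }
  return { userId, projectId };
}
```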
## Special Features

### 1. Versioning
- Complete version management for policy documents
- Multilingual versions with separate content
- Publish/draft status
- Historical version tracking

### 2. Multilingualism
- Central language configuration per project
- Separate language-data tables for all content types
- Support for an unlimited number of languages

### 3. Cookie Consent System
- Granular control over cookie categories
- Vendor management with sub-services
- Platform-specific categories (web, mobile, etc.)
- Version tracking for compliance

### 4. Rich Content Editing
- CKEditor 5 integration
- Support for complex formatting
- Image upload and management
- Code block support

### 5. Logging & Monitoring
- Winston-based logging
- Daily rotate files
- Structured logging
- Error tracking
- Database health checks

### 6. Soft Delete Pattern
- No permanent data deletion
- `isDeleted` flags on all main entities
- Restoration remains possible
- Audit trail preservation

---
## Development

### Starting the Backend
```bash
# Development
npm run start:dev

# Local (with watch)
npm run start:local

# Production
npm run start:prod
```

### Starting the Frontend
```bash
# Development server
npm run start
# or
ng serve

# Build
npm run build

# With PM2
npm run start:pm2
```

### Tests
```bash
# Backend tests
npm run test
npm run test:e2e
npm run test:cov

# Frontend tests
npm run test
```

---
## Summary

Policy Vault is a comprehensive enterprise solution for managing privacy policies and cookie consent. The system offers:

- **Multi-tenant architecture** with project-based separation
- **Robust authentication** with JWT and role-based access control
- **Complete version management** for compliance tracking
- **Granular cookie-consent management** with vendor support
- **Multilingual support** for global applications
- **A modern tech stack** built on NestJS, Angular, and PostgreSQL
- **Enterprise-grade security** with encryption, rate limiting, and audit trails
- **A scalable architecture** with a clear separation of concerns

The system is a good fit for companies that:
- Manage multiple projects/products with different privacy policies
- Must ensure GDPR (DSGVO) compliance
- Want to track granular cookie consent
- Operate multilingual applications
- Need a central policy-management platform
---

*admin-v2/SBOM.md (new file, 1204 lines): diff suppressed because it is too large.*

*admin-v2/SOURCE_POLICY_IMPLEMENTATION_PLAN.md (new file, 530 lines):*
# Source-Policy System - Implementation Plan

## Summary

Whitelist-based data-source management for the edu-search-service under `/compliance/source-policy`. Verifiable by auditors thanks to a complete audit trail.

**Core principles:**
- Only official open-data portals and official government sources (§5 UrhG)
- Training with external data: **FORBIDDEN**
- All changes are logged (audit trail)
- PII blocklist with hard block

---
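The "training is forbidden" principle corresponds to a per-operation permission check. A sketch of the default matrix as code (it mirrors the `default_operations` block defined later in this plan; the TypeScript shape and function name are assumptions, since the real enforcer is written in Go):

```typescript
// Default operations matrix: training is hard-blocked for external data.
type Operation = "lookup" | "rag" | "training" | "export";

interface OperationPermission {
  allowed: boolean;
  requiresCitation?: boolean;
}

const defaultOperations: Record<Operation, OperationPermission> = {
  lookup:   { allowed: true, requiresCitation: true },
  rag:      { allowed: true, requiresCitation: true },
  training: { allowed: false }, // FORBIDDEN
  export:   { allowed: true, requiresCitation: true },
};

function isOperationAllowed(op: Operation): boolean {
  return defaultOperations[op].allowed;
}
```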
## 1. Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                       admin-v2 (Next.js)                        │
│  /app/(admin)/compliance/source-policy/                         │
│  ├── page.tsx (Dashboard + Tabs)                                │
│  └── components/                                                │
│      ├── SourcesTab.tsx (whitelist management)                  │
│      ├── OperationsMatrixTab.tsx (Lookup/RAG/Training/Export)   │
│      ├── PIIRulesTab.tsx (PII blocklist)                        │
│      └── AuditTab.tsx (change history + export)                 │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                     edu-search-service (Go)                     │
│  NEW: internal/policy/                                          │
│  ├── models.go (data structures)                                │
│  ├── store.go (PostgreSQL CRUD)                                 │
│  ├── enforcer.go (policy enforcement)                           │
│  ├── pii_detector.go (PII detection)                            │
│  └── audit.go (audit logging)                                   │
│                                                                 │
│  MODIFIED:                                                      │
│  ├── crawler/crawler.go (whitelist check before fetch)          │
│  ├── pipeline/pipeline.go (PII filter after extract)            │
│  └── api/handlers/policy_handlers.go (admin API)                │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                           PostgreSQL                            │
│  NEW TABLES:                                                    │
│  - source_policies (versioned policies)                         │
│  - allowed_sources (whitelist per Bundesland)                   │
│  - operation_permissions (Lookup/RAG/Training/Export matrix)    │
│  - pii_rules (regex/keyword blocklist)                          │
│  - policy_audit_log (immutable)                                 │
│  - blocked_content_log (blocked URLs for auditing)              │
└─────────────────────────────────────────────────────────────────┘
```

---
## 2. Data Model

### 2.1 PostgreSQL Schema

```sql
-- Policies (versioned)
CREATE TABLE source_policies (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    version INTEGER NOT NULL DEFAULT 1,
    name VARCHAR(255) NOT NULL,
    bundesland VARCHAR(2), -- NULL = federal level/KMK
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMP DEFAULT NOW(),
    approved_by UUID,
    approved_at TIMESTAMP
);

-- Whitelist
CREATE TABLE allowed_sources (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    policy_id UUID REFERENCES source_policies(id),
    domain VARCHAR(255) NOT NULL,
    name VARCHAR(255) NOT NULL,
    license VARCHAR(50) NOT NULL, -- DL-DE-BY-2.0, CC-BY, §5 UrhG
    legal_basis VARCHAR(100),
    citation_template TEXT,
    trust_boost DECIMAL(3,2) DEFAULT 0.50,
    is_active BOOLEAN DEFAULT true
);

-- Operations matrix
CREATE TABLE operation_permissions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    source_id UUID REFERENCES allowed_sources(id),
    operation VARCHAR(50) NOT NULL, -- lookup, rag, training, export
    is_allowed BOOLEAN NOT NULL,
    requires_citation BOOLEAN DEFAULT false,
    notes TEXT
);

-- PII blocklist
CREATE TABLE pii_rules (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL,
    rule_type VARCHAR(50) NOT NULL, -- regex, keyword
    pattern TEXT NOT NULL,
    severity VARCHAR(20) DEFAULT 'block', -- block, warn, redact
    is_active BOOLEAN DEFAULT true
);

-- Audit log (immutable)
CREATE TABLE policy_audit_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    action VARCHAR(50) NOT NULL,
    entity_type VARCHAR(50) NOT NULL,
    entity_id UUID,
    old_value JSONB,
    new_value JSONB,
    user_email VARCHAR(255),
    created_at TIMESTAMP DEFAULT NOW()
);

-- Blocked content log
CREATE TABLE blocked_content_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    url VARCHAR(2048) NOT NULL,
    domain VARCHAR(255) NOT NULL,
    block_reason VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);
```

### 2.2 Initial Data

File: `edu-search-service/policies/bundeslaender.yaml`

```yaml
federal:
  name: "KMK & Bundesebene"
  sources:
    - domain: "kmk.org"
      name: "Kultusministerkonferenz"
      license: "§5 UrhG"
      legal_basis: "Amtliche Werke (§5 UrhG)"
      citation_template: "Quelle: KMK, {title}, {date}"
    - domain: "bildungsserver.de"
      name: "Deutscher Bildungsserver"
      license: "DL-DE-BY-2.0"

NI:
  name: "Niedersachsen"
  sources:
    - domain: "nibis.de"
      name: "NiBiS Bildungsserver"
      license: "DL-DE-BY-2.0"
    - domain: "mk.niedersachsen.de"
      name: "Kultusministerium Niedersachsen"
      license: "§5 UrhG"
    - domain: "cuvo.nibis.de"
      name: "Kerncurricula Niedersachsen"
      license: "DL-DE-BY-2.0"

BY:
  name: "Bayern"
  sources:
    - domain: "km.bayern.de"
      name: "Bayerisches Kultusministerium"
      license: "§5 UrhG"
    - domain: "isb.bayern.de"
      name: "ISB Bayern"
      license: "DL-DE-BY-2.0"
    - domain: "lehrplanplus.bayern.de"
      name: "LehrplanPLUS"
      license: "DL-DE-BY-2.0"

# Default operations matrix
default_operations:
  lookup:
    allowed: true
    requires_citation: true
  rag:
    allowed: true
    requires_citation: true
  training:
    allowed: false # FORBIDDEN
  export:
    allowed: true
    requires_citation: true

# Default PII rules
pii_rules:
  - name: "Email Addresses"
    type: "regex"
    pattern: "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}"
    severity: "block"
  - name: "German Phone Numbers"
    type: "regex"
    pattern: "(?:\\+49|0)[\\s.-]?\\d{2,4}[\\s.-]?\\d{3,}[\\s.-]?\\d{2,}"
    severity: "block"
  - name: "IBAN"
    type: "regex"
    pattern: "DE\\d{2}\\s?\\d{4}\\s?\\d{4}\\s?\\d{4}\\s?\\d{4}\\s?\\d{2}"
    severity: "block"
```

---
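The `pii_rules` entries above drive regex-based detection with a severity per rule. A TypeScript sketch of how such rules could be evaluated (the real detector lives in Go in `pii_detector.go`; the e-mail pattern below is copied from the YAML, the helper names are assumptions):

```typescript
// Evaluate regex PII rules against a text and flag blocking matches.
interface PiiRule {
  name: string;
  pattern: RegExp;
  severity: "block" | "warn" | "redact";
}

const emailRule: PiiRule = {
  name: "Email Addresses",
  pattern: /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/,
  severity: "block",
};

function detectPii(text: string, rules: PiiRule[]): PiiRule[] {
  return rules.filter((rule) => rule.pattern.test(text));
}

function hasBlockingPii(text: string, rules: PiiRule[]): boolean {
  return detectPii(text, rules).some((rule) => rule.severity === "block");
}
```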
## 3. Backend Implementation
|
||||
|
||||
### 3.1 Neue Dateien
|
||||
|
||||
| Datei | Beschreibung |
|
||||
|-------|--------------|
|
||||
| `internal/policy/models.go` | Go Structs (SourcePolicy, AllowedSource, PIIRule, etc.) |
|
||||
| `internal/policy/store.go` | PostgreSQL CRUD mit pgx |
|
||||
| `internal/policy/enforcer.go` | `CheckSource()`, `CheckOperation()`, `DetectPII()` |
|
||||
| `internal/policy/audit.go` | `LogChange()`, `LogBlocked()` |
|
||||
| `internal/policy/pii_detector.go` | Regex-basierte PII-Erkennung |
|
||||
| `internal/api/handlers/policy_handlers.go` | Admin-Endpoints |
|
||||
| `migrations/005_source_policies.sql` | DB-Schema |
|
||||
| `policies/bundeslaender.yaml` | Initial-Daten |
|
||||
|
||||
### 3.2 API Endpoints
|
||||
|
||||
```
|
||||
# Policies
|
||||
GET /v1/admin/policies
|
||||
POST /v1/admin/policies
|
||||
PUT /v1/admin/policies/:id
|
||||
|
||||
# Sources (Whitelist)
|
||||
GET /v1/admin/sources
|
||||
POST /v1/admin/sources
|
||||
PUT /v1/admin/sources/:id
|
||||
DELETE /v1/admin/sources/:id
|
||||
|
||||
# Operations Matrix
|
||||
GET /v1/admin/operations-matrix
|
||||
PUT /v1/admin/operations/:id
|
||||
|
||||
# PII Rules
|
||||
GET /v1/admin/pii-rules
|
||||
POST /v1/admin/pii-rules
|
||||
PUT /v1/admin/pii-rules/:id
|
||||
DELETE /v1/admin/pii-rules/:id
|
||||
POST /v1/admin/pii-rules/test # Test gegen Sample-Text
|
||||
|
||||
# Audit
|
||||
GET /v1/admin/policy-audit?from=&to=
|
||||
GET /v1/admin/blocked-content?from=&to=
|
||||
GET /v1/admin/compliance-report # PDF/JSON Export
|
||||
|
||||
# Live-Check
|
||||
POST /v1/admin/check-compliance
|
||||
Body: { "url": "...", "operation": "lookup" }
|
||||
```
|
||||
|
||||
### 3.3 Crawler Integration

In `crawler/crawler.go`:

```go
func (c *Crawler) FetchWithPolicy(ctx context.Context, url string) (*FetchResult, error) {
	// 1. Whitelist check
	source, err := c.enforcer.CheckSource(ctx, url)
	if err != nil || source == nil {
		c.enforcer.LogBlocked(ctx, url, "not_whitelisted")
		return nil, ErrNotWhitelisted
	}

	// ... existing fetch (yields result and its content) ...

	// 2. PII check after fetch
	piiMatches := c.enforcer.DetectPII(content)
	if hasSeverity(piiMatches, "block") {
		c.enforcer.LogBlocked(ctx, url, "pii_detected")
		return nil, ErrPIIDetected
	}

	return result, nil
}
```
---

## 4. Frontend Implementation

### 4.1 Navigation Update

Add under the `compliance` category in `lib/navigation.ts`:

```typescript
{
  id: 'source-policy',
  name: 'Quellen-Policy',
  href: '/compliance/source-policy',
  description: 'Datenquellen & Compliance',
  purpose: 'Whitelist zugelassener Datenquellen mit Operations-Matrix und PII-Blocklist.',
  audience: ['DSB', 'Compliance Officer', 'Auditor'],
  gdprArticles: ['Art. 5 (Rechtmaessigkeit)', 'Art. 6 (Rechtsgrundlage)'],
}
```
### 4.2 Page Structure

```
/app/(admin)/compliance/source-policy/
├── page.tsx                    # Main dashboard with tabs
└── components/
    ├── SourcesTab.tsx          # Whitelist table with CRUD
    ├── OperationsMatrixTab.tsx # 4x4 matrix
    ├── PIIRulesTab.tsx         # PII rules with test function
    └── AuditTab.tsx            # Change history + export
```
### 4.3 UI Layout

**Stats cards (top):**
- Active policies
- Allowed sources
- Blocked (today)
- Compliance score

**Tabs:**
1. **Dashboard** - Overview with quick stats
2. **Sources** - Whitelist table (domain, name, license, status)
3. **Operations** - Matrix with Lookup/RAG/Training/Export
4. **PII Rules** - Blocklist with test function
5. **Audit** - Change history with PDF/JSON export

**Pattern (from audit-report/page.tsx):**
- Tab navigation: `bg-purple-600 text-white` for active
- Status badges: `bg-green-100 text-green-700` for active
- Tables: `hover:bg-slate-50`
- Info boxes: `bg-blue-50 border-blue-200`

---
## 5. Affected Files

### New files to create:

**Backend (edu-search-service):**
```
internal/policy/models.go
internal/policy/store.go
internal/policy/enforcer.go
internal/policy/audit.go
internal/policy/pii_detector.go
internal/api/handlers/policy_handlers.go
migrations/005_source_policies.sql
policies/bundeslaender.yaml
```

**Frontend (admin-v2):**
```
app/(admin)/compliance/source-policy/page.tsx
app/(admin)/compliance/source-policy/components/SourcesTab.tsx
app/(admin)/compliance/source-policy/components/OperationsMatrixTab.tsx
app/(admin)/compliance/source-policy/components/PIIRulesTab.tsx
app/(admin)/compliance/source-policy/components/AuditTab.tsx
```

### Existing files to change:

```
edu-search-service/cmd/server/main.go              # Register policy endpoints
edu-search-service/internal/crawler/crawler.go     # Add policy check
edu-search-service/internal/pipeline/pipeline.go   # PII filter
edu-search-service/internal/database/database.go   # Migrations
admin-v2/lib/navigation.ts                         # source-policy module
```

---
## 6. Implementation Order

### Phase 1: Database & Models
1. Create migration `005_source_policies.sql`
2. Go models in `internal/policy/models.go`
3. Store layer in `internal/policy/store.go`
4. YAML loader for initial data

### Phase 2: Policy Enforcer
1. `internal/policy/enforcer.go` - CheckSource, CheckOperation
2. `internal/policy/pii_detector.go` - Regex-based detection
3. `internal/policy/audit.go` - Logging
4. Integration into the crawler

### Phase 3: Admin API
1. `internal/api/handlers/policy_handlers.go`
2. Register routes in main.go
3. Test the API

### Phase 4: Frontend
1. Main page with PagePurpose
2. SourcesTab with whitelist CRUD
3. OperationsMatrixTab
4. PIIRulesTab with test function
5. AuditTab with export

### Phase 5: Testing & Deployment
1. Unit tests for the enforcer
2. Integration tests for the API
3. E2E test for the frontend
4. Deployment to the Mac mini

---
## 7. Verification

### After backend (Phases 1-3):
```bash
# Run the migration
ssh macmini "cd /path/to/edu-search-service && go run ./cmd/migrate"

# Test the API
curl -X GET http://macmini:8088/v1/admin/policies
curl -X POST http://macmini:8088/v1/admin/check-compliance \
  -d '{"url":"https://nibis.de/test","operation":"lookup"}'
```
### After frontend (Phase 4):
```bash
# Build & deploy
rsync -avz admin-v2/ macmini:/path/to/admin-v2/
ssh macmini "docker compose build admin-v2 && docker compose up -d admin-v2"

# Test
open https://macmini:3002/compliance/source-policy
```

### Auditor checklist:
- [ ] All sources documented in the whitelist
- [ ] Operations matrix shows Training = FORBIDDEN
- [ ] PII rules active and testable
- [ ] Audit log shows all changes
- [ ] Blocked-content log shows blocked URLs
- [ ] PDF/JSON export works

---
## 8. KMK Specifics (§5 UrhG)

**Legal basis:**
- KMK resolutions, agreements, and EPA are official works under §5 UrhG
- Free to use; attribution required

**Citation format:**
```
Quelle: KMK, [Titel des Beschlusses], [Datum]
Beispiel: Quelle: KMK, Bildungsstandards im Fach Deutsch, 2003
```
**Allowed document types:**
- Beschluesse (resolutions)
- Vereinbarungen (agreements)
- EPA (Einheitliche Pruefungsanforderungen - uniform examination requirements)
- Empfehlungen (recommendations)

**In the operations matrix:**
| Operation | Allowed | Note |
|-----------|---------|------|
| Lookup | Yes | Show source |
| RAG | Yes | Citation in output |
| Training | **NO** | FORBIDDEN |
| Export | Yes | Attribution |

---
## 9. Licenses

| License | Name | Attribution |
|--------|------|-------------|
| DL-DE-BY-2.0 | Datenlizenz Deutschland | Yes |
| CC-BY | Creative Commons Attribution | Yes |
| CC-BY-SA | CC Attribution-ShareAlike | Yes + ShareAlike |
| CC0 | Public Domain | No |
| §5 UrhG | Official works | Yes (source) |
---
## 10. Current Status

**Phase 1: Database & Models - COMPLETED**
- [x] Codebase exploration edu-search-service
- [x] Codebase exploration admin-v2
- [x] Plan documented
- [x] Create migration 005_source_policies.sql
- [x] Implement Go models (internal/policy/models.go)
- [x] Implement store layer (internal/policy/store.go)
- [x] Implement policy enforcer (internal/policy/enforcer.go)
- [x] Implement PII detector (internal/policy/pii_detector.go)
- [x] Implement audit logging (internal/policy/audit.go)
- [x] Implement YAML loader (internal/policy/loader.go)
- [x] Create initial-data YAML (policies/bundeslaender.yaml)
- [x] Write unit tests (internal/policy/policy_test.go)
- [x] Update README

**Phase 2: Admin API - PENDING**
- [ ] Implement API handlers (policy_handlers.go)
- [ ] Update main.go
- [ ] Test the API

**Phase 3: Integration - PENDING**
- [ ] Crawler integration
- [ ] Pipeline integration

**Phase 4: Frontend - PENDING**
- [ ] Create frontend page.tsx
- [ ] SourcesTab component
- [ ] OperationsMatrixTab component
- [ ] PIIRulesTab component
- [ ] AuditTab component
- [ ] Update navigation

**Created files:**
```
edu-search-service/
├── migrations/
│   └── 005_source_policies.sql   # DB schema (6 tables)
├── internal/policy/
│   ├── models.go                 # Data structures & enums
│   ├── store.go                  # PostgreSQL CRUD
│   ├── enforcer.go               # Policy enforcement
│   ├── pii_detector.go           # PII detection
│   ├── audit.go                  # Audit logging
│   ├── loader.go                 # YAML loader
│   └── policy_test.go            # Unit tests
└── policies/
    └── bundeslaender.yaml        # Initial data (8 federal states)
```
45  admin-v2/ai-compliance-sdk/Dockerfile  Normal file
@@ -0,0 +1,45 @@
```dockerfile
# Build stage
FROM golang:1.21-alpine AS builder

WORKDIR /app

# Install dependencies
RUN apk add --no-cache git ca-certificates

# Copy go mod files
COPY go.mod go.sum* ./

# Download dependencies
RUN go mod download

# Copy source code
COPY . .

# Build the application
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o sdk-backend ./cmd/server

# Runtime stage
FROM alpine:3.19

WORKDIR /app

# Install ca-certificates for HTTPS
RUN apk add --no-cache ca-certificates tzdata

# Copy binary from builder
COPY --from=builder /app/sdk-backend .
COPY --from=builder /app/configs ./configs

# Create non-root user
RUN adduser -D -g '' appuser
USER appuser

# Expose port
EXPOSE 8085

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD wget --no-verbose --tries=1 --spider http://localhost:8085/health || exit 1

# Run the application
CMD ["./sdk-backend"]
```
197  admin-v2/ai-compliance-sdk/cmd/server/main.go  Normal file
@@ -0,0 +1,197 @@
```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/breakpilot/ai-compliance-sdk/internal/api"
	"github.com/breakpilot/ai-compliance-sdk/internal/db"
	"github.com/breakpilot/ai-compliance-sdk/internal/llm"
	"github.com/breakpilot/ai-compliance-sdk/internal/rag"
	"github.com/gin-gonic/gin"
	"github.com/joho/godotenv"
)

func main() {
	// Load environment variables
	if err := godotenv.Load(); err != nil {
		log.Println("No .env file found, using environment variables")
	}

	// Get configuration from environment
	port := getEnv("PORT", "8085")
	dbURL := getEnv("DATABASE_URL", "postgres://localhost:5432/sdk_states?sslmode=disable")
	qdrantURL := getEnv("QDRANT_URL", "http://localhost:6333")
	anthropicKey := getEnv("ANTHROPIC_API_KEY", "")

	// Initialize database connection
	dbPool, err := db.NewPostgresPool(dbURL)
	if err != nil {
		log.Printf("Warning: Database connection failed: %v", err)
		// Continue without database - use in-memory fallback
	}

	// Initialize RAG service
	ragService, err := rag.NewService(qdrantURL)
	if err != nil {
		log.Printf("Warning: RAG service initialization failed: %v", err)
		// Continue without RAG - will return empty results
	}

	// Initialize LLM service
	llmService := llm.NewService(anthropicKey)

	// Create Gin router
	gin.SetMode(gin.ReleaseMode)
	if os.Getenv("GIN_MODE") == "debug" {
		gin.SetMode(gin.DebugMode)
	}

	router := gin.Default()

	// CORS middleware
	router.Use(corsMiddleware())

	// Health check
	router.GET("/health", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"status": "healthy",
			"timestamp": time.Now().UTC().Format(time.RFC3339),
			"services": gin.H{
				"database": dbPool != nil,
				"rag": ragService != nil,
				"llm": anthropicKey != "",
			},
		})
	})

	// API routes
	v1 := router.Group("/sdk/v1")
	{
		// State Management
		stateHandler := api.NewStateHandler(dbPool)
		v1.GET("/state/:tenantId", stateHandler.GetState)
		v1.POST("/state", stateHandler.SaveState)
		v1.DELETE("/state/:tenantId", stateHandler.DeleteState)

		// RAG Search
		ragHandler := api.NewRAGHandler(ragService)
		v1.GET("/rag/search", ragHandler.Search)
		v1.GET("/rag/status", ragHandler.GetCorpusStatus)
		v1.POST("/rag/index", ragHandler.IndexDocument)

		// Document Generation
		generateHandler := api.NewGenerateHandler(llmService, ragService)
		v1.POST("/generate/dsfa", generateHandler.GenerateDSFA)
		v1.POST("/generate/tom", generateHandler.GenerateTOM)
		v1.POST("/generate/vvt", generateHandler.GenerateVVT)
		v1.POST("/generate/gutachten", generateHandler.GenerateGutachten)

		// Checkpoint Validation
		checkpointHandler := api.NewCheckpointHandler()
		v1.GET("/checkpoints", checkpointHandler.GetAll)
		v1.POST("/checkpoints/validate", checkpointHandler.Validate)

		// Academy (Compliance E-Learning)
		academyHandler := api.NewAcademyHandler(dbPool, llmService, ragService)
		academy := v1.Group("/academy")
		{
			// Course CRUD
			academy.GET("/courses", academyHandler.ListCourses)
			academy.GET("/courses/:id", academyHandler.GetCourse)
			academy.POST("/courses", academyHandler.CreateCourse)
			academy.PUT("/courses/:id", academyHandler.UpdateCourse)
			academy.DELETE("/courses/:id", academyHandler.DeleteCourse)

			// Statistics
			academy.GET("/statistics", academyHandler.GetStatistics)

			// Enrollments
			academy.GET("/enrollments", academyHandler.ListEnrollments)
			academy.POST("/enrollments", academyHandler.EnrollUser)
			academy.PUT("/enrollments/:id/progress", academyHandler.UpdateProgress)
			academy.POST("/enrollments/:id/complete", academyHandler.CompleteEnrollment)

			// Quiz
			academy.POST("/lessons/:id/quiz", academyHandler.SubmitQuiz)

			// Certificates
			academy.POST("/enrollments/:id/certificate", academyHandler.GenerateCertificateEndpoint)
			academy.GET("/certificates/:id", academyHandler.GetCertificate)
			academy.GET("/certificates/:id/pdf", academyHandler.DownloadCertificatePDF)

			// AI Course Generation
			academy.POST("/courses/generate", academyHandler.GenerateCourse)
			academy.POST("/lessons/:id/regenerate", academyHandler.RegenerateLesson)

			// Video Generation
			academy.POST("/courses/:id/generate-videos", academyHandler.GenerateVideos)
			academy.GET("/courses/:id/video-status", academyHandler.GetVideoStatus)
		}
	}

	// Create server
	srv := &http.Server{
		Addr:    ":" + port,
		Handler: router,
	}

	// Graceful shutdown
	go func() {
		log.Printf("SDK Backend starting on port %s", port)
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("Failed to start server: %v", err)
		}
	}()

	// Wait for interrupt signal
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
	<-quit

	log.Println("Shutting down server...")

	// Give outstanding requests 5 seconds to complete
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	if err := srv.Shutdown(ctx); err != nil {
		log.Fatal("Server forced to shutdown:", err)
	}

	// Close database connection
	if dbPool != nil {
		dbPool.Close()
	}

	log.Println("Server exited")
}

func getEnv(key, defaultValue string) string {
	if value := os.Getenv(key); value != "" {
		return value
	}
	return defaultValue
}

func corsMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		c.Writer.Header().Set("Access-Control-Allow-Origin", "*")
		c.Writer.Header().Set("Access-Control-Allow-Credentials", "true")
		c.Writer.Header().Set("Access-Control-Allow-Headers", "Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization, accept, origin, Cache-Control, X-Requested-With, If-Match, If-None-Match")
		c.Writer.Header().Set("Access-Control-Allow-Methods", "POST, OPTIONS, GET, PUT, DELETE")
		c.Writer.Header().Set("Access-Control-Expose-Headers", "ETag, Last-Modified")

		if c.Request.Method == "OPTIONS" {
			c.AbortWithStatus(204)
			return
		}

		c.Next()
	}
}
```
42  admin-v2/ai-compliance-sdk/configs/config.yaml  Normal file
@@ -0,0 +1,42 @@
```yaml
server:
  port: 8085
  mode: release # debug, release, test

database:
  url: postgres://localhost:5432/sdk_states?sslmode=disable
  max_connections: 10
  min_connections: 2

rag:
  qdrant_url: http://localhost:6333
  collection: legal_corpus
  embedding_model: BGE-M3
  top_k: 5

llm:
  provider: anthropic # anthropic, openai
  model: claude-3-5-sonnet-20241022
  max_tokens: 4096
  temperature: 0.3

cors:
  allowed_origins:
    - http://localhost:3000
    - http://localhost:3002
    - http://macmini:3000
    - http://macmini:3002
  allowed_methods:
    - GET
    - POST
    - PUT
    - DELETE
    - OPTIONS
  allowed_headers:
    - Content-Type
    - Authorization
    - If-Match
    - If-None-Match

logging:
  level: info # debug, info, warn, error
  format: json
```
45  admin-v2/ai-compliance-sdk/go.mod  Normal file
@@ -0,0 +1,45 @@
```
module github.com/breakpilot/ai-compliance-sdk

go 1.23

require (
	github.com/gin-gonic/gin v1.10.0
	github.com/jackc/pgx/v5 v5.5.1
	github.com/joho/godotenv v1.5.1
	github.com/jung-kurt/gofpdf v1.16.2
)

require (
	github.com/bytedance/sonic v1.11.6 // indirect
	github.com/bytedance/sonic/loader v0.1.1 // indirect
	github.com/cloudwego/base64x v0.1.4 // indirect
	github.com/cloudwego/iasm v0.2.0 // indirect
	github.com/gabriel-vasile/mimetype v1.4.3 // indirect
	github.com/gin-contrib/sse v0.1.0 // indirect
	github.com/go-playground/locales v0.14.1 // indirect
	github.com/go-playground/universal-translator v0.18.1 // indirect
	github.com/go-playground/validator/v10 v10.20.0 // indirect
	github.com/goccy/go-json v0.10.2 // indirect
	github.com/jackc/pgpassfile v1.0.0 // indirect
	github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a // indirect
	github.com/jackc/puddle/v2 v2.2.1 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/klauspost/cpuid/v2 v2.2.7 // indirect
	github.com/kr/text v0.2.0 // indirect
	github.com/leodido/go-urn v1.4.0 // indirect
	github.com/mattn/go-isatty v0.0.20 // indirect
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/modern-go/reflect2 v1.0.2 // indirect
	github.com/pelletier/go-toml/v2 v2.2.2 // indirect
	github.com/rogpeppe/go-internal v1.14.1 // indirect
	github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
	github.com/ugorji/go/codec v1.2.12 // indirect
	golang.org/x/arch v0.8.0 // indirect
	golang.org/x/crypto v0.23.0 // indirect
	golang.org/x/net v0.25.0 // indirect
	golang.org/x/sync v0.1.0 // indirect
	golang.org/x/sys v0.26.0 // indirect
	golang.org/x/text v0.15.0 // indirect
	google.golang.org/protobuf v1.34.1 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
)
```
119  admin-v2/ai-compliance-sdk/go.sum  Normal file
@@ -0,0 +1,119 @@
```
github.com/boombuler/barcode v1.0.0/go.mod h1:paBWMcWSl3LHKBqUq+rly7CNSldXjb2rDl3JlRe0mD8=
github.com/bytedance/sonic v1.11.6 h1:oUp34TzMlL+OY1OUWxHqsdkgC/Zfc85zGqw9siXjrc0=
github.com/bytedance/sonic v1.11.6/go.mod h1:LysEHSvpvDySVdC2f87zGWf6CIKJcAvqab1ZaiQtds4=
github.com/bytedance/sonic/loader v0.1.1 h1:c+e5Pt1k/cy5wMveRDyk2X4B9hF4g7an8N3zCYjJFNM=
github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4yY2JpfqGeCtNLU=
github.com/cloudwego/base64x v0.1.4 h1:jwCgWpFanWmN8xoIUHa2rtzmkd5J2plF/dnLS6Xd/0Y=
github.com/cloudwego/base64x v0.1.4/go.mod h1:0zlkT4Wn5C6NdauXdJRhSKRlJvmclQ1hhJgA0rcu/8w=
github.com/cloudwego/iasm v0.2.0 h1:1KNIy1I1H9hNNFEEH3DVnI4UujN+1zjpuk6gwHLTssg=
github.com/cloudwego/iasm v0.2.0/go.mod h1:8rXZaNYT2n95jn+zTI1sDr+IgcD2GVs0nlbbQPiEFhY=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0=
github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk=
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.10.0 h1:nTuyha1TYqgedzytsKYqna+DfLos46nTv2ygFy86HFU=
github.com/gin-gonic/gin v1.10.0/go.mod h1:4PMNQiOhvDRa013RKVbsiNwoyezlm2rm0uX/T7kzp5Y=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.20.0 h1:K9ISHbSaI0lyB2eWMPJo+kOS/FBExVwjEviJTixqxL8=
github.com/go-playground/validator/v10 v10.20.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a h1:bbPeKD0xmW/Y25WS6cokEszi5g+S0QxI/d45PkRi7Nk=
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgx/v5 v5.5.1 h1:5I9etrGkLrN+2XPCsi6XLlV5DITbSL/xBZdmAxFcXPI=
github.com/jackc/pgx/v5 v5.5.1/go.mod h1:Ig06C2Vu0t5qXC60W8sqIthScaEnFvojjj9dSljmHRA=
github.com/jackc/puddle/v2 v2.2.1 h1:RhxXJtFG022u4ibrCSMSiu5aOq1i77R3OHKNJj77OAk=
github.com/jackc/puddle/v2 v2.2.1/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jung-kurt/gofpdf v1.0.0/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
github.com/jung-kurt/gofpdf v1.16.2 h1:jgbatWHfRlPYiK85qgevsZTHviWXKwB1TTiKdz5PtRc=
github.com/jung-kurt/gofpdf v1.16.2/go.mod h1:1hl7y57EsiPAkLbOwzpzqgx1A30nQCk/YmFV8S2vmK0=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.7 h1:ZWSB3igEs+d0qvnxR/ZBzXVmxkgt8DdzP6m9pfuVLDM=
github.com/klauspost/cpuid/v2 v2.2.7/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/knz/go-libedit v1.10.1/go.mod h1:MZTVkCWyz0oBc7JOWP3wNAzd002ZbM/5hgShxwh4x8M=
github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/phpdave11/gofpdi v1.0.7/go.mod h1:vBmVV0Do6hSBHC8uKUQ71JGW+ZGQq74llk/7bXwjDoI=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/ruudk/golang-pdf417 v0.0.0-20181029194003-1af4ab5afa58/go.mod h1:6lfFZQK844Gfx8o5WFuvpxWRwnSoipWe/p622j1v06w=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/ugorji/go/codec v1.2.12 h1:9LC83zGrHhuUA9l16C9AHXAqEV/2wBQ4nkvumAE65EE=
github.com/ugorji/go/codec v1.2.12/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/arch v0.8.0 h1:3wRIsP3pM4yUptoR96otTUOXI367OS0+c9eeRi9doIc=
golang.org/x/arch v0.8.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys=
golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/image v0.0.0-20190910094157-69e4b8554b2a/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.34.1 h1:9ddQBjfCyZPOHPUiPxpYESBLc+T8P3E+Vo4IbKZgFWg=
google.golang.org/protobuf v1.34.1/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
nullprogram.com/x/optparse v1.0.0/go.mod h1:KdyPE+Igbe0jQUrVfMqDMeJQIJZEuyV7pjYmp6pbG50=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
```
152 admin-v2/ai-compliance-sdk/internal/academy/certificate_pdf.go Normal file
@@ -0,0 +1,152 @@
package academy

import (
    "bytes"
    "fmt"
    "time"

    "github.com/jung-kurt/gofpdf"
)

// CertificateData holds all data needed to generate a certificate PDF
type CertificateData struct {
    CertificateID string
    UserName      string
    CourseName    string
    CompanyName   string
    Score         int
    IssuedAt      time.Time
    ValidUntil    time.Time
}

// GenerateCertificatePDF generates a PDF certificate and returns the bytes
func GenerateCertificatePDF(data CertificateData) ([]byte, error) {
    pdf := gofpdf.New("L", "mm", "A4", "") // Landscape A4
    pdf.SetAutoPageBreak(false, 0)
    pdf.AddPage()

    pageWidth, pageHeight := pdf.GetPageSize()

    // Background color - light gray
    pdf.SetFillColor(250, 250, 252)
    pdf.Rect(0, 0, pageWidth, pageHeight, "F")

    // Border - decorative
    pdf.SetDrawColor(79, 70, 229) // Purple/Indigo
    pdf.SetLineWidth(3)
    pdf.Rect(10, 10, pageWidth-20, pageHeight-20, "D")
    pdf.SetLineWidth(1)
    pdf.Rect(14, 14, pageWidth-28, pageHeight-28, "D")

    // Header - Company/BreakPilot Logo area
    companyName := data.CompanyName
    if companyName == "" {
        companyName = "BreakPilot Compliance"
    }

    pdf.SetFont("Helvetica", "", 12)
    pdf.SetTextColor(120, 120, 120)
    pdf.SetXY(0, 25)
    pdf.CellFormat(pageWidth, 10, companyName, "", 0, "C", false, 0, "")

    // Title
    pdf.SetFont("Helvetica", "B", 32)
    pdf.SetTextColor(30, 30, 30)
    pdf.SetXY(0, 42)
    pdf.CellFormat(pageWidth, 15, "SCHULUNGSZERTIFIKAT", "", 0, "C", false, 0, "")

    // Decorative line
    pdf.SetDrawColor(79, 70, 229)
    pdf.SetLineWidth(1.5)
    lineY := 62.0
    pdf.Line(pageWidth/2-60, lineY, pageWidth/2+60, lineY)

    // "Hiermit wird bescheinigt, dass"
    pdf.SetFont("Helvetica", "", 13)
    pdf.SetTextColor(80, 80, 80)
    pdf.SetXY(0, 72)
    pdf.CellFormat(pageWidth, 8, "Hiermit wird bescheinigt, dass", "", 0, "C", false, 0, "")

    // Name
    pdf.SetFont("Helvetica", "B", 26)
    pdf.SetTextColor(30, 30, 30)
    pdf.SetXY(0, 85)
    pdf.CellFormat(pageWidth, 12, data.UserName, "", 0, "C", false, 0, "")

    // "die folgende Compliance-Schulung erfolgreich abgeschlossen hat:"
    pdf.SetFont("Helvetica", "", 13)
    pdf.SetTextColor(80, 80, 80)
    pdf.SetXY(0, 103)
    pdf.CellFormat(pageWidth, 8, "die folgende Compliance-Schulung erfolgreich abgeschlossen hat:", "", 0, "C", false, 0, "")

    // Course Name
    pdf.SetFont("Helvetica", "B", 20)
    pdf.SetTextColor(79, 70, 229)
    pdf.SetXY(0, 116)
    pdf.CellFormat(pageWidth, 10, data.CourseName, "", 0, "C", false, 0, "")

    // Score
    if data.Score > 0 {
        pdf.SetFont("Helvetica", "", 12)
        pdf.SetTextColor(80, 80, 80)
        pdf.SetXY(0, 130)
        pdf.CellFormat(pageWidth, 8, fmt.Sprintf("Testergebnis: %d%%", data.Score), "", 0, "C", false, 0, "")
    }

    // Bottom section - Dates and Signature
    bottomY := 148.0

    // Left: Issued Date
    pdf.SetFont("Helvetica", "", 10)
    pdf.SetTextColor(100, 100, 100)
    pdf.SetXY(40, bottomY)
    pdf.CellFormat(80, 6, fmt.Sprintf("Abschlussdatum: %s", data.IssuedAt.Format("02.01.2006")), "", 0, "L", false, 0, "")

    // Center: Valid Until
    pdf.SetXY(pageWidth/2-40, bottomY)
    pdf.CellFormat(80, 6, fmt.Sprintf("Gueltig bis: %s", data.ValidUntil.Format("02.01.2006")), "", 0, "C", false, 0, "")

    // Right: Certificate ID
    pdf.SetXY(pageWidth-120, bottomY)
    pdf.CellFormat(80, 6, fmt.Sprintf("Zertifikats-Nr.: %s", data.CertificateID[:min(12, len(data.CertificateID))]), "", 0, "R", false, 0, "")

    // Signature line
    sigY := 162.0
    pdf.SetDrawColor(150, 150, 150)
    pdf.SetLineWidth(0.5)

    // Left signature
    pdf.Line(50, sigY, 130, sigY)
    pdf.SetFont("Helvetica", "", 9)
    pdf.SetTextColor(120, 120, 120)
    pdf.SetXY(50, sigY+2)
    pdf.CellFormat(80, 5, "Datenschutzbeauftragter", "", 0, "C", false, 0, "")

    // Right signature
    pdf.Line(pageWidth-130, sigY, pageWidth-50, sigY)
    pdf.SetXY(pageWidth-130, sigY+2)
    pdf.CellFormat(80, 5, "Geschaeftsfuehrung", "", 0, "C", false, 0, "")

    // Footer
    pdf.SetFont("Helvetica", "", 8)
    pdf.SetTextColor(160, 160, 160)
    pdf.SetXY(0, pageHeight-22)
    pdf.CellFormat(pageWidth, 5, "Dieses Zertifikat wurde elektronisch erstellt und ist ohne Unterschrift gueltig.", "", 0, "C", false, 0, "")
    pdf.SetXY(0, pageHeight-17)
    pdf.CellFormat(pageWidth, 5, fmt.Sprintf("Verifizierung unter: https://compliance.breakpilot.de/verify/%s", data.CertificateID), "", 0, "C", false, 0, "")

    // Generate PDF bytes
    var buf bytes.Buffer
    if err := pdf.Output(&buf); err != nil {
        return nil, fmt.Errorf("failed to generate PDF: %w", err)
    }

    return buf.Bytes(), nil
}

func min(a, b int) int {
    if a < b {
        return a
    }
    return b
}
105 admin-v2/ai-compliance-sdk/internal/academy/elevenlabs_client.go Normal file
@@ -0,0 +1,105 @@
package academy

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "os"
    "time"
)

// ElevenLabsClient handles text-to-speech via the ElevenLabs API
type ElevenLabsClient struct {
    apiKey  string
    voiceID string
    client  *http.Client
}

// NewElevenLabsClient creates a new ElevenLabs client
func NewElevenLabsClient() *ElevenLabsClient {
    apiKey := os.Getenv("ELEVENLABS_API_KEY")
    voiceID := os.Getenv("ELEVENLABS_VOICE_ID")
    if voiceID == "" {
        voiceID = "EXAVITQu4vr4xnSDxMaL" // Default: "Sarah" voice
    }

    return &ElevenLabsClient{
        apiKey:  apiKey,
        voiceID: voiceID,
        client: &http.Client{
            Timeout: 120 * time.Second,
        },
    }
}

// IsConfigured returns true if API key is set
func (c *ElevenLabsClient) IsConfigured() bool {
    return c.apiKey != ""
}

// TextToSpeechRequest represents the API request
type TextToSpeechRequest struct {
    Text          string        `json:"text"`
    ModelID       string        `json:"model_id"`
    VoiceSettings VoiceSettings `json:"voice_settings"`
}

// VoiceSettings controls voice parameters
type VoiceSettings struct {
    Stability       float64 `json:"stability"`
    SimilarityBoost float64 `json:"similarity_boost"`
    Style           float64 `json:"style"`
}

// TextToSpeech converts text to speech audio (MP3)
func (c *ElevenLabsClient) TextToSpeech(text string) ([]byte, error) {
    if !c.IsConfigured() {
        return nil, fmt.Errorf("ElevenLabs API key not configured")
    }

    url := fmt.Sprintf("https://api.elevenlabs.io/v1/text-to-speech/%s", c.voiceID)

    reqBody := TextToSpeechRequest{
        Text:    text,
        ModelID: "eleven_multilingual_v2",
        VoiceSettings: VoiceSettings{
            Stability:       0.5,
            SimilarityBoost: 0.75,
            Style:           0.5,
        },
    }

    jsonBody, err := json.Marshal(reqBody)
    if err != nil {
        return nil, fmt.Errorf("failed to marshal request: %w", err)
    }

    req, err := http.NewRequest("POST", url, bytes.NewReader(jsonBody))
    if err != nil {
        return nil, fmt.Errorf("failed to create request: %w", err)
    }

    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("xi-api-key", c.apiKey)
    req.Header.Set("Accept", "audio/mpeg")

    resp, err := c.client.Do(req)
    if err != nil {
        return nil, fmt.Errorf("ElevenLabs API request failed: %w", err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        body, _ := io.ReadAll(resp.Body)
        return nil, fmt.Errorf("ElevenLabs API error %d: %s", resp.StatusCode, string(body))
    }

    audioData, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, fmt.Errorf("failed to read audio response: %w", err)
    }

    return audioData, nil
}
184 admin-v2/ai-compliance-sdk/internal/academy/heygen_client.go Normal file
@@ -0,0 +1,184 @@
package academy

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "os"
    "time"
)

// HeyGenClient handles avatar video generation via the HeyGen API
type HeyGenClient struct {
    apiKey   string
    avatarID string
    client   *http.Client
}

// NewHeyGenClient creates a new HeyGen client
func NewHeyGenClient() *HeyGenClient {
    apiKey := os.Getenv("HEYGEN_API_KEY")
    avatarID := os.Getenv("HEYGEN_AVATAR_ID")
    if avatarID == "" {
        avatarID = "josh_lite3_20230714" // Default avatar
    }

    return &HeyGenClient{
        apiKey:   apiKey,
        avatarID: avatarID,
        client: &http.Client{
            Timeout: 300 * time.Second, // Video generation can take time
        },
    }
}

// IsConfigured returns true if API key is set
func (c *HeyGenClient) IsConfigured() bool {
    return c.apiKey != ""
}

// CreateVideoRequest represents the HeyGen API request
type CreateVideoRequest struct {
    VideoInputs []VideoInput `json:"video_inputs"`
    Dimension   Dimension    `json:"dimension"`
}

// VideoInput represents a single video segment
type VideoInput struct {
    Character Character  `json:"character"`
    Voice     VideoVoice `json:"voice"`
}

// Character represents the avatar
type Character struct {
    Type     string `json:"type"`
    AvatarID string `json:"avatar_id"`
}

// VideoVoice represents the voice/audio source
type VideoVoice struct {
    Type      string `json:"type"` // "audio" for pre-generated audio
    AudioURL  string `json:"audio_url,omitempty"`
    InputText string `json:"input_text,omitempty"`
}

// Dimension represents video dimensions
type Dimension struct {
    Width  int `json:"width"`
    Height int `json:"height"`
}

// CreateVideoResponse represents the HeyGen API response
type CreateVideoResponse struct {
    Data struct {
        VideoID string `json:"video_id"`
    } `json:"data"`
    Error interface{} `json:"error"`
}

// HeyGenVideoStatus represents video status from HeyGen
type HeyGenVideoStatus struct {
    Data struct {
        Status   string `json:"status"` // processing, completed, failed
        VideoURL string `json:"video_url"`
    } `json:"data"`
}

// CreateVideo creates a video with the avatar and audio
func (c *HeyGenClient) CreateVideo(audioURL string) (*CreateVideoResponse, error) {
    if !c.IsConfigured() {
        return nil, fmt.Errorf("HeyGen API key not configured")
    }

    url := "https://api.heygen.com/v2/video/generate"

    reqBody := CreateVideoRequest{
        VideoInputs: []VideoInput{
            {
                Character: Character{
                    Type:     "avatar",
                    AvatarID: c.avatarID,
                },
                Voice: VideoVoice{
                    Type:     "audio",
                    AudioURL: audioURL,
                },
            },
        },
        Dimension: Dimension{
            Width:  1920,
            Height: 1080,
        },
    }

    jsonBody, err := json.Marshal(reqBody)
    if err != nil {
        return nil, fmt.Errorf("failed to marshal request: %w", err)
    }

    req, err := http.NewRequest("POST", url, bytes.NewReader(jsonBody))
    if err != nil {
        return nil, fmt.Errorf("failed to create request: %w", err)
    }

    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("X-Api-Key", c.apiKey)

    resp, err := c.client.Do(req)
    if err != nil {
        return nil, fmt.Errorf("HeyGen API request failed: %w", err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, fmt.Errorf("failed to read response: %w", err)
    }

    if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusCreated {
        return nil, fmt.Errorf("HeyGen API error %d: %s", resp.StatusCode, string(body))
    }

    var result CreateVideoResponse
    if err := json.Unmarshal(body, &result); err != nil {
        return nil, fmt.Errorf("failed to parse response: %w", err)
    }

    return &result, nil
}

// GetVideoStatus checks the status of a video generation job
func (c *HeyGenClient) GetVideoStatus(videoID string) (*HeyGenVideoStatus, error) {
    if !c.IsConfigured() {
        return nil, fmt.Errorf("HeyGen API key not configured")
    }

    url := fmt.Sprintf("https://api.heygen.com/v1/video_status.get?video_id=%s", videoID)

    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        return nil, fmt.Errorf("failed to create request: %w", err)
    }

    req.Header.Set("X-Api-Key", c.apiKey)

    resp, err := c.client.Do(req)
    if err != nil {
        return nil, fmt.Errorf("HeyGen API request failed: %w", err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, fmt.Errorf("failed to read response: %w", err)
    }

    var status HeyGenVideoStatus
    if err := json.Unmarshal(body, &status); err != nil {
        return nil, fmt.Errorf("failed to parse response: %w", err)
    }

    return &status, nil
}
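`GetVideoStatus` returns one of `processing`, `completed`, or `failed`, so a caller has to poll until the job resolves. The diff does not include such a poller; the sketch below shows one possible shape, with the status source injected as a function (standing in for `HeyGenClient.GetVideoStatus`) so the loop can be exercised without network access:

```go
package main

import (
    "errors"
    "fmt"
    "time"
)

// pollUntilDone repeatedly calls check until it reports "completed" or
// "failed", sleeping between attempts. check stands in for a call to
// HeyGenClient.GetVideoStatus; this loop is a sketch, not part of the SDK.
func pollUntilDone(check func() (status, url string, err error), every time.Duration, maxTries int) (string, error) {
    for i := 0; i < maxTries; i++ {
        status, url, err := check()
        if err != nil {
            return "", err
        }
        switch status {
        case "completed":
            return url, nil
        case "failed":
            return "", errors.New("video generation failed")
        }
        time.Sleep(every)
    }
    return "", errors.New("timed out waiting for video")
}

func main() {
    // Fake status source: "processing" twice, then "completed".
    calls := 0
    check := func() (string, string, error) {
        calls++
        if calls < 3 {
            return "processing", "", nil
        }
        return "completed", "https://example.com/video.mp4", nil
    }
    url, err := pollUntilDone(check, time.Millisecond, 10)
    fmt.Println(url, err)
}
```

In production the interval would be seconds rather than milliseconds, and the loop bounded by a context deadline instead of a fixed try count.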
@@ -0,0 +1,91 @@
package academy

import (
    "fmt"
    "log"
)

// VideoGenerator orchestrates video generation with 3-tier fallback:
//  1. HeyGen + ElevenLabs  -> Avatar video with voice
//  2. ElevenLabs only      -> Audio podcast style
//  3. No external services -> Text + Quiz only
type VideoGenerator struct {
    elevenLabs *ElevenLabsClient
    heyGen     *HeyGenClient
}

// NewVideoGenerator creates a new video generator
func NewVideoGenerator() *VideoGenerator {
    return &VideoGenerator{
        elevenLabs: NewElevenLabsClient(),
        heyGen:     NewHeyGenClient(),
    }
}

// GenerationMode describes the available generation mode
type GenerationMode string

const (
    ModeAvatarVideo GenerationMode = "avatar_video" // HeyGen + ElevenLabs
    ModeAudioOnly   GenerationMode = "audio_only"   // ElevenLabs only
    ModeTextOnly    GenerationMode = "text_only"    // No external services
)

// GetAvailableMode returns the best available generation mode
func (vg *VideoGenerator) GetAvailableMode() GenerationMode {
    if vg.heyGen.IsConfigured() && vg.elevenLabs.IsConfigured() {
        return ModeAvatarVideo
    }
    if vg.elevenLabs.IsConfigured() {
        return ModeAudioOnly
    }
    return ModeTextOnly
}

// GenerateAudio generates audio from text using ElevenLabs
func (vg *VideoGenerator) GenerateAudio(text string) ([]byte, error) {
    if !vg.elevenLabs.IsConfigured() {
        return nil, fmt.Errorf("ElevenLabs not configured")
    }

    log.Printf("Generating audio for text (%d chars)...", len(text))
    return vg.elevenLabs.TextToSpeech(text)
}

// GenerateVideo generates a video from audio using HeyGen
func (vg *VideoGenerator) GenerateVideo(audioURL string) (string, error) {
    if !vg.heyGen.IsConfigured() {
        return "", fmt.Errorf("HeyGen not configured")
    }

    log.Printf("Creating HeyGen video with audio: %s", audioURL)
    resp, err := vg.heyGen.CreateVideo(audioURL)
    if err != nil {
        return "", err
    }

    return resp.Data.VideoID, nil
}

// CheckVideoStatus checks if a HeyGen video is ready
func (vg *VideoGenerator) CheckVideoStatus(videoID string) (string, string, error) {
    if !vg.heyGen.IsConfigured() {
        return "", "", fmt.Errorf("HeyGen not configured")
    }

    status, err := vg.heyGen.GetVideoStatus(videoID)
    if err != nil {
        return "", "", err
    }

    return status.Data.Status, status.Data.VideoURL, nil
}

// GetStatus returns the configuration status
func (vg *VideoGenerator) GetStatus() map[string]interface{} {
    return map[string]interface{}{
        "mode":                 string(vg.GetAvailableMode()),
        "elevenLabsConfigured": vg.elevenLabs.IsConfigured(),
        "heyGenConfigured":     vg.heyGen.IsConfigured(),
    }
}
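The tier selection in `GetAvailableMode` is a pure function of two configuration flags. Extracting it as such (a free function named `pickMode` here, purely for illustration) makes the fallback order trivial to test, including the non-obvious case where HeyGen alone is configured:

```go
package main

import "fmt"

// pickMode reproduces GetAvailableMode's tier selection as a pure function
// of the two configuration flags.
func pickMode(heyGenOK, elevenLabsOK bool) string {
    if heyGenOK && elevenLabsOK {
        return "avatar_video" // tier 1: HeyGen + ElevenLabs
    }
    if elevenLabsOK {
        return "audio_only" // tier 2: ElevenLabs only
    }
    return "text_only" // tier 3: no external services
}

func main() {
    fmt.Println(pickMode(true, true))   // avatar_video
    fmt.Println(pickMode(false, true))  // audio_only
    fmt.Println(pickMode(false, false)) // text_only
    // HeyGen without ElevenLabs falls through to text_only, since the
    // avatar tier needs ElevenLabs to produce the audio track.
    fmt.Println(pickMode(true, false)) // text_only
}
```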
950 admin-v2/ai-compliance-sdk/internal/api/academy.go Normal file
@@ -0,0 +1,950 @@
package api
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"time"
|
||||
|
||||
"github.com/breakpilot/ai-compliance-sdk/internal/academy"
|
||||
"github.com/breakpilot/ai-compliance-sdk/internal/db"
|
||||
"github.com/breakpilot/ai-compliance-sdk/internal/llm"
|
||||
"github.com/breakpilot/ai-compliance-sdk/internal/rag"
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
// AcademyHandler handles all Academy-related HTTP requests
|
||||
type AcademyHandler struct {
|
||||
dbPool *db.Pool
|
||||
llmService *llm.Service
|
||||
ragService *rag.Service
|
||||
academyStore *db.AcademyMemStore
|
||||
}
|
||||
|
||||
// NewAcademyHandler creates a new Academy handler
|
||||
func NewAcademyHandler(dbPool *db.Pool, llmService *llm.Service, ragService *rag.Service) *AcademyHandler {
|
||||
return &AcademyHandler{
|
||||
dbPool: dbPool,
|
||||
llmService: llmService,
|
||||
ragService: ragService,
|
||||
academyStore: db.NewAcademyMemStore(),
|
||||
}
|
||||
}
|
||||
|
||||
func (h *AcademyHandler) getTenantID(c *gin.Context) string {
|
||||
tid := c.GetHeader("X-Tenant-ID")
|
||||
if tid == "" {
|
||||
tid = c.Query("tenantId")
|
||||
}
|
||||
if tid == "" {
|
||||
tid = "default-tenant"
|
||||
}
|
||||
return tid
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Course CRUD
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
// ListCourses returns all courses for the tenant
|
||||
func (h *AcademyHandler) ListCourses(c *gin.Context) {
|
||||
tenantID := h.getTenantID(c)
|
||||
rows := h.academyStore.ListCourses(tenantID)
|
||||
|
||||
courses := make([]AcademyCourse, 0, len(rows))
|
||||
for _, row := range rows {
|
||||
lessons := h.buildLessonsForCourse(row.ID)
|
||||
courses = append(courses, courseRowToResponse(row, lessons))
|
||||
}
|
||||
|
||||
SuccessResponse(c, courses)
|
||||
}
|
||||
|
||||
// GetCourse returns a single course with its lessons
|
||||
func (h *AcademyHandler) GetCourse(c *gin.Context) {
|
||||
id := c.Param("id")
|
||||
row, err := h.academyStore.GetCourse(id)
|
||||
if err != nil {
|
||||
ErrorResponse(c, http.StatusNotFound, "Course not found", "COURSE_NOT_FOUND")
|
||||
return
|
||||
}
|
||||
|
||||
lessons := h.buildLessonsForCourse(row.ID)
|
||||
SuccessResponse(c, courseRowToResponse(row, lessons))
|
||||
}
|
||||
|
||||
// CreateCourse creates a new course with optional lessons
|
||||
func (h *AcademyHandler) CreateCourse(c *gin.Context) {
|
||||
var req CreateCourseRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
|
||||
return
|
||||
}
|
||||
|
||||
passingScore := req.PassingScore
|
||||
if passingScore == 0 {
|
||||
passingScore = 70
|
||||
}
|
||||
|
||||
roles := req.RequiredForRoles
|
||||
if len(roles) == 0 {
|
||||
roles = []string{"all"}
|
||||
}
|
||||
|
||||
courseRow := h.academyStore.CreateCourse(&db.AcademyCourseRow{
|
||||
TenantID: req.TenantID,
|
||||
Title: req.Title,
|
||||
Description: req.Description,
|
||||
Category: req.Category,
|
||||
PassingScore: passingScore,
|
||||
DurationMinutes: req.DurationMinutes,
|
||||
RequiredForRoles: roles,
|
||||
Status: "draft",
|
||||
})
|
||||
|
||||
// Create lessons
|
||||
for i, lessonReq := range req.Lessons {
|
||||
order := lessonReq.Order
|
||||
if order == 0 {
|
||||
order = i + 1
|
||||
}
|
||||
lessonRow := h.academyStore.CreateLesson(&db.AcademyLessonRow{
|
||||
CourseID: courseRow.ID,
|
||||
Title: lessonReq.Title,
|
||||
Type: lessonReq.Type,
|
||||
ContentMarkdown: lessonReq.ContentMarkdown,
|
||||
VideoURL: lessonReq.VideoURL,
|
||||
SortOrder: order,
|
||||
DurationMinutes: lessonReq.DurationMinutes,
|
||||
})
|
||||
|
||||
// Create quiz questions for this lesson
|
||||
for j, qReq := range lessonReq.QuizQuestions {
|
||||
qOrder := qReq.Order
|
||||
if qOrder == 0 {
|
||||
qOrder = j + 1
|
||||
}
|
||||
h.academyStore.CreateQuizQuestion(&db.AcademyQuizQuestionRow{
|
||||
LessonID: lessonRow.ID,
|
||||
Question: qReq.Question,
|
||||
Options: qReq.Options,
|
||||
CorrectOptionIndex: qReq.CorrectOptionIndex,
|
||||
Explanation: qReq.Explanation,
|
||||
SortOrder: qOrder,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
lessons := h.buildLessonsForCourse(courseRow.ID)
|
||||
c.JSON(http.StatusCreated, Response{
|
||||
Success: true,
|
||||
Data: courseRowToResponse(courseRow, lessons),
|
||||
})
|
||||
}
|
||||
|
||||
// UpdateCourse updates an existing course
|
||||
func (h *AcademyHandler) UpdateCourse(c *gin.Context) {
|
||||
id := c.Param("id")
|
||||
|
||||
var req UpdateCourseRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
|
||||
return
|
||||
}
|
||||
|
||||
updates := make(map[string]interface{})
|
||||
if req.Title != nil {
|
||||
updates["title"] = *req.Title
|
||||
}
|
||||
if req.Description != nil {
|
||||
updates["description"] = *req.Description
|
||||
}
|
||||
if req.Category != nil {
|
||||
updates["category"] = *req.Category
|
||||
}
|
||||
if req.DurationMinutes != nil {
|
||||
updates["durationminutes"] = *req.DurationMinutes
|
||||
}
|
||||
if req.PassingScore != nil {
|
||||
updates["passingscore"] = *req.PassingScore
|
||||
}
|
||||
if req.RequiredForRoles != nil {
|
||||
updates["requiredforroles"] = req.RequiredForRoles
|
||||
}
|
||||
|
||||
row, err := h.academyStore.UpdateCourse(id, updates)
|
||||
if err != nil {
|
||||
ErrorResponse(c, http.StatusNotFound, "Course not found", "COURSE_NOT_FOUND")
|
||||
return
|
||||
}
|
||||
|
||||
lessons := h.buildLessonsForCourse(row.ID)
|
||||
SuccessResponse(c, courseRowToResponse(row, lessons))
|
||||
}
|
||||
|
||||
// DeleteCourse deletes a course and all related data
|
||||
func (h *AcademyHandler) DeleteCourse(c *gin.Context) {
|
||||
id := c.Param("id")
|
||||
|
||||
if err := h.academyStore.DeleteCourse(id); err != nil {
|
||||
ErrorResponse(c, http.StatusNotFound, "Course not found", "COURSE_NOT_FOUND")
|
||||
return
|
||||
}
|
||||
|
||||
SuccessResponse(c, gin.H{
|
||||
"courseId": id,
|
||||
"deletedAt": now(),
|
||||
})
|
||||
}
|
||||
|
||||
// GetStatistics returns academy statistics for the tenant
|
||||
func (h *AcademyHandler) GetStatistics(c *gin.Context) {
|
||||
tenantID := h.getTenantID(c)
|
||||
stats := h.academyStore.GetStatistics(tenantID)
|
||||
|
||||
SuccessResponse(c, AcademyStatistics{
|
||||
TotalCourses: stats.TotalCourses,
|
||||
TotalEnrollments: stats.TotalEnrollments,
|
||||
CompletionRate: int(stats.CompletionRate),
|
||||
OverdueCount: stats.OverdueCount,
|
||||
ByCategory: stats.ByCategory,
|
||||
ByStatus: stats.ByStatus,
|
||||
})
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Enrollments
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
// ListEnrollments returns enrollments filtered by tenant and optionally course
|
||||
func (h *AcademyHandler) ListEnrollments(c *gin.Context) {
|
||||
tenantID := h.getTenantID(c)
|
||||
courseID := c.Query("courseId")
|
||||
|
||||
rows := h.academyStore.ListEnrollments(tenantID, courseID)
|
||||
|
||||
enrollments := make([]AcademyEnrollment, 0, len(rows))
|
||||
for _, row := range rows {
|
||||
enrollments = append(enrollments, enrollmentRowToResponse(row))
|
||||
}
|
||||
|
||||
SuccessResponse(c, enrollments)
|
||||
}
|
||||
|
||||
// EnrollUser enrolls a user in a course
|
||||
func (h *AcademyHandler) EnrollUser(c *gin.Context) {
|
||||
var req EnrollUserRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
|
||||
return
|
||||
}
|
||||
|
||||
deadline, err := time.Parse(time.RFC3339, req.Deadline)
|
||||
if err != nil {
|
||||
deadline, err = time.Parse("2006-01-02", req.Deadline)
|
||||
if err != nil {
|
||||
ErrorResponse(c, http.StatusBadRequest, "Invalid deadline format. Use RFC3339 or YYYY-MM-DD.", "INVALID_DEADLINE")
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
row := h.academyStore.CreateEnrollment(&db.AcademyEnrollmentRow{
|
||||
TenantID: req.TenantID,
|
||||
CourseID: req.CourseID,
|
||||
UserID: req.UserID,
|
||||
UserName: req.UserName,
|
||||
UserEmail: req.UserEmail,
|
||||
Status: "not_started",
|
||||
Progress: 0,
|
||||
Deadline: deadline,
|
||||
})
|
||||
|
||||
c.JSON(http.StatusCreated, Response{
|
||||
Success: true,
|
||||
Data: enrollmentRowToResponse(row),
|
||||
})
|
||||
}
|
||||
|
||||
// UpdateProgress updates the progress of an enrollment
|
||||
func (h *AcademyHandler) UpdateProgress(c *gin.Context) {
|
||||
id := c.Param("id")
|
||||
|
||||
var req UpdateProgressRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
|
||||
return
|
||||
}
|
||||
|
||||
enrollment, err := h.academyStore.GetEnrollment(id)
|
||||
if err != nil {
|
||||
ErrorResponse(c, http.StatusNotFound, "Enrollment not found", "ENROLLMENT_NOT_FOUND")
|
||||
return
|
||||
}
|
||||
|
||||
updates := map[string]interface{}{
|
||||
"progress": req.Progress,
|
||||
}
|
||||
|
||||
// Auto-update status based on progress
|
||||
if req.Progress >= 100 {
|
||||
updates["status"] = "completed"
|
||||
t := time.Now()
|
||||
updates["completedat"] = &t
|
||||
} else if req.Progress > 0 && enrollment.Status == "not_started" {
|
||||
updates["status"] = "in_progress"
|
||||
}
|
||||
|
||||
row, err := h.academyStore.UpdateEnrollment(id, updates)
|
||||
if err != nil {
|
||||
ErrorResponse(c, http.StatusInternalServerError, "Failed to update progress", "UPDATE_FAILED")
|
||||
return
|
||||
}
|
||||
|
||||
// Upsert lesson progress if lessonID provided
|
||||
if req.LessonID != "" {
|
||||
t := time.Now()
|
||||
h.academyStore.UpsertLessonProgress(&db.AcademyLessonProgressRow{
|
||||
EnrollmentID: id,
|
||||
LessonID: req.LessonID,
|
||||
Completed: true,
|
||||
CompletedAt: &t,
|
||||
})
|
||||
}
|
||||
|
||||
SuccessResponse(c, enrollmentRowToResponse(row))
|
||||
}
|
||||
|
||||
// CompleteEnrollment marks an enrollment as completed
|
||||
func (h *AcademyHandler) CompleteEnrollment(c *gin.Context) {
|
||||
id := c.Param("id")
|
||||
|
||||
t := time.Now()
|
||||
updates := map[string]interface{}{
|
||||
"status": "completed",
|
||||
"progress": 100,
|
||||
"completedat": &t,
|
||||
}
|
||||
|
||||
row, err := h.academyStore.UpdateEnrollment(id, updates)
|
||||
if err != nil {
|
||||
ErrorResponse(c, http.StatusNotFound, "Enrollment not found", "ENROLLMENT_NOT_FOUND")
|
||||
return
|
||||
}
|
||||
|
||||
SuccessResponse(c, enrollmentRowToResponse(row))
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Quiz
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
// SubmitQuiz evaluates quiz answers for a lesson
|
||||
func (h *AcademyHandler) SubmitQuiz(c *gin.Context) {
|
||||
lessonID := c.Param("id")
|
||||
|
||||
var req SubmitQuizRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
|
||||
return
|
||||
}
|
||||
|
||||
// Get the lesson
|
||||
lesson, err := h.academyStore.GetLesson(lessonID)
|
||||
if err != nil {
|
||||
ErrorResponse(c, http.StatusNotFound, "Lesson not found", "LESSON_NOT_FOUND")
|
||||
return
|
||||
}
|
||||
|
||||
// Get quiz questions
|
||||
questions := h.academyStore.ListQuizQuestions(lessonID)
|
||||
if len(questions) == 0 {
|
||||
ErrorResponse(c, http.StatusBadRequest, "No quiz questions found for this lesson", "NO_QUIZ_QUESTIONS")
|
||||
return
|
||||
}
|
||||
|
||||
if len(req.Answers) != len(questions) {
|
||||
ErrorResponse(c, http.StatusBadRequest,
|
||||
fmt.Sprintf("Expected %d answers, got %d", len(questions), len(req.Answers)),
|
||||
"ANSWER_COUNT_MISMATCH")
|
||||
return
|
||||
}
|
||||
|
||||
// Evaluate answers
|
||||
correctCount := 0
|
||||
results := make([]QuizQuestionResult, len(questions))
|
||||
for i, q := range questions {
|
||||
correct := req.Answers[i] == q.CorrectOptionIndex
|
||||
if correct {
|
||||
correctCount++
|
||||
}
|
||||
results[i] = QuizQuestionResult{
|
||||
QuestionID: q.ID,
|
||||
Correct: correct,
|
||||
Explanation: q.Explanation,
|
||||
}
|
||||
}
|
||||
|
||||
score := 0
|
||||
if len(questions) > 0 {
|
||||
score = int(float64(correctCount) / float64(len(questions)) * 100)
|
||||
}
|
||||
|
||||
// Determine pass/fail based on course's passing score
|
||||
passingScore := 70 // default
|
||||
course, err := h.academyStore.GetCourse(lesson.CourseID)
|
||||
if err == nil && course.PassingScore > 0 {
|
||||
passingScore = course.PassingScore
|
||||
}
|
||||
|
||||
SuccessResponse(c, SubmitQuizResponse{
|
||||
Score: score,
|
||||
Passed: score >= passingScore,
|
||||
CorrectAnswers: correctCount,
|
||||
TotalQuestions: len(questions),
|
||||
Results: results,
|
||||
})
|
||||
}

// ---------------------------------------------------------------------------
// Certificates
// ---------------------------------------------------------------------------

// GenerateCertificateEndpoint generates a certificate for a completed enrollment
func (h *AcademyHandler) GenerateCertificateEndpoint(c *gin.Context) {
	enrollmentID := c.Param("id")

	enrollment, err := h.academyStore.GetEnrollment(enrollmentID)
	if err != nil {
		ErrorResponse(c, http.StatusNotFound, "Enrollment not found", "ENROLLMENT_NOT_FOUND")
		return
	}

	// If a certificate already exists, return it instead of issuing a new one
	if enrollment.CertificateID != "" {
		existing, err := h.academyStore.GetCertificate(enrollment.CertificateID)
		if err == nil {
			SuccessResponse(c, certificateRowToResponse(existing))
			return
		}
	}

	// Resolve the course name for the certificate
	courseName := "Unbekannter Kurs"
	course, err := h.academyStore.GetCourse(enrollment.CourseID)
	if err == nil {
		courseName = course.Title
	}

	issuedAt := time.Now()
	validUntil := issuedAt.AddDate(1, 0, 0) // 1 year validity

	cert := h.academyStore.CreateCertificate(&db.AcademyCertificateRow{
		TenantID:     enrollment.TenantID,
		EnrollmentID: enrollmentID,
		CourseID:     enrollment.CourseID,
		UserID:       enrollment.UserID,
		UserName:     enrollment.UserName,
		CourseName:   courseName,
		Score:        enrollment.Progress,
		IssuedAt:     issuedAt,
		ValidUntil:   validUntil,
	})

	// Link the new certificate to the enrollment
	h.academyStore.UpdateEnrollment(enrollmentID, map[string]interface{}{
		"certificateid": cert.ID,
	})

	c.JSON(http.StatusCreated, Response{
		Success: true,
		Data:    certificateRowToResponse(cert),
	})
}

// GetCertificate returns a certificate by ID
func (h *AcademyHandler) GetCertificate(c *gin.Context) {
	id := c.Param("id")

	cert, err := h.academyStore.GetCertificate(id)
	if err != nil {
		ErrorResponse(c, http.StatusNotFound, "Certificate not found", "CERTIFICATE_NOT_FOUND")
		return
	}

	SuccessResponse(c, certificateRowToResponse(cert))
}

// DownloadCertificatePDF returns the PDF for a certificate
func (h *AcademyHandler) DownloadCertificatePDF(c *gin.Context) {
	id := c.Param("id")

	cert, err := h.academyStore.GetCertificate(id)
	if err != nil {
		ErrorResponse(c, http.StatusNotFound, "Certificate not found", "CERTIFICATE_NOT_FOUND")
		return
	}

	if cert.PdfURL != "" {
		c.Redirect(http.StatusFound, cert.PdfURL)
		return
	}

	// Generate the PDF on the fly
	pdfBytes, err := academy.GenerateCertificatePDF(academy.CertificateData{
		CertificateID: cert.ID,
		UserName:      cert.UserName,
		CourseName:    cert.CourseName,
		CompanyName:   "",
		Score:         cert.Score,
		IssuedAt:      cert.IssuedAt,
		ValidUntil:    cert.ValidUntil,
	})
	if err != nil {
		ErrorResponse(c, http.StatusInternalServerError, "Failed to generate PDF", "PDF_GENERATION_FAILED")
		return
	}

	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=zertifikat-%s.pdf", cert.ID[:min(8, len(cert.ID))]))
	c.Data(http.StatusOK, "application/pdf", pdfBytes)
}

// ---------------------------------------------------------------------------
// AI Course Generation
// ---------------------------------------------------------------------------

// GenerateCourse generates a course using AI
func (h *AcademyHandler) GenerateCourse(c *gin.Context) {
	var req GenerateCourseRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
		return
	}

	// Gather RAG context if requested
	var ragSources []SearchResult
	if req.UseRAG && h.ragService != nil {
		query := req.RAGQuery
		if query == "" {
			query = req.Topic + " Compliance Schulung"
		}
		results, _ := h.ragService.Search(c.Request.Context(), query, 5, "legal_corpus", "")
		for _, r := range results {
			ragSources = append(ragSources, SearchResult{
				ID:       r.ID,
				Content:  r.Content,
				Source:   r.Source,
				Score:    r.Score,
				Metadata: r.Metadata,
			})
		}
	}

	// Generate course content (mock for now)
	course := h.generateMockCourse(req)

	// Persist the course, its lessons, and their quiz questions
	courseRow := h.academyStore.CreateCourse(&db.AcademyCourseRow{
		TenantID:         req.TenantID,
		Title:            course.Title,
		Description:      course.Description,
		Category:         req.Category,
		PassingScore:     70,
		DurationMinutes:  course.DurationMinutes,
		RequiredForRoles: []string{"all"},
		Status:           "draft",
	})

	for _, lesson := range course.Lessons {
		lessonRow := h.academyStore.CreateLesson(&db.AcademyLessonRow{
			CourseID:        courseRow.ID,
			Title:           lesson.Title,
			Type:            lesson.Type,
			ContentMarkdown: lesson.ContentMarkdown,
			SortOrder:       lesson.Order,
			DurationMinutes: lesson.DurationMinutes,
		})

		for _, q := range lesson.QuizQuestions {
			h.academyStore.CreateQuizQuestion(&db.AcademyQuizQuestionRow{
				LessonID:           lessonRow.ID,
				Question:           q.Question,
				Options:            q.Options,
				CorrectOptionIndex: q.CorrectOptionIndex,
				Explanation:        q.Explanation,
				SortOrder:          q.Order,
			})
		}
	}

	lessons := h.buildLessonsForCourse(courseRow.ID)
	c.JSON(http.StatusCreated, Response{
		Success: true,
		Data: gin.H{
			"course":     courseRowToResponse(courseRow, lessons),
			"ragSources": ragSources,
			"model":      h.llmService.GetModel(),
		},
	})
}

// RegenerateLesson regenerates a single lesson using AI
func (h *AcademyHandler) RegenerateLesson(c *gin.Context) {
	lessonID := c.Param("id")

	_, err := h.academyStore.GetLesson(lessonID)
	if err != nil {
		ErrorResponse(c, http.StatusNotFound, "Lesson not found", "LESSON_NOT_FOUND")
		return
	}

	// For now, only acknowledge the request; regeneration is not yet implemented
	SuccessResponse(c, gin.H{
		"lessonId": lessonID,
		"status":   "regeneration_pending",
		"message":  "AI lesson regeneration will be available in a future version",
	})
}

// ---------------------------------------------------------------------------
// Video Generation
// ---------------------------------------------------------------------------

// GenerateVideos initiates video generation for all lessons in a course
func (h *AcademyHandler) GenerateVideos(c *gin.Context) {
	courseID := c.Param("id")

	_, err := h.academyStore.GetCourse(courseID)
	if err != nil {
		ErrorResponse(c, http.StatusNotFound, "Course not found", "COURSE_NOT_FOUND")
		return
	}

	lessons := h.academyStore.ListLessons(courseID)
	lessonStatuses := make([]LessonVideoStatus, 0, len(lessons))
	for _, l := range lessons {
		if l.Type == "text" || l.Type == "video" {
			lessonStatuses = append(lessonStatuses, LessonVideoStatus{
				LessonID: l.ID,
				Status:   "pending",
			})
		}
	}

	SuccessResponse(c, VideoStatusResponse{
		CourseID: courseID,
		Status:   "pending",
		Lessons:  lessonStatuses,
	})
}

// GetVideoStatus returns the video generation status for a course
func (h *AcademyHandler) GetVideoStatus(c *gin.Context) {
	courseID := c.Param("id")

	_, err := h.academyStore.GetCourse(courseID)
	if err != nil {
		ErrorResponse(c, http.StatusNotFound, "Course not found", "COURSE_NOT_FOUND")
		return
	}

	lessons := h.academyStore.ListLessons(courseID)
	lessonStatuses := make([]LessonVideoStatus, 0, len(lessons))
	for _, l := range lessons {
		status := LessonVideoStatus{
			LessonID: l.ID,
			Status:   "not_started",
			VideoURL: l.VideoURL,
			AudioURL: l.AudioURL,
		}
		if l.VideoURL != "" {
			status.Status = "completed"
		}
		lessonStatuses = append(lessonStatuses, status)
	}

	// Reduce the per-lesson states to an overall course status
	overallStatus := "not_started"
	hasCompleted := false
	hasPending := false
	for _, s := range lessonStatuses {
		if s.Status == "completed" {
			hasCompleted = true
		} else {
			hasPending = true
		}
	}
	if hasCompleted && !hasPending {
		overallStatus = "completed"
	} else if hasCompleted && hasPending {
		overallStatus = "processing"
	}

	SuccessResponse(c, VideoStatusResponse{
		CourseID: courseID,
		Status:   overallStatus,
		Lessons:  lessonStatuses,
	})
}
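The status aggregation in GetVideoStatus reduces per-lesson states to one course-level value: all lessons completed yields "completed", a mix yields "processing", anything else stays "not_started". The same reduction extracted into a standalone sketch (`overallStatus` as a function is illustrative; the handler inlines this logic):

```go
package main

import "fmt"

// overallStatus reduces per-lesson video states the same way the
// handler does: all completed -> "completed", a mix of completed and
// anything else -> "processing", otherwise -> "not_started".
func overallStatus(lessonStates []string) string {
	hasCompleted, hasPending := false, false
	for _, s := range lessonStates {
		if s == "completed" {
			hasCompleted = true
		} else {
			hasPending = true
		}
	}
	switch {
	case hasCompleted && !hasPending:
		return "completed"
	case hasCompleted && hasPending:
		return "processing"
	default:
		return "not_started"
	}
}

func main() {
	fmt.Println(overallStatus([]string{"completed", "completed"}))   // completed
	fmt.Println(overallStatus([]string{"completed", "not_started"})) // processing
	fmt.Println(overallStatus(nil))                                  // not_started
}
```

Note the edge case: a course with no lessons reports "not_started", which matches the handler's behavior since both flags stay false.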

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

func (h *AcademyHandler) buildLessonsForCourse(courseID string) []AcademyLesson {
	lessonRows := h.academyStore.ListLessons(courseID)
	lessons := make([]AcademyLesson, 0, len(lessonRows))
	for _, lr := range lessonRows {
		var questions []AcademyQuizQuestion
		if lr.Type == "quiz" {
			qRows := h.academyStore.ListQuizQuestions(lr.ID)
			questions = make([]AcademyQuizQuestion, 0, len(qRows))
			for _, qr := range qRows {
				questions = append(questions, quizQuestionRowToResponse(qr))
			}
		}
		lessons = append(lessons, lessonRowToResponse(lr, questions))
	}
	return lessons
}

func courseRowToResponse(row *db.AcademyCourseRow, lessons []AcademyLesson) AcademyCourse {
	return AcademyCourse{
		ID:               row.ID,
		TenantID:         row.TenantID,
		Title:            row.Title,
		Description:      row.Description,
		Category:         row.Category,
		PassingScore:     row.PassingScore,
		DurationMinutes:  row.DurationMinutes,
		RequiredForRoles: row.RequiredForRoles,
		Status:           row.Status,
		Lessons:          lessons,
		CreatedAt:        row.CreatedAt.Format(time.RFC3339),
		UpdatedAt:        row.UpdatedAt.Format(time.RFC3339),
	}
}

func lessonRowToResponse(row *db.AcademyLessonRow, questions []AcademyQuizQuestion) AcademyLesson {
	return AcademyLesson{
		ID:              row.ID,
		CourseID:        row.CourseID,
		Title:           row.Title,
		Type:            row.Type,
		ContentMarkdown: row.ContentMarkdown,
		VideoURL:        row.VideoURL,
		AudioURL:        row.AudioURL,
		Order:           row.SortOrder,
		DurationMinutes: row.DurationMinutes,
		QuizQuestions:   questions,
	}
}

func quizQuestionRowToResponse(row *db.AcademyQuizQuestionRow) AcademyQuizQuestion {
	return AcademyQuizQuestion{
		ID:                 row.ID,
		LessonID:           row.LessonID,
		Question:           row.Question,
		Options:            row.Options,
		CorrectOptionIndex: row.CorrectOptionIndex,
		Explanation:        row.Explanation,
		Order:              row.SortOrder,
	}
}

func enrollmentRowToResponse(row *db.AcademyEnrollmentRow) AcademyEnrollment {
	e := AcademyEnrollment{
		ID:            row.ID,
		TenantID:      row.TenantID,
		CourseID:      row.CourseID,
		UserID:        row.UserID,
		UserName:      row.UserName,
		UserEmail:     row.UserEmail,
		Status:        row.Status,
		Progress:      row.Progress,
		StartedAt:     row.StartedAt.Format(time.RFC3339),
		CertificateID: row.CertificateID,
		Deadline:      row.Deadline.Format(time.RFC3339),
		CreatedAt:     row.CreatedAt.Format(time.RFC3339),
		UpdatedAt:     row.UpdatedAt.Format(time.RFC3339),
	}
	if row.CompletedAt != nil {
		e.CompletedAt = row.CompletedAt.Format(time.RFC3339)
	}
	return e
}

func certificateRowToResponse(row *db.AcademyCertificateRow) AcademyCertificate {
	return AcademyCertificate{
		ID:           row.ID,
		TenantID:     row.TenantID,
		EnrollmentID: row.EnrollmentID,
		CourseID:     row.CourseID,
		UserID:       row.UserID,
		UserName:     row.UserName,
		CourseName:   row.CourseName,
		Score:        row.Score,
		IssuedAt:     row.IssuedAt.Format(time.RFC3339),
		ValidUntil:   row.ValidUntil.Format(time.RFC3339),
		PdfURL:       row.PdfURL,
	}
}

// ---------------------------------------------------------------------------
// Mock Course Generator (used when LLM is not available)
// ---------------------------------------------------------------------------

func (h *AcademyHandler) generateMockCourse(req GenerateCourseRequest) AcademyCourse {
	switch req.Category {
	case "dsgvo_basics":
		return h.mockDSGVOCourse(req)
	case "it_security":
		return h.mockITSecurityCourse(req)
	case "ai_literacy":
		return h.mockAILiteracyCourse(req)
	case "whistleblower_protection":
		return h.mockWhistleblowerCourse(req)
	default:
		return h.mockDSGVOCourse(req)
	}
}

func (h *AcademyHandler) mockDSGVOCourse(req GenerateCourseRequest) AcademyCourse {
	return AcademyCourse{
		Title:           "DSGVO-Grundlagen fuer Mitarbeiter",
		Description:     "Umfassende Einfuehrung in die Datenschutz-Grundverordnung. Vermittelt die wichtigsten Grundsaetze des Datenschutzes, Betroffenenrechte und die korrekte Handhabung personenbezogener Daten.",
		DurationMinutes: 90,
		Lessons: []AcademyLesson{
			{
				Title:           "Was ist die DSGVO?",
				Type:            "text",
				Order:           1,
				DurationMinutes: 15,
				ContentMarkdown: "# Was ist die DSGVO?\n\nDie Datenschutz-Grundverordnung (DSGVO) ist eine Verordnung der EU, die seit dem 25. Mai 2018 gilt. Sie schuetzt die Grundrechte natuerlicher Personen bei der Verarbeitung personenbezogener Daten.\n\n## Warum ist die DSGVO wichtig?\n\n- **Einheitlicher Datenschutz** in der gesamten EU\n- **Hohe Bussgelder** bei Verstoessen (bis 20 Mio. EUR oder 4% des Jahresumsatzes)\n- **Staerkung der Betroffenenrechte** (Auskunft, Loeschung, Widerspruch)\n\n## Zentrale Begriffe\n\n- **Personenbezogene Daten**: Alle Informationen, die sich auf eine identifizierte oder identifizierbare Person beziehen\n- **Verantwortlicher**: Die Stelle, die ueber Zweck und Mittel der Verarbeitung entscheidet\n- **Auftragsverarbeiter**: Verarbeitet Daten im Auftrag des Verantwortlichen",
			},
			{
				Title:           "Die 7 Grundsaetze der DSGVO",
				Type:            "text",
				Order:           2,
				DurationMinutes: 20,
				ContentMarkdown: "# Die 7 Grundsaetze der DSGVO (Art. 5)\n\n## 1. Rechtmaessigkeit, Verarbeitung nach Treu und Glauben, Transparenz\nPersonenbezogene Daten muessen auf rechtmaessige Weise verarbeitet werden.\n\n## 2. Zweckbindung\nDaten duerfen nur fuer festgelegte, eindeutige und legitime Zwecke erhoben werden.\n\n## 3. Datenminimierung\nEs duerfen nur Daten erhoben werden, die fuer den Zweck erforderlich sind.\n\n## 4. Richtigkeit\nDaten muessen sachlich richtig und auf dem neuesten Stand sein.\n\n## 5. Speicherbegrenzung\nDaten duerfen nur so lange gespeichert werden, wie es fuer den Zweck erforderlich ist.\n\n## 6. Integritaet und Vertraulichkeit\nDaten muessen vor unbefugtem Zugriff geschuetzt werden.\n\n## 7. Rechenschaftspflicht\nDer Verantwortliche muss die Einhaltung der Grundsaetze nachweisen koennen.",
			},
			{
				Title:           "Betroffenenrechte (Art. 15-22 DSGVO)",
				Type:            "text",
				Order:           3,
				DurationMinutes: 20,
				ContentMarkdown: "# Betroffenenrechte\n\n## Recht auf Auskunft (Art. 15)\nJede Person hat das Recht zu erfahren, ob und welche Daten ueber sie verarbeitet werden.\n\n## Recht auf Berichtigung (Art. 16)\nUnrichtige Daten muessen berichtigt werden.\n\n## Recht auf Loeschung (Art. 17)\nDas 'Recht auf Vergessenwerden' ermoeglicht die Loeschung personenbezogener Daten.\n\n## Recht auf Einschraenkung (Art. 18)\nBetroffene koennen die Verarbeitung einschraenken lassen.\n\n## Recht auf Datenuebertragbarkeit (Art. 20)\nDaten muessen in einem maschinenlesbaren Format bereitgestellt werden.\n\n## Widerspruchsrecht (Art. 21)\nBetroffene koennen der Verarbeitung widersprechen.",
			},
			{
				Title:           "Datenschutz im Arbeitsalltag",
				Type:            "text",
				Order:           4,
				DurationMinutes: 15,
				ContentMarkdown: "# Datenschutz im Arbeitsalltag\n\n## E-Mails\n- Keine personenbezogenen Daten unverschluesselt versenden\n- BCC statt CC bei Massenversand\n- Vorsicht bei Anhangen\n\n## Bildschirmsperre\n- Computer bei Abwesenheit sperren (Win+L / Cmd+Ctrl+Q)\n- Automatische Sperre nach 5 Minuten\n\n## Clean Desk Policy\n- Keine sensiblen Dokumente offen liegen lassen\n- Aktenvernichter fuer Papierdokumente\n\n## Homeoffice\n- VPN nutzen\n- Kein oeffentliches WLAN fuer Firmendaten\n- Bildschirm vor Mitlesern schuetzen\n\n## Datenpannen melden\n- **Sofort** den Datenschutzbeauftragten informieren\n- Innerhalb von 72 Stunden an die Aufsichtsbehoerde\n- Dokumentation der Panne",
			},
			{
				Title:           "Wissenstest: DSGVO-Grundlagen",
				Type:            "quiz",
				Order:           5,
				DurationMinutes: 20,
				QuizQuestions: []AcademyQuizQuestion{
					{
						Question:           "Seit wann gilt die DSGVO?",
						Options:            []string{"1. Januar 2016", "25. Mai 2018", "1. Januar 2020", "25. Mai 2020"},
						CorrectOptionIndex: 1,
						Explanation:        "Die DSGVO gilt seit dem 25. Mai 2018 in allen EU-Mitgliedstaaten.",
						Order:              1,
					},
					{
						Question:           "Was sind personenbezogene Daten?",
						Options:            []string{"Nur Name und Adresse", "Alle Informationen, die sich auf eine identifizierbare Person beziehen", "Nur digitale Daten", "Nur sensible Gesundheitsdaten"},
						CorrectOptionIndex: 1,
						Explanation:        "Personenbezogene Daten umfassen alle Informationen, die sich auf eine identifizierte oder identifizierbare natuerliche Person beziehen.",
						Order:              2,
					},
					{
						Question:           "Wie hoch kann das Bussgeld bei DSGVO-Verstoessen maximal sein?",
						Options:            []string{"1 Mio. EUR", "5 Mio. EUR", "10 Mio. EUR oder 2% des Jahresumsatzes", "20 Mio. EUR oder 4% des Jahresumsatzes"},
						CorrectOptionIndex: 3,
						Explanation:        "Bei schwerwiegenden Verstoessen koennen Bussgelder von bis zu 20 Mio. EUR oder 4% des weltweiten Jahresumsatzes verhaengt werden.",
						Order:              3,
					},
					{
						Question:           "Was bedeutet das Prinzip der Datenminimierung?",
						Options:            []string{"Alle Daten muessen verschluesselt werden", "Es duerfen nur fuer den Zweck erforderliche Daten erhoben werden", "Daten muessen nach 30 Tagen geloescht werden", "Nur Administratoren duerfen auf Daten zugreifen"},
						CorrectOptionIndex: 1,
						Explanation:        "Datenminimierung bedeutet, dass nur die fuer den jeweiligen Zweck erforderlichen Daten erhoben und verarbeitet werden duerfen.",
						Order:              4,
					},
					{
						Question:           "Innerhalb welcher Frist muss eine Datenpanne der Aufsichtsbehoerde gemeldet werden?",
						Options:            []string{"24 Stunden", "48 Stunden", "72 Stunden", "7 Tage"},
						CorrectOptionIndex: 2,
						Explanation:        "Gemaess Art. 33 DSGVO muss eine Datenpanne innerhalb von 72 Stunden nach Bekanntwerden der Aufsichtsbehoerde gemeldet werden.",
						Order:              5,
					},
				},
			},
		},
	}
}

func (h *AcademyHandler) mockITSecurityCourse(req GenerateCourseRequest) AcademyCourse {
	return AcademyCourse{
		Title:           "IT-Sicherheit & Cybersecurity Awareness",
		Description:     "Sensibilisierung fuer IT-Sicherheitsrisiken und Best Practices im Umgang mit Phishing, Passwoertern und Social Engineering.",
		DurationMinutes: 60,
		Lessons: []AcademyLesson{
			{Title: "Phishing erkennen und vermeiden", Type: "text", Order: 1, DurationMinutes: 15,
				ContentMarkdown: "# Phishing erkennen\n\n## Typische Merkmale\n- Dringlichkeit ('Ihr Konto wird gesperrt!')\n- Unbekannter Absender\n- Verdaechtige Links\n- Rechtschreibfehler\n\n## Was tun bei Verdacht?\n1. Link NICHT anklicken\n2. Anhang NICHT oeffnen\n3. IT-Sicherheit informieren"},
			{Title: "Sichere Passwoerter und MFA", Type: "text", Order: 2, DurationMinutes: 15,
				ContentMarkdown: "# Sichere Passwoerter\n\n## Regeln\n- Mindestens 12 Zeichen\n- Gross-/Kleinbuchstaben, Zahlen, Sonderzeichen\n- Fuer jeden Dienst ein eigenes Passwort\n- Passwort-Manager verwenden\n\n## Multi-Faktor-Authentifizierung\n- Immer aktivieren wenn moeglich\n- App-basiert (z.B. Microsoft Authenticator) bevorzugen"},
			{Title: "Social Engineering", Type: "text", Order: 3, DurationMinutes: 15,
				ContentMarkdown: "# Social Engineering\n\nAngreifer nutzen menschliche Schwaechen aus.\n\n## Methoden\n- **Pretexting**: Falsche Identitaet vortaeuschen\n- **Tailgating**: Unbefugter Zutritt durch Hinterherfolgen\n- **CEO Fraud**: Gefaelschte Anweisungen vom Vorgesetzten\n\n## Schutz\n- Identitaet immer verifizieren\n- Bei Unsicherheit nachfragen"},
			{Title: "Wissenstest: IT-Sicherheit", Type: "quiz", Order: 4, DurationMinutes: 15,
				QuizQuestions: []AcademyQuizQuestion{
					{Question: "Was ist ein typisches Merkmal einer Phishing-E-Mail?", Options: []string{"Professionelles Design", "Kuenstliche Dringlichkeit", "Bekannter Absender", "Kurzer Text"}, CorrectOptionIndex: 1, Explanation: "Phishing-Mails erzeugen oft kuenstliche Dringlichkeit.", Order: 1},
					{Question: "Wie lang sollte ein sicheres Passwort mindestens sein?", Options: []string{"6 Zeichen", "8 Zeichen", "10 Zeichen", "12 Zeichen"}, CorrectOptionIndex: 3, Explanation: "Mindestens 12 Zeichen werden empfohlen.", Order: 2},
					{Question: "Was ist CEO Fraud?", Options: []string{"Hacker-Angriff auf Server", "Gefaelschte Anweisung vom Vorgesetzten", "Virus in E-Mail-Anhang", "DDoS-Attacke"}, CorrectOptionIndex: 1, Explanation: "CEO Fraud ist eine Social-Engineering-Methode mit gefaelschten Anweisungen.", Order: 3},
				}},
		},
	}
}

func (h *AcademyHandler) mockAILiteracyCourse(req GenerateCourseRequest) AcademyCourse {
	return AcademyCourse{
		Title:           "AI Literacy - Sicherer Umgang mit KI",
		Description:     "Grundlagen kuenstlicher Intelligenz, EU AI Act und verantwortungsvoller Einsatz von KI-Werkzeugen im Unternehmen.",
		DurationMinutes: 75,
		Lessons: []AcademyLesson{
			{Title: "Was ist Kuenstliche Intelligenz?", Type: "text", Order: 1, DurationMinutes: 15,
				ContentMarkdown: "# Was ist KI?\n\nKuenstliche Intelligenz (KI) bezeichnet Systeme, die menschenaehnliche kognitive Faehigkeiten zeigen.\n\n## Arten von KI\n- **Machine Learning**: Lernt aus Daten\n- **Deep Learning**: Neuronale Netze\n- **Generative AI**: Erstellt neue Inhalte (Text, Bild)\n- **LLMs**: Large Language Models wie ChatGPT"},
			{Title: "Der EU AI Act", Type: "text", Order: 2, DurationMinutes: 20,
				ContentMarkdown: "# EU AI Act\n\n## Risikoklassen\n- **Unakzeptabel**: Social Scoring, Manipulation\n- **Hochrisiko**: Bildung, HR, Kritische Infrastruktur\n- **Begrenzt**: Chatbots, Empfehlungssysteme\n- **Minimal**: Spam-Filter\n\n## Art. 4: AI Literacy Pflicht\nAlle Mitarbeiter, die KI-Systeme nutzen, muessen geschult werden."},
			{Title: "KI sicher im Unternehmen nutzen", Type: "text", Order: 3, DurationMinutes: 20,
				ContentMarkdown: "# KI sicher nutzen\n\n## Dos\n- Ergebnisse immer pruefen\n- Keine vertraulichen Daten eingeben\n- Firmenpolicies beachten\n\n## Don'ts\n- Blindes Vertrauen in KI-Ergebnisse\n- Personenbezogene Daten in externe KI-Tools\n- KI-generierte Inhalte ohne Pruefung veroeffentlichen"},
			{Title: "Wissenstest: AI Literacy", Type: "quiz", Order: 4, DurationMinutes: 20,
				QuizQuestions: []AcademyQuizQuestion{
					{Question: "Was verlangt Art. 4 des EU AI Acts?", Options: []string{"Verbot aller KI-Systeme", "AI Literacy Schulung fuer KI-Nutzer", "Nur Open-Source KI erlaubt", "KI nur in der IT-Abteilung"}, CorrectOptionIndex: 1, Explanation: "Art. 4 EU AI Act fordert AI Literacy fuer alle Mitarbeiter, die KI-Systeme nutzen.", Order: 1},
					{Question: "Duerfen vertrauliche Firmendaten in externe KI-Tools eingegeben werden?", Options: []string{"Ja, immer", "Nur in ChatGPT", "Nein, grundsaetzlich nicht", "Nur mit VPN"}, CorrectOptionIndex: 2, Explanation: "Vertrauliche Daten duerfen nicht in externe KI-Tools eingegeben werden.", Order: 2},
				}},
		},
	}
}

func (h *AcademyHandler) mockWhistleblowerCourse(req GenerateCourseRequest) AcademyCourse {
	return AcademyCourse{
		Title:           "Hinweisgeberschutz (HinSchG)",
		Description:     "Einfuehrung in das Hinweisgeberschutzgesetz, interne Meldewege und Schutz von Whistleblowern.",
		DurationMinutes: 45,
		Lessons: []AcademyLesson{
			{Title: "Das Hinweisgeberschutzgesetz", Type: "text", Order: 1, DurationMinutes: 15,
				ContentMarkdown: "# Hinweisgeberschutzgesetz (HinSchG)\n\nSeit Juli 2023 muessen Unternehmen ab 50 Mitarbeitern interne Meldestellen einrichten.\n\n## Was ist geschuetzt?\n- Meldungen ueber Rechtsverstoesse\n- Verstoesse gegen EU-Recht\n- Straftaten und Ordnungswidrigkeiten"},
			{Title: "Interne Meldewege", Type: "text", Order: 2, DurationMinutes: 15,
				ContentMarkdown: "# Interne Meldewege\n\n## Wie melde ich einen Verstoss?\n1. **Interne Meldestelle** (bevorzugt)\n2. **Externe Meldestelle** (BfJ)\n3. **Offenlegung** (nur als letztes Mittel)\n\n## Schutz fuer Hinweisgeber\n- Kuendigungsschutz\n- Keine Benachteiligung\n- Vertraulichkeit"},
			{Title: "Wissenstest: Hinweisgeberschutz", Type: "quiz", Order: 3, DurationMinutes: 15,
				QuizQuestions: []AcademyQuizQuestion{
					{Question: "Ab wie vielen Mitarbeitern muessen Unternehmen eine Meldestelle einrichten?", Options: []string{"10", "25", "50", "250"}, CorrectOptionIndex: 2, Explanation: "Unternehmen ab 50 Beschaeftigten muessen eine interne Meldestelle einrichten.", Order: 1},
					{Question: "Welche Meldung ist NICHT durch das HinSchG geschuetzt?", Options: []string{"Straftaten", "Verstoesse gegen EU-Recht", "Persoenliche Beschwerden ueber Kollegen", "Umweltverstoesse"}, CorrectOptionIndex: 2, Explanation: "Persoenliche Konflikte fallen nicht unter das HinSchG.", Order: 2},
				}},
		},
	}
}

209 admin-v2/ai-compliance-sdk/internal/api/academy_models.go (Normal file)
@@ -0,0 +1,209 @@

package api

// Academy Course models

// AcademyCourse represents a training course in the Academy module
type AcademyCourse struct {
	ID               string          `json:"id"`
	TenantID         string          `json:"tenantId,omitempty"`
	Title            string          `json:"title"`
	Description      string          `json:"description"`
	Category         string          `json:"category"`
	PassingScore     int             `json:"passingScore"`
	DurationMinutes  int             `json:"durationMinutes"`
	RequiredForRoles []string        `json:"requiredForRoles"`
	Status           string          `json:"status"`
	Lessons          []AcademyLesson `json:"lessons"`
	CreatedAt        string          `json:"createdAt"`
	UpdatedAt        string          `json:"updatedAt"`
}

// AcademyLesson represents a single lesson within a course
type AcademyLesson struct {
	ID              string                `json:"id"`
	CourseID        string                `json:"courseId"`
	Title           string                `json:"title"`
	Type            string                `json:"type"` // video, text, quiz
	ContentMarkdown string                `json:"contentMarkdown"`
	VideoURL        string                `json:"videoUrl,omitempty"`
	AudioURL        string                `json:"audioUrl,omitempty"`
	Order           int                   `json:"order"`
	DurationMinutes int                   `json:"durationMinutes"`
	QuizQuestions   []AcademyQuizQuestion `json:"quizQuestions,omitempty"`
}

// AcademyQuizQuestion represents a single quiz question within a lesson
type AcademyQuizQuestion struct {
	ID                 string   `json:"id"`
	LessonID           string   `json:"lessonId"`
	Question           string   `json:"question"`
	Options            []string `json:"options"`
	CorrectOptionIndex int      `json:"correctOptionIndex"`
	Explanation        string   `json:"explanation"`
	Order              int      `json:"order"`
}

// AcademyEnrollment represents a user's enrollment in a course
type AcademyEnrollment struct {
	ID            string `json:"id"`
	TenantID      string `json:"tenantId,omitempty"`
	CourseID      string `json:"courseId"`
	UserID        string `json:"userId"`
	UserName      string `json:"userName"`
	UserEmail     string `json:"userEmail"`
	Status        string `json:"status"`   // not_started, in_progress, completed, expired
	Progress      int    `json:"progress"` // 0-100
	StartedAt     string `json:"startedAt"`
	CompletedAt   string `json:"completedAt,omitempty"`
	CertificateID string `json:"certificateId,omitempty"`
	Deadline      string `json:"deadline"`
	CreatedAt     string `json:"createdAt,omitempty"`
	UpdatedAt     string `json:"updatedAt,omitempty"`
}

// AcademyCertificate represents a certificate issued upon course completion
type AcademyCertificate struct {
	ID           string `json:"id"`
	TenantID     string `json:"tenantId,omitempty"`
	EnrollmentID string `json:"enrollmentId"`
	CourseID     string `json:"courseId"`
	UserID       string `json:"userId"`
	UserName     string `json:"userName"`
	CourseName   string `json:"courseName"`
	Score        int    `json:"score"`
	IssuedAt     string `json:"issuedAt"`
	ValidUntil   string `json:"validUntil"`
	PdfURL       string `json:"pdfUrl,omitempty"`
}

// AcademyLessonProgress tracks a user's progress through a single lesson
type AcademyLessonProgress struct {
	ID           string `json:"id"`
	EnrollmentID string `json:"enrollmentId"`
	LessonID     string `json:"lessonId"`
	Completed    bool   `json:"completed"`
	QuizScore    *int   `json:"quizScore,omitempty"`
	CompletedAt  string `json:"completedAt,omitempty"`
}

// AcademyStatistics provides aggregate statistics for the Academy module
type AcademyStatistics struct {
	TotalCourses     int            `json:"totalCourses"`
	TotalEnrollments int            `json:"totalEnrollments"`
	CompletionRate   int            `json:"completionRate"`
	OverdueCount     int            `json:"overdueCount"`
	ByCategory       map[string]int `json:"byCategory"`
	ByStatus         map[string]int `json:"byStatus"`
}

// Request types

// CreateCourseRequest is the request body for creating a new course
type CreateCourseRequest struct {
	TenantID         string                `json:"tenantId" binding:"required"`
	Title            string                `json:"title" binding:"required"`
	Description      string                `json:"description"`
	Category         string                `json:"category" binding:"required"`
	DurationMinutes  int                   `json:"durationMinutes"`
	RequiredForRoles []string              `json:"requiredForRoles"`
	PassingScore     int                   `json:"passingScore"`
	Lessons          []CreateLessonRequest `json:"lessons"`
}

// CreateLessonRequest is the request body for creating a lesson within a course
type CreateLessonRequest struct {
	Title           string                      `json:"title" binding:"required"`
	Type            string                      `json:"type" binding:"required"`
	ContentMarkdown string                      `json:"contentMarkdown"`
	VideoURL        string                      `json:"videoUrl"`
	Order           int                         `json:"order"`
	DurationMinutes int                         `json:"durationMinutes"`
	QuizQuestions   []CreateQuizQuestionRequest `json:"quizQuestions"`
}

// CreateQuizQuestionRequest is the request body for creating a quiz question
type CreateQuizQuestionRequest struct {
	Question           string   `json:"question" binding:"required"`
	Options            []string `json:"options" binding:"required"`
	CorrectOptionIndex int      `json:"correctOptionIndex"`
	Explanation        string   `json:"explanation"`
	Order              int      `json:"order"`
}

// UpdateCourseRequest is the request body for updating an existing course
type UpdateCourseRequest struct {
	Title            *string  `json:"title"`
	Description      *string  `json:"description"`
	Category         *string  `json:"category"`
	DurationMinutes  *int     `json:"durationMinutes"`
	RequiredForRoles []string `json:"requiredForRoles"`
	PassingScore     *int     `json:"passingScore"`
}
|
||||
|
||||
// EnrollUserRequest is the request body for enrolling a user in a course
|
||||
type EnrollUserRequest struct {
|
||||
TenantID string `json:"tenantId" binding:"required"`
|
||||
CourseID string `json:"courseId" binding:"required"`
|
||||
UserID string `json:"userId" binding:"required"`
|
||||
UserName string `json:"userName" binding:"required"`
|
||||
UserEmail string `json:"userEmail" binding:"required"`
|
||||
Deadline string `json:"deadline" binding:"required"`
|
||||
}
|
||||
|
||||
// UpdateProgressRequest is the request body for updating enrollment progress
|
||||
type UpdateProgressRequest struct {
|
||||
Progress int `json:"progress"`
|
||||
LessonID string `json:"lessonId"`
|
||||
}
|
||||
|
||||
// SubmitQuizRequest is the request body for submitting quiz answers
|
||||
type SubmitQuizRequest struct {
|
||||
Answers []int `json:"answers" binding:"required"`
|
||||
}
|
||||
|
||||
// SubmitQuizResponse is the response for a quiz submission
|
||||
type SubmitQuizResponse struct {
|
||||
Score int `json:"score"`
|
||||
Passed bool `json:"passed"`
|
||||
CorrectAnswers int `json:"correctAnswers"`
|
||||
TotalQuestions int `json:"totalQuestions"`
|
||||
Results []QuizQuestionResult `json:"results"`
|
||||
}
|
||||
|
||||
// QuizQuestionResult represents the result of a single quiz question
|
||||
type QuizQuestionResult struct {
|
||||
QuestionID string `json:"questionId"`
|
||||
Correct bool `json:"correct"`
|
||||
Explanation string `json:"explanation"`
|
||||
}
|
||||
|
||||
// GenerateCourseRequest is the request body for AI-generating a course
|
||||
type GenerateCourseRequest struct {
|
||||
TenantID string `json:"tenantId" binding:"required"`
|
||||
Topic string `json:"topic" binding:"required"`
|
||||
Category string `json:"category" binding:"required"`
|
||||
TargetGroup string `json:"targetGroup"`
|
||||
Language string `json:"language"`
|
||||
UseRAG bool `json:"useRag"`
|
||||
RAGQuery string `json:"ragQuery"`
|
||||
}
|
||||
|
||||
// GenerateVideosRequest is the request body for generating lesson videos
|
||||
type GenerateVideosRequest struct {
|
||||
TenantID string `json:"tenantId" binding:"required"`
|
||||
}
|
||||
|
||||
// VideoStatusResponse represents the video generation status for a course
|
||||
type VideoStatusResponse struct {
|
||||
CourseID string `json:"courseId"`
|
||||
Status string `json:"status"` // pending, processing, completed, failed
|
||||
Lessons []LessonVideoStatus `json:"lessons"`
|
||||
}
|
||||
|
||||
// LessonVideoStatus represents the video generation status for a single lesson
|
||||
type LessonVideoStatus struct {
|
||||
LessonID string `json:"lessonId"`
|
||||
Status string `json:"status"`
|
||||
VideoURL string `json:"videoUrl,omitempty"`
|
||||
AudioURL string `json:"audioUrl,omitempty"`
|
||||
}
|
||||
327
admin-v2/ai-compliance-sdk/internal/api/checkpoint.go
Normal file
@@ -0,0 +1,327 @@
package api

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// Checkpoint represents a checkpoint definition
type Checkpoint struct {
	ID             string `json:"id"`
	Step           string `json:"step"`
	Name           string `json:"name"`
	Type           string `json:"type"`
	BlocksProgress bool   `json:"blocksProgress"`
	RequiresReview string `json:"requiresReview"`
	AutoValidate   bool   `json:"autoValidate"`
	Description    string `json:"description"`
}

// CheckpointHandler handles checkpoint-related requests
type CheckpointHandler struct {
	checkpoints map[string]Checkpoint
}

// NewCheckpointHandler creates a new checkpoint handler
func NewCheckpointHandler() *CheckpointHandler {
	return &CheckpointHandler{
		checkpoints: initCheckpoints(),
	}
}

func initCheckpoints() map[string]Checkpoint {
	return map[string]Checkpoint{
		"CP-UC": {
			ID:             "CP-UC",
			Step:           "use-case-workshop",
			Name:           "Use Case Erfassung",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "NONE",
			AutoValidate:   true,
			Description:    "Mindestens ein Use Case muss erfasst sein",
		},
		"CP-SCAN": {
			ID:             "CP-SCAN",
			Step:           "screening",
			Name:           "System Screening",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "NONE",
			AutoValidate:   true,
			Description:    "SBOM und Security Scan müssen abgeschlossen sein",
		},
		"CP-MOD": {
			ID:             "CP-MOD",
			Step:           "modules",
			Name:           "Modul-Zuweisung",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "NONE",
			AutoValidate:   true,
			Description:    "Mindestens ein Compliance-Modul muss zugewiesen sein",
		},
		"CP-REQ": {
			ID:             "CP-REQ",
			Step:           "requirements",
			Name:           "Anforderungen",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "NONE",
			AutoValidate:   true,
			Description:    "Anforderungen müssen aus Regulierungen abgeleitet sein",
		},
		"CP-CTRL": {
			ID:             "CP-CTRL",
			Step:           "controls",
			Name:           "Controls",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "NONE",
			AutoValidate:   true,
			Description:    "Controls müssen den Anforderungen zugeordnet sein",
		},
		"CP-EVI": {
			ID:             "CP-EVI",
			Step:           "evidence",
			Name:           "Nachweise",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "NONE",
			AutoValidate:   true,
			Description:    "Nachweise für Controls müssen dokumentiert sein",
		},
		"CP-CHK": {
			ID:             "CP-CHK",
			Step:           "audit-checklist",
			Name:           "Audit Checklist",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "NONE",
			AutoValidate:   true,
			Description:    "Prüfliste muss generiert und überprüft sein",
		},
		"CP-RISK": {
			ID:             "CP-RISK",
			Step:           "risks",
			Name:           "Risikobewertung",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "NONE",
			AutoValidate:   true,
			Description:    "Kritische Risiken müssen Mitigationsmaßnahmen haben",
		},
		"CP-AI": {
			ID:             "CP-AI",
			Step:           "ai-act",
			Name:           "AI Act Klassifizierung",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "LEGAL",
			AutoValidate:   false,
			Description:    "KI-System muss klassifiziert sein",
		},
		"CP-OBL": {
			ID:             "CP-OBL",
			Step:           "obligations",
			Name:           "Pflichtenübersicht",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "NONE",
			AutoValidate:   true,
			Description:    "Rechtliche Pflichten müssen identifiziert sein",
		},
		"CP-DSFA": {
			ID:             "CP-DSFA",
			Step:           "dsfa",
			Name:           "DSFA",
			Type:           "RECOMMENDED",
			BlocksProgress: false,
			RequiresReview: "DSB",
			AutoValidate:   false,
			Description:    "Datenschutz-Folgenabschätzung muss erstellt und genehmigt sein",
		},
		"CP-TOM": {
			ID:             "CP-TOM",
			Step:           "tom",
			Name:           "TOMs",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "NONE",
			AutoValidate:   true,
			Description:    "Technische und organisatorische Maßnahmen müssen definiert sein",
		},
		"CP-VVT": {
			ID:             "CP-VVT",
			Step:           "vvt",
			Name:           "Verarbeitungsverzeichnis",
			Type:           "REQUIRED",
			BlocksProgress: true,
			RequiresReview: "DSB",
			AutoValidate:   false,
			Description:    "Verarbeitungsverzeichnis muss vollständig sein",
		},
	}
}

// GetAll returns all checkpoint definitions
func (h *CheckpointHandler) GetAll(c *gin.Context) {
	tenantID := c.Query("tenantId")

	checkpointList := make([]Checkpoint, 0, len(h.checkpoints))
	for _, cp := range h.checkpoints {
		checkpointList = append(checkpointList, cp)
	}

	SuccessResponse(c, gin.H{
		"tenantId":    tenantID,
		"checkpoints": checkpointList,
	})
}

// Validate validates a specific checkpoint
func (h *CheckpointHandler) Validate(c *gin.Context) {
	var req struct {
		TenantID     string                 `json:"tenantId" binding:"required"`
		CheckpointID string                 `json:"checkpointId" binding:"required"`
		Data         map[string]interface{} `json:"data"`
	}

	if err := c.ShouldBindJSON(&req); err != nil {
		ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
		return
	}

	checkpoint, ok := h.checkpoints[req.CheckpointID]
	if !ok {
		ErrorResponse(c, http.StatusNotFound, "Checkpoint not found", "CHECKPOINT_NOT_FOUND")
		return
	}

	// Perform validation based on checkpoint ID
	result := h.validateCheckpoint(checkpoint, req.Data)

	SuccessResponse(c, result)
}

func (h *CheckpointHandler) validateCheckpoint(checkpoint Checkpoint, data map[string]interface{}) CheckpointResult {
	result := CheckpointResult{
		CheckpointID: checkpoint.ID,
		Passed:       true,
		ValidatedAt:  now(),
		ValidatedBy:  "SYSTEM",
		Errors:       []ValidationError{},
		Warnings:     []ValidationError{},
	}

	// Validation logic based on checkpoint
	switch checkpoint.ID {
	case "CP-UC":
		useCases, _ := data["useCases"].([]interface{})
		if len(useCases) == 0 {
			result.Passed = false
			result.Errors = append(result.Errors, ValidationError{
				RuleID:   "uc-min-count",
				Field:    "useCases",
				Message:  "Mindestens ein Use Case muss erstellt werden",
				Severity: "ERROR",
			})
		}

	case "CP-SCAN":
		screening, _ := data["screening"].(map[string]interface{})
		if screening == nil || screening["status"] != "COMPLETED" {
			result.Passed = false
			result.Errors = append(result.Errors, ValidationError{
				RuleID:   "scan-complete",
				Field:    "screening",
				Message:  "Security Scan muss abgeschlossen sein",
				Severity: "ERROR",
			})
		}

	case "CP-MOD":
		modules, _ := data["modules"].([]interface{})
		if len(modules) == 0 {
			result.Passed = false
			result.Errors = append(result.Errors, ValidationError{
				RuleID:   "mod-min-count",
				Field:    "modules",
				Message:  "Mindestens ein Modul muss zugewiesen werden",
				Severity: "ERROR",
			})
		}

	case "CP-RISK":
		risks, _ := data["risks"].([]interface{})
		criticalUnmitigated := 0
		for _, r := range risks {
			risk, ok := r.(map[string]interface{})
			if !ok {
				continue
			}
			severity, _ := risk["severity"].(string)
			if severity == "CRITICAL" || severity == "HIGH" {
				mitigations, _ := risk["mitigation"].([]interface{})
				if len(mitigations) == 0 {
					criticalUnmitigated++
				}
			}
		}
		if criticalUnmitigated > 0 {
			result.Passed = false
			result.Errors = append(result.Errors, ValidationError{
				RuleID:   "critical-risks-mitigated",
				Field:    "risks",
				Message:  "Kritische Risiken ohne Mitigationsmaßnahmen gefunden",
				Severity: "ERROR",
			})
		}

	case "CP-DSFA":
		dsfa, _ := data["dsfa"].(map[string]interface{})
		if dsfa == nil {
			result.Passed = false
			result.Errors = append(result.Errors, ValidationError{
				RuleID:   "dsfa-exists",
				Field:    "dsfa",
				Message:  "DSFA muss erstellt werden",
				Severity: "ERROR",
			})
		} else if dsfa["status"] != "APPROVED" {
			result.Warnings = append(result.Warnings, ValidationError{
				RuleID:   "dsfa-approved",
				Field:    "dsfa",
				Message:  "DSFA sollte vom DSB genehmigt werden",
				Severity: "WARNING",
			})
		}

	case "CP-TOM":
		toms, _ := data["toms"].([]interface{})
		if len(toms) == 0 {
			result.Passed = false
			result.Errors = append(result.Errors, ValidationError{
				RuleID:   "tom-min-count",
				Field:    "toms",
				Message:  "Mindestens eine TOM muss definiert werden",
				Severity: "ERROR",
			})
		}

	case "CP-VVT":
		vvt, _ := data["vvt"].([]interface{})
		if len(vvt) == 0 {
			result.Passed = false
			result.Errors = append(result.Errors, ValidationError{
				RuleID:   "vvt-min-count",
				Field:    "vvt",
				Message:  "Mindestens eine Verarbeitungstätigkeit muss dokumentiert werden",
				Severity: "ERROR",
			})
		}
	}

	return result
}
383
admin-v2/ai-compliance-sdk/internal/api/generate.go
Normal file
@@ -0,0 +1,383 @@
package api

import (
	"net/http"
	"time"

	"github.com/breakpilot/ai-compliance-sdk/internal/llm"
	"github.com/breakpilot/ai-compliance-sdk/internal/rag"
	"github.com/gin-gonic/gin"
)

// GenerateHandler handles document generation requests
type GenerateHandler struct {
	llmService *llm.Service
	ragService *rag.Service
}

// NewGenerateHandler creates a new generate handler
func NewGenerateHandler(llmService *llm.Service, ragService *rag.Service) *GenerateHandler {
	return &GenerateHandler{
		llmService: llmService,
		ragService: ragService,
	}
}

// GenerateDSFA generates a Data Protection Impact Assessment
func (h *GenerateHandler) GenerateDSFA(c *gin.Context) {
	var req GenerateRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
		return
	}

	// Get RAG context if requested
	var ragSources []llm.SearchResult
	if req.UseRAG && h.ragService != nil {
		query := req.RAGQuery
		if query == "" {
			query = "DSFA Datenschutz-Folgenabschaetzung Anforderungen"
		}
		results, _ := h.ragService.Search(c.Request.Context(), query, 5, "legal_corpus", "regulation:DSGVO")
		for _, r := range results {
			ragSources = append(ragSources, llm.SearchResult{
				ID:       r.ID,
				Content:  r.Content,
				Source:   r.Source,
				Score:    r.Score,
				Metadata: r.Metadata,
			})
		}
	}

	// Generate DSFA content
	content, tokensUsed, err := h.llmService.GenerateDSFA(c.Request.Context(), req.Context, ragSources)
	if err != nil {
		// Fall back to mock content if the LLM fails
		content = h.getMockDSFA(req.Context)
		tokensUsed = 0
	}

	SuccessResponse(c, GenerateResponse{
		Content:     content,
		GeneratedAt: now(),
		Model:       h.llmService.GetModel(),
		TokensUsed:  tokensUsed,
		RAGSources:  convertLLMSources(ragSources),
		Confidence:  0.85,
	})
}

// GenerateTOM generates Technical and Organizational Measures
func (h *GenerateHandler) GenerateTOM(c *gin.Context) {
	var req GenerateRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
		return
	}

	// Get RAG context if requested
	var llmRagSources []llm.SearchResult
	if req.UseRAG && h.ragService != nil {
		query := req.RAGQuery
		if query == "" {
			query = "technische organisatorische Massnahmen TOM Datenschutz"
		}
		results, _ := h.ragService.Search(c.Request.Context(), query, 5, "legal_corpus", "")
		for _, r := range results {
			llmRagSources = append(llmRagSources, llm.SearchResult{
				ID:       r.ID,
				Content:  r.Content,
				Source:   r.Source,
				Score:    r.Score,
				Metadata: r.Metadata,
			})
		}
	}

	// Generate TOM content
	content, tokensUsed, err := h.llmService.GenerateTOM(c.Request.Context(), req.Context, llmRagSources)
	if err != nil {
		content = h.getMockTOM(req.Context)
		tokensUsed = 0
	}

	SuccessResponse(c, GenerateResponse{
		Content:     content,
		GeneratedAt: now(),
		Model:       h.llmService.GetModel(),
		TokensUsed:  tokensUsed,
		RAGSources:  convertLLMSources(llmRagSources),
		Confidence:  0.82,
	})
}

// GenerateVVT generates a Processing Activity Register
func (h *GenerateHandler) GenerateVVT(c *gin.Context) {
	var req GenerateRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
		return
	}

	// Get RAG context if requested
	var llmRagSources []llm.SearchResult
	if req.UseRAG && h.ragService != nil {
		query := req.RAGQuery
		if query == "" {
			query = "Verarbeitungsverzeichnis Art. 30 DSGVO"
		}
		results, _ := h.ragService.Search(c.Request.Context(), query, 5, "legal_corpus", "regulation:DSGVO")
		for _, r := range results {
			llmRagSources = append(llmRagSources, llm.SearchResult{
				ID:       r.ID,
				Content:  r.Content,
				Source:   r.Source,
				Score:    r.Score,
				Metadata: r.Metadata,
			})
		}
	}

	// Generate VVT content
	content, tokensUsed, err := h.llmService.GenerateVVT(c.Request.Context(), req.Context, llmRagSources)
	if err != nil {
		content = h.getMockVVT(req.Context)
		tokensUsed = 0
	}

	SuccessResponse(c, GenerateResponse{
		Content:     content,
		GeneratedAt: now(),
		Model:       h.llmService.GetModel(),
		TokensUsed:  tokensUsed,
		RAGSources:  convertLLMSources(llmRagSources),
		Confidence:  0.88,
	})
}

// GenerateGutachten generates an expert opinion/assessment
func (h *GenerateHandler) GenerateGutachten(c *gin.Context) {
	var req GenerateRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
		return
	}

	// Get RAG context if requested
	var llmRagSources []llm.SearchResult
	if req.UseRAG && h.ragService != nil {
		query := req.RAGQuery
		if query == "" {
			query = "Compliance Bewertung Gutachten"
		}
		results, _ := h.ragService.Search(c.Request.Context(), query, 5, "legal_corpus", "")
		for _, r := range results {
			llmRagSources = append(llmRagSources, llm.SearchResult{
				ID:       r.ID,
				Content:  r.Content,
				Source:   r.Source,
				Score:    r.Score,
				Metadata: r.Metadata,
			})
		}
	}

	// Generate Gutachten content
	content, tokensUsed, err := h.llmService.GenerateGutachten(c.Request.Context(), req.Context, llmRagSources)
	if err != nil {
		content = h.getMockGutachten(req.Context)
		tokensUsed = 0
	}

	SuccessResponse(c, GenerateResponse{
		Content:     content,
		GeneratedAt: now(),
		Model:       h.llmService.GetModel(),
		TokensUsed:  tokensUsed,
		RAGSources:  convertLLMSources(llmRagSources),
		Confidence:  0.80,
	})
}

// Mock content generators for when the LLM is not available.
// Note: timestamps are rendered with time.Now; the original templates
// contained the JavaScript literal "${new Date().toISOString()}", which a
// Go raw string would have emitted verbatim into the generated document.
func (h *GenerateHandler) getMockDSFA(context map[string]interface{}) string {
	return `# Datenschutz-Folgenabschätzung (DSFA)

## 1. Systematische Beschreibung der Verarbeitungsvorgänge

Die geplante Verarbeitung umfasst die Analyse von Kundendaten mittels KI-gestützter Systeme zur Verbesserung der Servicequalität und Personalisierung von Angeboten.

### Verarbeitungszwecke:
- Kundensegmentierung und Analyse des Nutzerverhaltens
- Personalisierte Empfehlungen
- Optimierung von Geschäftsprozessen

### Rechtsgrundlage:
- Art. 6 Abs. 1 lit. f DSGVO (berechtigtes Interesse)
- Alternativ: Art. 6 Abs. 1 lit. a DSGVO (Einwilligung)

## 2. Bewertung der Notwendigkeit und Verhältnismäßigkeit

Die Verarbeitung ist für die genannten Zwecke erforderlich und verhältnismäßig. Alternative Maßnahmen wurden geprüft, jedoch sind diese weniger effektiv.

## 3. Risikobewertung

### Identifizierte Risiken:
| Risiko | Eintrittswahrscheinlichkeit | Schwere | Maßnahmen |
|--------|---------------------------|---------|-----------|
| Unbefugter Zugriff | Mittel | Hoch | Verschlüsselung, Zugangskontrolle |
| Profilbildung | Hoch | Mittel | Anonymisierung, Einwilligung |
| Datenverlust | Niedrig | Hoch | Backup, Redundanz |

## 4. Maßnahmen zur Risikominderung

- Implementierung von Verschlüsselung (AES-256)
- Strenge Zugriffskontrollen nach dem Least-Privilege-Prinzip
- Regelmäßige Datenschutz-Schulungen
- Audit-Logging aller Zugriffe

## 5. Stellungnahme des Datenschutzbeauftragten

[Hier Stellungnahme einfügen]

## 6. Dokumentation der Konsultation

Erstellt am: ` + time.Now().UTC().Format(time.RFC3339) + `
Status: ENTWURF
`
}

func (h *GenerateHandler) getMockTOM(context map[string]interface{}) string {
	return `# Technische und Organisatorische Maßnahmen (TOMs)

## 1. Vertraulichkeit (Art. 32 Abs. 1 lit. b DSGVO)

### 1.1 Zutrittskontrolle
- Alarmanlage
- Chipkarten-/Transponder-System
- Videoüberwachung der Eingänge
- Besuchererfassung und -begleitung

### 1.2 Zugangskontrolle
- Passwort-Richtlinie (min. 12 Zeichen, Komplexitätsanforderungen)
- Multi-Faktor-Authentifizierung
- Automatische Bildschirmsperre
- VPN für Remote-Zugriffe

### 1.3 Zugriffskontrolle
- Rollenbasiertes Berechtigungskonzept
- Need-to-know-Prinzip
- Regelmäßige Überprüfung der Zugriffsrechte
- Protokollierung aller Zugriffe

## 2. Integrität (Art. 32 Abs. 1 lit. b DSGVO)

### 2.1 Weitergabekontrolle
- Transportverschlüsselung (TLS 1.3)
- Ende-zu-Ende-Verschlüsselung für sensible Daten
- Sichere E-Mail-Kommunikation (S/MIME)

### 2.2 Eingabekontrolle
- Protokollierung aller Datenänderungen
- Benutzeridentifikation bei Änderungen
- Audit-Trail für alle Transaktionen

## 3. Verfügbarkeit (Art. 32 Abs. 1 lit. c DSGVO)

### 3.1 Verfügbarkeitskontrolle
- Tägliche Backups
- Georedundante Datenspeicherung
- USV-Anlage
- Notfallplan

### 3.2 Wiederherstellung
- Dokumentierte Wiederherstellungsverfahren
- Regelmäßige Backup-Tests
- Maximale Wiederherstellungszeit: 4 Stunden

## 4. Belastbarkeit (Art. 32 Abs. 1 lit. b DSGVO)

- Lastverteilung
- DDoS-Schutz
- Skalierbare Infrastruktur
`
}

func (h *GenerateHandler) getMockVVT(context map[string]interface{}) string {
	ts := time.Now().UTC().Format(time.RFC3339)
	return `# Verzeichnis der Verarbeitungstätigkeiten (Art. 30 DSGVO)

## Verarbeitungstätigkeit: Kundenanalyse und Personalisierung

### Angaben nach Art. 30 Abs. 1 DSGVO:

| Feld | Inhalt |
|------|--------|
| **Name des Verantwortlichen** | [Unternehmensname] |
| **Kontaktdaten** | [Adresse, E-Mail, Telefon] |
| **Datenschutzbeauftragter** | [Name, Kontakt] |
| **Zweck der Verarbeitung** | Kundensegmentierung, Personalisierung, Serviceoptimierung |
| **Kategorien betroffener Personen** | Kunden, Interessenten |
| **Kategorien personenbezogener Daten** | Kontaktdaten, Nutzungsdaten, Transaktionsdaten |
| **Kategorien von Empfängern** | Interne Abteilungen, IT-Dienstleister |
| **Drittlandtransfer** | Nein / Ja (mit Angabe der Garantien) |
| **Löschfristen** | 3 Jahre nach letzter Aktivität |
| **TOM-Referenz** | Siehe TOM-Dokument v1.0 |

### Rechtsgrundlage:
Art. 6 Abs. 1 lit. f DSGVO - Berechtigtes Interesse

### Dokumentation:
- Erstellt: ` + ts + `
- Letzte Aktualisierung: ` + ts + `
- Version: 1.0
`
}

func (h *GenerateHandler) getMockGutachten(context map[string]interface{}) string {
	return `# Compliance-Gutachten

## Zusammenfassung

Das geprüfte KI-System erfüllt die wesentlichen Anforderungen der DSGVO und des AI Acts. Es wurden jedoch Optimierungspotenziale identifiziert.

## Prüfungsumfang

- DSGVO-Konformität
- AI Act Compliance
- NIS2-Anforderungen

## Bewertungsergebnis

| Bereich | Bewertung | Handlungsbedarf |
|---------|-----------|-----------------|
| Datenschutz | Gut | Gering |
| KI-Risikoeinstufung | Erfüllt | Keiner |
| Cybersicherheit | Befriedigend | Mittel |

## Empfehlungen

1. Verstärkung der Dokumentation
2. Regelmäßige Audits einplanen
3. Schulungsmaßnahmen erweitern

Erstellt am: ` + time.Now().UTC().Format(time.RFC3339) + `
`
}

// convertLLMSources converts llm.SearchResult to api.SearchResult for the response
func convertLLMSources(sources []llm.SearchResult) []SearchResult {
	if sources == nil {
		return nil
	}
	result := make([]SearchResult, len(sources))
	for i, s := range sources {
		result[i] = SearchResult{
			ID:       s.ID,
			Content:  s.Content,
			Source:   s.Source,
			Score:    s.Score,
			Metadata: s.Metadata,
		}
	}
	return result
}
182
admin-v2/ai-compliance-sdk/internal/api/rag.go
Normal file
@@ -0,0 +1,182 @@
package api

import (
	"net/http"
	"strconv"

	"github.com/breakpilot/ai-compliance-sdk/internal/rag"
	"github.com/gin-gonic/gin"
)

// RAGHandler handles RAG search requests
type RAGHandler struct {
	ragService *rag.Service
}

// NewRAGHandler creates a new RAG handler
func NewRAGHandler(ragService *rag.Service) *RAGHandler {
	return &RAGHandler{
		ragService: ragService,
	}
}

// Search performs semantic search on the legal corpus
func (h *RAGHandler) Search(c *gin.Context) {
	query := c.Query("q")
	if query == "" {
		ErrorResponse(c, http.StatusBadRequest, "Query parameter 'q' is required", "MISSING_QUERY")
		return
	}

	topK := 5
	if topKStr := c.Query("top_k"); topKStr != "" {
		if parsed, err := strconv.Atoi(topKStr); err == nil && parsed > 0 {
			topK = parsed
		}
	}

	collection := c.DefaultQuery("collection", "legal_corpus")
	filter := c.Query("filter") // e.g., "regulation:DSGVO" or "category:ai_act"

	// Check if RAG service is available
	if h.ragService == nil {
		// Return mock data when RAG is not available
		SuccessResponse(c, gin.H{
			"query":   query,
			"topK":    topK,
			"results": h.getMockResults(query),
			"source":  "mock",
		})
		return
	}

	results, err := h.ragService.Search(c.Request.Context(), query, topK, collection, filter)
	if err != nil {
		ErrorResponse(c, http.StatusInternalServerError, "Search failed: "+err.Error(), "SEARCH_FAILED")
		return
	}

	SuccessResponse(c, gin.H{
		"query":   query,
		"topK":    topK,
		"results": results,
		"source":  "qdrant",
	})
}

// GetCorpusStatus returns the status of the legal corpus
func (h *RAGHandler) GetCorpusStatus(c *gin.Context) {
	if h.ragService == nil {
		SuccessResponse(c, gin.H{
			"status":      "unavailable",
			"collections": []string{},
			"documents":   0,
		})
		return
	}

	status, err := h.ragService.GetCorpusStatus(c.Request.Context())
	if err != nil {
		ErrorResponse(c, http.StatusInternalServerError, "Failed to get corpus status", "STATUS_FAILED")
		return
	}

	SuccessResponse(c, status)
}

// IndexDocument indexes a new document into the corpus
func (h *RAGHandler) IndexDocument(c *gin.Context) {
	var req struct {
		Collection string            `json:"collection" binding:"required"`
		ID         string            `json:"id" binding:"required"`
		Content    string            `json:"content" binding:"required"`
		Metadata   map[string]string `json:"metadata"`
	}

	if err := c.ShouldBindJSON(&req); err != nil {
		ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
		return
	}

	if h.ragService == nil {
		ErrorResponse(c, http.StatusServiceUnavailable, "RAG service not available", "SERVICE_UNAVAILABLE")
		return
	}

	err := h.ragService.IndexDocument(c.Request.Context(), req.Collection, req.ID, req.Content, req.Metadata)
	if err != nil {
		ErrorResponse(c, http.StatusInternalServerError, "Failed to index document: "+err.Error(), "INDEX_FAILED")
		return
	}

	SuccessResponse(c, gin.H{
		"indexed":    true,
		"id":         req.ID,
		"collection": req.Collection,
		"indexedAt":  now(),
	})
}

// getMockResults returns mock search results for development
func (h *RAGHandler) getMockResults(query string) []SearchResult {
	// Simplified mock results based on common compliance queries
	results := []SearchResult{
		{
			ID:      "dsgvo-art-5",
			Content: "Art. 5 DSGVO - Grundsätze für die Verarbeitung personenbezogener Daten: Personenbezogene Daten müssen auf rechtmäßige Weise, nach Treu und Glauben und in einer für die betroffene Person nachvollziehbaren Weise verarbeitet werden.",
			Source:  "DSGVO",
			Score:   0.95,
			Metadata: map[string]string{
				"article":    "5",
				"regulation": "DSGVO",
				"category":   "grundsaetze",
			},
		},
		{
			ID:      "dsgvo-art-6",
			Content: "Art. 6 DSGVO - Rechtmäßigkeit der Verarbeitung: Die Verarbeitung ist nur rechtmäßig, wenn mindestens eine der folgenden Bedingungen erfüllt ist: Einwilligung, Vertragserfüllung, rechtliche Verpflichtung, lebenswichtige Interessen, öffentliche Aufgabe, berechtigtes Interesse.",
			Source:  "DSGVO",
			Score:   0.89,
			Metadata: map[string]string{
				"article":    "6",
				"regulation": "DSGVO",
				"category":   "rechtsgrundlage",
			},
		},
		{
			ID:      "ai-act-art-6",
			Content: "Art. 6 AI Act - Klassifizierungsregeln für Hochrisiko-KI-Systeme: Ein KI-System gilt als Hochrisiko-System, wenn es als Sicherheitskomponente eines Produkts verwendet wird oder selbst ein Produkt ist, das unter die in Anhang II aufgeführten Harmonisierungsrechtsvorschriften fällt.",
			Source:  "AI Act",
			Score:   0.85,
			Metadata: map[string]string{
				"article":    "6",
				"regulation": "AI_ACT",
				"category":   "hochrisiko",
			},
		},
		{
			ID:      "nis2-art-21",
			Content: "Art. 21 NIS2 - Risikomanagementmaßnahmen: Wesentliche und wichtige Einrichtungen müssen geeignete und verhältnismäßige technische, operative und organisatorische Maßnahmen ergreifen, um die Risiken für die Sicherheit der Netz- und Informationssysteme zu beherrschen.",
			Source:  "NIS2",
			Score:   0.78,
			Metadata: map[string]string{
				"article":    "21",
				"regulation": "NIS2",
				"category":   "risikomanagement",
			},
		},
		{
			ID:      "dsgvo-art-35",
			Content: "Art. 35 DSGVO - Datenschutz-Folgenabschätzung: Hat eine Form der Verarbeitung, insbesondere bei Verwendung neuer Technologien, aufgrund der Art, des Umfangs, der Umstände und der Zwecke der Verarbeitung voraussichtlich ein hohes Risiko für die Rechte und Freiheiten natürlicher Personen zur Folge, so führt der Verantwortliche vorab eine Abschätzung der Folgen der vorgesehenen Verarbeitungsvorgänge für den Schutz personenbezogener Daten durch.",
			Source:  "DSGVO",
			Score:   0.75,
			Metadata: map[string]string{
				"article":    "35",
				"regulation": "DSGVO",
				"category":   "dsfa",
			},
		},
	}

	return results
}
96
admin-v2/ai-compliance-sdk/internal/api/router.go
Normal file
@@ -0,0 +1,96 @@
package api

import (
	"net/http"
	"time"

	"github.com/gin-gonic/gin"
)

// Response represents a standard API response
type Response struct {
	Success bool        `json:"success"`
	Data    interface{} `json:"data,omitempty"`
	Error   string      `json:"error,omitempty"`
	Code    string      `json:"code,omitempty"`
}

// SuccessResponse creates a success response
func SuccessResponse(c *gin.Context, data interface{}) {
	c.JSON(http.StatusOK, Response{
		Success: true,
		Data:    data,
	})
}

// ErrorResponse creates an error response
func ErrorResponse(c *gin.Context, status int, err string, code string) {
	c.JSON(status, Response{
		Success: false,
		Error:   err,
		Code:    code,
	})
}

// StateData represents state response data
type StateData struct {
	TenantID     string      `json:"tenantId"`
	State        interface{} `json:"state"`
	Version      int         `json:"version"`
	LastModified string      `json:"lastModified"`
}

// ValidationError represents a validation error
type ValidationError struct {
	RuleID   string `json:"ruleId"`
	Field    string `json:"field"`
	Message  string `json:"message"`
	Severity string `json:"severity"`
}

// CheckpointResult represents checkpoint validation result
type CheckpointResult struct {
	CheckpointID string            `json:"checkpointId"`
	Passed       bool              `json:"passed"`
	ValidatedAt  string            `json:"validatedAt"`
	ValidatedBy  string            `json:"validatedBy"`
	Errors       []ValidationError `json:"errors"`
	Warnings     []ValidationError `json:"warnings"`
}

// SearchResult represents a RAG search result
type SearchResult struct {
	ID         string            `json:"id"`
	Content    string            `json:"content"`
	Source     string            `json:"source"`
	Score      float64           `json:"score"`
	Metadata   map[string]string `json:"metadata,omitempty"`
	Highlights []string          `json:"highlights,omitempty"`
}

// GenerateRequest represents a document generation request
type GenerateRequest struct {
	TenantID    string                 `json:"tenantId" binding:"required"`
	Context     map[string]interface{} `json:"context"`
	Template    string                 `json:"template,omitempty"`
	Language    string                 `json:"language,omitempty"`
	UseRAG      bool                   `json:"useRag"`
	RAGQuery    string                 `json:"ragQuery,omitempty"`
	MaxTokens   int                    `json:"maxTokens,omitempty"`
	Temperature float64                `json:"temperature,omitempty"`
}

// GenerateResponse represents a document generation response
type GenerateResponse struct {
	Content     string         `json:"content"`
	GeneratedAt string         `json:"generatedAt"`
	Model       string         `json:"model"`
	TokensUsed  int            `json:"tokensUsed"`
	RAGSources  []SearchResult `json:"ragSources,omitempty"`
	Confidence  float64        `json:"confidence,omitempty"`
}

// Timestamps helper
func now() string {
	return time.Now().UTC().Format(time.RFC3339)
}
171
admin-v2/ai-compliance-sdk/internal/api/state.go
Normal file
@@ -0,0 +1,171 @@
package api

import (
	"encoding/json"
	"net/http"
	"strconv"

	"github.com/breakpilot/ai-compliance-sdk/internal/db"
	"github.com/gin-gonic/gin"
)

// StateHandler handles state management requests
type StateHandler struct {
	dbPool   *db.Pool
	memStore *db.InMemoryStore
}

// NewStateHandler creates a new state handler
func NewStateHandler(dbPool *db.Pool) *StateHandler {
	return &StateHandler{
		dbPool:   dbPool,
		memStore: db.NewInMemoryStore(),
	}
}

// GetState retrieves state for a tenant
func (h *StateHandler) GetState(c *gin.Context) {
	tenantID := c.Param("tenantId")
	if tenantID == "" {
		ErrorResponse(c, http.StatusBadRequest, "tenantId is required", "MISSING_TENANT_ID")
		return
	}

	var state *db.SDKState
	var err error

	// Try database first, fall back to in-memory
	if h.dbPool != nil {
		state, err = h.dbPool.GetState(c.Request.Context(), tenantID)
	} else {
		state, err = h.memStore.GetState(tenantID)
	}

	if err != nil {
		ErrorResponse(c, http.StatusNotFound, "State not found", "STATE_NOT_FOUND")
		return
	}

	// Generate ETag
	etag := generateETag(state.Version, state.UpdatedAt.String())

	// Check If-None-Match header
	if c.GetHeader("If-None-Match") == etag {
		c.Status(http.StatusNotModified)
		return
	}

	// Parse state JSON
	var stateData interface{}
	if err := json.Unmarshal(state.State, &stateData); err != nil {
		stateData = state.State
	}

	c.Header("ETag", etag)
	c.Header("Last-Modified", state.UpdatedAt.Format("Mon, 02 Jan 2006 15:04:05 GMT"))
	c.Header("Cache-Control", "private, no-cache")

	SuccessResponse(c, StateData{
		TenantID:     state.TenantID,
		State:        stateData,
		Version:      state.Version,
		LastModified: state.UpdatedAt.Format("2006-01-02T15:04:05Z07:00"),
	})
}

// SaveState saves state for a tenant
func (h *StateHandler) SaveState(c *gin.Context) {
	var req struct {
		TenantID string          `json:"tenantId" binding:"required"`
		UserID   string          `json:"userId"`
		State    json.RawMessage `json:"state" binding:"required"`
		Version  *int            `json:"version"`
	}

	if err := c.ShouldBindJSON(&req); err != nil {
		ErrorResponse(c, http.StatusBadRequest, err.Error(), "INVALID_REQUEST")
		return
	}

	// Check If-Match header for optimistic locking
	var expectedVersion *int
	if ifMatch := c.GetHeader("If-Match"); ifMatch != "" {
		v, err := strconv.Atoi(ifMatch)
		if err == nil {
			expectedVersion = &v
		}
	} else if req.Version != nil {
		expectedVersion = req.Version
	}

	var state *db.SDKState
	var err error

	// Try database first, fall back to in-memory
	if h.dbPool != nil {
		state, err = h.dbPool.SaveState(c.Request.Context(), req.TenantID, req.UserID, req.State, expectedVersion)
	} else {
		state, err = h.memStore.SaveState(req.TenantID, req.UserID, req.State, expectedVersion)
	}

	if err != nil {
		if err.Error() == "version conflict" {
			ErrorResponse(c, http.StatusConflict, "Version conflict. State was modified by another request.", "VERSION_CONFLICT")
			return
		}
		ErrorResponse(c, http.StatusInternalServerError, "Failed to save state", "SAVE_FAILED")
		return
	}

	// Generate ETag
	etag := generateETag(state.Version, state.UpdatedAt.String())

	// Parse state JSON
	var stateData interface{}
	if err := json.Unmarshal(state.State, &stateData); err != nil {
		stateData = state.State
	}

	c.Header("ETag", etag)
	c.Header("Last-Modified", state.UpdatedAt.Format("Mon, 02 Jan 2006 15:04:05 GMT"))

	SuccessResponse(c, StateData{
		TenantID:     state.TenantID,
		State:        stateData,
		Version:      state.Version,
		LastModified: state.UpdatedAt.Format("2006-01-02T15:04:05Z07:00"),
	})
}

// DeleteState deletes state for a tenant
func (h *StateHandler) DeleteState(c *gin.Context) {
	tenantID := c.Param("tenantId")
	if tenantID == "" {
		ErrorResponse(c, http.StatusBadRequest, "tenantId is required", "MISSING_TENANT_ID")
		return
	}

	var err error

	// Try database first, fall back to in-memory
	if h.dbPool != nil {
		err = h.dbPool.DeleteState(c.Request.Context(), tenantID)
	} else {
		err = h.memStore.DeleteState(tenantID)
	}

	if err != nil {
		ErrorResponse(c, http.StatusInternalServerError, "Failed to delete state", "DELETE_FAILED")
		return
	}

	SuccessResponse(c, gin.H{
		"tenantId":  tenantID,
		"deletedAt": now(),
	})
}
// generateETag creates an ETag from version and timestamp.
// The timestamp is truncated to its date prefix; the length guard
// prevents a slice panic if a shorter timestamp is ever passed in.
func generateETag(version int, timestamp string) string {
	if len(timestamp) > 8 {
		timestamp = timestamp[:8]
	}
	return "\"" + strconv.Itoa(version) + "-" + timestamp + "\""
}
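The ETag embeds the state version plus the date prefix of the update timestamp, which is what makes the `If-None-Match` and `If-Match` comparisons above work. A standalone sketch of the tag format (the function is re-declared locally, with a length guard added defensively):

```go
package main

import (
	"fmt"
	"strconv"
)

// Local sketch of generateETag. The length guard is a defensive
// addition so short timestamps cannot cause a slice panic.
func generateETag(version int, timestamp string) string {
	if len(timestamp) > 8 {
		timestamp = timestamp[:8]
	}
	return "\"" + strconv.Itoa(version) + "-" + timestamp + "\""
}

func main() {
	// Version 3, updated 2024-01-02: the quotes are part of the tag,
	// matching the ETag syntax expected in conditional request headers.
	fmt.Println(generateETag(3, "2024-01-02 15:04:05")) // "3-2024-01-"
}
```

A client round-trip would echo this value back verbatim: `If-None-Match` on reads to get `304 Not Modified`, `If-Match` (or the `version` field) on writes for optimistic locking.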
681
admin-v2/ai-compliance-sdk/internal/db/academy_store.go
Normal file
@@ -0,0 +1,681 @@
package db

import (
	"fmt"
	"sort"
	"strings"
	"sync"
	"time"
)

// AcademyMemStore provides in-memory storage for academy data
type AcademyMemStore struct {
	mu             sync.RWMutex
	courses        map[string]*AcademyCourseRow
	lessons        map[string]*AcademyLessonRow
	quizQuestions  map[string]*AcademyQuizQuestionRow
	enrollments    map[string]*AcademyEnrollmentRow
	certificates   map[string]*AcademyCertificateRow
	lessonProgress map[string]*AcademyLessonProgressRow
}

// Row types matching the DB schema
type AcademyCourseRow struct {
	ID               string
	TenantID         string
	Title            string
	Description      string
	Category         string
	PassingScore     int
	DurationMinutes  int
	RequiredForRoles []string
	Status           string
	CreatedAt        time.Time
	UpdatedAt        time.Time
}

type AcademyLessonRow struct {
	ID              string
	CourseID        string
	Title           string
	Type            string
	ContentMarkdown string
	VideoURL        string
	AudioURL        string
	SortOrder       int
	DurationMinutes int
	CreatedAt       time.Time
	UpdatedAt       time.Time
}

type AcademyQuizQuestionRow struct {
	ID                 string
	LessonID           string
	Question           string
	Options            []string
	CorrectOptionIndex int
	Explanation        string
	SortOrder          int
	CreatedAt          time.Time
}

type AcademyEnrollmentRow struct {
	ID            string
	TenantID      string
	CourseID      string
	UserID        string
	UserName      string
	UserEmail     string
	Status        string
	Progress      int
	StartedAt     time.Time
	CompletedAt   *time.Time
	CertificateID string
	Deadline      time.Time
	CreatedAt     time.Time
	UpdatedAt     time.Time
}

type AcademyCertificateRow struct {
	ID           string
	TenantID     string
	EnrollmentID string
	CourseID     string
	UserID       string
	UserName     string
	CourseName   string
	Score        int
	IssuedAt     time.Time
	ValidUntil   time.Time
	PdfURL       string
}

type AcademyLessonProgressRow struct {
	ID           string
	EnrollmentID string
	LessonID     string
	Completed    bool
	QuizScore    *int
	CompletedAt  *time.Time
}

type AcademyStatisticsRow struct {
	TotalCourses     int
	TotalEnrollments int
	CompletionRate   float64
	OverdueCount     int
	ByCategory       map[string]int
	ByStatus         map[string]int
}

func NewAcademyMemStore() *AcademyMemStore {
	return &AcademyMemStore{
		courses:        make(map[string]*AcademyCourseRow),
		lessons:        make(map[string]*AcademyLessonRow),
		quizQuestions:  make(map[string]*AcademyQuizQuestionRow),
		enrollments:    make(map[string]*AcademyEnrollmentRow),
		certificates:   make(map[string]*AcademyCertificateRow),
		lessonProgress: make(map[string]*AcademyLessonProgressRow),
	}
}

// generateID creates a simple unique ID
func generateID() string {
	return fmt.Sprintf("%d", time.Now().UnixNano())
}

// ---------------------------------------------------------------------------
// Course CRUD
// ---------------------------------------------------------------------------

// ListCourses returns all courses for a tenant, sorted by UpdatedAt DESC.
func (s *AcademyMemStore) ListCourses(tenantID string) []*AcademyCourseRow {
	s.mu.RLock()
	defer s.mu.RUnlock()

	var result []*AcademyCourseRow
	for _, c := range s.courses {
		if c.TenantID == tenantID {
			result = append(result, c)
		}
	}

	sort.Slice(result, func(i, j int) bool {
		return result[i].UpdatedAt.After(result[j].UpdatedAt)
	})

	return result
}

// GetCourse retrieves a single course by ID.
func (s *AcademyMemStore) GetCourse(id string) (*AcademyCourseRow, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()

	c, ok := s.courses[id]
	if !ok {
		return nil, fmt.Errorf("course not found: %s", id)
	}
	return c, nil
}

// CreateCourse inserts a new course with auto-generated ID and timestamps.
func (s *AcademyMemStore) CreateCourse(row *AcademyCourseRow) *AcademyCourseRow {
	s.mu.Lock()
	defer s.mu.Unlock()

	now := time.Now()
	row.ID = generateID()
	row.CreatedAt = now
	row.UpdatedAt = now
	s.courses[row.ID] = row
	return row
}

// UpdateCourse partially updates a course. Supported keys: Title, Description,
// Category, PassingScore, DurationMinutes, RequiredForRoles, Status.
func (s *AcademyMemStore) UpdateCourse(id string, updates map[string]interface{}) (*AcademyCourseRow, error) {
	s.mu.Lock()
	defer s.mu.Unlock()

	c, ok := s.courses[id]
	if !ok {
		return nil, fmt.Errorf("course not found: %s", id)
	}

	for k, v := range updates {
		switch strings.ToLower(k) {
		case "title":
			if val, ok := v.(string); ok {
				c.Title = val
			}
		case "description":
			if val, ok := v.(string); ok {
				c.Description = val
			}
		case "category":
			if val, ok := v.(string); ok {
				c.Category = val
			}
		case "passingscore", "passing_score":
			switch val := v.(type) {
			case int:
				c.PassingScore = val
			case float64:
				c.PassingScore = int(val)
			}
		case "durationminutes", "duration_minutes":
			switch val := v.(type) {
			case int:
				c.DurationMinutes = val
			case float64:
				c.DurationMinutes = int(val)
			}
		case "requiredforroles", "required_for_roles":
			if val, ok := v.([]string); ok {
				c.RequiredForRoles = val
			}
		case "status":
			if val, ok := v.(string); ok {
				c.Status = val
			}
		}
	}

	c.UpdatedAt = time.Now()
	return c, nil
}

// DeleteCourse removes a course and all related lessons, quiz questions,
// enrollments, certificates, and lesson progress.
func (s *AcademyMemStore) DeleteCourse(id string) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if _, ok := s.courses[id]; !ok {
		return fmt.Errorf("course not found: %s", id)
	}

	// Collect lesson IDs for this course
	lessonIDs := make(map[string]bool)
	for lid, l := range s.lessons {
		if l.CourseID == id {
			lessonIDs[lid] = true
		}
	}

	// Delete quiz questions belonging to those lessons
	for qid, q := range s.quizQuestions {
		if lessonIDs[q.LessonID] {
			delete(s.quizQuestions, qid)
		}
	}

	// Delete lessons
	for lid := range lessonIDs {
		delete(s.lessons, lid)
	}

	// Collect enrollment IDs for this course
	enrollmentIDs := make(map[string]bool)
	for eid, e := range s.enrollments {
		if e.CourseID == id {
			enrollmentIDs[eid] = true
		}
	}

	// Delete lesson progress belonging to those enrollments
	for pid, p := range s.lessonProgress {
		if enrollmentIDs[p.EnrollmentID] {
			delete(s.lessonProgress, pid)
		}
	}
	// Delete certificates belonging to this course
	for cid, cert := range s.certificates {
		if cert.CourseID == id {
			delete(s.certificates, cid)
		}
	}

	// Delete enrollments
	for eid := range enrollmentIDs {
		delete(s.enrollments, eid)
	}

	// Delete the course itself
	delete(s.courses, id)

	return nil
}
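DeleteCourse deletes map entries while ranging over the same maps. That pattern is well-defined in Go: the spec guarantees that an entry removed during iteration is simply never produced by the loop. A tiny standalone illustration:

```go
package main

import "fmt"

func main() {
	// Deleting from a map during range iteration is explicitly allowed
	// by the Go spec; removed keys are not visited later in the loop.
	m := map[string]string{"l1": "course-a", "l2": "course-a", "l3": "course-b"}
	for k, v := range m {
		if v == "course-a" {
			delete(m, k)
		}
	}
	fmt.Println(len(m)) // 1
}
```

This is why the cascade can delete quiz questions, progress, and certificates in single passes without first snapshotting the keys (the lesson/enrollment ID sets are collected only because later loops need them for membership tests).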

// ---------------------------------------------------------------------------
// Lesson CRUD
// ---------------------------------------------------------------------------

// ListLessons returns all lessons for a course, sorted by SortOrder ASC.
func (s *AcademyMemStore) ListLessons(courseID string) []*AcademyLessonRow {
	s.mu.RLock()
	defer s.mu.RUnlock()

	var result []*AcademyLessonRow
	for _, l := range s.lessons {
		if l.CourseID == courseID {
			result = append(result, l)
		}
	}

	sort.Slice(result, func(i, j int) bool {
		return result[i].SortOrder < result[j].SortOrder
	})

	return result
}

// GetLesson retrieves a single lesson by ID.
func (s *AcademyMemStore) GetLesson(id string) (*AcademyLessonRow, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()

	l, ok := s.lessons[id]
	if !ok {
		return nil, fmt.Errorf("lesson not found: %s", id)
	}
	return l, nil
}

// CreateLesson inserts a new lesson with auto-generated ID and timestamps.
func (s *AcademyMemStore) CreateLesson(row *AcademyLessonRow) *AcademyLessonRow {
	s.mu.Lock()
	defer s.mu.Unlock()

	now := time.Now()
	row.ID = generateID()
	row.CreatedAt = now
	row.UpdatedAt = now
	s.lessons[row.ID] = row
	return row
}

// UpdateLesson partially updates a lesson. Supported keys: Title, Type,
// ContentMarkdown, VideoURL, AudioURL, SortOrder, DurationMinutes.
func (s *AcademyMemStore) UpdateLesson(id string, updates map[string]interface{}) (*AcademyLessonRow, error) {
	s.mu.Lock()
	defer s.mu.Unlock()

	l, ok := s.lessons[id]
	if !ok {
		return nil, fmt.Errorf("lesson not found: %s", id)
	}

	for k, v := range updates {
		switch strings.ToLower(k) {
		case "title":
			if val, ok := v.(string); ok {
				l.Title = val
			}
		case "type":
			if val, ok := v.(string); ok {
				l.Type = val
			}
		case "contentmarkdown", "content_markdown":
			if val, ok := v.(string); ok {
				l.ContentMarkdown = val
			}
		case "videourl", "video_url":
			if val, ok := v.(string); ok {
				l.VideoURL = val
			}
		case "audiourl", "audio_url":
			if val, ok := v.(string); ok {
				l.AudioURL = val
			}
		case "sortorder", "sort_order":
			switch val := v.(type) {
			case int:
				l.SortOrder = val
			case float64:
				l.SortOrder = int(val)
			}
		case "durationminutes", "duration_minutes":
			switch val := v.(type) {
			case int:
				l.DurationMinutes = val
			case float64:
				l.DurationMinutes = int(val)
			}
		}
	}

	l.UpdatedAt = time.Now()
	return l, nil
}

// DeleteLesson removes a lesson and its quiz questions.
func (s *AcademyMemStore) DeleteLesson(id string) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if _, ok := s.lessons[id]; !ok {
		return fmt.Errorf("lesson not found: %s", id)
	}

	// Delete quiz questions belonging to this lesson
	for qid, q := range s.quizQuestions {
		if q.LessonID == id {
			delete(s.quizQuestions, qid)
		}
	}

	delete(s.lessons, id)
	return nil
}

// ---------------------------------------------------------------------------
// Quiz Questions
// ---------------------------------------------------------------------------

// ListQuizQuestions returns all quiz questions for a lesson, sorted by SortOrder ASC.
func (s *AcademyMemStore) ListQuizQuestions(lessonID string) []*AcademyQuizQuestionRow {
	s.mu.RLock()
	defer s.mu.RUnlock()

	var result []*AcademyQuizQuestionRow
	for _, q := range s.quizQuestions {
		if q.LessonID == lessonID {
			result = append(result, q)
		}
	}

	sort.Slice(result, func(i, j int) bool {
		return result[i].SortOrder < result[j].SortOrder
	})

	return result
}

// CreateQuizQuestion inserts a new quiz question with auto-generated ID and timestamp.
func (s *AcademyMemStore) CreateQuizQuestion(row *AcademyQuizQuestionRow) *AcademyQuizQuestionRow {
	s.mu.Lock()
	defer s.mu.Unlock()

	row.ID = generateID()
	row.CreatedAt = time.Now()
	s.quizQuestions[row.ID] = row
	return row
}

// ---------------------------------------------------------------------------
// Enrollments
// ---------------------------------------------------------------------------

// ListEnrollments returns enrollments filtered by tenantID and optionally by courseID.
// If courseID is empty, all enrollments for the tenant are returned.
func (s *AcademyMemStore) ListEnrollments(tenantID string, courseID string) []*AcademyEnrollmentRow {
	s.mu.RLock()
	defer s.mu.RUnlock()

	var result []*AcademyEnrollmentRow
	for _, e := range s.enrollments {
		if e.TenantID != tenantID {
			continue
		}
		if courseID != "" && e.CourseID != courseID {
			continue
		}
		result = append(result, e)
	}

	sort.Slice(result, func(i, j int) bool {
		return result[i].UpdatedAt.After(result[j].UpdatedAt)
	})

	return result
}

// GetEnrollment retrieves a single enrollment by ID.
func (s *AcademyMemStore) GetEnrollment(id string) (*AcademyEnrollmentRow, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()

	e, ok := s.enrollments[id]
	if !ok {
		return nil, fmt.Errorf("enrollment not found: %s", id)
	}
	return e, nil
}

// CreateEnrollment inserts a new enrollment with auto-generated ID and timestamps.
func (s *AcademyMemStore) CreateEnrollment(row *AcademyEnrollmentRow) *AcademyEnrollmentRow {
	s.mu.Lock()
	defer s.mu.Unlock()

	now := time.Now()
	row.ID = generateID()
	row.CreatedAt = now
	row.UpdatedAt = now
	if row.StartedAt.IsZero() {
		row.StartedAt = now
	}
	s.enrollments[row.ID] = row
	return row
}

// UpdateEnrollment partially updates an enrollment. Supported keys: Status,
// Progress, CompletedAt, CertificateID, Deadline.
func (s *AcademyMemStore) UpdateEnrollment(id string, updates map[string]interface{}) (*AcademyEnrollmentRow, error) {
	s.mu.Lock()
	defer s.mu.Unlock()

	e, ok := s.enrollments[id]
	if !ok {
		return nil, fmt.Errorf("enrollment not found: %s", id)
	}

	for k, v := range updates {
		switch strings.ToLower(k) {
		case "status":
			if val, ok := v.(string); ok {
				e.Status = val
			}
		case "progress":
			switch val := v.(type) {
			case int:
				e.Progress = val
			case float64:
				e.Progress = int(val)
			}
		case "completedat", "completed_at":
			if val, ok := v.(*time.Time); ok {
				e.CompletedAt = val
			} else if val, ok := v.(time.Time); ok {
				e.CompletedAt = &val
			}
		case "certificateid", "certificate_id":
			if val, ok := v.(string); ok {
				e.CertificateID = val
			}
		case "deadline":
			if val, ok := v.(time.Time); ok {
				e.Deadline = val
			}
		}
	}

	e.UpdatedAt = time.Now()
	return e, nil
}

// ---------------------------------------------------------------------------
// Certificates
// ---------------------------------------------------------------------------

// GetCertificate retrieves a certificate by ID.
func (s *AcademyMemStore) GetCertificate(id string) (*AcademyCertificateRow, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()

	cert, ok := s.certificates[id]
	if !ok {
		return nil, fmt.Errorf("certificate not found: %s", id)
	}
	return cert, nil
}

// GetCertificateByEnrollment retrieves a certificate by enrollment ID.
func (s *AcademyMemStore) GetCertificateByEnrollment(enrollmentID string) (*AcademyCertificateRow, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()

	for _, cert := range s.certificates {
		if cert.EnrollmentID == enrollmentID {
			return cert, nil
		}
	}
	return nil, fmt.Errorf("certificate not found for enrollment: %s", enrollmentID)
}

// CreateCertificate inserts a new certificate with auto-generated ID.
func (s *AcademyMemStore) CreateCertificate(row *AcademyCertificateRow) *AcademyCertificateRow {
	s.mu.Lock()
	defer s.mu.Unlock()

	row.ID = generateID()
	if row.IssuedAt.IsZero() {
		row.IssuedAt = time.Now()
	}
	s.certificates[row.ID] = row
	return row
}

// ---------------------------------------------------------------------------
// Lesson Progress
// ---------------------------------------------------------------------------

// ListLessonProgress returns all progress entries for an enrollment.
func (s *AcademyMemStore) ListLessonProgress(enrollmentID string) []*AcademyLessonProgressRow {
	s.mu.RLock()
	defer s.mu.RUnlock()

	var result []*AcademyLessonProgressRow
	for _, p := range s.lessonProgress {
		if p.EnrollmentID == enrollmentID {
			result = append(result, p)
		}
	}
	return result
}

// UpsertLessonProgress inserts or updates a lesson progress entry.
// Matching is done by EnrollmentID + LessonID composite key.
func (s *AcademyMemStore) UpsertLessonProgress(row *AcademyLessonProgressRow) *AcademyLessonProgressRow {
	s.mu.Lock()
	defer s.mu.Unlock()

	// Look for existing entry with same enrollment_id + lesson_id
	for _, p := range s.lessonProgress {
		if p.EnrollmentID == row.EnrollmentID && p.LessonID == row.LessonID {
			p.Completed = row.Completed
			p.QuizScore = row.QuizScore
			p.CompletedAt = row.CompletedAt
			return p
		}
	}

	// Insert new entry
	row.ID = generateID()
	s.lessonProgress[row.ID] = row
	return row
}

// ---------------------------------------------------------------------------
// Statistics
// ---------------------------------------------------------------------------

// GetStatistics computes aggregate statistics for a tenant.
func (s *AcademyMemStore) GetStatistics(tenantID string) *AcademyStatisticsRow {
	s.mu.RLock()
	defer s.mu.RUnlock()

	stats := &AcademyStatisticsRow{
		ByCategory: make(map[string]int),
		ByStatus:   make(map[string]int),
	}

	// Count courses by category
	for _, c := range s.courses {
		if c.TenantID != tenantID {
			continue
		}
		stats.TotalCourses++
		if c.Category != "" {
			stats.ByCategory[c.Category]++
		}
	}

	// Count enrollments and compute completion rate
	var completedCount int
	now := time.Now()
	for _, e := range s.enrollments {
		if e.TenantID != tenantID {
			continue
		}
		stats.TotalEnrollments++
		stats.ByStatus[e.Status]++

		if e.Status == "completed" {
			completedCount++
		}

		// Overdue: not completed and past deadline
		if e.Status != "completed" && !e.Deadline.IsZero() && now.After(e.Deadline) {
			stats.OverdueCount++
		}
	}

	if stats.TotalEnrollments > 0 {
		stats.CompletionRate = float64(completedCount) / float64(stats.TotalEnrollments) * 100.0
	}

	return stats
}
@@ -0,0 +1,305 @@
-- Migration: Create Academy Tables
-- Description: Schema for the Compliance Academy module (courses, lessons, quizzes, enrollments, certificates, progress)

-- Enable UUID extension if not already enabled
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- ============================================================================
-- 1. academy_courses - Training courses for compliance education
-- ============================================================================

CREATE TABLE IF NOT EXISTS academy_courses (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    tenant_id VARCHAR(255) NOT NULL,
    title VARCHAR(255) NOT NULL,
    description TEXT,
    category VARCHAR(50),
    passing_score INTEGER DEFAULT 70,
    duration_minutes INTEGER,
    required_for_roles JSONB DEFAULT '["all"]',
    status VARCHAR(50) DEFAULT 'draft',
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Indexes for academy_courses
CREATE INDEX IF NOT EXISTS idx_academy_courses_tenant ON academy_courses(tenant_id);
CREATE INDEX IF NOT EXISTS idx_academy_courses_status ON academy_courses(status);
CREATE INDEX IF NOT EXISTS idx_academy_courses_category ON academy_courses(category);

-- Auto-update trigger for academy_courses.updated_at
CREATE OR REPLACE FUNCTION update_academy_courses_updated_at()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS trigger_academy_courses_updated_at ON academy_courses;
CREATE TRIGGER trigger_academy_courses_updated_at
    BEFORE UPDATE ON academy_courses
    FOR EACH ROW
    EXECUTE FUNCTION update_academy_courses_updated_at();

-- Comments for academy_courses
COMMENT ON TABLE academy_courses IS 'Stores compliance training courses per tenant';
COMMENT ON COLUMN academy_courses.tenant_id IS 'Identifier for the tenant owning this course';
COMMENT ON COLUMN academy_courses.title IS 'Course title displayed to users';
COMMENT ON COLUMN academy_courses.category IS 'Course category (e.g. dsgvo, ai-act, security)';
COMMENT ON COLUMN academy_courses.passing_score IS 'Minimum score (0-100) required to pass the course';
COMMENT ON COLUMN academy_courses.duration_minutes IS 'Estimated total duration of the course in minutes';
COMMENT ON COLUMN academy_courses.required_for_roles IS 'JSON array of roles required to complete this course';
COMMENT ON COLUMN academy_courses.status IS 'Course status: draft, published, archived';

-- ============================================================================
-- 2. academy_lessons - Individual lessons within a course
-- ============================================================================

CREATE TABLE IF NOT EXISTS academy_lessons (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    course_id UUID NOT NULL REFERENCES academy_courses(id) ON DELETE CASCADE,
    title VARCHAR(255) NOT NULL,
    type VARCHAR(20) NOT NULL,
    content_markdown TEXT,
    video_url VARCHAR(500),
    audio_url VARCHAR(500),
    sort_order INTEGER NOT NULL DEFAULT 0,
    duration_minutes INTEGER,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Indexes for academy_lessons
CREATE INDEX IF NOT EXISTS idx_academy_lessons_course ON academy_lessons(course_id);
CREATE INDEX IF NOT EXISTS idx_academy_lessons_sort ON academy_lessons(course_id, sort_order);

-- Auto-update trigger for academy_lessons.updated_at
CREATE OR REPLACE FUNCTION update_academy_lessons_updated_at()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS trigger_academy_lessons_updated_at ON academy_lessons;
CREATE TRIGGER trigger_academy_lessons_updated_at
    BEFORE UPDATE ON academy_lessons
    FOR EACH ROW
    EXECUTE FUNCTION update_academy_lessons_updated_at();

-- Comments for academy_lessons
COMMENT ON TABLE academy_lessons IS 'Individual lessons belonging to a course';
COMMENT ON COLUMN academy_lessons.course_id IS 'Foreign key to the parent course';
COMMENT ON COLUMN academy_lessons.type IS 'Lesson type: text, video, audio, quiz, interactive';
COMMENT ON COLUMN academy_lessons.content_markdown IS 'Lesson content in Markdown format';
COMMENT ON COLUMN academy_lessons.video_url IS 'URL to video content (if type is video)';
COMMENT ON COLUMN academy_lessons.audio_url IS 'URL to audio content (if type is audio)';
COMMENT ON COLUMN academy_lessons.sort_order IS 'Order of the lesson within the course';
COMMENT ON COLUMN academy_lessons.duration_minutes IS 'Estimated duration of this lesson in minutes';

-- ============================================================================
-- 3. academy_quiz_questions - Quiz questions attached to lessons
-- ============================================================================

CREATE TABLE IF NOT EXISTS academy_quiz_questions (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    lesson_id UUID NOT NULL REFERENCES academy_lessons(id) ON DELETE CASCADE,
    question TEXT NOT NULL,
    options JSONB NOT NULL,
    correct_option_index INTEGER NOT NULL,
    explanation TEXT,
    sort_order INTEGER NOT NULL DEFAULT 0,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Indexes for academy_quiz_questions
CREATE INDEX IF NOT EXISTS idx_academy_quiz_questions_lesson ON academy_quiz_questions(lesson_id);
CREATE INDEX IF NOT EXISTS idx_academy_quiz_questions_sort ON academy_quiz_questions(lesson_id, sort_order);

-- Comments for academy_quiz_questions
COMMENT ON TABLE academy_quiz_questions IS 'Quiz questions belonging to a lesson';
COMMENT ON COLUMN academy_quiz_questions.lesson_id IS 'Foreign key to the parent lesson';
COMMENT ON COLUMN academy_quiz_questions.question IS 'The question text';
COMMENT ON COLUMN academy_quiz_questions.options IS 'JSON array of answer options (strings)';
COMMENT ON COLUMN academy_quiz_questions.correct_option_index IS 'Zero-based index of the correct option';
COMMENT ON COLUMN academy_quiz_questions.explanation IS 'Explanation shown after answering (correct or incorrect)';
COMMENT ON COLUMN academy_quiz_questions.sort_order IS 'Order of the question within the lesson quiz';

-- ============================================================================
-- 4. academy_enrollments - User enrollments in courses
-- ============================================================================

CREATE TABLE IF NOT EXISTS academy_enrollments (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    tenant_id VARCHAR(255) NOT NULL,
    course_id UUID NOT NULL REFERENCES academy_courses(id) ON DELETE CASCADE,
    user_id VARCHAR(255) NOT NULL,
    user_name VARCHAR(255),
    user_email VARCHAR(255),
    status VARCHAR(20) DEFAULT 'not_started',
    progress INTEGER DEFAULT 0,
    started_at TIMESTAMP WITH TIME ZONE,
    completed_at TIMESTAMP WITH TIME ZONE,
    certificate_id UUID,
    deadline TIMESTAMP WITH TIME ZONE,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Indexes for academy_enrollments
CREATE INDEX IF NOT EXISTS idx_academy_enrollments_tenant ON academy_enrollments(tenant_id);
CREATE INDEX IF NOT EXISTS idx_academy_enrollments_course ON academy_enrollments(course_id);
CREATE INDEX IF NOT EXISTS idx_academy_enrollments_user ON academy_enrollments(user_id);
CREATE INDEX IF NOT EXISTS idx_academy_enrollments_status ON academy_enrollments(status);
CREATE INDEX IF NOT EXISTS idx_academy_enrollments_tenant_user ON academy_enrollments(tenant_id, user_id);

-- Auto-update trigger for academy_enrollments.updated_at
CREATE OR REPLACE FUNCTION update_academy_enrollments_updated_at()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS trigger_academy_enrollments_updated_at ON academy_enrollments;
CREATE TRIGGER trigger_academy_enrollments_updated_at
    BEFORE UPDATE ON academy_enrollments
    FOR EACH ROW
    EXECUTE FUNCTION update_academy_enrollments_updated_at();

-- Comments for academy_enrollments
COMMENT ON TABLE academy_enrollments IS 'Tracks user enrollments and progress in courses';
COMMENT ON COLUMN academy_enrollments.tenant_id IS 'Identifier for the tenant';
COMMENT ON COLUMN academy_enrollments.course_id IS 'Foreign key to the enrolled course';
COMMENT ON COLUMN academy_enrollments.user_id IS 'Identifier of the enrolled user';
COMMENT ON COLUMN academy_enrollments.user_name IS 'Display name of the enrolled user';
COMMENT ON COLUMN academy_enrollments.user_email IS 'Email address of the enrolled user';
COMMENT ON COLUMN academy_enrollments.status IS 'Enrollment status: not_started, in_progress, completed, expired';
COMMENT ON COLUMN academy_enrollments.progress IS 'Completion percentage (0-100)';
COMMENT ON COLUMN academy_enrollments.certificate_id IS 'Reference to issued certificate (if completed)';
COMMENT ON COLUMN academy_enrollments.deadline IS 'Deadline by which the course must be completed';

-- ============================================================================
-- 5. academy_certificates - Certificates issued upon course completion
-- ============================================================================

CREATE TABLE IF NOT EXISTS academy_certificates (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    tenant_id VARCHAR(255) NOT NULL,
    enrollment_id UUID NOT NULL UNIQUE REFERENCES academy_enrollments(id) ON DELETE CASCADE,
    course_id UUID NOT NULL REFERENCES academy_courses(id) ON DELETE CASCADE,
    user_id VARCHAR(255) NOT NULL,
    user_name VARCHAR(255),
    course_name VARCHAR(255),
    score INTEGER,
    issued_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    valid_until TIMESTAMP WITH TIME ZONE,
    pdf_url VARCHAR(500)
);

-- Indexes for academy_certificates
CREATE INDEX IF NOT EXISTS idx_academy_certificates_tenant ON academy_certificates(tenant_id);
CREATE INDEX IF NOT EXISTS idx_academy_certificates_user ON academy_certificates(user_id);
CREATE INDEX IF NOT EXISTS idx_academy_certificates_course ON academy_certificates(course_id);
CREATE INDEX IF NOT EXISTS idx_academy_certificates_enrollment ON academy_certificates(enrollment_id);

-- Comments for academy_certificates
COMMENT ON TABLE academy_certificates IS 'Certificates issued when a user completes a course';
COMMENT ON COLUMN academy_certificates.tenant_id IS 'Identifier for the tenant';
COMMENT ON COLUMN academy_certificates.enrollment_id IS 'Unique reference to the enrollment (one certificate per enrollment)';
COMMENT ON COLUMN academy_certificates.course_id IS 'Foreign key to the completed course';
COMMENT ON COLUMN academy_certificates.user_id IS 'Identifier of the certified user';
COMMENT ON COLUMN academy_certificates.user_name IS 'Name of the user as printed on the certificate';
COMMENT ON COLUMN academy_certificates.course_name IS 'Name of the course as printed on the certificate';
COMMENT ON COLUMN academy_certificates.score IS 'Final quiz score achieved (0-100)';
COMMENT ON COLUMN academy_certificates.issued_at IS 'Timestamp when the certificate was issued';
COMMENT ON COLUMN academy_certificates.valid_until IS 'Expiry date of the certificate (NULL = no expiry)';
COMMENT ON COLUMN academy_certificates.pdf_url IS 'URL to the generated certificate PDF';

-- ============================================================================
-- 6. academy_lesson_progress - Per-lesson progress tracking
-- ============================================================================

CREATE TABLE IF NOT EXISTS academy_lesson_progress (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    enrollment_id UUID NOT NULL REFERENCES academy_enrollments(id) ON DELETE CASCADE,
    lesson_id UUID NOT NULL REFERENCES academy_lessons(id) ON DELETE CASCADE,
    completed BOOLEAN DEFAULT false,
    quiz_score INTEGER,
    completed_at TIMESTAMP WITH TIME ZONE,
    CONSTRAINT uq_academy_lesson_progress_enrollment_lesson UNIQUE (enrollment_id, lesson_id)
);

-- Indexes for academy_lesson_progress
CREATE INDEX IF NOT EXISTS idx_academy_lesson_progress_enrollment ON academy_lesson_progress(enrollment_id);
CREATE INDEX IF NOT EXISTS idx_academy_lesson_progress_lesson ON academy_lesson_progress(lesson_id);

-- Comments for academy_lesson_progress
COMMENT ON TABLE academy_lesson_progress IS 'Tracks completion status and quiz scores per lesson per enrollment';
COMMENT ON COLUMN academy_lesson_progress.enrollment_id IS 'Foreign key to the enrollment';
COMMENT ON COLUMN academy_lesson_progress.lesson_id IS 'Foreign key to the lesson';
COMMENT ON COLUMN academy_lesson_progress.completed IS 'Whether the lesson has been completed';
COMMENT ON COLUMN academy_lesson_progress.quiz_score IS 'Quiz score for this lesson (0-100), NULL if no quiz';
COMMENT ON COLUMN academy_lesson_progress.completed_at IS 'Timestamp when the lesson was completed';

-- ============================================================================
-- Helper: Upsert function for lesson progress (ON CONFLICT handling)
-- ============================================================================

CREATE OR REPLACE FUNCTION upsert_academy_lesson_progress(
    p_enrollment_id UUID,
    p_lesson_id UUID,
    p_completed BOOLEAN,
    p_quiz_score INTEGER DEFAULT NULL
)
RETURNS academy_lesson_progress AS $$
DECLARE
    result academy_lesson_progress;
BEGIN
    INSERT INTO academy_lesson_progress (enrollment_id, lesson_id, completed, quiz_score, completed_at)
    VALUES (
        p_enrollment_id,
        p_lesson_id,
        p_completed,
        p_quiz_score,
        CASE WHEN p_completed THEN NOW() ELSE NULL END
    )
    ON CONFLICT ON CONSTRAINT uq_academy_lesson_progress_enrollment_lesson
    DO UPDATE SET
        completed = EXCLUDED.completed,
        quiz_score = COALESCE(EXCLUDED.quiz_score, academy_lesson_progress.quiz_score),
        completed_at = CASE
            WHEN EXCLUDED.completed AND academy_lesson_progress.completed_at IS NULL THEN NOW()
            WHEN NOT EXCLUDED.completed THEN NULL
            ELSE academy_lesson_progress.completed_at
        END
    RETURNING * INTO result;

    RETURN result;
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION upsert_academy_lesson_progress IS 'Insert or update lesson progress with ON CONFLICT handling on the unique (enrollment_id, lesson_id) constraint';

-- ============================================================================
-- Helper: Cleanup function for expired certificates
-- ============================================================================

CREATE OR REPLACE FUNCTION cleanup_expired_academy_certificates(days_past_expiry INTEGER DEFAULT 0)
RETURNS INTEGER AS $$
DECLARE
    deleted_count INTEGER;
BEGIN
    DELETE FROM academy_certificates
    WHERE valid_until IS NOT NULL
      AND valid_until < NOW() - (days_past_expiry || ' days')::INTERVAL;

    GET DIAGNOSTICS deleted_count = ROW_COUNT;
    RETURN deleted_count;
END;
$$ LANGUAGE plpgsql;

COMMENT ON FUNCTION cleanup_expired_academy_certificates IS 'Removes certificates that have expired beyond the specified number of days';
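The two helper functions above can be exercised directly from SQL. A sketch of typical calls, using PostgreSQL's named-argument syntax; the UUID values are placeholders for illustration, not values from this migration:

```sql
-- Record a completed lesson with a quiz score. Calling this again for the
-- same (enrollment, lesson) pair updates the existing row instead of failing.
SELECT upsert_academy_lesson_progress(
    p_enrollment_id => '00000000-0000-0000-0000-000000000001'::uuid,  -- placeholder
    p_lesson_id     => '00000000-0000-0000-0000-000000000002'::uuid,  -- placeholder
    p_completed     => true,
    p_quiz_score    => 85
);

-- Purge certificates whose validity ended more than 30 days ago;
-- returns the number of deleted rows.
SELECT cleanup_expired_academy_certificates(30);
```

Because `quiz_score` is wrapped in `COALESCE`, a later call with `p_quiz_score => NULL` keeps the previously stored score rather than erasing it.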
173
admin-v2/ai-compliance-sdk/internal/db/postgres.go
Normal file
@@ -0,0 +1,173 @@
package db

import (
	"context"
	"encoding/json"
	"fmt"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

// Pool wraps a pgxpool.Pool with SDK-specific methods
type Pool struct {
	*pgxpool.Pool
}

// SDKState represents the state stored in the database
type SDKState struct {
	ID        string          `json:"id"`
	TenantID  string          `json:"tenant_id"`
	UserID    string          `json:"user_id,omitempty"`
	State     json.RawMessage `json:"state"`
	Version   int             `json:"version"`
	CreatedAt time.Time       `json:"created_at"`
	UpdatedAt time.Time       `json:"updated_at"`
}

// NewPostgresPool creates a new database connection pool
func NewPostgresPool(connectionString string) (*Pool, error) {
	config, err := pgxpool.ParseConfig(connectionString)
	if err != nil {
		return nil, fmt.Errorf("failed to parse connection string: %w", err)
	}

	config.MaxConns = 10
	config.MinConns = 2
	config.MaxConnLifetime = 1 * time.Hour
	config.MaxConnIdleTime = 30 * time.Minute

	pool, err := pgxpool.NewWithConfig(context.Background(), config)
	if err != nil {
		return nil, fmt.Errorf("failed to create connection pool: %w", err)
	}

	// Test connection; close the pool on failure so it is not leaked
	if err := pool.Ping(context.Background()); err != nil {
		pool.Close()
		return nil, fmt.Errorf("failed to ping database: %w", err)
	}

	return &Pool{Pool: pool}, nil
}

// GetState retrieves state for a tenant
func (p *Pool) GetState(ctx context.Context, tenantID string) (*SDKState, error) {
	query := `
		SELECT id, tenant_id, user_id, state, version, created_at, updated_at
		FROM sdk_states
		WHERE tenant_id = $1
	`

	var state SDKState
	err := p.QueryRow(ctx, query, tenantID).Scan(
		&state.ID,
		&state.TenantID,
		&state.UserID,
		&state.State,
		&state.Version,
		&state.CreatedAt,
		&state.UpdatedAt,
	)
	if err != nil {
		return nil, err
	}

	return &state, nil
}

// SaveState saves or updates state for a tenant with optimistic locking.
// If expectedVersion is non-nil and does not match the stored version, the
// conditional update matches no row and the scan returns pgx.ErrNoRows.
func (p *Pool) SaveState(ctx context.Context, tenantID string, userID string, state json.RawMessage, expectedVersion *int) (*SDKState, error) {
	query := `
		INSERT INTO sdk_states (tenant_id, user_id, state, version)
		VALUES ($1, $2, $3, 1)
		ON CONFLICT (tenant_id) DO UPDATE SET
			state = $3,
			user_id = COALESCE($2, sdk_states.user_id),
			version = sdk_states.version + 1,
			updated_at = NOW()
		WHERE ($4::int IS NULL OR sdk_states.version = $4)
		RETURNING id, tenant_id, user_id, state, version, created_at, updated_at
	`

	var result SDKState
	err := p.QueryRow(ctx, query, tenantID, userID, state, expectedVersion).Scan(
		&result.ID,
		&result.TenantID,
		&result.UserID,
		&result.State,
		&result.Version,
		&result.CreatedAt,
		&result.UpdatedAt,
	)
	if err != nil {
		return nil, err
	}

	return &result, nil
}

// DeleteState deletes state for a tenant
func (p *Pool) DeleteState(ctx context.Context, tenantID string) error {
	query := `DELETE FROM sdk_states WHERE tenant_id = $1`
	_, err := p.Exec(ctx, query, tenantID)
	return err
}

// InMemoryStore provides an in-memory fallback when database is not available
type InMemoryStore struct {
	states map[string]*SDKState
}

// NewInMemoryStore creates a new in-memory store
func NewInMemoryStore() *InMemoryStore {
	return &InMemoryStore{
		states: make(map[string]*SDKState),
	}
}

// GetState retrieves state from memory
func (s *InMemoryStore) GetState(tenantID string) (*SDKState, error) {
	state, ok := s.states[tenantID]
	if !ok {
		return nil, fmt.Errorf("state not found")
	}
	return state, nil
}

// SaveState saves state to memory
func (s *InMemoryStore) SaveState(tenantID string, userID string, state json.RawMessage, expectedVersion *int) (*SDKState, error) {
	existing, exists := s.states[tenantID]

	// Optimistic locking check
	if expectedVersion != nil && exists && existing.Version != *expectedVersion {
		return nil, fmt.Errorf("version conflict")
	}

	now := time.Now()
	version := 1
	createdAt := now

	if exists {
		version = existing.Version + 1
		createdAt = existing.CreatedAt
	}

	newState := &SDKState{
		ID:        fmt.Sprintf("%s-%d", tenantID, time.Now().UnixNano()),
		TenantID:  tenantID,
		UserID:    userID,
		State:     state,
		Version:   version,
		CreatedAt: createdAt,
		UpdatedAt: now,
	}

	s.states[tenantID] = newState
	return newState, nil
}

// DeleteState deletes state from memory
func (s *InMemoryStore) DeleteState(tenantID string) error {
	delete(s.states, tenantID)
	return nil
}
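The optimistic-locking rule used by both `SaveState` implementations (reject a write whose expected version no longer matches the stored one) can be sketched in isolation. The `store` type below is a minimal stand-in for illustration, not part of the SDK:

```go
package main

import "fmt"

// store is a minimal stand-in for InMemoryStore: one version counter and one
// state blob per tenant.
type store struct {
	versions map[string]int
	states   map[string]string
}

// save applies the same optimistic-locking rule as InMemoryStore.SaveState:
// if expectedVersion is non-nil and differs from the stored version, the
// write is rejected; otherwise the version is bumped.
func (s *store) save(tenant, state string, expectedVersion *int) (int, error) {
	cur, exists := s.versions[tenant]
	if expectedVersion != nil && exists && cur != *expectedVersion {
		return 0, fmt.Errorf("version conflict: have %d, expected %d", cur, *expectedVersion)
	}
	next := 1
	if exists {
		next = cur + 1
	}
	s.versions[tenant] = next
	s.states[tenant] = state
	return next, nil
}

func main() {
	s := &store{versions: map[string]int{}, states: map[string]string{}}

	v1, _ := s.save("tenant-a", `{"step":1}`, nil)  // first write, version 1
	v2, _ := s.save("tenant-a", `{"step":2}`, &v1)  // matching version, accepted
	_, err := s.save("tenant-a", `{"step":3}`, &v1) // stale version, rejected

	fmt.Println(v1, v2, err != nil) // → 1 2 true
}
```

Note the one behavioral difference between the backends: the Postgres version surfaces a conflict as `pgx.ErrNoRows` (the conditional `UPDATE` matches no row), while the in-memory version returns an explicit "version conflict" error.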
371
admin-v2/ai-compliance-sdk/internal/gci/engine.go
Normal file
@@ -0,0 +1,371 @@
|
||||
package gci
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Engine calculates the GCI score
|
||||
type Engine struct{}
|
||||
|
||||
// NewEngine creates a new GCI calculation engine
|
||||
func NewEngine() *Engine {
|
||||
return &Engine{}
|
||||
}
|
||||
|
||||
// Calculate computes the full GCI result for a tenant
|
||||
func (e *Engine) Calculate(tenantID string, profileID string) *GCIResult {
|
||||
now := time.Now()
|
||||
profile := GetProfile(profileID)
|
||||
auditTrail := []AuditEntry{}
|
||||
|
||||
// Step 1: Get module data (mock for now)
|
||||
modules := MockModuleData(tenantID)
|
||||
certDates := MockCertificateData()
|
||||
|
||||
// Step 2: Calculate Level 1 - Module Scores with validity
|
||||
for i := range modules {
|
||||
m := &modules[i]
|
||||
if m.Assigned > 0 {
|
||||
m.RawScore = float64(m.Completed) / float64(m.Assigned) * 100.0
|
||||
}
|
||||
// Apply validity factor
|
||||
if validUntil, ok := certDates[m.ModuleID]; ok {
|
||||
m.ValidityFactor = CalculateValidityFactor(validUntil, now)
|
||||
} else {
|
||||
m.ValidityFactor = 1.0 // No certificate tracking = assume valid
|
||||
}
|
||||
m.FinalScore = m.RawScore * m.ValidityFactor
|
||||
|
||||
if m.ValidityFactor < 1.0 {
|
||||
auditTrail = append(auditTrail, AuditEntry{
|
||||
Timestamp: now,
|
||||
Factor: "validity_decay",
|
||||
Description: fmt.Sprintf("Modul '%s': Gueltigkeitsfaktor %.2f (Zertifikat laeuft ab/abgelaufen)", m.ModuleName, m.ValidityFactor),
|
||||
Value: m.ValidityFactor,
|
||||
Impact: "negative",
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// Step 3: Calculate Level 2 - Risk-Weighted Scores per area
|
||||
areaModules := map[string][]ModuleScore{
|
||||
"dsgvo": {},
|
||||
"nis2": {},
|
||||
"iso27001": {},
|
||||
"ai_act": {},
|
||||
}
|
||||
for _, m := range modules {
|
||||
if _, ok := areaModules[m.Category]; ok {
|
||||
areaModules[m.Category] = append(areaModules[m.Category], m)
|
||||
}
|
||||
}
|
||||
|
||||
level2Areas := []RiskWeightedScore{}
|
||||
areaNames := map[string]string{
|
||||
"dsgvo": "DSGVO",
|
||||
"nis2": "NIS2",
|
||||
"iso27001": "ISO 27001",
|
||||
"ai_act": "EU AI Act",
|
||||
}
|
||||
|
||||
for areaID, mods := range areaModules {
|
||||
rws := RiskWeightedScore{
|
||||
AreaID: areaID,
|
||||
AreaName: areaNames[areaID],
|
||||
Modules: mods,
|
||||
}
|
||||
for _, m := range mods {
|
||||
rws.WeightedSum += m.FinalScore * m.RiskWeight
|
||||
rws.TotalWeight += m.RiskWeight
|
||||
}
|
||||
if rws.TotalWeight > 0 {
|
||||
rws.AreaScore = rws.WeightedSum / rws.TotalWeight
|
||||
}
|
||||
level2Areas = append(level2Areas, rws)
|
||||
}
|
||||
|
||||
// Step 4: Calculate Level 3 - Regulation Area Scores
|
||||
areaScores := []RegulationAreaScore{}
|
||||
for _, rws := range level2Areas {
|
||||
weight := profile.Weights[rws.AreaID]
|
||||
completedCount := 0
|
||||
for _, m := range rws.Modules {
|
||||
if m.Completed >= m.Assigned && m.Assigned > 0 {
|
||||
completedCount++
|
||||
}
|
||||
}
|
||||
ras := RegulationAreaScore{
|
||||
RegulationID: rws.AreaID,
|
||||
RegulationName: rws.AreaName,
|
||||
Score: math.Round(rws.AreaScore*100) / 100,
|
||||
Weight: weight,
|
||||
WeightedScore: rws.AreaScore * weight,
|
||||
ModuleCount: len(rws.Modules),
|
||||
CompletedCount: completedCount,
|
||||
}
|
||||
areaScores = append(areaScores, ras)
|
||||
|
||||
auditTrail = append(auditTrail, AuditEntry{
|
||||
Timestamp: now,
|
||||
Factor: "area_score",
|
||||
Description: fmt.Sprintf("Bereich '%s': Score %.1f, Gewicht %.0f%%", rws.AreaName, rws.AreaScore, weight*100),
|
||||
Value: rws.AreaScore,
|
||||
Impact: "neutral",
|
||||
})
|
||||
}
|
||||
|
||||
// Step 5: Calculate raw GCI
|
||||
rawGCI := 0.0
|
||||
totalWeight := 0.0
|
||||
for _, ras := range areaScores {
|
||||
rawGCI += ras.WeightedScore
|
||||
totalWeight += ras.Weight
|
||||
}
|
||||
if totalWeight > 0 {
|
||||
rawGCI = rawGCI / totalWeight
|
||||
}
|
||||
|
||||
// Step 6: Apply Criticality Multiplier
|
||||
criticalityMult := calculateCriticalityMultiplier(modules)
|
||||
auditTrail = append(auditTrail, AuditEntry{
|
||||
Timestamp: now,
|
||||
Factor: "criticality_multiplier",
|
||||
Description: fmt.Sprintf("Kritikalitaetsmultiplikator: %.3f", criticalityMult),
|
||||
Value: criticalityMult,
|
||||
Impact: func() string {
|
||||
if criticalityMult < 1.0 {
|
||||
return "negative"
|
||||
}
|
||||
return "neutral"
|
||||
}(),
|
||||
})
|
||||
|
||||
// Step 7: Apply Incident Adjustment
|
||||
openInc, critInc := MockIncidentData()
|
||||
incidentAdj := calculateIncidentAdjustment(openInc, critInc)
|
||||
auditTrail = append(auditTrail, AuditEntry{
|
||||
Timestamp: now,
|
||||
Factor: "incident_adjustment",
|
||||
Description: fmt.Sprintf("Vorfallsanpassung: %.3f (%d offen, %d kritisch)", incidentAdj, openInc, critInc),
|
||||
Value: incidentAdj,
|
||||
Impact: "negative",
|
||||
})
|
||||
|
||||
// Step 8: Final GCI
|
||||
finalGCI := rawGCI * criticalityMult * incidentAdj
|
||||
finalGCI = math.Max(0, math.Min(100, math.Round(finalGCI*10)/10))
|
||||
|
||||
// Step 9: Determine Maturity Level
|
||||
maturity := determineMaturityLevel(finalGCI)
|
||||
|
||||
auditTrail = append(auditTrail, AuditEntry{
|
||||
Timestamp: now,
|
||||
Factor: "final_gci",
|
||||
Description: fmt.Sprintf("GCI-Endergebnis: %.1f → Reifegrad: %s", finalGCI, MaturityLabels[maturity]),
|
||||
Value: finalGCI,
|
||||
Impact: "neutral",
|
||||
})
|
||||
|
||||
return &GCIResult{
|
||||
TenantID: tenantID,
|
||||
GCIScore: finalGCI,
|
||||
MaturityLevel: maturity,
|
||||
MaturityLabel: MaturityLabels[maturity],
|
||||
CalculatedAt: now,
|
||||
Profile: profileID,
|
||||
AreaScores: areaScores,
|
||||
CriticalityMult: criticalityMult,
|
||||
IncidentAdj: incidentAdj,
|
||||
AuditTrail: auditTrail,
|
||||
}
|
||||
}
|
||||
|
||||
// CalculateBreakdown returns the full 4-level breakdown
|
||||
func (e *Engine) CalculateBreakdown(tenantID string, profileID string) *GCIBreakdown {
|
||||
result := e.Calculate(tenantID, profileID)
|
||||
modules := MockModuleData(tenantID)
|
||||
certDates := MockCertificateData()
|
||||
now := time.Now()
|
||||
|
||||
// Recalculate module scores for the breakdown
|
||||
for i := range modules {
|
||||
m := &modules[i]
|
||||
if m.Assigned > 0 {
|
||||
m.RawScore = float64(m.Completed) / float64(m.Assigned) * 100.0
|
||||
}
|
||||
if validUntil, ok := certDates[m.ModuleID]; ok {
|
||||
m.ValidityFactor = CalculateValidityFactor(validUntil, now)
|
||||
} else {
|
||||
m.ValidityFactor = 1.0
|
||||
}
|
||||
m.FinalScore = m.RawScore * m.ValidityFactor
|
||||
}
|
||||
|
||||
// Build Level 2 areas
|
||||
areaModules := map[string][]ModuleScore{}
|
||||
for _, m := range modules {
|
||||
areaModules[m.Category] = append(areaModules[m.Category], m)
|
||||
}
|
||||
|
||||
areaNames := map[string]string{"dsgvo": "DSGVO", "nis2": "NIS2", "iso27001": "ISO 27001", "ai_act": "EU AI Act"}
|
||||
level2 := []RiskWeightedScore{}
|
||||
for areaID, mods := range areaModules {
|
||||
rws := RiskWeightedScore{AreaID: areaID, AreaName: areaNames[areaID], Modules: mods}
|
||||
for _, m := range mods {
|
||||
rws.WeightedSum += m.FinalScore * m.RiskWeight
|
||||
rws.TotalWeight += m.RiskWeight
|
||||
}
|
||||
if rws.TotalWeight > 0 {
|
||||
rws.AreaScore = rws.WeightedSum / rws.TotalWeight
|
||||
}
|
||||
level2 = append(level2, rws)
|
||||
}
|
||||
|
||||
return &GCIBreakdown{
|
||||
GCIResult: *result,
|
||||
Level1Modules: modules,
|
||||
Level2Areas: level2,
|
||||
}
|
||||
}
|
||||
|
||||
// GetHistory returns historical GCI snapshots
|
||||
func (e *Engine) GetHistory(tenantID string) []GCISnapshot {
|
||||
// Add current score to history
|
||||
result := e.Calculate(tenantID, "default")
|
||||
history := MockGCIHistory(tenantID)
|
||||
current := GCISnapshot{
|
||||
TenantID: tenantID,
|
||||
Score: result.GCIScore,
|
||||
MaturityLevel: result.MaturityLevel,
|
||||
AreaScores: make(map[string]float64),
|
||||
CalculatedAt: result.CalculatedAt,
|
||||
}
|
||||
for _, as := range result.AreaScores {
|
||||
current.AreaScores[as.RegulationID] = as.Score
|
||||
}
|
||||
history = append(history, current)
|
||||
return history
|
||||
}
|
||||
|
||||
// GetMatrix returns the compliance matrix (roles x regulations)
|
||||
func (e *Engine) GetMatrix(tenantID string) []ComplianceMatrixEntry {
|
||||
modules := MockModuleData(tenantID)
|
||||
|
||||
roles := []struct {
|
||||
ID string
|
||||
Name string
|
||||
}{
|
||||
{"management", "Geschaeftsfuehrung"},
|
||||
{"it_security", "IT-Sicherheit / CISO"},
|
||||
{"data_protection", "Datenschutz / DSB"},
|
||||
{"hr", "Personalwesen"},
|
||||
{"general", "Allgemeine Mitarbeiter"},
|
||||
}
|
||||
|
||||
// Define which modules are relevant per role
|
||||
roleModules := map[string][]string{
|
||||
"management": {"dsgvo-grundlagen", "nis2-management", "ai-governance", "iso-isms"},
|
||||
"it_security": {"nis2-risikomanagement", "nis2-incident-response", "iso-zugangssteuerung", "iso-kryptografie", "ai-hochrisiko"},
|
||||
"data_protection": {"dsgvo-grundlagen", "dsgvo-betroffenenrechte", "dsgvo-tom", "dsgvo-dsfa", "dsgvo-auftragsverarbeitung"},
|
||||
"hr": {"dsgvo-grundlagen", "dsgvo-betroffenenrechte", "nis2-management"},
|
||||
"general": {"dsgvo-grundlagen", "nis2-risikomanagement", "ai-risikokategorien", "ai-transparenz"},
|
||||
}
|
||||
|
||||
moduleMap := map[string]ModuleScore{}
|
||||
for _, m := range modules {
|
||||
moduleMap[m.ModuleID] = m
|
||||
}
|
||||
|
||||
entries := []ComplianceMatrixEntry{}
|
||||
for _, role := range roles {
|
||||
entry := ComplianceMatrixEntry{
|
||||
Role: role.ID,
|
||||
RoleName: role.Name,
|
||||
Regulations: map[string]float64{},
|
||||
}
|
||||
|
||||
regScores := map[string][]float64{}
|
||||
requiredModuleIDs := roleModules[role.ID]
|
||||
entry.RequiredModules = len(requiredModuleIDs)
|
||||
|
||||
for _, modID := range requiredModuleIDs {
|
||||
if m, ok := moduleMap[modID]; ok {
|
||||
score := 0.0
|
||||
if m.Assigned > 0 {
|
||||
score = float64(m.Completed) / float64(m.Assigned) * 100
|
||||
}
|
||||
regScores[m.Category] = append(regScores[m.Category], score)
|
||||
if m.Completed >= m.Assigned && m.Assigned > 0 {
|
||||
entry.CompletedModules++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
totalScore := 0.0
|
||||
count := 0
|
||||
for reg, scores := range regScores {
|
||||
sum := 0.0
|
||||
for _, s := range scores {
|
||||
sum += s
|
||||
}
|
||||
avg := sum / float64(len(scores))
|
||||
entry.Regulations[reg] = math.Round(avg*10) / 10
|
||||
totalScore += avg
|
||||
count++
|
||||
}
|
||||
if count > 0 {
|
||||
entry.OverallScore = math.Round(totalScore/float64(count)*10) / 10
|
||||
}
|
||||
|
||||
entries = append(entries, entry)
|
||||
}
|
||||
|
||||
return entries
|
||||
}
|
||||
|
||||
// Helper functions
|
||||
|
||||
func calculateCriticalityMultiplier(modules []ModuleScore) float64 {
|
||||
criticalModules := 0
|
||||
criticalLow := 0
|
||||
for _, m := range modules {
|
||||
if m.RiskWeight >= 2.5 {
|
||||
criticalModules++
|
||||
if m.FinalScore < 50 {
|
||||
criticalLow++
|
||||
}
|
||||
}
|
||||
}
|
||||
if criticalModules == 0 {
|
||||
return 1.0
|
||||
}
|
||||
// Reduce score if critical modules have low completion
|
||||
ratio := float64(criticalLow) / float64(criticalModules)
|
||||
return 1.0 - (ratio * 0.15) // max 15% reduction
|
||||
}
|
||||
|
||||
func calculateIncidentAdjustment(openIncidents, criticalIncidents int) float64 {
|
||||
adj := 1.0
|
||||
// Each open incident reduces by 1%
|
||||
adj -= float64(openIncidents) * 0.01
|
||||
// Each critical incident reduces by additional 3%
|
||||
adj -= float64(criticalIncidents) * 0.03
|
||||
return math.Max(0.8, adj) // minimum 80% (max 20% reduction)
|
||||
}
|
||||
|
||||
func determineMaturityLevel(score float64) string {
|
||||
switch {
|
||||
case score >= 90:
|
||||
return MaturityOptimized
|
||||
case score >= 75:
|
||||
return MaturityManaged
|
||||
case score >= 60:
|
||||
return MaturityDefined
|
||||
case score >= 40:
|
||||
return MaturityReactive
|
||||
default:
|
||||
return MaturityHighRisk
|
||||
}
|
||||
}
188
admin-v2/ai-compliance-sdk/internal/gci/iso_gap_analysis.go
Normal file
@@ -0,0 +1,188 @@
package gci

import "math"

// ISOGapAnalysis represents the complete ISO 27001 gap analysis
type ISOGapAnalysis struct {
	TenantID          string               `json:"tenant_id"`
	TotalControls     int                  `json:"total_controls"`
	CoveredFull       int                  `json:"covered_full"`
	CoveredPartial    int                  `json:"covered_partial"`
	NotCovered        int                  `json:"not_covered"`
	CoveragePercent   float64              `json:"coverage_percent"`
	CategorySummaries []ISOCategorySummary `json:"category_summaries"`
	ControlDetails    []ISOControlDetail   `json:"control_details"`
	Gaps              []ISOGap             `json:"gaps"`
}

// ISOControlDetail shows coverage status for a single control
type ISOControlDetail struct {
	Control       ISOControl `json:"control"`
	CoverageLevel string     `json:"coverage_level"` // full, partial, none
	CoveredBy     []string   `json:"covered_by"`     // module IDs
	Score         float64    `json:"score"`          // 0-100
}

// ISOGap represents an identified gap in ISO coverage
type ISOGap struct {
	ControlID      string `json:"control_id"`
	ControlName    string `json:"control_name"`
	Category       string `json:"category"`
	Priority       string `json:"priority"` // high, medium, low
	Recommendation string `json:"recommendation"`
}

// CalculateISOGapAnalysis performs the ISO 27001 gap analysis
func CalculateISOGapAnalysis(tenantID string) *ISOGapAnalysis {
	modules := MockModuleData(tenantID)
	moduleMap := map[string]ModuleScore{}
	for _, m := range modules {
		moduleMap[m.ModuleID] = m
	}

	// Build reverse mapping: control -> modules covering it
	controlCoverage := map[string][]string{}
	controlCoverageLevel := map[string]string{}
	for _, mapping := range DefaultISOModuleMappings {
		for _, controlID := range mapping.ISOControls {
			controlCoverage[controlID] = append(controlCoverage[controlID], mapping.ModuleID)
			// Use the highest coverage level
			existingLevel := controlCoverageLevel[controlID]
			if mapping.CoverageLevel == "full" || existingLevel == "" {
				controlCoverageLevel[controlID] = mapping.CoverageLevel
			}
		}
	}

	// Analyze each control
	details := []ISOControlDetail{}
	gaps := []ISOGap{}
	coveredFull := 0
	coveredPartial := 0
	notCovered := 0

	categoryCounts := map[string]*ISOCategorySummary{
		"A.5": {CategoryID: "A.5", CategoryName: "Organisatorische Massnahmen"},
		"A.6": {CategoryID: "A.6", CategoryName: "Personelle Massnahmen"},
		"A.7": {CategoryID: "A.7", CategoryName: "Physische Massnahmen"},
		"A.8": {CategoryID: "A.8", CategoryName: "Technologische Massnahmen"},
	}

	for _, control := range ISOControls {
		coveredBy := controlCoverage[control.ID]
		level := controlCoverageLevel[control.ID]

		if len(coveredBy) == 0 {
			level = "none"
		}

		// Calculate score based on module completion
		score := 0.0
		if len(coveredBy) > 0 {
			scoreSum := 0.0
			count := 0
			for _, modID := range coveredBy {
				if m, ok := moduleMap[modID]; ok && m.Assigned > 0 {
					scoreSum += float64(m.Completed) / float64(m.Assigned) * 100
					count++
				}
			}
			if count > 0 {
				score = scoreSum / float64(count)
			}
			// Adjust for coverage level
			if level == "partial" {
				score *= 0.7 // partial coverage reduces effective score
			}
		}

		detail := ISOControlDetail{
			Control:       control,
			CoverageLevel: level,
			CoveredBy:     coveredBy,
			Score:         math.Round(score*10) / 10,
		}
		details = append(details, detail)

		// Count by category
		cat := categoryCounts[control.CategoryID]
		if cat != nil {
			cat.TotalControls++
			switch level {
			case "full":
				coveredFull++
				cat.CoveredFull++
			case "partial":
				coveredPartial++
				cat.CoveredPartial++
			default:
				notCovered++
				cat.NotCovered++
				// Generate gap recommendation
				gap := ISOGap{
					ControlID:      control.ID,
					ControlName:    control.Name,
					Category:       control.Category,
					Priority:       determineGapPriority(control),
					Recommendation: generateGapRecommendation(control),
				}
				gaps = append(gaps, gap)
			}
		}
	}

	totalControls := len(ISOControls)
	coveragePercent := 0.0
	if totalControls > 0 {
		coveragePercent = math.Round(float64(coveredFull+coveredPartial)/float64(totalControls)*100*10) / 10
	}

	summaries := []ISOCategorySummary{}
	for _, catID := range []string{"A.5", "A.6", "A.7", "A.8"} {
		if cat, ok := categoryCounts[catID]; ok {
			summaries = append(summaries, *cat)
		}
	}

	return &ISOGapAnalysis{
		TenantID:          tenantID,
		TotalControls:     totalControls,
		CoveredFull:       coveredFull,
		CoveredPartial:    coveredPartial,
		NotCovered:        notCovered,
		CoveragePercent:   coveragePercent,
		CategorySummaries: summaries,
		ControlDetails:    details,
		Gaps:              gaps,
	}
}

func determineGapPriority(control ISOControl) string {
	// High priority for access, incident, and data protection controls
	highPriority := map[string]bool{
		"A.5.15": true, "A.5.17": true, "A.5.24": true, "A.5.26": true,
		"A.5.34": true, "A.8.2": true, "A.8.5": true, "A.8.7": true,
		"A.8.10": true, "A.8.20": true,
	}
	if highPriority[control.ID] {
		return "high"
	}
	// Medium for organizational and people controls
	if control.CategoryID == "A.5" || control.CategoryID == "A.6" {
		return "medium"
	}
	return "low"
}

func generateGapRecommendation(control ISOControl) string {
	recommendations := map[string]string{
		"organizational": "Erstellen Sie eine Richtlinie und weisen Sie Verantwortlichkeiten zu fuer: " + control.Name,
		"people":         "Implementieren Sie Schulungen und Prozesse fuer: " + control.Name,
		"physical":       "Definieren Sie physische Sicherheitsmassnahmen fuer: " + control.Name,
		"technological":  "Implementieren Sie technische Kontrollen fuer: " + control.Name,
	}
	if rec, ok := recommendations[control.Category]; ok {
		return rec
	}
	return "Massnahmen implementieren fuer: " + control.Name
}
207
admin-v2/ai-compliance-sdk/internal/gci/iso_mapping.go
Normal file
@@ -0,0 +1,207 @@
package gci

// ISOControl represents an ISO 27001:2022 Annex A control
type ISOControl struct {
	ID          string `json:"id"`          // e.g. "A.5.1"
	Name        string `json:"name"`
	Category    string `json:"category"`    // organizational, people, physical, technological
	CategoryID  string `json:"category_id"` // A.5, A.6, A.7, A.8
	Description string `json:"description"`
}

// ISOModuleMapping maps a course/module to ISO controls
type ISOModuleMapping struct {
	ModuleID      string   `json:"module_id"`
	ModuleName    string   `json:"module_name"`
	ISOControls   []string `json:"iso_controls"`   // control IDs
	CoverageLevel string   `json:"coverage_level"` // full, partial, none
}

// ISO 27001:2022 Annex A controls (all 93 controls)
var ISOControls = []ISOControl{
	// A.5 Organizational Controls (37 controls)
	{ID: "A.5.1", Name: "Informationssicherheitsrichtlinien", Category: "organizational", CategoryID: "A.5", Description: "Informationssicherheitsleitlinie und themenspezifische Richtlinien"},
	{ID: "A.5.2", Name: "Rollen und Verantwortlichkeiten", Category: "organizational", CategoryID: "A.5", Description: "Definition und Zuweisung von Informationssicherheitsrollen"},
	{ID: "A.5.3", Name: "Aufgabentrennung", Category: "organizational", CategoryID: "A.5", Description: "Trennung von konfligierenden Aufgaben und Verantwortlichkeiten"},
	{ID: "A.5.4", Name: "Managementverantwortung", Category: "organizational", CategoryID: "A.5", Description: "Fuehrungskraefte muessen Sicherheitsrichtlinien einhalten und durchsetzen"},
	{ID: "A.5.5", Name: "Kontakt mit Behoerden", Category: "organizational", CategoryID: "A.5", Description: "Pflege von Kontakten zu relevanten Aufsichtsbehoerden"},
	{ID: "A.5.6", Name: "Kontakt mit Interessengruppen", Category: "organizational", CategoryID: "A.5", Description: "Kontakt zu Fachgruppen und Sicherheitsforen"},
	{ID: "A.5.7", Name: "Bedrohungsintelligenz", Category: "organizational", CategoryID: "A.5", Description: "Sammlung und Analyse von Bedrohungsinformationen"},
	{ID: "A.5.8", Name: "Informationssicherheit im Projektmanagement", Category: "organizational", CategoryID: "A.5", Description: "Integration von Sicherheit in Projektmanagement"},
	{ID: "A.5.9", Name: "Inventar der Informationswerte", Category: "organizational", CategoryID: "A.5", Description: "Inventarisierung und Verwaltung von Informationswerten"},
	{ID: "A.5.10", Name: "Zulaessige Nutzung", Category: "organizational", CategoryID: "A.5", Description: "Regeln fuer die zulaessige Nutzung von Informationswerten"},
	{ID: "A.5.11", Name: "Rueckgabe von Werten", Category: "organizational", CategoryID: "A.5", Description: "Rueckgabe von Werten bei Beendigung"},
	{ID: "A.5.12", Name: "Klassifizierung von Informationen", Category: "organizational", CategoryID: "A.5", Description: "Klassifizierungsschema fuer Informationen"},
	{ID: "A.5.13", Name: "Kennzeichnung von Informationen", Category: "organizational", CategoryID: "A.5", Description: "Kennzeichnung gemaess Klassifizierung"},
	{ID: "A.5.14", Name: "Informationsuebertragung", Category: "organizational", CategoryID: "A.5", Description: "Regeln fuer sichere Informationsuebertragung"},
	{ID: "A.5.15", Name: "Zugangssteuerung", Category: "organizational", CategoryID: "A.5", Description: "Zugangssteuerungsrichtlinie"},
	{ID: "A.5.16", Name: "Identitaetsmanagement", Category: "organizational", CategoryID: "A.5", Description: "Verwaltung des Lebenszyklus von Identitaeten"},
	{ID: "A.5.17", Name: "Authentifizierungsinformationen", Category: "organizational", CategoryID: "A.5", Description: "Verwaltung von Authentifizierungsinformationen"},
	{ID: "A.5.18", Name: "Zugriffsrechte", Category: "organizational", CategoryID: "A.5", Description: "Vergabe, Pruefung und Entzug von Zugriffsrechten"},
	{ID: "A.5.19", Name: "Informationssicherheit in Lieferantenbeziehungen", Category: "organizational", CategoryID: "A.5", Description: "Sicherheitsanforderungen an Lieferanten"},
	{ID: "A.5.20", Name: "Informationssicherheit in Lieferantenvereinbarungen", Category: "organizational", CategoryID: "A.5", Description: "Sicherheitsklauseln in Vertraegen"},
	{ID: "A.5.21", Name: "IKT-Lieferkette", Category: "organizational", CategoryID: "A.5", Description: "Management der IKT-Lieferkette"},
	{ID: "A.5.22", Name: "Ueberwachung von Lieferantenservices", Category: "organizational", CategoryID: "A.5", Description: "Ueberwachung und Pruefung von Lieferantenservices"},
	{ID: "A.5.23", Name: "Cloud-Sicherheit", Category: "organizational", CategoryID: "A.5", Description: "Informationssicherheit fuer Cloud-Dienste"},
	{ID: "A.5.24", Name: "Vorfallsmanagement - Planung", Category: "organizational", CategoryID: "A.5", Description: "Planung und Vorbereitung des Vorfallsmanagements"},
	{ID: "A.5.25", Name: "Vorfallsbeurteilung", Category: "organizational", CategoryID: "A.5", Description: "Beurteilung und Entscheidung ueber Sicherheitsereignisse"},
	{ID: "A.5.26", Name: "Vorfallsreaktion", Category: "organizational", CategoryID: "A.5", Description: "Reaktion auf Sicherheitsvorfaelle"},
	{ID: "A.5.27", Name: "Aus Vorfaellen lernen", Category: "organizational", CategoryID: "A.5", Description: "Lessons Learned aus Sicherheitsvorfaellen"},
	{ID: "A.5.28", Name: "Beweissicherung", Category: "organizational", CategoryID: "A.5", Description: "Identifikation und Sicherung von Beweisen"},
	{ID: "A.5.29", Name: "Informationssicherheit bei Stoerungen", Category: "organizational", CategoryID: "A.5", Description: "Sicherheit waehrend Stoerungen und Krisen"},
	{ID: "A.5.30", Name: "IKT-Bereitschaft fuer Business Continuity", Category: "organizational", CategoryID: "A.5", Description: "IKT-Bereitschaft zur Unterstuetzung der Geschaeftskontinuitaet"},
	{ID: "A.5.31", Name: "Rechtliche Anforderungen", Category: "organizational", CategoryID: "A.5", Description: "Einhaltung rechtlicher und vertraglicher Anforderungen"},
	{ID: "A.5.32", Name: "Geistige Eigentumsrechte", Category: "organizational", CategoryID: "A.5", Description: "Schutz geistigen Eigentums"},
	{ID: "A.5.33", Name: "Schutz von Aufzeichnungen", Category: "organizational", CategoryID: "A.5", Description: "Schutz von Aufzeichnungen vor Verlust und Manipulation"},
	{ID: "A.5.34", Name: "Datenschutz und PII", Category: "organizational", CategoryID: "A.5", Description: "Datenschutz und Schutz personenbezogener Daten"},
	{ID: "A.5.35", Name: "Unabhaengige Ueberpruefung", Category: "organizational", CategoryID: "A.5", Description: "Unabhaengige Ueberpruefung der Informationssicherheit"},
	{ID: "A.5.36", Name: "Richtlinienkonformitaet", Category: "organizational", CategoryID: "A.5", Description: "Einhaltung von Richtlinien und Standards"},
	{ID: "A.5.37", Name: "Dokumentierte Betriebsverfahren", Category: "organizational", CategoryID: "A.5", Description: "Dokumentation von Betriebsverfahren"},

	// A.6 People Controls (8 controls)
	{ID: "A.6.1", Name: "Ueberpruefung", Category: "people", CategoryID: "A.6", Description: "Hintergrundpruefungen vor der Einstellung"},
	{ID: "A.6.2", Name: "Beschaeftigungsbedingungen", Category: "people", CategoryID: "A.6", Description: "Sicherheitsanforderungen in Arbeitsvertraegen"},
	{ID: "A.6.3", Name: "Sensibilisierung und Schulung", Category: "people", CategoryID: "A.6", Description: "Awareness-Programme und Schulungen"},
	{ID: "A.6.4", Name: "Disziplinarverfahren", Category: "people", CategoryID: "A.6", Description: "Formales Disziplinarverfahren"},
	{ID: "A.6.5", Name: "Verantwortlichkeiten nach Beendigung", Category: "people", CategoryID: "A.6", Description: "Sicherheitspflichten nach Beendigung des Beschaeftigungsverhaeltnisses"},
	{ID: "A.6.6", Name: "Vertraulichkeitsvereinbarungen", Category: "people", CategoryID: "A.6", Description: "Vertraulichkeits- und Geheimhaltungsvereinbarungen"},
	{ID: "A.6.7", Name: "Remote-Arbeit", Category: "people", CategoryID: "A.6", Description: "Sicherheitsmassnahmen fuer Remote-Arbeit"},
	{ID: "A.6.8", Name: "Meldung von Sicherheitsereignissen", Category: "people", CategoryID: "A.6", Description: "Mechanismen zur Meldung von Sicherheitsereignissen"},

	// A.7 Physical Controls (14 controls)
	{ID: "A.7.1", Name: "Physische Sicherheitsperimeter", Category: "physical", CategoryID: "A.7", Description: "Definition physischer Sicherheitszonen"},
	{ID: "A.7.2", Name: "Physischer Zutritt", Category: "physical", CategoryID: "A.7", Description: "Zutrittskontrolle zu Sicherheitszonen"},
	{ID: "A.7.3", Name: "Sicherung von Bueros und Raeumen", Category: "physical", CategoryID: "A.7", Description: "Physische Sicherheit fuer Bueros und Raeume"},
	{ID: "A.7.4", Name: "Physische Sicherheitsueberwachung", Category: "physical", CategoryID: "A.7", Description: "Ueberwachung physischer Sicherheit"},
	{ID: "A.7.5", Name: "Schutz vor Umweltgefahren", Category: "physical", CategoryID: "A.7", Description: "Schutz gegen natuerliche und menschengemachte Gefahren"},
	{ID: "A.7.6", Name: "Arbeit in Sicherheitszonen", Category: "physical", CategoryID: "A.7", Description: "Regeln fuer das Arbeiten in Sicherheitszonen"},
	{ID: "A.7.7", Name: "Aufgeraeumter Schreibtisch", Category: "physical", CategoryID: "A.7", Description: "Clean-Desk und Clear-Screen Richtlinie"},
	{ID: "A.7.8", Name: "Geraeteplatzierung", Category: "physical", CategoryID: "A.7", Description: "Platzierung und Schutz von Geraeten"},
	{ID: "A.7.9", Name: "Sicherheit von Geraeten ausserhalb", Category: "physical", CategoryID: "A.7", Description: "Sicherheit von Geraeten ausserhalb der Raeumlichkeiten"},
	{ID: "A.7.10", Name: "Speichermedien", Category: "physical", CategoryID: "A.7", Description: "Verwaltung von Speichermedien"},
	{ID: "A.7.11", Name: "Versorgungseinrichtungen", Category: "physical", CategoryID: "A.7", Description: "Schutz vor Ausfaellen der Versorgungseinrichtungen"},
	{ID: "A.7.12", Name: "Verkabelungssicherheit", Category: "physical", CategoryID: "A.7", Description: "Schutz der Verkabelung"},
	{ID: "A.7.13", Name: "Instandhaltung von Geraeten", Category: "physical", CategoryID: "A.7", Description: "Korrekte Instandhaltung von Geraeten"},
	{ID: "A.7.14", Name: "Sichere Entsorgung", Category: "physical", CategoryID: "A.7", Description: "Sichere Entsorgung oder Wiederverwendung"},

	// A.8 Technological Controls (34 controls)
	{ID: "A.8.1", Name: "Endbenutzergeraete", Category: "technological", CategoryID: "A.8", Description: "Sicherheit von Endbenutzergeraeten"},
	{ID: "A.8.2", Name: "Privilegierte Zugriffsrechte", Category: "technological", CategoryID: "A.8", Description: "Verwaltung privilegierter Zugriffsrechte"},
	{ID: "A.8.3", Name: "Informationszugangsbeschraenkung", Category: "technological", CategoryID: "A.8", Description: "Beschraenkung des Zugangs zu Informationen"},
	{ID: "A.8.4", Name: "Zugang zu Quellcode", Category: "technological", CategoryID: "A.8", Description: "Sicherer Zugang zu Quellcode"},
	{ID: "A.8.5", Name: "Sichere Authentifizierung", Category: "technological", CategoryID: "A.8", Description: "Sichere Authentifizierungstechnologien"},
	{ID: "A.8.6", Name: "Kapazitaetsmanagement", Category: "technological", CategoryID: "A.8", Description: "Ueberwachung und Anpassung der Kapazitaet"},
	{ID: "A.8.7", Name: "Schutz gegen Malware", Category: "technological", CategoryID: "A.8", Description: "Schutz vor Schadprogrammen"},
	{ID: "A.8.8", Name: "Management technischer Schwachstellen", Category: "technological", CategoryID: "A.8", Description: "Identifikation und Behebung von Schwachstellen"},
	{ID: "A.8.9", Name: "Konfigurationsmanagement", Category: "technological", CategoryID: "A.8", Description: "Sichere Konfiguration von Systemen"},
	{ID: "A.8.10", Name: "Datensicherung", Category: "technological", CategoryID: "A.8", Description: "Erstellen und Testen von Datensicherungen"},
	{ID: "A.8.11", Name: "Datenredundanz", Category: "technological", CategoryID: "A.8", Description: "Redundanz von Informationsverarbeitungseinrichtungen"},
	{ID: "A.8.12", Name: "Protokollierung", Category: "technological", CategoryID: "A.8", Description: "Aufzeichnung und Ueberwachung von Aktivitaeten"},
	{ID: "A.8.13", Name: "Ueberwachung von Aktivitaeten", Category: "technological", CategoryID: "A.8", Description: "Ueberwachung von Netzwerken und Systemen"},
	{ID: "A.8.14", Name: "Zeitsynchronisation", Category: "technological", CategoryID: "A.8", Description: "Synchronisation von Uhren"},
	{ID: "A.8.15", Name: "Nutzung privilegierter Hilfsprogramme", Category: "technological", CategoryID: "A.8", Description: "Einschraenkung privilegierter Hilfsprogramme"},
	{ID: "A.8.16", Name: "Softwareinstallation", Category: "technological", CategoryID: "A.8", Description: "Kontrolle der Softwareinstallation"},
	{ID: "A.8.17", Name: "Netzwerksicherheit", Category: "technological", CategoryID: "A.8", Description: "Sicherheit von Netzwerken"},
	{ID: "A.8.18", Name: "Netzwerksegmentierung", Category: "technological", CategoryID: "A.8", Description: "Segmentierung von Netzwerken"},
	{ID: "A.8.19", Name: "Webfilterung", Category: "technological", CategoryID: "A.8", Description: "Filterung des Webzugangs"},
	{ID: "A.8.20", Name: "Kryptografie", Category: "technological", CategoryID: "A.8", Description: "Einsatz kryptografischer Massnahmen"},
	{ID: "A.8.21", Name: "Sichere Entwicklung", Category: "technological", CategoryID: "A.8", Description: "Sicherer Entwicklungslebenszyklus"},
	{ID: "A.8.22", Name: "Sicherheitsanforderungen bei Applikationen", Category: "technological", CategoryID: "A.8", Description: "Sicherheitsanforderungen bei Anwendungen"},
	{ID: "A.8.23", Name: "Sichere Systemarchitektur", Category: "technological", CategoryID: "A.8", Description: "Sicherheitsprinzipien in der Systemarchitektur"},
	{ID: "A.8.24", Name: "Sicheres Programmieren", Category: "technological", CategoryID: "A.8", Description: "Sichere Programmierpraktiken"},
	{ID: "A.8.25", Name: "Sicherheitstests", Category: "technological", CategoryID: "A.8", Description: "Sicherheitstests in der Entwicklung und Abnahme"},
	{ID: "A.8.26", Name: "Auslagerung der Entwicklung", Category: "technological", CategoryID: "A.8", Description: "Ueberwachung ausgelagerter Entwicklung"},
	{ID: "A.8.27", Name: "Trennung von Umgebungen", Category: "technological", CategoryID: "A.8", Description: "Trennung von Entwicklungs-, Test- und Produktionsumgebungen"},
	{ID: "A.8.28", Name: "Aenderungsmanagement", Category: "technological", CategoryID: "A.8", Description: "Formales Aenderungsmanagement"},
	{ID: "A.8.29", Name: "Sicherheitstests in der Abnahme", Category: "technological", CategoryID: "A.8", Description: "Durchfuehrung von Sicherheitstests vor Abnahme"},
	{ID: "A.8.30", Name: "Datenloeschung", Category: "technological", CategoryID: "A.8", Description: "Sichere Datenloeschung"},
	{ID: "A.8.31", Name: "Datenmaskierung", Category: "technological", CategoryID: "A.8", Description: "Techniken zur Datenmaskierung"},
	{ID: "A.8.32", Name: "Verhinderung von Datenverlust", Category: "technological", CategoryID: "A.8", Description: "DLP-Massnahmen"},
	{ID: "A.8.33", Name: "Testinformationen", Category: "technological", CategoryID: "A.8", Description: "Schutz von Testinformationen"},
	{ID: "A.8.34", Name: "Audit-Informationssysteme", Category: "technological", CategoryID: "A.8", Description: "Schutz von Audit-Tools und -systemen"},
}

// Default mappings: which modules cover which ISO controls
var DefaultISOModuleMappings = []ISOModuleMapping{
	{
		ModuleID: "iso-isms", ModuleName: "ISMS Grundlagen",
		ISOControls:   []string{"A.5.1", "A.5.2", "A.5.3", "A.5.4", "A.5.35", "A.5.36"},
		CoverageLevel: "full",
	},
	{
		ModuleID: "iso-risikobewertung", ModuleName: "Risikobewertung",
		ISOControls:   []string{"A.5.7", "A.5.8", "A.5.9", "A.5.10", "A.5.12", "A.5.13"},
		CoverageLevel: "full",
	},
	{
		ModuleID: "iso-zugangssteuerung", ModuleName: "Zugangssteuerung",
		ISOControls:   []string{"A.5.15", "A.5.16", "A.5.17", "A.5.18", "A.8.2", "A.8.3", "A.8.5"},
		CoverageLevel: "full",
	},
	{
		ModuleID: "iso-kryptografie", ModuleName: "Kryptografie",
		ISOControls:   []string{"A.8.20", "A.8.21", "A.8.24"},
		CoverageLevel: "partial",
	},
	{
		ModuleID: "iso-physisch", ModuleName: "Physische Sicherheit",
		ISOControls:   []string{"A.7.1", "A.7.2", "A.7.3", "A.7.4", "A.7.5", "A.7.7", "A.7.8"},
		CoverageLevel: "full",
	},
	{
		ModuleID: "dsgvo-tom", ModuleName: "Technisch-Organisatorische Massnahmen",
		ISOControls:   []string{"A.5.34", "A.8.10", "A.8.12", "A.8.30", "A.8.31"},
		CoverageLevel: "partial",
	},
	{
		ModuleID: "nis2-incident-response", ModuleName: "NIS2 Incident Response",
		ISOControls:   []string{"A.5.24", "A.5.25", "A.5.26", "A.5.27", "A.5.28", "A.6.8"},
		CoverageLevel: "full",
	},
	{
		ModuleID: "nis2-supply-chain", ModuleName: "NIS2 Lieferkettensicherheit",
		ISOControls:   []string{"A.5.19", "A.5.20", "A.5.21", "A.5.22", "A.5.23"},
		CoverageLevel: "full",
	},
	{
		ModuleID: "nis2-risikomanagement", ModuleName: "NIS2 Risikomanagement",
		ISOControls:   []string{"A.5.29", "A.5.30", "A.8.6", "A.8.7", "A.8.8", "A.8.9"},
		CoverageLevel: "partial",
	},
	{
		ModuleID: "dsgvo-grundlagen", ModuleName: "DSGVO Grundlagen",
		ISOControls:   []string{"A.5.31", "A.5.34", "A.6.2", "A.6.3"},
		CoverageLevel: "partial",
	},
}

// GetISOControlByID returns a control by its ID
func GetISOControlByID(id string) (ISOControl, bool) {
	for _, c := range ISOControls {
		if c.ID == id {
			return c, true
		}
	}
	return ISOControl{}, false
}

// GetISOControlsByCategory returns all controls in a category
func GetISOControlsByCategory(categoryID string) []ISOControl {
	var result []ISOControl
	for _, c := range ISOControls {
		if c.CategoryID == categoryID {
			result = append(result, c)
		}
	}
	return result
}

// ISOCategorySummary provides a summary per ISO category
type ISOCategorySummary struct {
	CategoryID     string `json:"category_id"`
	CategoryName   string `json:"category_name"`
	TotalControls  int    `json:"total_controls"`
	CoveredFull    int    `json:"covered_full"`
	CoveredPartial int    `json:"covered_partial"`
	NotCovered     int    `json:"not_covered"`
}
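DefaultISOModuleMappings is consumed by inverting it: CalculateISOGapAnalysis turns the module -> controls direction into control -> modules, with "full" taking precedence over "partial". A sketch of that inversion in isolation (the local mapping type and the sample data are illustrative, not the SDK's own):

```go
package main

import "fmt"

// mapping is a cut-down stand-in for ISOModuleMapping.
type mapping struct {
	moduleID string
	controls []string
	level    string
}

// buildCoverage inverts module->controls into control->modules and
// keeps the strongest coverage level seen per control.
func buildCoverage(ms []mapping) (map[string][]string, map[string]string) {
	coveredBy := map[string][]string{}
	levels := map[string]string{}
	for _, m := range ms {
		for _, c := range m.controls {
			coveredBy[c] = append(coveredBy[c], m.moduleID)
			// "full" always wins; otherwise only fill an empty slot
			if m.level == "full" || levels[c] == "" {
				levels[c] = m.level
			}
		}
	}
	return coveredBy, levels
}

func main() {
	mappings := []mapping{
		{"iso-zugangssteuerung", []string{"A.5.15", "A.8.2"}, "full"},
		{"dsgvo-tom", []string{"A.8.2"}, "partial"},
	}
	coveredBy, levels := buildCoverage(mappings)
	// A.8.2 is covered by two modules; the "full" level survives the
	// later "partial" mapping.
	fmt.Println(levels["A.8.2"], len(coveredBy["A.8.2"]))
}
```

Note the precedence rule is asymmetric: a later "partial" mapping never downgrades an existing "full", but a later "full" always upgrades.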
74
admin-v2/ai-compliance-sdk/internal/gci/mock_data.go
Normal file
@@ -0,0 +1,74 @@
package gci

import "time"

// MockModuleData provides fallback data when academy store is empty
func MockModuleData(tenantID string) []ModuleScore {
	return []ModuleScore{
		// DSGVO modules
		{ModuleID: "dsgvo-grundlagen", ModuleName: "DSGVO Grundlagen", Assigned: 25, Completed: 22, Category: "dsgvo", RiskWeight: 2.0},
		{ModuleID: "dsgvo-betroffenenrechte", ModuleName: "Betroffenenrechte", Assigned: 25, Completed: 18, Category: "dsgvo", RiskWeight: 2.5},
		{ModuleID: "dsgvo-tom", ModuleName: "Technisch-Organisatorische Massnahmen", Assigned: 20, Completed: 17, Category: "dsgvo", RiskWeight: 2.5},
		{ModuleID: "dsgvo-dsfa", ModuleName: "Datenschutz-Folgenabschaetzung", Assigned: 15, Completed: 10, Category: "dsgvo", RiskWeight: 2.0},
		{ModuleID: "dsgvo-auftragsverarbeitung", ModuleName: "Auftragsverarbeitung", Assigned: 20, Completed: 16, Category: "dsgvo", RiskWeight: 2.0},

		// NIS2 modules
		{ModuleID: "nis2-risikomanagement", ModuleName: "NIS2 Risikomanagement", Assigned: 15, Completed: 11, Category: "nis2", RiskWeight: 3.0},
		{ModuleID: "nis2-incident-response", ModuleName: "NIS2 Incident Response", Assigned: 15, Completed: 9, Category: "nis2", RiskWeight: 3.0},
		{ModuleID: "nis2-supply-chain", ModuleName: "NIS2 Lieferkettensicherheit", Assigned: 10, Completed: 6, Category: "nis2", RiskWeight: 2.0},
		{ModuleID: "nis2-management", ModuleName: "NIS2 Geschaeftsleitungspflicht", Assigned: 10, Completed: 8, Category: "nis2", RiskWeight: 3.0},

		// ISO 27001 modules
		{ModuleID: "iso-isms", ModuleName: "ISMS Grundlagen", Assigned: 20, Completed: 16, Category: "iso27001", RiskWeight: 2.0},
		{ModuleID: "iso-risikobewertung", ModuleName: "Risikobewertung", Assigned: 15, Completed: 12, Category: "iso27001", RiskWeight: 2.0},
		{ModuleID: "iso-zugangssteuerung", ModuleName: "Zugangssteuerung", Assigned: 20, Completed: 18, Category: "iso27001", RiskWeight: 2.0},
		{ModuleID: "iso-kryptografie", ModuleName: "Kryptografie", Assigned: 10, Completed: 7, Category: "iso27001", RiskWeight: 1.5},
		{ModuleID: "iso-physisch", ModuleName: "Physische Sicherheit", Assigned: 10, Completed: 9, Category: "iso27001", RiskWeight: 1.0},

		// AI Act modules
		{ModuleID: "ai-risikokategorien", ModuleName: "KI-Risikokategorien", Assigned: 15, Completed: 12, Category: "ai_act", RiskWeight: 2.5},
		{ModuleID: "ai-transparenz", ModuleName: "KI-Transparenzpflichten", Assigned: 15, Completed: 10, Category: "ai_act", RiskWeight: 2.0},
		{ModuleID: "ai-hochrisiko", ModuleName: "Hochrisiko-KI-Systeme", Assigned: 10, Completed: 6, Category: "ai_act", RiskWeight: 2.5},
		{ModuleID: "ai-governance", ModuleName: "KI-Governance", Assigned: 10, Completed: 7, Category: "ai_act", RiskWeight: 2.0},
	}
}

// MockCertificateData provides mock certificate validity dates
func MockCertificateData() map[string]time.Time {
	now := time.Now()
	return map[string]time.Time{
		"dsgvo-grundlagen":           now.AddDate(0, 8, 0),  // valid 8 months
		"dsgvo-betroffenenrechte":    now.AddDate(0, 3, 0),  // expiring in 3 months
		"dsgvo-tom":                  now.AddDate(0, 10, 0), // valid
		"dsgvo-dsfa":                 now.AddDate(0, -1, 0), // expired 1 month ago
		"dsgvo-auftragsverarbeitung": now.AddDate(0, 6, 0),
		"nis2-risikomanagement":      now.AddDate(0, 5, 0),
		"nis2-incident-response":     now.AddDate(0, 2, 0),  // expiring soon
		"nis2-supply-chain":          now.AddDate(0, -2, 0), // expired 2 months ago
		"nis2-management":            now.AddDate(0, 9, 0),
		"iso-isms":                   now.AddDate(1, 0, 0),
		"iso-risikobewertung":        now.AddDate(0, 4, 0),
		"iso-zugangssteuerung":       now.AddDate(0, 11, 0),
		"iso-kryptografie":           now.AddDate(0, 1, 0), // expiring in 1 month
		"iso-physisch":               now.AddDate(0, 7, 0),
		"ai-risikokategorien":        now.AddDate(0, 6, 0),
		"ai-transparenz":             now.AddDate(0, 3, 0),
		"ai-hochrisiko":              now.AddDate(0, -3, 0), // expired 3 months ago
		"ai-governance":              now.AddDate(0, 5, 0),
	}
}

// MockIncidentData returns mock incident counts for adjustment
func MockIncidentData() (openIncidents int, criticalIncidents int) {
	return 3, 1
}

// MockGCIHistory returns mock historical GCI snapshots
func MockGCIHistory(tenantID string) []GCISnapshot {
	now := time.Now()
	return []GCISnapshot{
		{TenantID: tenantID, Score: 58.2, MaturityLevel: MaturityReactive, AreaScores: map[string]float64{"dsgvo": 62, "nis2": 48, "iso27001": 60, "ai_act": 55}, CalculatedAt: now.AddDate(0, -3, 0)},
		{TenantID: tenantID, Score: 62.5, MaturityLevel: MaturityDefined, AreaScores: map[string]float64{"dsgvo": 65, "nis2": 55, "iso27001": 63, "ai_act": 58}, CalculatedAt: now.AddDate(0, -2, 0)},
		{TenantID: tenantID, Score: 67.8, MaturityLevel: MaturityDefined, AreaScores: map[string]float64{"dsgvo": 70, "nis2": 60, "iso27001": 68, "ai_act": 62}, CalculatedAt: now.AddDate(0, -1, 0)},
	}
}
|
||||
104
admin-v2/ai-compliance-sdk/internal/gci/models.go
Normal file
@@ -0,0 +1,104 @@
package gci

import "time"

// Level 1: Module Score
type ModuleScore struct {
	ModuleID       string  `json:"module_id"`
	ModuleName     string  `json:"module_name"`
	Assigned       int     `json:"assigned"`
	Completed      int     `json:"completed"`
	RawScore       float64 `json:"raw_score"`       // Completed / Assigned
	ValidityFactor float64 `json:"validity_factor"` // 0.0-1.0
	FinalScore     float64 `json:"final_score"`     // RawScore * ValidityFactor
	RiskWeight     float64 `json:"risk_weight"`     // module criticality weight
	Category       string  `json:"category"`        // dsgvo, nis2, iso27001, ai_act
}

// Level 2: risk-weighted module scores per regulation area
type RiskWeightedScore struct {
	AreaID      string        `json:"area_id"`
	AreaName    string        `json:"area_name"`
	Modules     []ModuleScore `json:"modules"`
	WeightedSum float64       `json:"weighted_sum"`
	TotalWeight float64       `json:"total_weight"`
	AreaScore   float64       `json:"area_score"` // WeightedSum / TotalWeight
}

// Level 3: Regulation Area Score
type RegulationAreaScore struct {
	RegulationID   string  `json:"regulation_id"`   // dsgvo, nis2, iso27001, ai_act
	RegulationName string  `json:"regulation_name"` // display name
	Score          float64 `json:"score"`           // 0-100
	Weight         float64 `json:"weight"`          // regulation weight in the GCI
	WeightedScore  float64 `json:"weighted_score"`  // Score * Weight
	ModuleCount    int     `json:"module_count"`
	CompletedCount int     `json:"completed_count"`
}

// Level 4: GCI Result
type GCIResult struct {
	TenantID        string                `json:"tenant_id"`
	GCIScore        float64               `json:"gci_score"`      // 0-100
	MaturityLevel   string                `json:"maturity_level"` // Optimized, Managed, Defined, Reactive, HighRisk
	MaturityLabel   string                `json:"maturity_label"` // German label
	CalculatedAt    time.Time             `json:"calculated_at"`
	Profile         string                `json:"profile"` // default, nis2_relevant, ki_nutzer
	AreaScores      []RegulationAreaScore `json:"area_scores"`
	CriticalityMult float64               `json:"criticality_multiplier"`
	IncidentAdj     float64               `json:"incident_adjustment"`
	AuditTrail      []AuditEntry          `json:"audit_trail"`
}

// GCIBreakdown carries the GCI result together with all 4 calculation levels.
type GCIBreakdown struct {
	GCIResult
	Level1Modules []ModuleScore       `json:"level1_modules"`
	Level2Areas   []RiskWeightedScore `json:"level2_areas"`
}

// MaturityLevel constants
const (
	MaturityOptimized = "OPTIMIZED"
	MaturityManaged   = "MANAGED"
	MaturityDefined   = "DEFINED"
	MaturityReactive  = "REACTIVE"
	MaturityHighRisk  = "HIGH_RISK"
)

// MaturityLabels maps maturity levels to their German display labels.
var MaturityLabels = map[string]string{
	MaturityOptimized: "Optimiert",
	MaturityManaged:   "Gesteuert",
	MaturityDefined:   "Definiert",
	MaturityReactive:  "Reaktiv",
	MaturityHighRisk:  "Hohes Risiko",
}

// AuditEntry documents a single scoring factor for transparency.
type AuditEntry struct {
	Timestamp   time.Time `json:"timestamp"`
	Factor      string    `json:"factor"`
	Description string    `json:"description"`
	Value       float64   `json:"value"`
	Impact      string    `json:"impact"` // positive, negative, neutral
}

// ComplianceMatrixEntry maps roles to regulations.
type ComplianceMatrixEntry struct {
	Role             string             `json:"role"`
	RoleName         string             `json:"role_name"`
	Regulations      map[string]float64 `json:"regulations"` // regulation_id -> score
	OverallScore     float64            `json:"overall_score"`
	RequiredModules  int                `json:"required_modules"`
	CompletedModules int                `json:"completed_modules"`
}

// GCISnapshot is a historical GCI data point.
type GCISnapshot struct {
	TenantID      string             `json:"tenant_id"`
	Score         float64            `json:"score"`
	MaturityLevel string             `json:"maturity_level"`
	AreaScores    map[string]float64 `json:"area_scores"`
	CalculatedAt  time.Time          `json:"calculated_at"`
}
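The per-field comments in `models.go` describe how the four levels chain together: `FinalScore = RawScore * ValidityFactor` at level 1, risk-weighted averaging at level 2, and `GCIScore` as a weighted sum of area scores at levels 3/4. A compressed numeric sketch of that chain (all input numbers invented for illustration, not taken from the mock data):

```go
package main

import "fmt"

// finalScore is the level-1 rule from ModuleScore:
// FinalScore = RawScore * ValidityFactor.
func finalScore(rawScore, validityFactor float64) float64 {
	return rawScore * validityFactor
}

// gciScore is the level-3/4 rule from RegulationAreaScore / GCIResult:
// GCIScore = sum over regulation areas of Score * Weight.
func gciScore(scores, weights map[string]float64) float64 {
	total := 0.0
	for id, s := range scores {
		total += s * weights[id]
	}
	return total
}

func main() {
	// A module at 80% completion whose certificate has started to decay.
	fmt.Printf("%.2f\n", finalScore(0.8, 0.75)) // 0.60

	// Hypothetical area scores combined with the "default" profile weights.
	scores := map[string]float64{"dsgvo": 60, "nis2": 70, "iso27001": 65, "ai_act": 60}
	weights := map[string]float64{"dsgvo": 0.30, "nis2": 0.25, "iso27001": 0.25, "ai_act": 0.20}
	fmt.Printf("%.2f\n", gciScore(scores, weights)) // 63.75
}
```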
118
admin-v2/ai-compliance-sdk/internal/gci/nis2_roles.go
Normal file
@@ -0,0 +1,118 @@
package gci

// NIS2Role defines a NIS2 role classification.
type NIS2Role struct {
	ID               string   `json:"id"`
	Name             string   `json:"name"`
	Description      string   `json:"description"`
	MandatoryModules []string `json:"mandatory_modules"`
	Priority         int      `json:"priority"` // 1 = highest
}

// NIS2RoleAssignment represents a user's NIS2 role.
type NIS2RoleAssignment struct {
	TenantID   string `json:"tenant_id"`
	UserID     string `json:"user_id"`
	UserName   string `json:"user_name"`
	RoleID     string `json:"role_id"`
	RoleName   string `json:"role_name"`
	AssignedAt string `json:"assigned_at"`
}

// NIS2Roles holds the NIS2 role definitions.
var NIS2Roles = map[string]NIS2Role{
	"N1": {
		ID:          "N1",
		Name:        "Geschaeftsleitung",
		Description: "Leitungsorgane mit persoenlicher Haftung gemaess NIS2 Art. 20",
		Priority:    1,
		MandatoryModules: []string{
			"nis2-management",
			"nis2-risikomanagement",
			"dsgvo-grundlagen",
			"iso-isms",
		},
	},
	"N2": {
		ID:          "N2",
		Name:        "IT-Sicherheit / CISO",
		Description: "Verantwortliche fuer IT-Sicherheit und Cybersecurity",
		Priority:    2,
		MandatoryModules: []string{
			"nis2-risikomanagement",
			"nis2-incident-response",
			"nis2-supply-chain",
			"iso-zugangssteuerung",
			"iso-kryptografie",
		},
	},
	"N3": {
		ID:          "N3",
		Name:        "Kritische Funktionen",
		Description: "Mitarbeiter in kritischen Geschaeftsprozessen",
		Priority:    3,
		MandatoryModules: []string{
			"nis2-risikomanagement",
			"nis2-incident-response",
			"dsgvo-tom",
			"iso-zugangssteuerung",
		},
	},
	"N4": {
		ID:          "N4",
		Name:        "Allgemeine Mitarbeiter",
		Description: "Alle Mitarbeiter mit IT-Zugang",
		Priority:    4,
		MandatoryModules: []string{
			"nis2-risikomanagement",
			"dsgvo-grundlagen",
			"iso-isms",
		},
	},
	"N5": {
		ID:          "N5",
		Name:        "Incident Response Team",
		Description: "Mitglieder des IRT/CSIRT gemaess NIS2 Art. 21",
		Priority:    2,
		MandatoryModules: []string{
			"nis2-incident-response",
			"nis2-risikomanagement",
			"nis2-supply-chain",
			"iso-zugangssteuerung",
			"iso-kryptografie",
			"iso-isms",
		},
	},
}

// GetNIS2Role returns a NIS2 role by ID.
func GetNIS2Role(roleID string) (NIS2Role, bool) {
	r, ok := NIS2Roles[roleID]
	return r, ok
}

// ListNIS2Roles returns all NIS2 roles sorted by priority.
func ListNIS2Roles() []NIS2Role {
	roles := []NIS2Role{}
	// Fixed priority order (N2 and N5 share priority 2).
	order := []string{"N1", "N2", "N5", "N3", "N4"}
	for _, id := range order {
		if r, ok := NIS2Roles[id]; ok {
			roles = append(roles, r)
		}
	}
	return roles
}

// MockNIS2RoleAssignments returns mock role assignments.
func MockNIS2RoleAssignments(tenantID string) []NIS2RoleAssignment {
	return []NIS2RoleAssignment{
		{TenantID: tenantID, UserID: "user-001", UserName: "Dr. Schmidt", RoleID: "N1", RoleName: "Geschaeftsleitung", AssignedAt: "2025-06-01"},
		{TenantID: tenantID, UserID: "user-002", UserName: "M. Weber", RoleID: "N2", RoleName: "IT-Sicherheit / CISO", AssignedAt: "2025-06-01"},
		{TenantID: tenantID, UserID: "user-003", UserName: "S. Mueller", RoleID: "N5", RoleName: "Incident Response Team", AssignedAt: "2025-07-15"},
		{TenantID: tenantID, UserID: "user-004", UserName: "K. Fischer", RoleID: "N3", RoleName: "Kritische Funktionen", AssignedAt: "2025-08-01"},
		{TenantID: tenantID, UserID: "user-005", UserName: "L. Braun", RoleID: "N3", RoleName: "Kritische Funktionen", AssignedAt: "2025-08-01"},
		{TenantID: tenantID, UserID: "user-006", UserName: "A. Schwarz", RoleID: "N4", RoleName: "Allgemeine Mitarbeiter", AssignedAt: "2025-09-01"},
		{TenantID: tenantID, UserID: "user-007", UserName: "T. Wagner", RoleID: "N4", RoleName: "Allgemeine Mitarbeiter", AssignedAt: "2025-09-01"},
	}
}
147
admin-v2/ai-compliance-sdk/internal/gci/nis2_scoring.go
Normal file
@@ -0,0 +1,147 @@
package gci

import "math"

// NIS2Score represents the NIS2-specific compliance score.
type NIS2Score struct {
	TenantID       string          `json:"tenant_id"`
	OverallScore   float64         `json:"overall_score"`
	MaturityLevel  string          `json:"maturity_level"`
	MaturityLabel  string          `json:"maturity_label"`
	AreaScores     []NIS2AreaScore `json:"area_scores"`
	RoleCompliance []NIS2RoleScore `json:"role_compliance"`
}

// NIS2AreaScore represents a NIS2 compliance area.
type NIS2AreaScore struct {
	AreaID    string   `json:"area_id"`
	AreaName  string   `json:"area_name"`
	Score     float64  `json:"score"`
	Weight    float64  `json:"weight"`
	ModuleIDs []string `json:"module_ids"`
}

// NIS2RoleScore represents completion per NIS2 role.
type NIS2RoleScore struct {
	RoleID         string  `json:"role_id"`
	RoleName       string  `json:"role_name"`
	AssignedUsers  int     `json:"assigned_users"`
	CompletionRate float64 `json:"completion_rate"`
	MandatoryTotal int     `json:"mandatory_total"`
	MandatoryDone  int     `json:"mandatory_done"`
}

// NIS2 scoring areas with weights:
// NIS2Score = 25% Management + 25% Incident + 30% IT Security + 20% Supply Chain.
var nis2Areas = []struct {
	ID        string
	Name      string
	Weight    float64
	ModuleIDs []string
}{
	{
		ID: "management", Name: "Management & Governance", Weight: 0.25,
		ModuleIDs: []string{"nis2-management", "dsgvo-grundlagen", "iso-isms"},
	},
	{
		ID: "incident", Name: "Vorfallsbehandlung", Weight: 0.25,
		ModuleIDs: []string{"nis2-incident-response"},
	},
	{
		ID: "it_security", Name: "IT-Sicherheit", Weight: 0.30,
		ModuleIDs: []string{"nis2-risikomanagement", "iso-zugangssteuerung", "iso-kryptografie"},
	},
	{
		ID: "supply_chain", Name: "Lieferkettensicherheit", Weight: 0.20,
		ModuleIDs: []string{"nis2-supply-chain", "dsgvo-auftragsverarbeitung"},
	},
}

// CalculateNIS2Score computes the NIS2-specific compliance score.
func CalculateNIS2Score(tenantID string) *NIS2Score {
	modules := MockModuleData(tenantID)
	moduleMap := map[string]ModuleScore{}
	for _, m := range modules {
		moduleMap[m.ModuleID] = m
	}

	areaScores := []NIS2AreaScore{}
	totalWeighted := 0.0

	for _, area := range nis2Areas {
		areaScore := NIS2AreaScore{
			AreaID:    area.ID,
			AreaName:  area.Name,
			Weight:    area.Weight,
			ModuleIDs: area.ModuleIDs,
		}

		scoreSum := 0.0
		count := 0
		for _, modID := range area.ModuleIDs {
			if m, ok := moduleMap[modID]; ok {
				if m.Assigned > 0 {
					scoreSum += float64(m.Completed) / float64(m.Assigned) * 100
				}
				count++
			}
		}
		if count > 0 {
			areaScore.Score = math.Round(scoreSum/float64(count)*10) / 10
		}
		totalWeighted += areaScore.Score * areaScore.Weight
		areaScores = append(areaScores, areaScore)
	}

	overallScore := math.Round(totalWeighted*10) / 10

	// Calculate role compliance.
	roleAssignments := MockNIS2RoleAssignments(tenantID)
	roleScores := calculateNIS2RoleScores(roleAssignments, moduleMap)

	maturity := determineMaturityLevel(overallScore)
	return &NIS2Score{
		TenantID:       tenantID,
		OverallScore:   overallScore,
		MaturityLevel:  maturity,
		MaturityLabel:  MaturityLabels[maturity],
		AreaScores:     areaScores,
		RoleCompliance: roleScores,
	}
}

func calculateNIS2RoleScores(assignments []NIS2RoleAssignment, moduleMap map[string]ModuleScore) []NIS2RoleScore {
	// Count users per role.
	roleCounts := map[string]int{}
	for _, a := range assignments {
		roleCounts[a.RoleID]++
	}

	scores := []NIS2RoleScore{}
	for roleID, role := range NIS2Roles {
		rs := NIS2RoleScore{
			RoleID:         roleID,
			RoleName:       role.Name,
			AssignedUsers:  roleCounts[roleID],
			MandatoryTotal: len(role.MandatoryModules),
		}

		completionSum := 0.0
		for _, modID := range role.MandatoryModules {
			if m, ok := moduleMap[modID]; ok {
				if m.Assigned > 0 {
					rate := float64(m.Completed) / float64(m.Assigned)
					completionSum += rate
					if rate >= 0.8 { // 80%+ counts as done
						rs.MandatoryDone++
					}
				}
			}
		}
		if rs.MandatoryTotal > 0 {
			rs.CompletionRate = math.Round(completionSum/float64(rs.MandatoryTotal)*100*10) / 10
		}
		scores = append(scores, rs)
	}

	return scores
}
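The area-weight formula in the comment above (`NIS2Score = 25% Management + 25% Incident + 30% IT Security + 20% Supply Chain`) can be checked by hand. A minimal standalone sketch — the weights match `nis2Areas`, the sample area scores are invented for illustration:

```go
package main

import "fmt"

// weightedNIS2 combines per-area scores (0-100) using the weights from
// nis2_scoring.go: 25% management, 25% incident, 30% IT security, 20% supply chain.
func weightedNIS2(management, incident, itSecurity, supplyChain float64) float64 {
	return 0.25*management + 0.25*incident + 0.30*itSecurity + 0.20*supplyChain
}

func main() {
	// Hypothetical area scores: 0.25*80 + 0.25*60 + 0.30*70 + 0.20*50 = 66.
	fmt.Printf("%.1f\n", weightedNIS2(80, 60, 70, 50)) // prints 66.0
}
```

Because the weights sum to 1.0, a tenant with every area at 100 scores exactly 100.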
59
admin-v2/ai-compliance-sdk/internal/gci/validity.go
Normal file
@@ -0,0 +1,59 @@
package gci

import (
	"math"
	"time"
)

const (
	// GracePeriodDays is the number of days after expiry during which
	// the certificate still contributes (with a declining factor).
	GracePeriodDays = 180

	// DecayStartDays is how many days before expiry the linear decay begins.
	DecayStartDays = 180
)

// CalculateValidityFactor computes the validity factor for a certificate
// based on its expiry date.
//
// Rules:
//   - Certificate not yet expiring (>6 months): factor = 1.0
//   - Certificate expiring within 6 months: linear decay from 1.0 to 0.5
//   - Certificate expired: linear decay from 0.5 to 0.0 over the grace period
//   - Certificate expired beyond the grace period: factor = 0.0
func CalculateValidityFactor(validUntil time.Time, now time.Time) float64 {
	daysUntilExpiry := validUntil.Sub(now).Hours() / 24.0

	if daysUntilExpiry > float64(DecayStartDays) {
		// Not yet in the decay window.
		return 1.0
	}

	if daysUntilExpiry > 0 {
		// In the pre-expiry decay window: linear from 1.0 to 0.5.
		fraction := daysUntilExpiry / float64(DecayStartDays)
		return 0.5 + 0.5*fraction
	}

	// Certificate is expired.
	daysExpired := -daysUntilExpiry
	if daysExpired > float64(GracePeriodDays) {
		return 0.0
	}

	// In the grace period: linear from 0.5 to 0.0.
	fraction := 1.0 - (daysExpired / float64(GracePeriodDays))
	return math.Max(0, 0.5*fraction)
}

// IsExpired returns true if the certificate is past its validity date.
func IsExpired(validUntil time.Time, now time.Time) bool {
	return now.After(validUntil)
}

// IsExpiringSoon returns true if the certificate expires within the decay window.
func IsExpiringSoon(validUntil time.Time, now time.Time) bool {
	daysUntil := validUntil.Sub(now).Hours() / 24.0
	return daysUntil > 0 && daysUntil <= float64(DecayStartDays)
}
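The four decay rules documented on `CalculateValidityFactor` can be restated over "days until expiry" and checked without the package. A standalone sketch with the same constants and formula inlined (the sample day counts are chosen for illustration):

```go
package main

import (
	"fmt"
	"math"
)

const (
	decayStartDays  = 180.0 // DecayStartDays in validity.go
	gracePeriodDays = 180.0 // GracePeriodDays in validity.go
)

// validityFactor mirrors CalculateValidityFactor, expressed directly over
// days until expiry (negative = already expired).
func validityFactor(daysUntilExpiry float64) float64 {
	if daysUntilExpiry > decayStartDays {
		return 1.0 // not yet in the decay window
	}
	if daysUntilExpiry > 0 {
		return 0.5 + 0.5*(daysUntilExpiry/decayStartDays) // 1.0 -> 0.5
	}
	daysExpired := -daysUntilExpiry
	if daysExpired > gracePeriodDays {
		return 0.0
	}
	return math.Max(0, 0.5*(1.0-daysExpired/gracePeriodDays)) // 0.5 -> 0.0
}

func main() {
	fmt.Println(validityFactor(365))  // 1    (well before expiry)
	fmt.Println(validityFactor(90))   // 0.75 (halfway through the decay window)
	fmt.Println(validityFactor(-90))  // 0.25 (halfway through the grace period)
	fmt.Println(validityFactor(-200)) // 0    (beyond the grace period)
}
```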
78
admin-v2/ai-compliance-sdk/internal/gci/weights.go
Normal file
@@ -0,0 +1,78 @@
package gci

// WeightProfile defines regulation weights for different compliance profiles.
type WeightProfile struct {
	ID          string             `json:"id"`
	Name        string             `json:"name"`
	Description string             `json:"description"`
	Weights     map[string]float64 `json:"weights"` // regulation_id -> weight (0.0-1.0)
}

// DefaultProfiles holds the built-in weight profiles.
var DefaultProfiles = map[string]WeightProfile{
	"default": {
		ID:          "default",
		Name:        "Standard",
		Description: "Ausgewogenes Profil fuer allgemeine Compliance",
		Weights: map[string]float64{
			"dsgvo":    0.30,
			"nis2":     0.25,
			"iso27001": 0.25,
			"ai_act":   0.20,
		},
	},
	"nis2_relevant": {
		ID:          "nis2_relevant",
		Name:        "NIS2-relevant",
		Description: "Fuer Betreiber kritischer Infrastrukturen",
		Weights: map[string]float64{
			"dsgvo":    0.25,
			"nis2":     0.35,
			"iso27001": 0.25,
			"ai_act":   0.15,
		},
	},
	"ki_nutzer": {
		ID:          "ki_nutzer",
		Name:        "KI-Nutzer",
		Description: "Fuer Organisationen mit KI-Einsatz",
		Weights: map[string]float64{
			"dsgvo":    0.25,
			"nis2":     0.25,
			"iso27001": 0.20,
			"ai_act":   0.30,
		},
	},
}

// ModuleRiskWeights defines risk criticality per module type.
var ModuleRiskWeights = map[string]float64{
	"incident_response":    3.0,
	"management_awareness": 3.0,
	"data_protection":      2.5,
	"it_security":          2.5,
	"supply_chain":         2.0,
	"risk_assessment":      2.0,
	"access_control":       2.0,
	"business_continuity":  2.0,
	"employee_training":    1.5,
	"documentation":        1.5,
	"physical_security":    1.0,
	"general":              1.0,
}

// GetProfile returns a weight profile by ID, falling back to "default".
func GetProfile(profileID string) WeightProfile {
	if p, ok := DefaultProfiles[profileID]; ok {
		return p
	}
	return DefaultProfiles["default"]
}

// GetModuleRiskWeight returns the risk weight for a module category,
// defaulting to 1.0 for unknown categories.
func GetModuleRiskWeight(category string) float64 {
	if w, ok := ModuleRiskWeights[category]; ok {
		return w
	}
	return 1.0
}
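For the GCI weighted sum to stay on a 0-100 scale, each profile's regulation weights must sum to 1.0 — which holds for all three profiles above (e.g. default: 0.30 + 0.25 + 0.25 + 0.20 = 1.00). A standalone sanity check, with the weight tables copied from `DefaultProfiles` so it runs without the package:

```go
package main

import (
	"fmt"
	"math"
)

// profiles copies the weight tables from weights.go so the invariant
// can be verified in isolation.
var profiles = map[string]map[string]float64{
	"default":       {"dsgvo": 0.30, "nis2": 0.25, "iso27001": 0.25, "ai_act": 0.20},
	"nis2_relevant": {"dsgvo": 0.25, "nis2": 0.35, "iso27001": 0.25, "ai_act": 0.15},
	"ki_nutzer":     {"dsgvo": 0.25, "nis2": 0.25, "iso27001": 0.20, "ai_act": 0.30},
}

// sumsToOne reports whether the weights add up to 1.0 (within float tolerance).
func sumsToOne(w map[string]float64) bool {
	sum := 0.0
	for _, v := range w {
		sum += v
	}
	return math.Abs(sum-1.0) < 1e-9
}

func main() {
	for id, w := range profiles {
		fmt.Println(id, sumsToOne(w))
	}
}
```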
384
admin-v2/ai-compliance-sdk/internal/llm/service.go
Normal file
@@ -0,0 +1,384 @@
package llm

import (
	"context"
	"fmt"
	"strings"
)

// SearchResult matches the RAG service result structure.
type SearchResult struct {
	ID       string            `json:"id"`
	Content  string            `json:"content"`
	Source   string            `json:"source"`
	Score    float64           `json:"score"`
	Metadata map[string]string `json:"metadata,omitempty"`
}

// Service provides LLM functionality for document generation.
type Service struct {
	apiKey string
	model  string
}

// NewService creates a new LLM service; without an API key it runs in mock mode.
func NewService(apiKey string) *Service {
	model := "claude-3-5-sonnet-20241022"
	if apiKey == "" {
		model = "mock"
	}
	return &Service{
		apiKey: apiKey,
		model:  model,
	}
}

// GetModel returns the current model name.
func (s *Service) GetModel() string {
	return s.model
}

// GenerateDSFA generates a Data Protection Impact Assessment.
func (s *Service) GenerateDSFA(ctx context.Context, context map[string]interface{}, ragSources []SearchResult) (string, int, error) {
	if s.apiKey == "" {
		return "", 0, fmt.Errorf("LLM not configured")
	}

	// Build the prompt from context and RAG sources.
	_ = s.buildDSFAPrompt(context, ragSources)

	// In production, this would call the Anthropic API:
	// response, err := s.callAnthropicAPI(ctx, prompt)
	// if err != nil {
	//     return "", 0, err
	// }

	// For now, simulate a response.
	content := s.generateDSFAContent(context, ragSources)
	tokensUsed := len(strings.Split(content, " ")) * 2 // rough estimate

	return content, tokensUsed, nil
}

// GenerateTOM generates Technical and Organizational Measures.
func (s *Service) GenerateTOM(ctx context.Context, context map[string]interface{}, ragSources []SearchResult) (string, int, error) {
	if s.apiKey == "" {
		return "", 0, fmt.Errorf("LLM not configured")
	}

	content := s.generateTOMContent(context, ragSources)
	tokensUsed := len(strings.Split(content, " ")) * 2

	return content, tokensUsed, nil
}

// GenerateVVT generates a Processing Activity Register.
func (s *Service) GenerateVVT(ctx context.Context, context map[string]interface{}, ragSources []SearchResult) (string, int, error) {
	if s.apiKey == "" {
		return "", 0, fmt.Errorf("LLM not configured")
	}

	content := s.generateVVTContent(context, ragSources)
	tokensUsed := len(strings.Split(content, " ")) * 2

	return content, tokensUsed, nil
}

// GenerateGutachten generates an expert opinion/assessment.
func (s *Service) GenerateGutachten(ctx context.Context, context map[string]interface{}, ragSources []SearchResult) (string, int, error) {
	if s.apiKey == "" {
		return "", 0, fmt.Errorf("LLM not configured")
	}

	content := s.generateGutachtenContent(context, ragSources)
	tokensUsed := len(strings.Split(content, " ")) * 2

	return content, tokensUsed, nil
}
// buildDSFAPrompt builds the prompt for DSFA generation.
func (s *Service) buildDSFAPrompt(context map[string]interface{}, ragSources []SearchResult) string {
	var sb strings.Builder

	sb.WriteString("Du bist ein Datenschutz-Experte und erstellst eine Datenschutz-Folgenabschätzung (DSFA) gemäß Art. 35 DSGVO.\n\n")

	// Add context.
	if useCaseName, ok := context["useCaseName"].(string); ok {
		sb.WriteString(fmt.Sprintf("Use Case: %s\n", useCaseName))
	}
	if description, ok := context["description"].(string); ok {
		sb.WriteString(fmt.Sprintf("Beschreibung: %s\n", description))
	}

	// Add RAG context.
	if len(ragSources) > 0 {
		sb.WriteString("\nRelevante rechtliche Grundlagen:\n")
		for _, source := range ragSources {
			sb.WriteString(fmt.Sprintf("- %s (%s)\n", source.Content[:min(200, len(source.Content))], source.Source))
		}
	}

	sb.WriteString("\nErstelle eine vollständige DSFA mit allen erforderlichen Abschnitten.")

	return sb.String()
}

// Content generation functions (would be replaced by actual LLM calls in production).
func (s *Service) generateDSFAContent(context map[string]interface{}, ragSources []SearchResult) string {
	useCaseName := "KI-gestützte Datenverarbeitung"
	if name, ok := context["useCaseName"].(string); ok {
		useCaseName = name
	}

	return fmt.Sprintf(`# Datenschutz-Folgenabschätzung (DSFA)

## Use Case: %s

## 1. Systematische Beschreibung der Verarbeitungsvorgänge

Die geplante Verarbeitung umfasst die Analyse von Daten mittels KI-gestützter Systeme.

### 1.1 Verarbeitungszwecke
- Automatisierte Analyse und Verarbeitung
- Optimierung von Geschäftsprozessen
- Qualitätssicherung

### 1.2 Rechtsgrundlage
Gemäß Art. 6 Abs. 1 lit. f DSGVO basiert die Verarbeitung auf dem berechtigten Interesse des Verantwortlichen.

### 1.3 Kategorien verarbeiteter Daten
- Nutzungsdaten
- Metadaten
- Aggregierte Analysedaten

## 2. Bewertung der Notwendigkeit und Verhältnismäßigkeit

### 2.1 Notwendigkeit
Die Verarbeitung ist erforderlich, um die definierten Geschäftsziele zu erreichen.

### 2.2 Verhältnismäßigkeit
Alternative Methoden wurden geprüft. Die gewählte Verarbeitungsmethode stellt den geringsten Eingriff bei gleichem Nutzen dar.

## 3. Risikobewertung

### 3.1 Identifizierte Risiken

| Risiko | Wahrscheinlichkeit | Schwere | Gesamtbewertung |
|--------|-------------------|---------|-----------------|
| Unbefugter Zugriff | Mittel | Hoch | HOCH |
| Datenverlust | Niedrig | Hoch | MITTEL |
| Fehlinterpretation | Mittel | Mittel | MITTEL |

### 3.2 Maßnahmen zur Risikominderung

1. **Technische Maßnahmen**
   - Verschlüsselung (AES-256)
   - Zugriffskontrollen
   - Audit-Logging

2. **Organisatorische Maßnahmen**
   - Schulungen
   - Dokumentation
   - Regelmäßige Überprüfungen

## 4. Genehmigungsstatus

| Rolle | Status | Datum |
|-------|--------|-------|
| Projektleiter | AUSSTEHEND | - |
| DSB | AUSSTEHEND | - |
| Geschäftsführung | AUSSTEHEND | - |

---
*Generiert mit KI-Unterstützung. Manuelle Überprüfung erforderlich.*
`, useCaseName)
}
func (s *Service) generateTOMContent(context map[string]interface{}, ragSources []SearchResult) string {
	return `# Technische und Organisatorische Maßnahmen (TOMs)

## 1. Vertraulichkeit (Art. 32 Abs. 1 lit. b DSGVO)

### 1.1 Zutrittskontrolle
- [ ] Alarmanlage installiert
- [ ] Chipkarten-System aktiv
- [ ] Besucherprotokoll geführt

### 1.2 Zugangskontrolle
- [ ] Starke Passwort-Policy (12+ Zeichen)
- [ ] MFA aktiviert
- [ ] Automatische Bildschirmsperre

### 1.3 Zugriffskontrolle
- [ ] Rollenbasierte Berechtigungen
- [ ] Need-to-know Prinzip
- [ ] Quartalsweise Berechtigungsüberprüfung

## 2. Integrität (Art. 32 Abs. 1 lit. b DSGVO)

### 2.1 Weitergabekontrolle
- [ ] TLS 1.3 für alle Übertragungen
- [ ] E-Mail-Verschlüsselung
- [ ] Sichere File-Transfer-Protokolle

### 2.2 Eingabekontrolle
- [ ] Vollständiges Audit-Logging
- [ ] Benutzeridentifikation bei Änderungen
- [ ] Unveränderliche Protokolle

## 3. Verfügbarkeit (Art. 32 Abs. 1 lit. c DSGVO)

### 3.1 Verfügbarkeitskontrolle
- [ ] Tägliche Backups
- [ ] Georedundante Speicherung
- [ ] USV-System
- [ ] Dokumentierter Notfallplan

### 3.2 Wiederherstellung
- [ ] RPO: 1 Stunde
- [ ] RTO: 4 Stunden
- [ ] Jährliche Wiederherstellungstests

## 4. Belastbarkeit

- [ ] DDoS-Schutz implementiert
- [ ] Lastverteilung aktiv
- [ ] Skalierbare Infrastruktur

---
*Generiert mit KI-Unterstützung. Manuelle Überprüfung erforderlich.*
`
}
func (s *Service) generateVVTContent(context map[string]interface{}, ragSources []SearchResult) string {
	return `# Verzeichnis der Verarbeitungstätigkeiten (Art. 30 DSGVO)

## Verarbeitungstätigkeit Nr. 1

### Stammdaten

| Feld | Wert |
|------|------|
| **Bezeichnung** | KI-gestützte Datenanalyse |
| **Verantwortlicher** | [Unternehmen] |
| **DSB** | [Name, Kontakt] |
| **Abteilung** | IT / Data Science |

### Verarbeitungsdetails

| Feld | Wert |
|------|------|
| **Zweck** | Optimierung von Geschäftsprozessen durch KI-Analyse |
| **Rechtsgrundlage** | Art. 6 Abs. 1 lit. f DSGVO |
| **Betroffene Kategorien** | Kunden, Mitarbeiter, Geschäftspartner |
| **Datenkategorien** | Nutzungsdaten, Metadaten, Analyseergebnisse |

### Empfänger

| Kategorie | Beispiele |
|-----------|-----------|
| Intern | IT-Abteilung, Management |
| Auftragsverarbeiter | Cloud-Provider (mit AVV) |
| Dritte | Keine |

### Drittlandtransfer

| Frage | Antwort |
|-------|---------|
| Übermittlung in Drittländer? | Nein / Ja |
| Falls ja, Garantien | [Standardvertragsklauseln / Angemessenheitsbeschluss] |

### Löschfristen

| Datenkategorie | Frist | Grundlage |
|----------------|-------|-----------|
| Nutzungsdaten | 12 Monate | Betriebliche Notwendigkeit |
| Analyseergebnisse | 36 Monate | Geschäftszweck |
| Audit-Logs | 10 Jahre | Handelsrechtlich |

### Technisch-Organisatorische Maßnahmen

Verweis auf TOM-Dokument Version 1.0

---
*Generiert mit KI-Unterstützung. Manuelle Überprüfung erforderlich.*
`
}
func (s *Service) generateGutachtenContent(context map[string]interface{}, ragSources []SearchResult) string {
	return `# Compliance-Gutachten

## Management Summary

Das geprüfte System erfüllt die wesentlichen Anforderungen der anwendbaren Regulierungen. Es bestehen Optimierungspotenziale, die priorisiert adressiert werden sollten.

## 1. Prüfungsumfang

### 1.1 Geprüfte Regulierungen
- DSGVO (EU 2016/679)
- AI Act (EU 2024/...)
- NIS2 (EU 2022/2555)

### 1.2 Prüfungsmethodik
- Dokumentenprüfung
- Technische Analyse
- Interviews mit Stakeholdern

## 2. Ergebnisse

### 2.1 DSGVO-Konformität

| Bereich | Bewertung | Handlungsbedarf |
|---------|-----------|-----------------|
| Rechtmäßigkeit | ✓ Erfüllt | Gering |
| Transparenz | ◐ Teilweise | Mittel |
| Datensicherheit | ✓ Erfüllt | Gering |
| Betroffenenrechte | ◐ Teilweise | Mittel |

### 2.2 AI Act-Konformität

| Bereich | Bewertung | Handlungsbedarf |
|---------|-----------|-----------------|
| Risikoklassifizierung | ✓ Erfüllt | Keiner |
| Dokumentation | ◐ Teilweise | Mittel |
| Human Oversight | ✓ Erfüllt | Gering |

### 2.3 NIS2-Konformität

| Bereich | Bewertung | Handlungsbedarf |
|---------|-----------|-----------------|
| Risikomanagement | ✓ Erfüllt | Gering |
| Incident Reporting | ◐ Teilweise | Hoch |
| Supply Chain | ○ Nicht erfüllt | Kritisch |

## 3. Empfehlungen

### Kritisch (sofort)
1. Supply-Chain-Risikomanagement implementieren
2. Incident-Reporting-Prozess etablieren

### Hoch (< 3 Monate)
3. Transparenzdokumentation vervollständigen
4. Betroffenenrechte-Portal optimieren

### Mittel (< 6 Monate)
5. AI Act Dokumentation erweitern
6. Schulungsmaßnahmen durchführen

## 4. Fazit

Das System zeigt einen guten Compliance-Stand mit klar definierten Verbesserungsbereichen. Bei Umsetzung der Empfehlungen ist eine vollständige Konformität erreichbar.

---
*Erstellt: [Datum]*
*Gutachter: [Name]*
*Version: 1.0*
`
}
func min(a, b int) int {
|
||||
if a < b {
|
||||
return a
|
||||
}
|
||||
return b
|
||||
}
|
||||
208
admin-v2/ai-compliance-sdk/internal/rag/service.go
Normal file
@@ -0,0 +1,208 @@
package rag

import (
	"context"
	"fmt"
)

// SearchResult represents a search result from the RAG system
type SearchResult struct {
	ID       string            `json:"id"`
	Content  string            `json:"content"`
	Source   string            `json:"source"`
	Score    float64           `json:"score"`
	Metadata map[string]string `json:"metadata,omitempty"`
}

// CorpusStatus represents the status of the legal corpus
type CorpusStatus struct {
	Status      string   `json:"status"`
	Collections []string `json:"collections"`
	Documents   int      `json:"documents"`
	LastUpdated string   `json:"lastUpdated,omitempty"`
}

// Service provides RAG functionality
type Service struct {
	qdrantURL string
	// client *qdrant.Client // Would be the actual Qdrant client in production
}

// NewService creates a new RAG service
func NewService(qdrantURL string) (*Service, error) {
	if qdrantURL == "" {
		return nil, fmt.Errorf("qdrant URL is required")
	}

	// In production, this would initialize the Qdrant client:
	// client, err := qdrant.NewClient(qdrantURL)
	// if err != nil {
	//     return nil, err
	// }

	return &Service{
		qdrantURL: qdrantURL,
	}, nil
}

// Search performs semantic search on the legal corpus
func (s *Service) Search(ctx context.Context, query string, topK int, collection string, filter string) ([]SearchResult, error) {
	// In production, this would:
	// 1. Generate an embedding for the query using an embedding model (e.g., BGE-M3)
	// 2. Search Qdrant for similar vectors
	// 3. Return the results

	// For now, return mock results that simulate a real RAG response
	results := s.getMockSearchResults(query, topK)
	return results, nil
}
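The three production steps in `Search` boil down to ranking corpus vectors by similarity to the query embedding. A minimal, self-contained sketch of that ranking step (brute-force cosine similarity; in production Qdrant performs this server-side with an ANN index, and the names `cosine`/`topK` are illustrative, not part of the SDK):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// topK ranks all corpus vectors against the query and returns
// the indices of the k most similar ones, best first.
func topK(query []float64, corpus [][]float64, k int) []int {
	idx := make([]int, len(corpus))
	for i := range idx {
		idx[i] = i
	}
	sort.Slice(idx, func(x, y int) bool {
		return cosine(query, corpus[idx[x]]) > cosine(query, corpus[idx[y]])
	})
	if k > len(idx) {
		k = len(idx)
	}
	return idx[:k]
}

func main() {
	corpus := [][]float64{{1, 0}, {0, 1}, {0.9, 0.1}}
	fmt.Println(topK([]float64{1, 0}, corpus, 2)) // → [0 2]
}
```

The brute-force scan is O(n·d) per query, which is why vector databases like Qdrant replace it with an approximate index (HNSW) once the corpus grows.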

// GetCorpusStatus returns the status of the legal corpus
func (s *Service) GetCorpusStatus(ctx context.Context) (*CorpusStatus, error) {
	// In production, this would query Qdrant for collection info
	return &CorpusStatus{
		Status: "ready",
		Collections: []string{
			"legal_corpus",
			"dsgvo_articles",
			"ai_act_articles",
			"nis2_articles",
		},
		Documents:   1500,
		LastUpdated: "2026-02-01T00:00:00Z",
	}, nil
}

// IndexDocument indexes a new document into the corpus
func (s *Service) IndexDocument(ctx context.Context, collection string, id string, content string, metadata map[string]string) error {
	// In production, this would:
	// 1. Generate an embedding for the content
	// 2. Store it in Qdrant with the embedding and metadata
	return nil
}
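Before embedding, long legal texts are typically split into overlapping chunks so each piece fits the embedding model's context window and article boundaries are not lost entirely at cut points. A minimal sketch of such a pre-indexing step (the function name and parameters are assumptions for illustration, not part of `IndexDocument`):

```go
package main

import "fmt"

// chunk splits text into fixed-size rune chunks with the given overlap.
// The overlap keeps context that would otherwise be cut at chunk borders.
func chunk(text string, size, overlap int) []string {
	runes := []rune(text)
	step := size - overlap
	if step < 1 {
		step = 1 // guard against overlap >= size
	}
	var out []string
	for start := 0; start < len(runes); start += step {
		end := start + size
		if end > len(runes) {
			end = len(runes)
		}
		out = append(out, string(runes[start:end]))
		if end == len(runes) {
			break
		}
	}
	return out
}

func main() {
	fmt.Println(chunk("abcdefgh", 4, 1)) // → [abcd defg gh]
}
```

Each resulting chunk would then get its own embedding and Qdrant point, with the chunk index stored in the metadata map so hits can be traced back to their position in the source document.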

// getMockSearchResults returns mock search results for development
func (s *Service) getMockSearchResults(query string, topK int) []SearchResult {
	// Comprehensive mock data for legal searches; the query itself is ignored here
	allResults := []SearchResult{
		// DSGVO Articles
		{
			ID:      "dsgvo-art-5",
			Content: "Art. 5 DSGVO - Grundsaetze fuer die Verarbeitung personenbezogener Daten\n\n(1) Personenbezogene Daten muessen:\na) auf rechtmaessige Weise, nach Treu und Glauben und in einer fuer die betroffene Person nachvollziehbaren Weise verarbeitet werden (Rechtmaessigkeit, Verarbeitung nach Treu und Glauben, Transparenz);\nb) fuer festgelegte, eindeutige und legitime Zwecke erhoben werden und duerfen nicht in einer mit diesen Zwecken nicht zu vereinbarenden Weise weiterverarbeitet werden (Zweckbindung);\nc) dem Zweck angemessen und erheblich sowie auf das fuer die Zwecke der Verarbeitung notwendige Mass beschraenkt sein (Datenminimierung);",
			Source:  "DSGVO",
			Score:   0.95,
			Metadata: map[string]string{
				"article":    "5",
				"regulation": "DSGVO",
				"category":   "grundsaetze",
			},
		},
		{
			ID:      "dsgvo-art-6",
			Content: "Art. 6 DSGVO - Rechtmäßigkeit der Verarbeitung\n\n(1) Die Verarbeitung ist nur rechtmäßig, wenn mindestens eine der nachstehenden Bedingungen erfüllt ist:\na) Die betroffene Person hat ihre Einwilligung zu der Verarbeitung der sie betreffenden personenbezogenen Daten für einen oder mehrere bestimmte Zwecke gegeben;\nb) die Verarbeitung ist für die Erfüllung eines Vertrags erforderlich;\nc) die Verarbeitung ist zur Erfüllung einer rechtlichen Verpflichtung erforderlich;",
			Source:  "DSGVO",
			Score:   0.92,
			Metadata: map[string]string{
				"article":    "6",
				"regulation": "DSGVO",
				"category":   "rechtsgrundlage",
			},
		},
		{
			ID:      "dsgvo-art-30",
			Content: "Art. 30 DSGVO - Verzeichnis von Verarbeitungstätigkeiten\n\n(1) Jeder Verantwortliche und gegebenenfalls sein Vertreter führen ein Verzeichnis aller Verarbeitungstätigkeiten, die ihrer Zuständigkeit unterliegen. Dieses Verzeichnis enthält sämtliche folgenden Angaben:\na) den Namen und die Kontaktdaten des Verantwortlichen;\nb) die Zwecke der Verarbeitung;\nc) eine Beschreibung der Kategorien betroffener Personen und der Kategorien personenbezogener Daten;",
			Source:  "DSGVO",
			Score:   0.89,
			Metadata: map[string]string{
				"article":    "30",
				"regulation": "DSGVO",
				"category":   "dokumentation",
			},
		},
		{
			ID:      "dsgvo-art-32",
			Content: "Art. 32 DSGVO - Sicherheit der Verarbeitung\n\n(1) Unter Berücksichtigung des Stands der Technik, der Implementierungskosten und der Art, des Umfangs, der Umstände und der Zwecke der Verarbeitung sowie der unterschiedlichen Eintrittswahrscheinlichkeit und Schwere des Risikos für die Rechte und Freiheiten natürlicher Personen treffen der Verantwortliche und der Auftragsverarbeiter geeignete technische und organisatorische Maßnahmen, um ein dem Risiko angemessenes Schutzniveau zu gewährleisten.",
			Source:  "DSGVO",
			Score:   0.88,
			Metadata: map[string]string{
				"article":    "32",
				"regulation": "DSGVO",
				"category":   "sicherheit",
			},
		},
		{
			ID:      "dsgvo-art-35",
			Content: "Art. 35 DSGVO - Datenschutz-Folgenabschätzung\n\n(1) Hat eine Form der Verarbeitung, insbesondere bei Verwendung neuer Technologien, aufgrund der Art, des Umfangs, der Umstände und der Zwecke der Verarbeitung voraussichtlich ein hohes Risiko für die Rechte und Freiheiten natürlicher Personen zur Folge, so führt der Verantwortliche vorab eine Abschätzung der Folgen der vorgesehenen Verarbeitungsvorgänge für den Schutz personenbezogener Daten durch.",
			Source:  "DSGVO",
			Score:   0.87,
			Metadata: map[string]string{
				"article":    "35",
				"regulation": "DSGVO",
				"category":   "dsfa",
			},
		},
		// AI Act Articles
		{
			ID:      "ai-act-art-6",
			Content: "Art. 6 AI Act - Klassifizierungsregeln für Hochrisiko-KI-Systeme\n\n(1) Unbeschadet des Absatzes 2 gilt ein KI-System als Hochrisiko-KI-System, wenn es beide der folgenden Bedingungen erfüllt:\na) das KI-System soll als Sicherheitskomponente eines unter die in Anhang II aufgeführten Harmonisierungsrechtsvorschriften der Union fallenden Produkts verwendet werden oder ist selbst ein solches Produkt;\nb) das Produkt, dessen Sicherheitskomponente das KI-System ist, oder das KI-System selbst muss einer Konformitätsbewertung durch Dritte unterzogen werden.",
			Source:  "AI Act",
			Score:   0.91,
			Metadata: map[string]string{
				"article":    "6",
				"regulation": "AI_ACT",
				"category":   "klassifizierung",
			},
		},
		{
			ID:      "ai-act-art-9",
			Content: "Art. 9 AI Act - Risikomanagement\n\n(1) Für Hochrisiko-KI-Systeme wird ein Risikomanagementsystem eingerichtet, umgesetzt, dokumentiert und aufrechterhalten. Das Risikomanagementsystem ist ein kontinuierlicher iterativer Prozess, der während des gesamten Lebenszyklus eines Hochrisiko-KI-Systems geplant und durchgeführt wird und einer regelmäßigen systematischen Aktualisierung bedarf.",
			Source:  "AI Act",
			Score:   0.85,
			Metadata: map[string]string{
				"article":    "9",
				"regulation": "AI_ACT",
				"category":   "risikomanagement",
			},
		},
		{
			ID:      "ai-act-art-52",
			Content: "Art. 52 AI Act - Transparenzpflichten für bestimmte KI-Systeme\n\n(1) Die Anbieter stellen sicher, dass KI-Systeme, die für die Interaktion mit natürlichen Personen bestimmt sind, so konzipiert und entwickelt werden, dass die betreffenden natürlichen Personen darüber informiert werden, dass sie mit einem KI-System interagieren, es sei denn, dies ist aus den Umständen und dem Nutzungskontext offensichtlich.",
			Source:  "AI Act",
			Score:   0.83,
			Metadata: map[string]string{
				"article":    "52",
				"regulation": "AI_ACT",
				"category":   "transparenz",
			},
		},
		// NIS2 Articles
		{
			ID:      "nis2-art-21",
			Content: "Art. 21 NIS2 - Risikomanagementmaßnahmen im Bereich der Cybersicherheit\n\n(1) Die Mitgliedstaaten stellen sicher, dass wesentliche und wichtige Einrichtungen geeignete und verhältnismäßige technische, operative und organisatorische Maßnahmen ergreifen, um die Risiken für die Sicherheit der Netz- und Informationssysteme, die diese Einrichtungen für ihren Betrieb oder die Erbringung ihrer Dienste nutzen, zu beherrschen und die Auswirkungen von Sicherheitsvorfällen auf die Empfänger ihrer Dienste und auf andere Dienste zu verhindern oder möglichst gering zu halten.",
			Source:  "NIS2",
			Score:   0.86,
			Metadata: map[string]string{
				"article":    "21",
				"regulation": "NIS2",
				"category":   "risikomanagement",
			},
		},
		{
			ID:      "nis2-art-23",
			Content: "Art. 23 NIS2 - Meldepflichten\n\n(1) Jeder Mitgliedstaat stellt sicher, dass wesentliche und wichtige Einrichtungen jeden Sicherheitsvorfall, der erhebliche Auswirkungen auf die Erbringung ihrer Dienste hat, unverzüglich dem zuständigen CSIRT oder gegebenenfalls der zuständigen Behörde melden.",
			Source:  "NIS2",
			Score:   0.81,
			Metadata: map[string]string{
				"article":    "23",
				"regulation": "NIS2",
				"category":   "meldepflicht",
			},
		},
	}

	// Return the top K results; clamp topK into the valid range so a
	// negative value cannot panic in the slice expression below
	if topK < 0 {
		topK = 0
	}
	if topK > len(allResults) {
		topK = len(allResults)
	}
	return allResults[:topK]
}
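The mock relies on its hard-coded ordering; a real retrieval layer should sort explicitly by descending score before truncating, since Qdrant collections or merged multi-collection results give no such guarantee. A minimal sketch (the `result` struct mirrors only the `Score` field of the service's `SearchResult`, and `rank` is an illustrative name, not an SDK function):

```go
package main

import (
	"fmt"
	"sort"
)

// result mirrors the Score-bearing part of SearchResult for illustration.
type result struct {
	ID    string
	Score float64
}

// rank returns a score-descending copy of in, clamped to topK entries.
func rank(in []result, topK int) []result {
	out := append([]result(nil), in...) // copy so the caller's slice order is untouched
	sort.Slice(out, func(i, j int) bool { return out[i].Score > out[j].Score })
	if topK < 0 {
		topK = 0
	}
	if topK > len(out) {
		topK = len(out)
	}
	return out[:topK]
}

func main() {
	rs := []result{{"a", 0.81}, {"b", 0.95}, {"c", 0.88}}
	fmt.Println(rank(rs, 2)) // highest scores first
}
```

Sorting a copy keeps `rank` side-effect free, which matters if the same in-memory corpus slice is shared across requests.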
@@ -273,6 +273,52 @@ Dein Ziel ist die rechtzeitige Erkennung und Kommunikation relevanter Ereignisse
     createdAt: '2024-12-01T00:00:00Z',
     updatedAt: '2025-01-12T02:00:00Z'
   },
+  'compliance-advisor': {
+    id: 'compliance-advisor',
+    name: 'Compliance Advisor',
+    description: 'DSGVO/Compliance-Berater fuer SDK-Nutzer',
+    soulFile: 'compliance-advisor.soul.md',
+    soulContent: `# Compliance Advisor Agent
+
+## Identitaet
+Du bist der BreakPilot Compliance-Berater. Du hilfst Nutzern des AI Compliance SDK,
+Datenschutz- und Compliance-Fragen in verstaendlicher Sprache zu beantworten.
+Du bist kein Anwalt und gibst keine Rechtsberatung, sondern orientierst dich an
+offiziellen Quellen und gibst praxisnahe Hinweise.
+
+## Kernprinzipien
+- **Quellenbasiert**: Verweise immer auf konkrete Rechtsgrundlagen (DSGVO-Artikel, BDSG-Paragraphen)
+- **Verstaendlich**: Erklaere rechtliche Konzepte in einfacher, praxisnaher Sprache
+- **Ehrlich**: Bei Unsicherheit empfehle professionelle Rechtsberatung
+- **Kontextbewusst**: Nutze das RAG-System fuer aktuelle Rechtstexte und Leitfaeden
+- **Scope-bewusst**: Nutze alle verfuegbaren RAG-Quellen AUSSER NIBIS-Dokumenten
+
+## Kompetenzbereich
+- DSGVO Art. 1-99 + Erwaegungsgruende
+- BDSG (Bundesdatenschutzgesetz)
+- AI Act (EU KI-Verordnung)
+- TTDSG, ePrivacy-Richtlinie
+- DSK-Kurzpapiere (Nr. 1-20)
+- SDM V3.0, BSI-Grundschutz, BSI-TR-03161
+- EDPB Guidelines, Bundes-/Laender-Muss-Listen
+- ISO 27001/27701 (Ueberblick)
+
+## Kommunikationsstil
+- Sachlich, aber verstaendlich
+- Deutsch als Hauptsprache
+- Strukturierte Antworten mit Quellenangabe
+- Praxisbeispiele wo hilfreich`,
+    color: '#6366f1',
+    status: 'running',
+    activeSessions: 0,
+    totalProcessed: 0,
+    avgResponseTime: 0,
+    errorRate: 0,
+    lastRestart: new Date().toISOString(),
+    version: '1.0.0',
+    createdAt: new Date().toISOString(),
+    updatedAt: new Date().toISOString()
+  },
   'orchestrator': {
     id: 'orchestrator',
     name: 'Orchestrator',
@@ -94,6 +94,19 @@ const mockAgents: AgentConfig[] = [
     totalProcessed: 8934,
     avgResponseTime: 12,
     lastActivity: 'just now'
   },
+  {
+    id: 'compliance-advisor',
+    name: 'Compliance Advisor',
+    description: 'DSGVO/Compliance-Berater fuer SDK-Nutzer',
+    soulFile: 'compliance-advisor.soul.md',
+    color: '#6366f1',
+    icon: 'message',
+    status: 'running',
+    activeSessions: 0,
+    totalProcessed: 0,
+    avgResponseTime: 0,
+    lastActivity: new Date().toISOString()
+  }
 ]
 
396
admin-v2/app/(admin)/ai/gpu/page.tsx
Normal file
@@ -0,0 +1,396 @@
'use client'

/**
 * GPU Infrastructure Admin Page
 *
 * vast.ai GPU Management for LLM Processing
 * Part of KI-Werkzeuge
 */

import { useEffect, useState, useCallback } from 'react'
import { PagePurpose } from '@/components/common/PagePurpose'
import { AIToolsSidebarResponsive } from '@/components/ai/AIToolsSidebar'

interface VastStatus {
  instance_id: number | null
  status: string
  gpu_name: string | null
  dph_total: number | null
  endpoint_base_url: string | null
  last_activity: string | null
  auto_shutdown_in_minutes: number | null
  total_runtime_hours: number | null
  total_cost_usd: number | null
  account_credit: number | null
  account_total_spend: number | null
  session_runtime_minutes: number | null
  session_cost_usd: number | null
  message: string | null
  error?: string
}

export default function GPUInfrastructurePage() {
  const [status, setStatus] = useState<VastStatus | null>(null)
  const [loading, setLoading] = useState(true)
  const [actionLoading, setActionLoading] = useState<string | null>(null)
  const [error, setError] = useState<string | null>(null)
  const [message, setMessage] = useState<string | null>(null)

  const API_PROXY = '/api/admin/gpu'

  const fetchStatus = useCallback(async () => {
    setLoading(true)
    setError(null)

    try {
      const response = await fetch(API_PROXY)
      const data = await response.json()

      if (!response.ok) {
        throw new Error(data.error || `HTTP ${response.status}`)
      }

      setStatus(data)
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Verbindungsfehler')
      setStatus({
        instance_id: null,
        status: 'error',
        gpu_name: null,
        dph_total: null,
        endpoint_base_url: null,
        last_activity: null,
        auto_shutdown_in_minutes: null,
        total_runtime_hours: null,
        total_cost_usd: null,
        account_credit: null,
        account_total_spend: null,
        session_runtime_minutes: null,
        session_cost_usd: null,
        message: 'Verbindung fehlgeschlagen'
      })
    } finally {
      setLoading(false)
    }
  }, [])

  useEffect(() => {
    fetchStatus()
  }, [fetchStatus])

  useEffect(() => {
    const interval = setInterval(fetchStatus, 30000)
    return () => clearInterval(interval)
  }, [fetchStatus])

  const powerOn = async () => {
    setActionLoading('on')
    setError(null)
    setMessage(null)

    try {
      const response = await fetch(API_PROXY, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ action: 'on' }),
      })

      const data = await response.json()

      if (!response.ok) {
        throw new Error(data.error || data.detail || 'Aktion fehlgeschlagen')
      }

      setMessage('Start angefordert')
      setTimeout(fetchStatus, 3000)
      setTimeout(fetchStatus, 10000)
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Fehler beim Starten')
      fetchStatus()
    } finally {
      setActionLoading(null)
    }
  }

  const powerOff = async () => {
    setActionLoading('off')
    setError(null)
    setMessage(null)

    try {
      const response = await fetch(API_PROXY, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ action: 'off' }),
      })

      const data = await response.json()

      if (!response.ok) {
        throw new Error(data.error || data.detail || 'Aktion fehlgeschlagen')
      }

      setMessage('Stop angefordert')
      setTimeout(fetchStatus, 3000)
      setTimeout(fetchStatus, 10000)
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Fehler beim Stoppen')
      fetchStatus()
    } finally {
      setActionLoading(null)
    }
  }

  const getStatusBadge = (s: string) => {
    const baseClasses = 'px-3 py-1 rounded-full text-sm font-semibold uppercase'
    switch (s) {
      case 'running':
        return `${baseClasses} bg-green-100 text-green-800`
      case 'stopped':
      case 'exited':
        return `${baseClasses} bg-red-100 text-red-800`
      case 'loading':
      case 'scheduling':
      case 'creating':
      case 'starting...':
      case 'stopping...':
        return `${baseClasses} bg-yellow-100 text-yellow-800`
      default:
        return `${baseClasses} bg-slate-100 text-slate-600`
    }
  }

  const getCreditColor = (credit: number | null) => {
    if (credit === null) return 'text-slate-500'
    if (credit < 5) return 'text-red-600'
    if (credit < 15) return 'text-yellow-600'
    return 'text-green-600'
  }

  return (
    <div>
      {/* Page Purpose */}
      <PagePurpose
        title="GPU Infrastruktur"
        purpose="Verwalten Sie die vast.ai GPU-Instanzen fuer LLM-Verarbeitung und OCR. Starten/Stoppen Sie GPUs bei Bedarf und ueberwachen Sie Kosten in Echtzeit."
        audience={['DevOps', 'Entwickler', 'System-Admins']}
        architecture={{
          services: ['vast.ai API', 'Ollama', 'VLLM'],
          databases: ['PostgreSQL (Logs)'],
        }}
        relatedPages={[
          { name: 'LLM Vergleich', href: '/ai/llm-compare', description: 'KI-Provider testen' },
          { name: 'Test Quality (BQAS)', href: '/ai/test-quality', description: 'Golden Suite & Tests' },
          { name: 'Magic Help', href: '/ai/magic-help', description: 'TrOCR Testing' },
        ]}
        collapsible={true}
        defaultCollapsed={true}
      />

      {/* KI-Werkzeuge Sidebar */}
      <AIToolsSidebarResponsive currentTool="gpu" />

      {/* Status Cards */}
      <div className="bg-white rounded-xl border border-slate-200 p-6 mb-6">
        <div className="grid grid-cols-2 md:grid-cols-3 lg:grid-cols-6 gap-6">
          <div>
            <div className="text-sm text-slate-500 mb-2">Status</div>
            {loading ? (
              <span className="px-3 py-1 rounded-full text-sm font-semibold bg-slate-100 text-slate-600">
                Laden...
              </span>
            ) : (
              <span className={getStatusBadge(
                actionLoading === 'on' ? 'starting...' :
                actionLoading === 'off' ? 'stopping...' :
                status?.status || 'unknown'
              )}>
                {actionLoading === 'on' ? 'starting...' :
                 actionLoading === 'off' ? 'stopping...' :
                 status?.status || 'unbekannt'}
              </span>
            )}
          </div>

          <div>
            <div className="text-sm text-slate-500 mb-2">GPU</div>
            <div className="font-semibold text-slate-900">
              {status?.gpu_name || '-'}
            </div>
          </div>

          <div>
            <div className="text-sm text-slate-500 mb-2">Kosten/h</div>
            <div className="font-semibold text-slate-900">
              {status?.dph_total ? `$${status.dph_total.toFixed(3)}` : '-'}
            </div>
          </div>

          <div>
            <div className="text-sm text-slate-500 mb-2">Auto-Stop</div>
            <div className="font-semibold text-slate-900">
              {status && status.auto_shutdown_in_minutes !== null
                ? `${status.auto_shutdown_in_minutes} min`
                : '-'}
            </div>
          </div>

          <div>
            <div className="text-sm text-slate-500 mb-2">Budget</div>
            <div className={`font-bold text-lg ${getCreditColor(status?.account_credit ?? null)}`}>
              {status && status.account_credit !== null
                ? `$${status.account_credit.toFixed(2)}`
                : '-'}
            </div>
          </div>

          <div>
            <div className="text-sm text-slate-500 mb-2">Session</div>
            <div className="font-semibold text-slate-900">
              {status && status.session_runtime_minutes !== null && status.session_cost_usd !== null
                ? `${Math.round(status.session_runtime_minutes)} min / $${status.session_cost_usd.toFixed(3)}`
                : '-'}
            </div>
          </div>
        </div>

        {/* Buttons */}
        <div className="flex items-center gap-4 mt-6 pt-6 border-t border-slate-200">
          <button
            onClick={powerOn}
            disabled={actionLoading !== null || status?.status === 'running'}
            className="px-6 py-2 bg-orange-600 text-white rounded-lg font-medium hover:bg-orange-700 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
          >
            Starten
          </button>
          <button
            onClick={powerOff}
            disabled={actionLoading !== null || status?.status !== 'running'}
            className="px-6 py-2 bg-red-600 text-white rounded-lg font-medium hover:bg-red-700 disabled:opacity-50 disabled:cursor-not-allowed transition-colors"
          >
            Stoppen
          </button>
          <button
            onClick={fetchStatus}
            disabled={loading}
            className="px-4 py-2 border border-slate-300 text-slate-700 rounded-lg font-medium hover:bg-slate-50 disabled:opacity-50 transition-colors"
          >
            {loading ? 'Aktualisiere...' : 'Aktualisieren'}
          </button>

          {message && (
            <span className="ml-4 text-sm text-green-600 font-medium">{message}</span>
          )}
          {error && (
            <span className="ml-4 text-sm text-red-600 font-medium">{error}</span>
          )}
        </div>
      </div>

      {/* Extended Stats */}
      <div className="grid grid-cols-1 lg:grid-cols-2 gap-6 mb-6">
        <div className="bg-white rounded-xl border border-slate-200 p-6">
          <h3 className="font-semibold text-slate-900 mb-4">Kosten-Uebersicht</h3>
          <div className="space-y-4">
            <div className="flex justify-between items-center">
              <span className="text-slate-600">Session Laufzeit</span>
              <span className="font-semibold">
                {status && status.session_runtime_minutes !== null
                  ? `${Math.round(status.session_runtime_minutes)} Minuten`
                  : '-'}
              </span>
            </div>
            <div className="flex justify-between items-center">
              <span className="text-slate-600">Session Kosten</span>
              <span className="font-semibold">
                {status && status.session_cost_usd !== null
                  ? `$${status.session_cost_usd.toFixed(4)}`
                  : '-'}
              </span>
            </div>
            <div className="flex justify-between items-center pt-4 border-t border-slate-100">
              <span className="text-slate-600">Gesamtlaufzeit</span>
              <span className="font-semibold">
                {status && status.total_runtime_hours !== null
                  ? `${status.total_runtime_hours.toFixed(1)} Stunden`
                  : '-'}
              </span>
            </div>
            <div className="flex justify-between items-center">
              <span className="text-slate-600">Gesamtkosten</span>
              <span className="font-semibold">
                {status && status.total_cost_usd !== null
                  ? `$${status.total_cost_usd.toFixed(2)}`
                  : '-'}
              </span>
            </div>
            <div className="flex justify-between items-center">
              <span className="text-slate-600">vast.ai Ausgaben</span>
              <span className="font-semibold">
                {status && status.account_total_spend !== null
                  ? `$${status.account_total_spend.toFixed(2)}`
                  : '-'}
              </span>
            </div>
          </div>
        </div>

        <div className="bg-white rounded-xl border border-slate-200 p-6">
          <h3 className="font-semibold text-slate-900 mb-4">Instanz-Details</h3>
          <div className="space-y-4">
            <div className="flex justify-between items-center">
              <span className="text-slate-600">Instanz ID</span>
              <span className="font-mono text-sm">
                {status?.instance_id || '-'}
              </span>
            </div>
            <div className="flex justify-between items-center">
              <span className="text-slate-600">GPU</span>
              <span className="font-semibold">
                {status?.gpu_name || '-'}
              </span>
            </div>
            <div className="flex justify-between items-center">
              <span className="text-slate-600">Stundensatz</span>
              <span className="font-semibold">
                {status?.dph_total ? `$${status.dph_total.toFixed(4)}/h` : '-'}
              </span>
            </div>
            <div className="flex justify-between items-center">
              <span className="text-slate-600">Letzte Aktivitaet</span>
              <span className="text-sm">
                {status?.last_activity
                  ? new Date(status.last_activity).toLocaleString('de-DE')
                  : '-'}
              </span>
            </div>
            {status?.endpoint_base_url && status.status === 'running' && (
              <div className="pt-4 border-t border-slate-100">
                <div className="text-slate-600 text-sm mb-1">Endpoint</div>
                <code className="text-xs bg-slate-100 px-2 py-1 rounded block overflow-x-auto">
                  {status.endpoint_base_url}
                </code>
              </div>
            )}
          </div>
        </div>
      </div>

      {/* Info */}
      <div className="bg-violet-50 border border-violet-200 rounded-xl p-4">
        <div className="flex gap-3">
          <svg className="w-5 h-5 text-violet-600 flex-shrink-0 mt-0.5" fill="none" stroke="currentColor" viewBox="0 0 24 24">
            <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M13 16h-1v-4h-1m1-4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
          </svg>
          <div>
            <h4 className="font-semibold text-violet-900">Auto-Shutdown</h4>
            <p className="text-sm text-violet-800 mt-1">
              Die GPU-Instanz wird automatisch gestoppt, wenn sie laengere Zeit inaktiv ist.
              Der Status wird alle 30 Sekunden automatisch aktualisiert.
            </p>
          </div>
        </div>
      </div>
    </div>
  )
}
@@ -12,6 +12,7 @@
 
 import { useState, useEffect, useCallback } from 'react'
 import { PagePurpose } from '@/components/common/PagePurpose'
+import { AIToolsSidebarResponsive } from '@/components/ai/AIToolsSidebar'
 
 interface LLMResponse {
   provider: string
@@ -210,21 +211,24 @@ export default function LLMComparePage() {
       {/* Page Purpose */}
       <PagePurpose
         title="LLM Vergleich"
-        purpose="Vergleichen Sie Antworten verschiedener KI-Provider (OpenAI, Claude, Self-hosted) fuer Qualitaetssicherung. Optimieren Sie Parameter und System Prompts fuer beste Ergebnisse."
+        purpose="Vergleichen Sie Antworten verschiedener KI-Provider (OpenAI, Claude, Self-hosted) fuer Qualitaetssicherung. Optimieren Sie Parameter und System Prompts fuer beste Ergebnisse. Standalone-Werkzeug ohne direkten Datenfluss zur KI-Pipeline."
         audience={['Entwickler', 'Data Scientists', 'QA']}
         architecture={{
           services: ['llm-gateway (Python)', 'Ollama', 'OpenAI API', 'Claude API'],
           databases: ['PostgreSQL (History)', 'Qdrant (RAG)'],
         }}
         relatedPages={[
-          { name: 'RAG Management', href: '/ai/rag', description: 'Training Data verwalten' },
-          { name: 'GPU Infrastruktur', href: '/infrastructure/gpu', description: 'GPU-Ressourcen' },
-          { name: 'OCR-Labeling', href: '/ai/ocr-labeling', description: 'Handschrift-Training' },
+          { name: 'Test Quality (BQAS)', href: '/ai/test-quality', description: 'Golden Suite & Synthetic Tests' },
+          { name: 'GPU Infrastruktur', href: '/ai/gpu', description: 'GPU-Ressourcen verwalten' },
+          { name: 'Agent Management', href: '/ai/agents', description: 'Multi-Agent System' },
         ]}
         collapsible={true}
         defaultCollapsed={true}
       />
 
+      {/* KI-Werkzeuge Sidebar */}
+      <AIToolsSidebarResponsive currentTool="llm-compare" />
+
       <div className="grid grid-cols-1 lg:grid-cols-3 gap-6">
         {/* Left Column: Input & Settings */}
         <div className="lg:col-span-1 space-y-4">