fix: Restore all files lost during destructive rebase

A previous `git pull --rebase origin main` dropped 177 local commits,
losing 3400+ files across admin-v2, backend, studio-v2, website,
klausur-service, and many other services. The partial restore attempt
(660295e2) only recovered some files.

This commit restores all missing files from pre-rebase ref 98933f5e
while preserving post-rebase additions (night-scheduler, night-mode UI,
NightModeWidget dashboard integration).

Restored features include:
- AI Module Sidebar (FAB), OCR Labeling, OCR Compare
- GPU Dashboard, RAG Pipeline, Magic Help
- Klausur-Korrektur (8 files), Abitur-Archiv (5+ files)
- Companion, Zeugnisse-Crawler, Screen Flow
- Full backend, studio-v2, website, klausur-service
- All compliance SDKs, agent-core, voice-service
- CI/CD configs, documentation, scripts

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Benjamin Admin
Date: 2026-02-09 09:51:32 +01:00
Parent: f7487ee240
Commit: bfdaf63ba9
2009 changed files with 749983 additions and 1731 deletions


@@ -0,0 +1,247 @@
# Refactoring Strategy for Large Files
## Overview
This document describes the strategy for automated refactoring of the largest
files in the BreakPilot project using Qwen2.5:32B on the Mac Mini.
## Prioritized Files
| Prio | File | Lines | Language | Strategy |
|------|------|-------|----------|----------|
| 1 | `backend/frontend/static/js/studio.js` | 9,787 | JS | Split into modules |
| 2 | `website/components/admin/SystemInfoSection.tsx` | 5,690 | TSX | Extract subcomponents |
| 3 | `backend/frontend/modules/companion.py` | 5,513 | Python | Extract classes |
| 4 | `backend/classroom_api.py` | 4,467 | Python | Split routers |
| 5 | `backend/frontend/school.py` | 3,732 | Python | Extract services |
| 6 | `backend/frontend/components/admin_panel.py` | 3,434 | Python | Separate components |
| 7 | `backend/ai_processor.py` | 2,999 | Python | Pipeline modules |
| 8 | `website/app/admin/rag/page.tsx` | 2,964 | TSX | Hooks/components |
| 9 | `backend/frontend/modules/alerts.py` | 2,902 | Python | Separate alert types |
| 10 | `backend/frontend/meetings.py` | 2,847 | Python | Extract services |
## Detailed Strategies
### 1. studio.js (9,787 lines) - HIGHEST PRIORITY
**Problem:** Monolithic admin-panel JavaScript
**Target structure:**
```
backend/frontend/static/js/
├── studio/
│   ├── index.js              # Entry point, imports
│   ├── api.js                # API calls
│   ├── components/
│   │   ├── dashboard.js      # Dashboard logic
│   │   ├── documents.js      # Document management
│   │   ├── users.js          # User management
│   │   └── settings.js       # Settings
│   ├── utils/
│   │   ├── validators.js     # Validation functions
│   │   ├── formatters.js     # Formatting
│   │   └── helpers.js        # Helper functions
│   └── state.js              # State management
└── studio.js                 # Legacy wrapper (for backwards compatibility)
```
**Chunk split:**
- Chunk 1 (lines 1-800): initialization, globals, utils
- Chunk 2 (lines 801-1600): API functions
- Chunk 3 (lines 1601-2400): dashboard components
- Chunk 4 (lines 2401-3200): document management
- Chunk 5 (lines 3201-4000): user management
- Chunk 6 (lines 4001-4800): settings
- Chunk 7 (lines 4801-5600): event handlers
- Chunk 8 (lines 5601-6400): render functions
- Chunk 9 (lines 6401-7200): modals/dialogs
- Chunk 10 (lines 7201-8000): tables/lists
- Chunk 11 (lines 8001-8800): forms
- Chunk 12 (lines 8801-9787): navigation/footer
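Chunk boundaries like the ones above can be produced mechanically before handing each piece to the model. A minimal sketch (the function name `split_into_chunks` is ours, not part of the orchestrator):

```python
from typing import List, Tuple


def split_into_chunks(source: str, chunk_size: int = 800) -> List[Tuple[int, int, str]]:
    """Split a file's text into (start_line, end_line, text) chunks of at most chunk_size lines.

    Line numbers are 1-based and ranges are inclusive, matching the chunk table above.
    """
    lines = source.splitlines()
    chunks = []
    for start in range(0, len(lines), chunk_size):
        end = min(start + chunk_size, len(lines))
        chunks.append((start + 1, end, "\n".join(lines[start:end])))
    return chunks
```

Note that this always emits uniform 800-line chunks; the hand-made table instead folds the remainder into the last chunk, which is a reasonable manual adjustment.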
### 2. SystemInfoSection.tsx (5,690 lines)
**Problem:** Oversized React component with many subcomponents inlined
**Target structure:**
```
website/components/admin/
├── SystemInfoSection/
│   ├── index.tsx             # Main component
│   ├── SystemOverview.tsx    # Overview cards
│   ├── ContainerStatus.tsx   # Docker status
│   ├── DatabaseInfo.tsx      # DB statistics
│   ├── ServiceHealth.tsx     # Service health
│   ├── ResourceUsage.tsx     # CPU/RAM/disk
│   ├── LogViewer.tsx         # Log display
│   └── hooks/
│       ├── useSystemInfo.ts  # Data fetching
│       └── useHealthCheck.ts # Health polling
└── SystemInfoSection.tsx     # Legacy wrapper
```
### 3. companion.py (5,513 lines)
**Problem:** Monolithic AI companion code
**Target structure:**
```
backend/frontend/modules/companion/
├── __init__.py
├── core.py           # CompanionCore class
├── conversation.py   # Conversation handler
├── memory.py         # Memory management
├── prompts.py        # Prompt templates
├── tools.py          # Tool integration
├── handlers/
│   ├── text.py       # Text handler
│   ├── image.py      # Image handler
│   └── voice.py      # Voice handler
└── utils.py          # Helper functions
```
### 4. classroom_api.py (4,467 lines)
**Problem:** Too many endpoints in one file
**Target structure:**
```
backend/api/
├── classroom/
│   ├── __init__.py       # Router collection
│   ├── courses.py        # Course endpoints
│   ├── students.py       # Student endpoints
│   ├── teachers.py       # Teacher endpoints
│   ├── assignments.py    # Assignment endpoints
│   ├── grades.py         # Grade endpoints
│   └── materials.py      # Material endpoints
└── classroom_api.py      # Legacy wrapper
```
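The legacy wrapper used in these target structures can stay very small: it re-exports the public names from the new modules and optionally warns on use. A minimal sketch of the pattern (the names `new_impl` and `old_endpoint` are illustrative, not actual BreakPilot code):

```python
import warnings
from functools import wraps


def new_impl(course_id: int) -> dict:
    # Stand-in for a function after it moved (e.g. to backend/api/classroom/courses.py)
    return {"course_id": course_id}


def deprecated_alias(target):
    """Wrap a moved function so old call sites keep working but emit a DeprecationWarning."""
    @wraps(target)
    def wrapper(*args, **kwargs):
        warnings.warn(
            f"{target.__name__} moved; import it from its new module instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return target(*args, **kwargs)
    return wrapper


# All the legacy classroom_api.py would then contain: aliases like this one
old_endpoint = deprecated_alias(new_impl)
```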
## Refactoring Rules for Qwen
### General Rules
1. **No breaking changes**
   - Keep all public APIs
   - Legacy wrappers for backwards compatibility
2. **Keep imports working**
   - Relative imports for new modules
   - Absolute imports only for external dependencies
3. **Add type hints**
   - Python: type hints for all parameters/returns
   - TypeScript: explicit types instead of `any`
4. **Documentation**
   - Docstrings for all classes/functions
   - JSDoc for JavaScript functions
### Python-specific
```python
# BEFORE (bad)
def process(data):
    # 500 lines of code...
    pass


# AFTER (good)
def process(data: Dict[str, Any]) -> ProcessResult:
    """
    Process the incoming data.

    Args:
        data: Dictionary with the input data

    Returns:
        ProcessResult with the processing result
    """
    validated = _validate_input(data)
    transformed = _transform_data(validated)
    return _create_result(transformed)
```
### JavaScript/TypeScript-specific
```typescript
// BEFORE (bad)
function handleClick(e) {
    // 200 lines of code...
}

// AFTER (good)
interface ClickHandlerResult {
    success: boolean;
    data?: unknown;
    error?: string;
}

/**
 * Handles click events on the dashboard
 * @param event - the click event
 * @returns the result of the processing
 */
function handleDashboardClick(event: MouseEvent): ClickHandlerResult {
    const validatedEvent = validateClickEvent(event);
    const result = processClickAction(validatedEvent);
    return formatResult(result);
}
```
## Execution Plan
### Phase 1: Preparation
1. [x] Orchestrator script created
2. [x] Strategy document created
3. [ ] Wait for the Qwen2.5:32B download
4. [ ] Test the Ollama API connection
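Testing the Ollama API connection (step 4) only needs a GET against Ollama's `/api/tags` endpoint, which lists locally pulled models. A minimal sketch, assuming Ollama's default port 11434 (the helper names are ours, not orchestrator API):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port


def fetch_tags(base_url: str = OLLAMA_URL) -> dict:
    """GET /api/tags; Ollama responds with {"models": [{"name": ...}, ...]}."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return json.load(resp)


def ollama_has_model(name: str, tags_payload: dict) -> bool:
    """Check whether a model tag (e.g. 'qwen2.5:32b') appears in an /api/tags response."""
    return any(m.get("name", "").startswith(name) for m in tags_payload.get("models", []))
```

Calling `ollama_has_model("qwen2.5:32b", fetch_tags())` would confirm both the connection and that the download (step 3) has finished.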
### Phase 2: Refactoring (order)
1. [ ] `classroom_api.py` (smaller, easy to test)
2. [ ] `ai_processor.py` (important for AI features)
3. [ ] `companion.py` (large impact)
4. [ ] `SystemInfoSection.tsx` (frontend test)
5. [ ] `studio.js` (largest file, last)
### Phase 3: Validation
1. [ ] Run tests after each refactoring
2. [ ] Manual code review by Claude
3. [ ] Integration into the main branch
## Commands
```bash
# Check the status
python scripts/qwen_refactor_orchestrator.py --status

# Test the Ollama connection
python scripts/qwen_refactor_orchestrator.py --check-ollama

# Refactor a single file
python scripts/qwen_refactor_orchestrator.py --file backend/classroom_api.py

# Refactor all large files
python scripts/qwen_refactor_orchestrator.py --all-large-files

# Run tests for a refactored file
python scripts/qwen_refactor_orchestrator.py --run-tests backend/classroom_api.py
```
## Risks and Mitigations
| Risk | Mitigation |
|------|------------|
| Qwen context too small for chunks | Chunk size capped at 800 lines |
| Breaking changes | Legacy wrappers for all files |
| Failing tests | Rollback to the original is possible |
| Qwen hallucinations | Claude review before integration |
| Network problems | Retry logic in the orchestrator |
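The network-problem mitigation in the table can be a plain retry wrapper with exponential backoff. A minimal sketch of what the orchestrator's retry logic could look like (the function name `with_retries` is illustrative):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def with_retries(fn: Callable[[], T], attempts: int = 3, base_delay: float = 1.0) -> T:
    """Call fn, retrying on exceptions with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * (2 ** attempt))
```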
## Success Metrics
- [ ] All 10 files under 1,000 lines
- [ ] 100% test coverage preserved
- [ ] No breaking changes
- [ ] Improved code documentation
- [ ] Shorter build times (through better tree shaking)
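The first metric (all files under 1,000 lines) can be verified mechanically. A small sketch that reports every file at or over the limit (the helper name `count_lines` is ours):

```python
from pathlib import Path
from typing import Dict, Iterable


def count_lines(paths: Iterable[str], limit: int = 1000) -> Dict[str, int]:
    """Return {path: line_count} for every file whose line count is >= limit."""
    offenders = {}
    for p in paths:
        text = Path(p).read_text(encoding="utf-8", errors="replace")
        # Count lines even when the file lacks a trailing newline
        n = text.count("\n") + (1 if text and not text.endswith("\n") else 0)
        if n >= limit:
            offenders[p] = n
    return offenders
```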

scripts/backup-cron.sh Executable file

@@ -0,0 +1,17 @@
#!/bin/bash
# Automatic backup script for cron
# Add to crontab: 0 2 * * * /path/to/backup-cron.sh
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "$SCRIPT_DIR")"
cd "$PROJECT_DIR"

# Run the backup and append to the log
export BACKUP_DIR="$PROJECT_DIR/backups"
mkdir -p "$BACKUP_DIR"  # ensure the log target exists before the redirect below
"$SCRIPT_DIR/backup.sh" >> "$BACKUP_DIR/backup.log" 2>&1

# On failure, record it in the log (optional: send a notification)
if [ $? -ne 0 ]; then
    echo "Backup failed at $(date)" >> "$BACKUP_DIR/backup.log"
fi

scripts/backup.sh Executable file

@@ -0,0 +1,61 @@
#!/bin/bash
# BreakPilot Database Backup Script
# Creates automatic backups of the PostgreSQL database
set -e

# Configuration
BACKUP_DIR="${BACKUP_DIR:-./backups}"
CONTAINER_NAME="${CONTAINER_NAME:-breakpilot-pwa-postgres}"
DB_USER="${DB_USER:-breakpilot}"
DB_NAME="${DB_NAME:-breakpilot_db}"
RETENTION_DAYS="${RETENTION_DAYS:-30}"

# Timestamp for filenames
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
BACKUP_FILE="${BACKUP_DIR}/breakpilot_${TIMESTAMP}.sql.gz"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

echo -e "${GREEN}=== BreakPilot Database Backup ===${NC}"
echo "Timestamp: $(date)"
echo ""

# Create the backup directory
mkdir -p "$BACKUP_DIR"

# Check whether the container is running
if ! docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then
    echo -e "${RED}Error: Container '$CONTAINER_NAME' is not running${NC}"
    exit 1
fi

echo -e "${YELLOW}Creating backup...${NC}"
# Create the backup (compressed)
docker exec "$CONTAINER_NAME" pg_dump -U "$DB_USER" "$DB_NAME" | gzip > "$BACKUP_FILE"

# Size of the backup
BACKUP_SIZE=$(ls -lh "$BACKUP_FILE" | awk '{print $5}')
echo -e "${GREEN}✓ Backup created: $BACKUP_FILE ($BACKUP_SIZE)${NC}"

# Delete old backups
echo ""
echo -e "${YELLOW}Cleaning up old backups (older than $RETENTION_DAYS days)...${NC}"
DELETED_COUNT=$(find "$BACKUP_DIR" -name "breakpilot_*.sql.gz" -mtime +"$RETENTION_DAYS" -delete -print | wc -l)
echo -e "${GREEN}✓ Deleted $DELETED_COUNT old backup(s)${NC}"

# Statistics
echo ""
echo "=== Backup Statistics ==="
TOTAL_BACKUPS=$(ls -1 "$BACKUP_DIR"/breakpilot_*.sql.gz 2>/dev/null | wc -l)
TOTAL_SIZE=$(du -sh "$BACKUP_DIR" 2>/dev/null | awk '{print $1}')
echo "Total backups: $TOTAL_BACKUPS"
echo "Total size: $TOTAL_SIZE"
echo ""
echo -e "${GREEN}Backup completed successfully!${NC}"

scripts/daily-backup.sh Executable file

@@ -0,0 +1,71 @@
#!/bin/bash
# =============================================================================
# Daily backup from the Mac Mini to the MacBook
# =============================================================================
# This script syncs the project from the Mac Mini to the MacBook
# so that no data is lost if a disk fails.
#
# Installation:
#   chmod +x ~/Projekte/breakpilot-pwa/scripts/daily-backup.sh
#
# Run manually:
#   ~/Projekte/breakpilot-pwa/scripts/daily-backup.sh
#
# Automatically every day (via LaunchAgent, created below):
#   the script runs automatically every day at 02:00
# =============================================================================
set -e

# Configuration
MACMINI_HOST="benjaminadmin@macmini"
REMOTE_DIR="/Users/benjaminadmin/Projekte/breakpilot-pwa"
LOCAL_DIR="/Users/benjaminadmin/Projekte/breakpilot-pwa"
BACKUP_LOG="/Users/benjaminadmin/Projekte/backup-logs"
TIMESTAMP=$(date +%Y-%m-%d_%H-%M-%S)

# Create the log directory
mkdir -p "$BACKUP_LOG"
LOG_FILE="$BACKUP_LOG/backup-$TIMESTAMP.log"
echo "=== Backup started: $TIMESTAMP ===" | tee "$LOG_FILE"

# 1. Check the connection to the Mac Mini
echo "[1/4] Checking connection to the Mac Mini..." | tee -a "$LOG_FILE"
if ! ping -c 1 -t 5 macmini &>/dev/null; then
    echo "ERROR: Mac Mini not reachable!" | tee -a "$LOG_FILE"
    exit 1
fi
echo "  ✓ Mac Mini reachable" | tee -a "$LOG_FILE"

# 2. Fetch the latest state from the Git remote
echo "[2/4] Fetching latest commits from Gitea..." | tee -a "$LOG_FILE"
cd "$LOCAL_DIR"
git fetch origin 2>&1 | tee -a "$LOG_FILE"
echo "  ✓ Git fetch successful" | tee -a "$LOG_FILE"

# 3. Back up Docker volumes and databases (optional)
echo "[3/4] Backing up important data from the Mac Mini..." | tee -a "$LOG_FILE"
# Postgres backup; "|| true" keeps set -e from aborting before the size check below
POSTGRES_BACKUP="$BACKUP_LOG/postgres-$TIMESTAMP.sql.gz"
ssh "$MACMINI_HOST" "/usr/local/bin/docker exec breakpilot-pwa-postgres pg_dump -U postgres breakpilot | gzip" > "$POSTGRES_BACKUP" 2>> "$LOG_FILE" || true
if [ -s "$POSTGRES_BACKUP" ]; then
    echo "  ✓ Postgres backup: $POSTGRES_BACKUP" | tee -a "$LOG_FILE"
else
    echo "  ⚠ Postgres backup failed or empty" | tee -a "$LOG_FILE"
fi

# 4. Clean up old backups (keep the last 7 days)
echo "[4/4] Cleaning up old backups..." | tee -a "$LOG_FILE"
find "$BACKUP_LOG" -name "postgres-*.sql.gz" -mtime +7 -delete 2>/dev/null
find "$BACKUP_LOG" -name "backup-*.log" -mtime +7 -delete 2>/dev/null
echo "  ✓ Old backups removed (>7 days)" | tee -a "$LOG_FILE"

echo "" | tee -a "$LOG_FILE"
echo "=== Backup finished: $(date +%Y-%m-%d_%H-%M-%S) ===" | tee -a "$LOG_FILE"
echo "" | tee -a "$LOG_FILE"
echo "Summary:" | tee -a "$LOG_FILE"
echo "  - Git repository: up to date (via fetch)" | tee -a "$LOG_FILE"
echo "  - Postgres backup: $POSTGRES_BACKUP" | tee -a "$LOG_FILE"
echo "  - Log: $LOG_FILE" | tee -a "$LOG_FILE"

scripts/env-switch.sh Executable file

@@ -0,0 +1,91 @@
#!/bin/bash
# ============================================
# BreakPilot Environment Switcher
# ============================================
# Usage: ./scripts/env-switch.sh [dev|staging|prod]
# ============================================
set -e
ENV=${1:-dev}
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(dirname "$SCRIPT_DIR")"
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
cd "$ROOT_DIR"
case $ENV in
    dev|development)
        ENV_FILE=".env.dev"
        ENV_NAME="Development"
        COLOR=$GREEN
        ;;
    staging)
        ENV_FILE=".env.staging"
        ENV_NAME="Staging"
        COLOR=$YELLOW
        ;;
    prod|production)
        ENV_FILE=".env.prod"
        ENV_NAME="Production"
        COLOR=$RED
        echo -e "${RED}========================================${NC}"
        echo -e "${RED}  WARNING: Production environment!${NC}"
        echo -e "${RED}========================================${NC}"
        read -p "Are you sure? (yes/no): " CONFIRM
        if [ "$CONFIRM" != "yes" ]; then
            echo "Aborted."
            exit 0
        fi
        ;;
    *)
        echo -e "${RED}Unknown environment: $ENV${NC}"
        echo ""
        echo "Usage: $0 [dev|staging|prod]"
        echo ""
        echo "Available environments:"
        echo "  dev     - Development (default)"
        echo "  staging - Staging/Testing"
        echo "  prod    - Production (use with caution!)"
        exit 1
        ;;
esac

# Check if env file exists
if [ ! -f "$ENV_FILE" ]; then
    if [ -f ".env.example" ]; then
        echo -e "${YELLOW}Creating $ENV_FILE from .env.example...${NC}"
        cp .env.example "$ENV_FILE"
        echo -e "${YELLOW}Please edit $ENV_FILE with appropriate values.${NC}"
    else
        echo -e "${RED}Error: $ENV_FILE not found and no .env.example available.${NC}"
        exit 1
    fi
fi

# Copy to .env
echo -e "${COLOR}Switching to $ENV_NAME environment...${NC}"
cp "$ENV_FILE" .env
echo -e "${GREEN}✓ Environment switched to: $ENV_NAME${NC}"
echo ""
echo "To start services:"
case $ENV in
    dev|development)
        echo "  docker compose up -d"
        ;;
    staging)
        echo "  docker compose -f docker-compose.yml -f docker-compose.staging.yml up -d"
        echo "  Or: ./scripts/start.sh staging"
        ;;
    prod|production)
        echo "  docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d"
        echo "  Or: ./scripts/start.sh prod"
        ;;
esac

scripts/integration-tests.sh Executable file

@@ -0,0 +1,287 @@
#!/bin/bash
# BreakPilot Integration Tests
# Tests the API endpoints with curl
# Prerequisite: docker-compose up -d
set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Base URLs
CONSENT_SERVICE_URL="http://localhost:8081"
BACKEND_URL="http://localhost:8000"
MAILPIT_URL="http://localhost:8025"
# Test Counter
TESTS_PASSED=0
TESTS_FAILED=0
# Helper Functions
log_info() {
    echo -e "${YELLOW}[INFO]${NC} $1"
}

log_success() {
    echo -e "${GREEN}[✓]${NC} $1"
    TESTS_PASSED=$((TESTS_PASSED + 1))  # plain arithmetic; ((var++)) would trip set -e while the counter is 0
}

log_error() {
    echo -e "${RED}[✗]${NC} $1"
    TESTS_FAILED=$((TESTS_FAILED + 1))
}

# Test Helper
test_endpoint() {
    local name=$1
    local method=$2
    local url=$3
    local expected_status=$4
    local headers=$5
    local body=$6

    log_info "Testing: $name"

    # $headers arrives as a pre-quoted string (e.g. "-H 'Authorization: ...'"),
    # so it is expanded via eval to split into proper curl arguments
    if [ -z "$body" ]; then
        response=$(eval curl -s -w '"\n%{http_code}"' -X "$method" "$url" $headers)
    else
        response=$(eval curl -s -w '"\n%{http_code}"' -X "$method" "$url" $headers -d "'$body'")
    fi
    http_code=$(echo "$response" | tail -n1)
    response_body=$(echo "$response" | sed '$d')

    if [ "$http_code" == "$expected_status" ]; then
        log_success "$name - Status: $http_code"
    else
        log_error "$name - Expected: $expected_status, Got: $http_code"
        echo "Response: $response_body"
    fi
}
echo "========================================="
echo "BreakPilot Integration Tests"
echo "========================================="
echo ""
# 1. Health Checks
log_info "=== 1. Health Checks ==="
test_endpoint \
    "Consent Service Health" \
    "GET" \
    "$CONSENT_SERVICE_URL/health" \
    "200"
test_endpoint \
    "Backend Health" \
    "GET" \
    "$BACKEND_URL/health" \
    "200"
test_endpoint \
    "Mailpit Health" \
    "GET" \
    "$MAILPIT_URL/api/v1/info" \
    "200"
echo ""
# 2. Auth Tests
log_info "=== 2. Authentication Tests ==="
# Register User
log_info "Registering test user..."
REGISTER_RESPONSE=$(curl -s -X POST "$CONSENT_SERVICE_URL/api/v1/auth/register" \
    -H "Content-Type: application/json" \
    -d '{
        "email": "integration-test@example.com",
        "password": "TestPassword123!",
        "first_name": "Integration",
        "last_name": "Test"
    }' -w "\n%{http_code}")
REGISTER_STATUS=$(echo "$REGISTER_RESPONSE" | tail -n1)
if [ "$REGISTER_STATUS" == "201" ] || [ "$REGISTER_STATUS" == "409" ]; then
    log_success "User Registration (or already exists)"
else
    log_error "User Registration - Status: $REGISTER_STATUS"
fi
# Login
log_info "Logging in..."
LOGIN_RESPONSE=$(curl -s -X POST "$CONSENT_SERVICE_URL/api/v1/auth/login" \
    -H "Content-Type: application/json" \
    -d '{
        "email": "integration-test@example.com",
        "password": "TestPassword123!"
    }')
ACCESS_TOKEN=$(echo "$LOGIN_RESPONSE" | jq -r '.access_token // empty')
if [ -n "$ACCESS_TOKEN" ]; then
    log_success "Login - Token received"
else
    log_error "Login - No access token received"
    echo "Response: $LOGIN_RESPONSE"
fi
echo ""
# 3. Protected Endpoints
log_info "=== 3. Protected Endpoint Tests ==="
if [ -n "$ACCESS_TOKEN" ]; then
    test_endpoint \
        "Get My Consents (Protected)" \
        "GET" \
        "$CONSENT_SERVICE_URL/api/v1/consent/my" \
        "200" \
        "-H 'Authorization: Bearer $ACCESS_TOKEN'"
    test_endpoint \
        "Get My Profile (Protected)" \
        "GET" \
        "$CONSENT_SERVICE_URL/api/v1/users/me" \
        "200" \
        "-H 'Authorization: Bearer $ACCESS_TOKEN'"
else
    log_error "Skipping protected endpoint tests - no access token"
fi
echo ""
# 4. Document Tests
log_info "=== 4. Document Tests ==="
test_endpoint \
    "Get Published Documents" \
    "GET" \
    "$CONSENT_SERVICE_URL/api/v1/documents/published" \
    "200"
test_endpoint \
    "Get Document by Type (terms)" \
    "GET" \
    "$CONSENT_SERVICE_URL/api/v1/documents/type/terms" \
    "200"
echo ""
# 5. Consent Tests
log_info "=== 5. Consent Tests ==="
if [ -n "$ACCESS_TOKEN" ]; then
    # Get published document ID
    DOCUMENTS_RESPONSE=$(curl -s "$CONSENT_SERVICE_URL/api/v1/documents/published")
    VERSION_ID=$(echo "$DOCUMENTS_RESPONSE" | jq -r '.[0].current_version.id // empty')
    if [ -n "$VERSION_ID" ]; then
        log_info "Creating consent for version: $VERSION_ID"
        test_endpoint \
            "Create Consent" \
            "POST" \
            "$CONSENT_SERVICE_URL/api/v1/consent" \
            "201" \
            "-H 'Authorization: Bearer $ACCESS_TOKEN' -H 'Content-Type: application/json'" \
            "{\"document_type\":\"terms\",\"version_id\":\"$VERSION_ID\",\"consented\":true}"
        test_endpoint \
            "Check Consent Status" \
            "GET" \
            "$CONSENT_SERVICE_URL/api/v1/consent/check/terms" \
            "200" \
            "-H 'Authorization: Bearer $ACCESS_TOKEN'"
    else
        log_error "No published document version found for consent test"
    fi
else
    log_error "Skipping consent tests - no access token"
fi
echo ""
# 6. GDPR Tests
log_info "=== 6. GDPR Tests ==="
if [ -n "$ACCESS_TOKEN" ]; then
    test_endpoint \
        "Request Data Export" \
        "POST" \
        "$CONSENT_SERVICE_URL/api/v1/gdpr/export-request" \
        "201" \
        "-H 'Authorization: Bearer $ACCESS_TOKEN'"
    test_endpoint \
        "Get Export Status" \
        "GET" \
        "$CONSENT_SERVICE_URL/api/v1/gdpr/export-status" \
        "200" \
        "-H 'Authorization: Bearer $ACCESS_TOKEN'"
else
    log_error "Skipping GDPR tests - no access token"
fi
echo ""
# 7. Mailpit Tests
log_info "=== 7. Email Tests ==="
# Check for emails in Mailpit
MAILPIT_MESSAGES=$(curl -s "$MAILPIT_URL/api/v1/messages")
MESSAGE_COUNT=$(echo "$MAILPIT_MESSAGES" | jq '.total // 0')
log_info "Emails in Mailpit: $MESSAGE_COUNT"
if [ "$MESSAGE_COUNT" -gt 0 ]; then
    log_success "Emails received in Mailpit"
    # Check for welcome email
    WELCOME_EMAIL=$(echo "$MAILPIT_MESSAGES" | jq '.messages[] | select(.Subject | contains("Willkommen")) | .Subject')
    if [ -n "$WELCOME_EMAIL" ]; then
        log_success "Welcome email found: $WELCOME_EMAIL"
    else
        log_info "No welcome email found (may have been sent in previous run)"
    fi
else
    log_info "No emails in Mailpit (expected if user was already registered)"
fi
echo ""
# 8. DSMS Tests (optional - if DSMS is running)
log_info "=== 8. DSMS Tests (Optional) ==="
if curl -sf "$CONSENT_SERVICE_URL/api/v1/dsms/health" > /dev/null 2>&1; then
    test_endpoint \
        "DSMS Gateway Health" \
        "GET" \
        "$CONSENT_SERVICE_URL/api/v1/dsms/health" \
        "200"
else
    log_info "DSMS not available - skipping DSMS tests"
fi
echo ""
# Summary
echo "========================================="
echo "Test Results"
echo "========================================="
echo -e "${GREEN}Passed:${NC} $TESTS_PASSED"
echo -e "${RED}Failed:${NC} $TESTS_FAILED"
echo "========================================="
if [ $TESTS_FAILED -eq 0 ]; then
    echo -e "${GREEN}All tests passed! ✓${NC}"
    exit 0
else
    echo -e "${RED}Some tests failed! ✗${NC}"
    exit 1
fi

scripts/mac-mini/backup.sh Executable file

@@ -0,0 +1,13 @@
#!/bin/bash
# Backup from the Mac Mini to the MacBook
MAC_MINI="benjaminadmin@192.168.178.163"
BACKUP_DIR="/Users/benjaminadmin/Projekte/breakpilot-pwa-backup-$(date +%Y%m%d)"

echo "📥 Backing up from the Mac Mini..."
mkdir -p "$BACKUP_DIR"
rsync -av --exclude='venv' --exclude='node_modules' --exclude='__pycache__' \
    ${MAC_MINI}:~/Projekte/breakpilot-pwa/ \
    "$BACKUP_DIR/"
echo "✅ Backup complete: $BACKUP_DIR"

scripts/mac-mini/docker.sh Executable file

@@ -0,0 +1,39 @@
#!/bin/bash
# Run Docker commands on the Mac Mini
MAC_MINI="benjaminadmin@192.168.178.163"
DOCKER="/usr/local/bin/docker"

case "$1" in
    ps)
        ssh $MAC_MINI "$DOCKER ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'"
        ;;
    logs)
        ssh $MAC_MINI "$DOCKER logs -f ${2:-breakpilot-pwa-backend}"
        ;;
    restart)
        ssh $MAC_MINI "cd ~/Projekte/breakpilot-pwa && $DOCKER compose restart ${2:-backend}"
        ;;
    up)
        ssh $MAC_MINI "cd ~/Projekte/breakpilot-pwa && $DOCKER compose up -d $2"
        ;;
    down)
        ssh $MAC_MINI "cd ~/Projekte/breakpilot-pwa && $DOCKER compose down"
        ;;
    build)
        ssh $MAC_MINI "cd ~/Projekte/breakpilot-pwa && $DOCKER compose build ${2:-backend}"
        ;;
    exec)
        shift
        # -t allocates a remote TTY; without it "docker exec -it" fails over ssh
        ssh -t $MAC_MINI "$DOCKER exec -it breakpilot-pwa-backend $*"
        ;;
    *)
        echo "Usage: $0 {ps|logs [container]|restart [service]|up [service]|down|build [service]|exec [cmd]}"
        echo ""
        echo "Examples:"
        echo "  $0 ps               # Show running containers"
        echo "  $0 logs backend     # Follow backend logs"
        echo "  $0 restart backend  # Restart backend"
        echo "  $0 build backend    # Rebuild backend image"
        echo "  $0 exec bash        # Shell in backend container"
        ;;
esac


@@ -0,0 +1,99 @@
#!/bin/bash
# BreakPilot - selective service start for the Mac Mini
# Saves GPU/CPU resources by only starting the services that are needed
COMPOSE_FILE="$HOME/Projekte/breakpilot-pwa/docker-compose.yml"
cd "$HOME/Projekte/breakpilot-pwa"

show_help() {
    echo "BreakPilot Service Manager"
    echo "=========================="
    echo ""
    echo "Usage: $0 <profile>"
    echo ""
    echo "Profiles:"
    echo "  core    - base services (postgres, valkey, mailpit, minio)"
    echo "  dev     - development (core + backend, website, consent, billing)"
    echo "  klausur - exam grading (dev + klausur-service, embedding, qdrant)"
    echo "  school  - school administration (dev + school-service)"
    echo "  jitsi   - video conferencing (dev + jitsi stack)"
    echo "  erp     - ERPNext (dev + erpnext stack)"
    echo "  chat    - LibreChat/RAG (start separately)"
    echo "  all     - all services (not recommended!)"
    echo "  stop    - stop all services"
    echo "  status  - show status"
    echo ""
    echo "Example: $0 dev"
}

start_core() {
    echo "Starting core services..."
    docker-compose up -d postgres valkey mailpit minio
}

start_dev() {
    start_core
    echo "Starting development services..."
    docker-compose up -d backend website consent-service billing-service
}

start_klausur() {
    start_dev
    echo "Starting exam-grading services..."
    docker-compose up -d klausur-service embedding-service qdrant
}

start_school() {
    start_dev
    echo "Starting school service..."
    docker-compose up -d school-service
}

start_jitsi() {
    start_dev
    echo "Starting Jitsi stack..."
    docker-compose up -d jitsi-web jitsi-xmpp jitsi-jicofo jitsi-jvb
}

start_erp() {
    start_dev
    echo "Starting ERPNext stack..."
    docker-compose up -d erpnext-db erpnext-redis-queue erpnext-redis-cache \
        erpnext-backend erpnext-websocket erpnext-scheduler \
        erpnext-worker-short erpnext-frontend
}

start_all() {
    echo "WARNING: starts ALL services - high resource usage!"
    read -p "Continue? (y/n) " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        docker-compose up -d
    fi
}

stop_all() {
    echo "Stopping all services..."
    docker-compose down
}

show_status() {
    echo "=== Running containers ==="
    docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | head -20
    echo ""
    echo "=== Resources ==="
    docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}" | head -15
}

case "$1" in
    core)    start_core ;;
    dev)     start_dev ;;
    klausur) start_klausur ;;
    school)  start_school ;;
    jitsi)   start_jitsi ;;
    erp)     start_erp ;;
    all)     start_all ;;
    stop)    stop_all ;;
    status)  show_status ;;
    *)       show_help ;;
esac

scripts/mac-mini/status.sh Executable file

@@ -0,0 +1,40 @@
#!/bin/bash
# Check the Mac Mini status
MAC_MINI="benjaminadmin@192.168.178.100"

echo "🖥️  Mac Mini Status Check"
echo "========================="
echo ""

# Ping
if ping -c 1 -W 1 192.168.178.100 > /dev/null 2>&1; then
    echo "✅ Mac Mini reachable (192.168.178.100)"
else
    echo "❌ Mac Mini not reachable!"
    exit 1
fi

# SSH
if ssh -o ConnectTimeout=5 $MAC_MINI "echo ok" > /dev/null 2>&1; then
    echo "✅ SSH connected"
else
    echo "❌ SSH not available"
    exit 1
fi

# Docker
echo ""
echo "📦 Docker containers:"
ssh $MAC_MINI "/usr/local/bin/docker ps --format 'table {{.Names}}\t{{.Status}}'" 2>/dev/null || echo "❌ Docker not available"

# APIs
echo ""
echo "🌐 API status:"
curl -s -o /dev/null -w "Backend: %{http_code}\n" http://192.168.178.100:8000/api/consent/health
curl -s -o /dev/null -w "Ollama: %{http_code}\n" http://192.168.178.100:11434/api/tags 2>/dev/null || echo "Ollama: not available"

echo ""
echo "🔗 URLs:"
echo "  Admin:   http://192.168.178.100:8000/admin"
echo "  MinIO:   http://192.168.178.100:9001"
echo "  Mailpit: http://192.168.178.100:8025"

scripts/mac-mini/sync.sh Executable file

@@ -0,0 +1,11 @@
#!/bin/bash
# Sync code to the Mac Mini
MAC_MINI="benjaminadmin@192.168.178.100"
PROJECT_PATH="~/Projekte/breakpilot-pwa"

echo "🔄 Syncing to Mac Mini..."
rsync -av --exclude='venv' --exclude='node_modules' --exclude='__pycache__' --exclude='.git' \
    /Users/benjaminadmin/Projekte/breakpilot-pwa/backend/ \
    ${MAC_MINI}:${PROJECT_PATH}/backend/
echo "✅ Sync complete!"

scripts/pre-commit-check.py Executable file

@@ -0,0 +1,258 @@
#!/usr/bin/env python3
"""
BreakPilot Pre-Commit Check Script

Checks before a commit:
1. Are all changed files documented?
2. Do all changed functions have tests?
3. Are security-relevant changes flagged?
4. Are ADRs present for new modules?

Usage:
    python3 scripts/pre-commit-check.py        # checks staged files
    python3 scripts/pre-commit-check.py --all  # checks all uncommitted changes
    python3 scripts/pre-commit-check.py --fix  # attempts automatic fixes

Exit codes:
    0 - everything OK
    1 - warnings (non-blocking)
    2 - errors (blocking)
"""
import subprocess
import sys
import os
import re
from pathlib import Path
from typing import List, Tuple, Dict
# ============================================
# Configuration
# ============================================

# Files that should be documented
DOC_REQUIRED_PATTERNS = {
    r"consent-service/internal/handlers/.*\.go$": "docs/api/consent-service-api.md",
    r"backend/.*_api\.py$": "docs/api/backend-api.md",
    r"website/app/api/.*/route\.ts$": "docs/api/frontend-api.md",
}

# Files that should have tests
TEST_REQUIRED_PATTERNS = {
    r"consent-service/internal/services/([^/]+)\.go$": r"consent-service/internal/services/\1_test.go",
    r"backend/([^/]+)\.py$": r"backend/tests/test_\1.py",
}

# New modules that require ADRs
ADR_REQUIRED_PATTERNS = [
    r"consent-service/internal/services/\w+_service\.go$",
    r"backend/[^/]+_service\.py$",
    r"website/app/admin/[^/]+/page\.tsx$",
]

# Security-relevant patterns
SECURITY_PATTERNS = [
    (r"password|secret|token|api_key|apikey", "Credentials"),
    (r"exec\(|eval\(|subprocess|os\.system", "Code Execution"),
    (r"sql|query.*\+|f['\"].*select|f['\"].*insert", "SQL Injection Risk"),
]
# ============================================
# Git Helpers
# ============================================
def get_staged_files() -> List[str]:
    """Return the files staged for commit."""
    result = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACMR"],
        capture_output=True, text=True
    )
    return [f.strip() for f in result.stdout.strip().split("\n") if f.strip()]


def get_all_changed_files() -> List[str]:
    """Return all changed files (staged + unstaged)."""
    result = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=ACMR", "HEAD"],
        capture_output=True, text=True
    )
    return [f.strip() for f in result.stdout.strip().split("\n") if f.strip()]


def get_file_content(filepath: str) -> str:
    """Read the contents of a file."""
    try:
        with open(filepath, 'r', encoding='utf-8') as f:
            return f.read()
    except Exception:
        return ""


def file_exists(filepath: str) -> bool:
    """Check whether a file exists."""
    return Path(filepath).exists()
# ============================================
# Checks
# ============================================
def check_documentation(files: List[str]) -> List[Tuple[str, str]]:
    """Check whether documentation exists for the changed files."""
    issues = []
    for filepath in files:
        for pattern, doc_file in DOC_REQUIRED_PATTERNS.items():
            if re.match(pattern, filepath):
                if not file_exists(doc_file):
                    issues.append((filepath, f"Missing documentation: {doc_file}"))
                break
    return issues


def check_tests(files: List[str]) -> List[Tuple[str, str]]:
    """Check whether tests exist for the changed files."""
    issues = []
    for filepath in files:
        # Skip test files themselves
        if "_test.go" in filepath or "test_" in filepath or "__tests__" in filepath:
            continue
        for pattern, test_pattern in TEST_REQUIRED_PATTERNS.items():
            match = re.match(pattern, filepath)
            if match:
                test_file = re.sub(pattern, test_pattern, filepath)
                if not file_exists(test_file):
                    issues.append((filepath, f"Missing test: {test_file}"))
                break
    return issues


def check_adrs(files: List[str]) -> List[Tuple[str, str]]:
    """Check whether ADRs exist for new modules."""
    issues = []
    adr_dir = Path("docs/adr")
    for filepath in files:
        for pattern in ADR_REQUIRED_PATTERNS:
            if re.match(pattern, filepath):
                # Check whether the file is newly added (not just modified)
                result = subprocess.run(
                    ["git", "diff", "--cached", "--name-status", filepath],
                    capture_output=True, text=True
                )
                if result.stdout.startswith("A"):  # Added
                    # Check whether an ADR exists for it
                    module_name = Path(filepath).stem
                    adr_exists = any(
                        adr_dir.glob(f"ADR-*{module_name}*.md")
                    ) if adr_dir.exists() else False
                    if not adr_exists:
                        issues.append((
                            filepath,
                            f"New module without an ADR. Create: docs/adr/ADR-NNNN-{module_name}.md"
                        ))
                break
    return issues


def check_security(files: List[str]) -> List[Tuple[str, str]]:
    """Check for security-relevant changes."""
    warnings = []
    for filepath in files:
        content = get_file_content(filepath)
        content_lower = content.lower()
        for pattern, category in SECURITY_PATTERNS:
            if re.search(pattern, content_lower):
                warnings.append((filepath, f"Security review recommended: {category}"))
                break  # Only one warning per file
    return warnings
# ============================================
# Main
# ============================================
def print_section(title: str, items: List[Tuple[str, str]], icon: str = "⚠️"):
"""Gibt eine Sektion aus."""
if items:
print(f"\n{icon} {title}:")
for filepath, message in items:
print(f" {filepath}")
print(f"{message}")
def main():
import argparse
parser = argparse.ArgumentParser(description="BreakPilot Pre-Commit Check")
parser.add_argument("--all", action="store_true", help="Check all changed files, not just staged")
parser.add_argument("--fix", action="store_true", help="Attempt automatic fixes")
parser.add_argument("--strict", action="store_true", help="Treat warnings as errors")
args = parser.parse_args()
print("=" * 60)
print("BREAKPILOT PRE-COMMIT CHECK")
print("=" * 60)
# Dateien ermitteln
files = get_all_changed_files() if args.all else get_staged_files()
if not files:
print("\n✅ Keine Dateien zu prüfen.")
return 0
print(f"\nPrüfe {len(files)} Datei(en)...")
# Checks ausführen
doc_issues = check_documentation(files)
test_issues = check_tests(files)
adr_issues = check_adrs(files)
security_warnings = check_security(files)
# Ergebnisse ausgeben
    has_errors = False  # Derzeit erzeugen alle Checks nur Warnings, keine harten Fehler
has_warnings = False
if doc_issues:
print_section("Fehlende Dokumentation", doc_issues, "📝")
has_warnings = True
if test_issues:
print_section("Fehlende Tests", test_issues, "🧪")
has_warnings = True
if adr_issues:
print_section("Fehlende ADRs", adr_issues, "📋")
has_warnings = True
if security_warnings:
print_section("Security-Hinweise", security_warnings, "🔒")
has_warnings = True
# Zusammenfassung
print("\n" + "=" * 60)
if not has_errors and not has_warnings:
print("✅ Alle Checks bestanden!")
return 0
total_issues = len(doc_issues) + len(test_issues) + len(adr_issues)
total_warnings = len(security_warnings)
    print("📊 Zusammenfassung:")
print(f" Issues: {total_issues}")
print(f" Warnings: {total_warnings}")
if args.strict and (has_errors or has_warnings):
print("\n❌ Commit blockiert (--strict mode)")
return 2
if has_errors:
print("\n❌ Commit blockiert wegen Fehlern")
return 2
print("\n⚠️ Commit möglich, aber Warnings beachten!")
return 1
if __name__ == "__main__":
sys.exit(main())
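Die Ableitung des erwarteten Testpfads in `check_tests()` lässt sich isoliert skizzieren. Die Pattern-Tabelle hier ist hypothetisch, die echte `TEST_REQUIRED_PATTERNS`-Tabelle steht weiter oben im Script:

```python
import re

# HYPOTHETISCHE Tabelle im Stil von TEST_REQUIRED_PATTERNS:
# Regex auf den Quellpfad -> Ersetzungsmuster für den erwarteten Testpfad
TEST_PATTERNS = {
    r"^backend/(.+)\.py$": r"backend/tests/test_\1.py",
    r"^services/(.+)\.go$": r"services/\1_test.go",
}

def expected_test_file(filepath: str):
    """Leitet den erwarteten Testpfad per re.sub ab, wie check_tests()."""
    for pattern, replacement in TEST_PATTERNS.items():
        if re.match(pattern, filepath):
            return re.sub(pattern, replacement, filepath)
    return None
```

`expected_test_file("backend/ai_processor.py")` liefert damit `backend/tests/test_ai_processor.py`; Pfade ohne passendes Pattern ergeben `None`.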

scripts/promote.sh Executable file

@@ -0,0 +1,154 @@
#!/bin/bash
# ============================================
# BreakPilot Code Promotion
# ============================================
# Promotes code between environments via Git branches
#
# Usage: ./scripts/promote.sh [dev-to-staging|staging-to-prod]
#
# Branch Structure:
# develop -> Daily development work
# staging -> Tested and approved code
# main -> Production-ready releases
# ============================================
set -e
ACTION=$1
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(dirname "$SCRIPT_DIR")"
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
BLUE='\033[0;34m'
NC='\033[0m'
cd "$ROOT_DIR"
# Check if git repo
if [ ! -d ".git" ]; then
echo -e "${RED}Error: Not a git repository.${NC}"
echo "Run: git init"
exit 1
fi
case $ACTION in
dev-to-staging)
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE} Promoting: develop -> staging${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
# Ensure working directory is clean
if ! git diff-index --quiet HEAD -- 2>/dev/null; then
echo -e "${YELLOW}Warning: You have uncommitted changes.${NC}"
read -p "Stash changes and continue? (yes/no): " STASH
if [ "$STASH" = "yes" ]; then
git stash
STASHED=true
else
echo "Aborted. Commit or stash your changes first."
exit 1
fi
fi
# Update develop
echo -e "${YELLOW}Updating develop branch...${NC}"
git checkout develop
git pull origin develop 2>/dev/null || true
# Merge to staging
echo -e "${YELLOW}Merging into staging...${NC}"
git checkout staging
git pull origin staging 2>/dev/null || true
git merge develop -m "Promote: develop -> staging ($(date +%Y-%m-%d_%H:%M))"
# Return to develop
git checkout develop
# Restore stash if applicable
if [ "$STASHED" = "true" ]; then
git stash pop
fi
echo ""
echo -e "${GREEN}✓ Merged develop into staging${NC}"
echo ""
echo "Next steps:"
echo " 1. Review changes on staging branch"
echo " 2. Test staging environment: ./scripts/start.sh staging"
echo " 3. Push when ready: git push origin staging"
;;
staging-to-prod)
echo -e "${RED}========================================${NC}"
echo -e "${RED} WARNING: Promoting to PRODUCTION${NC}"
echo -e "${RED}========================================${NC}"
echo ""
read -p "Have tests passed on staging? (yes/no): " TESTED
if [ "$TESTED" != "yes" ]; then
echo "Please test on staging first."
exit 0
fi
read -p "Are you sure you want to promote to production? (yes/no): " CONFIRM
if [ "$CONFIRM" != "yes" ]; then
echo "Aborted."
exit 0
fi
# Update staging
echo -e "${YELLOW}Updating staging branch...${NC}"
git checkout staging
git pull origin staging 2>/dev/null || true
# Merge to main
echo -e "${YELLOW}Merging into main (production)...${NC}"
git checkout main
git pull origin main 2>/dev/null || true
git merge staging -m "Release: staging -> main ($(date +%Y-%m-%d_%H:%M))"
# Return to develop
git checkout develop
echo ""
echo -e "${GREEN}✓ Merged staging into main (production)${NC}"
echo ""
echo "Next steps:"
echo " 1. Review changes on main branch"
echo " 2. Push when ready: git push origin main"
echo " 3. Create a release tag: git tag -a v1.x.x -m 'Release v1.x.x'"
;;
status)
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE} Branch Status${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
CURRENT=$(git branch --show-current)
echo -e "Current branch: ${GREEN}$CURRENT${NC}"
echo ""
echo "Branches:"
git branch -v
echo ""
echo "Recent commits:"
git log --oneline -5
;;
*)
echo "Usage: $0 [dev-to-staging|staging-to-prod|status]"
echo ""
echo "Commands:"
echo " dev-to-staging - Merge develop into staging"
echo " staging-to-prod - Merge staging into main (production)"
echo " status - Show branch status"
echo ""
echo "Branch workflow:"
echo " develop (daily work) -> staging (tested) -> main (production)"
exit 1
;;
esac
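Vor einer Promotion kann man prüfen, wie viele Commits `staging` überhaupt erhalten würde. Eine Skizze unter der Annahme, dass `git rev-list --count` im Arbeitsverzeichnis verfügbar ist (nicht Teil des Scripts oben):

```python
import subprocess

def parse_count(output: str) -> int:
    """Robustes Parsen der `git rev-list --count`-Ausgabe."""
    text = output.strip()
    return int(text) if text.isdigit() else 0

def commits_ahead(source: str, target: str) -> int:
    """Wie viele Commits ist `source` gegenüber `target` voraus?
    Entspricht: git rev-list --count target..source"""
    result = subprocess.run(
        ["git", "rev-list", "--count", f"{target}..{source}"],
        capture_output=True, text=True,
    )
    return parse_count(result.stdout)
```

Liefert `commits_ahead("develop", "staging")` den Wert 0, gibt es nichts zu promoten.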


@@ -0,0 +1,478 @@
#!/usr/bin/env python3
"""
Qwen Refactoring Orchestrator
=============================
Orchestriert Code-Refactoring via Qwen2.5:32B auf dem Mac Mini.
Workflow:
1. Liest große Dateien in Chunks
2. Sendet Refactoring-Prompts an Qwen via Ollama API
3. Validiert und integriert Ergebnisse
4. Führt Tests aus
Usage:
python qwen_refactor_orchestrator.py --file backend/frontend/static/js/studio.js
python qwen_refactor_orchestrator.py --all-large-files
python qwen_refactor_orchestrator.py --status
"""
import argparse
import asyncio
import json
import os
import re
import subprocess
import sys
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import List, Optional, Dict, Any
import httpx
# Konfiguration
MAC_MINI_HOST = "mac-mini-von-benjamin.fritz.box"
OLLAMA_PORT = 11434
OLLAMA_URL = f"http://{MAC_MINI_HOST}:{OLLAMA_PORT}"
MODEL_NAME = "qwen2.5:32b"
MAX_CHUNK_LINES = 800 # ~1500 Tokens pro Chunk
MAX_CONTEXT_TOKENS = 28000 # Sicherheitsmarge für 32K Kontext
PROJECT_ROOT = Path(__file__).parent.parent
REFACTOR_OUTPUT_DIR = PROJECT_ROOT / "refactored"
REFACTOR_LOG_FILE = PROJECT_ROOT / "refactor_log.json"
# Top 10 große Dateien
LARGE_FILES = [
("backend/frontend/static/js/studio.js", 9787, "JavaScript"),
("website/components/admin/SystemInfoSection.tsx", 5690, "TypeScript/React"),
("backend/frontend/modules/companion.py", 5513, "Python"),
("backend/classroom_api.py", 4467, "Python"),
("backend/frontend/school.py", 3732, "Python"),
("backend/frontend/components/admin_panel.py", 3434, "Python"),
("backend/ai_processor.py", 2999, "Python"),
("website/app/admin/rag/page.tsx", 2964, "TypeScript/React"),
("backend/frontend/modules/alerts.py", 2902, "Python"),
("backend/frontend/meetings.py", 2847, "Python"),
]
@dataclass
class RefactorChunk:
"""Ein Chunk einer Datei für Refactoring"""
file_path: str
chunk_index: int
total_chunks: int
start_line: int
end_line: int
content: str
language: str
refactored_content: Optional[str] = None
status: str = "pending" # pending, processing, completed, failed
error: Optional[str] = None
@dataclass
class RefactorSession:
"""Eine Refactoring-Session für eine Datei"""
file_path: str
language: str
original_lines: int
chunks: List[RefactorChunk] = field(default_factory=list)
started_at: Optional[datetime] = None
completed_at: Optional[datetime] = None
status: str = "pending"
tests_passed: Optional[bool] = None
class QwenRefactorOrchestrator:
"""Orchestriert Refactoring via Qwen auf Mac Mini"""
def __init__(self):
self.sessions: Dict[str, RefactorSession] = {}
self.load_state()
    def load_state(self):
        """Lädt den Zustand aus der Log-Datei"""
        if REFACTOR_LOG_FILE.exists():
            try:
                with open(REFACTOR_LOG_FILE) as f:
                    data = json.load(f)
                # Rekonstruiere Sessions: Datumsfelder sind ISO-Strings,
                # Chunks reine Dicts und müssen in Dataclasses konvertiert werden
                for path, sd in data.get("sessions", {}).items():
                    chunk_dicts = sd.pop("chunks", [])
                    for key in ("started_at", "completed_at"):
                        sd[key] = datetime.fromisoformat(sd[key]) if sd.get(key) else None
                    session = RefactorSession(**sd)
                    session.chunks = [
                        RefactorChunk(file_path=path, content="", language=session.language, **c)
                        for c in chunk_dicts
                    ]
                    self.sessions[path] = session
            except Exception as e:
                print(f"Warning: Could not load state: {e}")
def save_state(self):
"""Speichert den Zustand in die Log-Datei"""
data = {
"sessions": {
path: {
"file_path": s.file_path,
"language": s.language,
"original_lines": s.original_lines,
"status": s.status,
"started_at": s.started_at.isoformat() if s.started_at else None,
"completed_at": s.completed_at.isoformat() if s.completed_at else None,
"tests_passed": s.tests_passed,
"chunks": [
{
"chunk_index": c.chunk_index,
"total_chunks": c.total_chunks,
"start_line": c.start_line,
"end_line": c.end_line,
"status": c.status,
"error": c.error,
}
for c in s.chunks
],
}
for path, s in self.sessions.items()
},
"last_updated": datetime.now().isoformat(),
}
with open(REFACTOR_LOG_FILE, "w") as f:
json.dump(data, f, indent=2)
async def check_ollama_status(self) -> Dict[str, Any]:
"""Prüft ob Ollama auf dem Mac Mini erreichbar ist"""
try:
async with httpx.AsyncClient(timeout=10.0) as client:
response = await client.get(f"{OLLAMA_URL}/api/tags")
if response.status_code == 200:
models = response.json().get("models", [])
qwen_available = any(m.get("name", "").startswith("qwen2.5:32b") for m in models)
return {
"status": "online",
"models": [m.get("name") for m in models],
"qwen_available": qwen_available,
}
except Exception as e:
return {"status": "offline", "error": str(e)}
return {"status": "unknown"}
def split_file_into_chunks(self, file_path: str, language: str) -> List[RefactorChunk]:
"""Teilt eine Datei in logische Chunks auf"""
full_path = PROJECT_ROOT / file_path
if not full_path.exists():
raise FileNotFoundError(f"File not found: {full_path}")
with open(full_path) as f:
lines = f.readlines()
chunks = []
current_chunk_start = 0
current_chunk_lines = []
# Finde logische Trennstellen basierend auf Sprache
if language == "Python":
# Trenne bei Klassen und Top-Level-Funktionen
split_pattern = re.compile(r"^(class |def |async def )")
elif language in ["JavaScript", "TypeScript/React"]:
# Trenne bei Funktionen, Klassen, und export statements
split_pattern = re.compile(r"^(export |function |class |const \w+ = )")
else:
split_pattern = None
        for i, line in enumerate(lines):
            # An einer logischen Grenze (Klasse/Funktion) den laufenden Chunk
            # ABSCHLIESSEN, bevor die neue Definition beginnt; sonst landet
            # die Definitionszeile am Ende des vorherigen Chunks
            at_boundary = bool(
                split_pattern
                and split_pattern.match(line)
                and len(current_chunk_lines) > 200
            )
            if current_chunk_lines and (at_boundary or len(current_chunk_lines) >= MAX_CHUNK_LINES):
                chunks.append(RefactorChunk(
                    file_path=file_path,
                    chunk_index=len(chunks),
                    total_chunks=0,  # Wird später gesetzt
                    start_line=current_chunk_start + 1,
                    end_line=i,
                    content="".join(current_chunk_lines),
                    language=language,
                ))
                current_chunk_start = i
                current_chunk_lines = []
            current_chunk_lines.append(line)
        # Letzten (Rest-)Chunk anhängen
        if current_chunk_lines:
            chunks.append(RefactorChunk(
                file_path=file_path,
                chunk_index=len(chunks),
                total_chunks=0,
                start_line=current_chunk_start + 1,
                end_line=len(lines),
                content="".join(current_chunk_lines),
                language=language,
            ))
# Setze total_chunks
for chunk in chunks:
chunk.total_chunks = len(chunks)
return chunks
def create_refactoring_prompt(self, chunk: RefactorChunk, session: RefactorSession) -> str:
"""Erstellt den Refactoring-Prompt für Qwen"""
return f"""Du bist ein erfahrener Software-Entwickler. Refaktoriere den folgenden {chunk.language}-Code.
ZIELE:
1. Teile große Funktionen in kleinere, wiederverwendbare Einheiten
2. Verbessere die Lesbarkeit durch bessere Variablennamen
3. Entferne doppelten Code (DRY-Prinzip)
4. Füge hilfreiche Kommentare hinzu (aber nicht übertreiben)
5. Behalte die gesamte Funktionalität bei!
REGELN:
- Keine neuen Dependencies hinzufügen
- Alle existierenden Exports/APIs beibehalten
- Keine Breaking Changes
- Code muss weiterhin funktionieren
DATEI: {chunk.file_path}
CHUNK: {chunk.chunk_index + 1}/{chunk.total_chunks} (Zeilen {chunk.start_line}-{chunk.end_line})
SPRACHE: {chunk.language}
ORIGINALZEILEN: {session.original_lines}
--- ORIGINAL CODE ---
{chunk.content}
--- END ORIGINAL CODE ---
Gib NUR den refaktorierten Code zurück, ohne Erklärungen oder Markdown-Blöcke.
"""
async def refactor_chunk(self, chunk: RefactorChunk, session: RefactorSession) -> bool:
"""Refaktoriert einen einzelnen Chunk via Qwen"""
chunk.status = "processing"
self.save_state()
prompt = self.create_refactoring_prompt(chunk, session)
try:
async with httpx.AsyncClient(timeout=300.0) as client:
response = await client.post(
f"{OLLAMA_URL}/api/generate",
json={
"model": MODEL_NAME,
"prompt": prompt,
"stream": False,
"options": {
"num_predict": 4096,
"temperature": 0.3,
},
},
)
if response.status_code == 200:
result = response.json()
refactored = result.get("response", "")
# Entferne eventuelle Markdown-Code-Blöcke
refactored = re.sub(r"^```\w*\n", "", refactored)
refactored = re.sub(r"\n```$", "", refactored)
chunk.refactored_content = refactored
chunk.status = "completed"
self.save_state()
return True
else:
chunk.status = "failed"
chunk.error = f"HTTP {response.status_code}: {response.text}"
self.save_state()
return False
except Exception as e:
chunk.status = "failed"
chunk.error = str(e)
self.save_state()
return False
async def refactor_file(self, file_path: str, language: str) -> RefactorSession:
"""Refaktoriert eine komplette Datei"""
print(f"\n{'=' * 60}")
print(f"Starte Refactoring: {file_path}")
print(f"{'=' * 60}")
# Erstelle Session
full_path = PROJECT_ROOT / file_path
with open(full_path) as f:
original_lines = len(f.readlines())
chunks = self.split_file_into_chunks(file_path, language)
session = RefactorSession(
file_path=file_path,
language=language,
original_lines=original_lines,
chunks=chunks,
started_at=datetime.now(),
status="processing",
)
self.sessions[file_path] = session
self.save_state()
print(f"Datei aufgeteilt in {len(chunks)} Chunks")
# Refaktoriere jeden Chunk
for i, chunk in enumerate(chunks):
print(f"\nChunk {i + 1}/{len(chunks)} (Zeilen {chunk.start_line}-{chunk.end_line})...")
success = await self.refactor_chunk(chunk, session)
if success:
print(f" ✓ Chunk {i + 1} refaktoriert")
else:
print(f" ✗ Chunk {i + 1} fehlgeschlagen: {chunk.error}")
# Prüfe ob alle Chunks erfolgreich waren
all_success = all(c.status == "completed" for c in chunks)
if all_success:
# Kombiniere refaktorierten Code
refactored_content = "\n".join(c.refactored_content for c in chunks if c.refactored_content)
# Speichere refaktorierten Code
output_dir = REFACTOR_OUTPUT_DIR / Path(file_path).parent
output_dir.mkdir(parents=True, exist_ok=True)
output_file = REFACTOR_OUTPUT_DIR / file_path
with open(output_file, "w") as f:
f.write(refactored_content)
session.status = "completed"
session.completed_at = datetime.now()
print(f"\n✓ Refactoring abgeschlossen: {output_file}")
else:
session.status = "partial"
failed_chunks = [c for c in chunks if c.status == "failed"]
print(f"\n⚠ Refactoring teilweise abgeschlossen. {len(failed_chunks)} Chunks fehlgeschlagen.")
self.save_state()
return session
async def run_tests(self, file_path: str) -> bool:
"""Führt Tests für die refaktorierte Datei aus"""
print(f"\nFühre Tests aus für {file_path}...")
# Bestimme Test-Kommando basierend auf Dateityp
        if file_path.endswith(".py"):
            # Python Tests
            test_cmd = ["python", "-m", "pytest", "-v", "--tb=short"]
            if "backend" in file_path:
                test_cmd.append("backend/tests/")
        elif file_path.endswith((".ts", ".tsx", ".js")) and "website" in file_path:
            # TypeScript/JavaScript Tests (bisher nur für das website-Paket)
            test_cmd = ["npm", "test", "--", "--passWithNoTests"]
        else:
            # Sonst bliebe test_cmd undefiniert (NameError beim subprocess-Aufruf)
            print("  Keine Tests für diesen Dateityp konfiguriert")
            return True
try:
result = subprocess.run(
test_cmd,
cwd=PROJECT_ROOT,
capture_output=True,
text=True,
timeout=300,
)
if result.returncode == 0:
print(" ✓ Tests bestanden")
return True
else:
print(f" ✗ Tests fehlgeschlagen:\n{result.stdout}\n{result.stderr}")
return False
except Exception as e:
print(f" ✗ Test-Ausführung fehlgeschlagen: {e}")
return False
def print_status(self):
"""Zeigt den aktuellen Status aller Sessions"""
print("\n" + "=" * 70)
print("QWEN REFACTORING STATUS")
print("=" * 70)
if not self.sessions:
print("Keine aktiven Sessions")
return
for path, session in self.sessions.items():
completed = sum(1 for c in session.chunks if c.status == "completed")
failed = sum(1 for c in session.chunks if c.status == "failed")
pending = sum(1 for c in session.chunks if c.status == "pending")
            status_icon = {
                "pending": "⏳",
                "processing": "🔄",
                "completed": "✅",
                "partial": "⚠️",
            }.get(session.status, "?")
print(f"\n{status_icon} {path}")
print(f" Status: {session.status}")
print(f" Chunks: {completed}/{len(session.chunks)} completed, {failed} failed, {pending} pending")
if session.started_at:
print(f" Gestartet: {session.started_at.strftime('%Y-%m-%d %H:%M')}")
if session.completed_at:
print(f" Abgeschlossen: {session.completed_at.strftime('%Y-%m-%d %H:%M')}")
if session.tests_passed is not None:
print(f" Tests: {'✓ Bestanden' if session.tests_passed else '✗ Fehlgeschlagen'}")
async def main():
parser = argparse.ArgumentParser(description="Qwen Refactoring Orchestrator")
parser.add_argument("--file", help="Einzelne Datei refaktorieren")
parser.add_argument("--all-large-files", action="store_true", help="Alle großen Dateien refaktorieren")
parser.add_argument("--status", action="store_true", help="Status anzeigen")
parser.add_argument("--check-ollama", action="store_true", help="Ollama-Status prüfen")
parser.add_argument("--run-tests", help="Tests für refaktorierte Datei ausführen")
args = parser.parse_args()
orchestrator = QwenRefactorOrchestrator()
if args.status:
orchestrator.print_status()
return
if args.check_ollama:
print("Prüfe Ollama-Status auf Mac Mini...")
status = await orchestrator.check_ollama_status()
print(f"Status: {status['status']}")
if status.get("models"):
print(f"Verfügbare Modelle: {', '.join(status['models'])}")
if status.get("qwen_available"):
print("✓ Qwen2.5:32B ist verfügbar")
else:
print("✗ Qwen2.5:32B ist NICHT verfügbar")
if status.get("error"):
print(f"Fehler: {status['error']}")
return
if args.run_tests:
success = await orchestrator.run_tests(args.run_tests)
sys.exit(0 if success else 1)
if args.file:
# Finde Sprache für die Datei
language = "Unknown"
for f, _, lang in LARGE_FILES:
if f == args.file:
language = lang
break
if language == "Unknown":
if args.file.endswith(".py"):
language = "Python"
elif args.file.endswith((".ts", ".tsx")):
language = "TypeScript/React"
elif args.file.endswith(".js"):
language = "JavaScript"
await orchestrator.refactor_file(args.file, language)
return
if args.all_large_files:
print("Refaktoriere alle großen Dateien...")
for file_path, lines, language in LARGE_FILES:
try:
await orchestrator.refactor_file(file_path, language)
except Exception as e:
print(f"Fehler bei {file_path}: {e}")
return
# Default: Status anzeigen
orchestrator.print_status()
if __name__ == "__main__":
asyncio.run(main())
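Die Nachbearbeitung der Qwen-Antwort, das Entfernen umschließender Markdown-Code-Zäune in `refactor_chunk()`, lässt sich als eigenständige, testbare Funktion skizzieren:

```python
import re

FENCE = "`" * 3  # drei Backticks, hier zusammengesetzt statt literal

def strip_markdown_fences(text: str) -> str:
    """Entfernt einen umschließenden Markdown-Code-Zaun (```lang ... ```),
    analog zur Nachbearbeitung der Ollama-Antwort in refactor_chunk()."""
    text = re.sub(rf"^{FENCE}\w*\n", "", text)
    text = re.sub(rf"\n{FENCE}$", "", text)
    return text
```

Text ohne Zaun wird unverändert durchgereicht; nur ein Zaun direkt am Anfang und Ende wird entfernt.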

scripts/restore.sh Executable file

@@ -0,0 +1,94 @@
#!/bin/bash
# BreakPilot Database Restore Script
# Stellt ein Backup der PostgreSQL-Datenbank wieder her
set -e
# Konfiguration
BACKUP_DIR="${BACKUP_DIR:-./backups}"
CONTAINER_NAME="${CONTAINER_NAME:-breakpilot-pwa-postgres}"
DB_USER="${DB_USER:-breakpilot}"
DB_NAME="${DB_NAME:-breakpilot_db}"
# Farben für Output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
echo -e "${GREEN}=== BreakPilot Database Restore ===${NC}"
echo ""
# Backup-Datei als Argument oder automatisch das neueste wählen
if [ -n "$1" ]; then
BACKUP_FILE="$1"
else
# Neuestes Backup finden
BACKUP_FILE=$(ls -t "$BACKUP_DIR"/breakpilot_*.sql.gz 2>/dev/null | head -1)
if [ -z "$BACKUP_FILE" ]; then
echo -e "${RED}Error: No backup files found in $BACKUP_DIR${NC}"
exit 1
fi
echo -e "${YELLOW}No backup file specified. Using most recent:${NC}"
echo "$BACKUP_FILE"
echo ""
fi
# Prüfen ob Backup-Datei existiert
if [ ! -f "$BACKUP_FILE" ]; then
echo -e "${RED}Error: Backup file not found: $BACKUP_FILE${NC}"
exit 1
fi
# Prüfen ob Container läuft
if ! docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then
echo -e "${RED}Error: Container '$CONTAINER_NAME' is not running${NC}"
exit 1
fi
# Warnung anzeigen
echo -e "${RED}⚠️ WARNING: This will overwrite all current data!${NC}"
echo ""
read -p "Are you sure you want to restore from this backup? (yes/no): " CONFIRM
if [ "$CONFIRM" != "yes" ]; then
echo "Restore cancelled."
exit 0
fi
echo ""
echo -e "${YELLOW}Stopping dependent services...${NC}"
# Consent Service stoppen (falls läuft)
docker stop breakpilot-pwa-consent-service 2>/dev/null || true
docker stop breakpilot-pwa-backend 2>/dev/null || true
echo -e "${YELLOW}Restoring database...${NC}"
# Datenbank droppen und neu erstellen
docker exec "$CONTAINER_NAME" psql -U "$DB_USER" -d postgres -c "DROP DATABASE IF EXISTS ${DB_NAME};"
docker exec "$CONTAINER_NAME" psql -U "$DB_USER" -d postgres -c "CREATE DATABASE ${DB_NAME};"
# Backup wiederherstellen
gunzip -c "$BACKUP_FILE" | docker exec -i "$CONTAINER_NAME" psql -U "$DB_USER" -d "$DB_NAME"
echo -e "${GREEN}✓ Database restored successfully${NC}"
echo ""
echo -e "${YELLOW}Restarting services...${NC}"
# Services wieder starten
docker start breakpilot-pwa-consent-service 2>/dev/null || true
docker start breakpilot-pwa-backend 2>/dev/null || true
# Warten bis Services bereit sind
sleep 5
echo ""
echo -e "${GREEN}=== Restore completed! ===${NC}"
echo ""
echo "Verify the restore by checking:"
echo " - http://localhost:8000/app"
echo " - http://localhost:8081/health"
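Die Auswahl des neuesten Backups per `ls -t | head -1` entspricht in Python etwa folgender Skizze (Sortierung nach Änderungszeit):

```python
from pathlib import Path

def latest_backup(backup_dir, pattern="breakpilot_*.sql.gz"):
    """Jüngstes Backup nach Änderungszeit, wie `ls -t ... | head -1`;
    None, wenn kein Backup gefunden wird."""
    candidates = sorted(
        Path(backup_dir).glob(pattern),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    return candidates[0] if candidates else None
```

Im Gegensatz zur Shell-Variante scheitert diese Version nicht an Leerzeichen in Dateinamen.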


@@ -0,0 +1,91 @@
#!/bin/bash
# Full Compliance Update Script
# Run on Mac Mini in background:
# nohup ./run_full_compliance_update.sh > /tmp/compliance_update.log 2>&1 &
set -e
LOG_FILE="/tmp/compliance_update.log"
TIMESTAMP=$(date +"%Y-%m-%d %H:%M:%S")
log() {
    # Zeitstempel bei jedem Aufruf neu berechnen ($TIMESTAMP oben ist nur die Startzeit)
    echo "[$(date +"%Y-%m-%d %H:%M:%S")] $1" | tee -a "$LOG_FILE"
}
log "=============================================="
log "FULL COMPLIANCE UPDATE PIPELINE"
log "Started at: $TIMESTAMP"
log "=============================================="
# Step 1: Wait for ongoing re-ingestion to complete
log ""
log "Step 1: Checking if re-ingestion is still running..."
while /usr/local/bin/docker exec breakpilot-pwa-klausur-service pgrep -f "legal_corpus_ingestion.py" > /dev/null 2>&1; do
log " Re-ingestion still running, waiting 60 seconds..."
CURRENT_COUNT=$(curl -s http://localhost:6333/collections/bp_legal_corpus 2>/dev/null | python3 -c "import sys, json; print(json.load(sys.stdin).get('result',{}).get('points_count',0))" 2>/dev/null || echo "0")
log " Current chunk count: $CURRENT_COUNT"
sleep 60
done
log " Re-ingestion complete!"
# Step 1b: Re-run TDDDG with PDF support
log ""
log "Step 1b: Re-ingesting TDDDG with PDF support..."
/usr/local/bin/docker exec -e QDRANT_HOST=qdrant -e EMBEDDING_SERVICE_URL=http://embedding-service:8087 breakpilot-pwa-klausur-service python -c "
import asyncio
from legal_corpus_ingestion import LegalCorpusIngestion
async def main():
ingestion = LegalCorpusIngestion()
await ingestion.ingest_single('TDDDG', force=True)
asyncio.run(main())
" 2>&1 | tee -a $LOG_FILE
log " TDDDG re-ingestion complete!"
# Step 2: Check Qdrant chunk count
log ""
log "Step 2: Checking Qdrant collection status..."
CHUNK_COUNT=$(curl -s http://localhost:6333/collections/bp_legal_corpus | python3 -c "import sys, json; print(json.load(sys.stdin).get('result',{}).get('points_count',0))" 2>/dev/null || echo "0")
log " Total chunks in bp_legal_corpus: $CHUNK_COUNT"
# Step 3: Run compliance pipeline (checkpoint extraction + control generation)
log ""
log "Step 3: Running compliance pipeline..."
/usr/local/bin/docker exec breakpilot-pwa-klausur-service python /app/full_compliance_pipeline.py 2>&1 | tee -a $LOG_FILE
# Step 4: Check if compliance output was generated
log ""
log "Step 4: Checking compliance output..."
if /usr/local/bin/docker exec breakpilot-pwa-klausur-service ls /tmp/compliance_output/statistics.json > /dev/null 2>&1; then
log " Compliance output generated successfully!"
/usr/local/bin/docker exec breakpilot-pwa-klausur-service cat /tmp/compliance_output/statistics.json
else
log " ERROR: Compliance output not found!"
fi
# Step 5: Re-seed compliance database with new data
log ""
log "Step 5: Updating compliance database..."
# Call the backend API to re-seed compliance data
SEED_RESULT=$(curl -s -X POST http://localhost:8000/api/v1/compliance/seed -H "Content-Type: application/json" -d '{"force": true}' || echo '{"error": "API call failed"}')
log " Seed result: $SEED_RESULT"
# Step 6: Update statistics
log ""
log "Step 6: Updating dashboard statistics..."
STATS=$(curl -s http://localhost:8000/api/v1/compliance/dashboard/statistics || echo '{"error": "API call failed"}')
log " Dashboard statistics:"
echo "$STATS" | python3 -m json.tool 2>/dev/null || echo "$STATS"
# Final Summary
log ""
log "=============================================="
log "PIPELINE COMPLETE"
log "Finished at: $(date +"%Y-%m-%d %H:%M:%S")"
log "=============================================="
log ""
log "Results:"
log " - Chunks in RAG: $CHUNK_COUNT"
log " - Check /tmp/compliance_output/ for detailed results"
log " - View Frontend at: https://macmini:3002/compliance"
log " - View RAG Status at: https://macmini:3002/ai/rag"
log ""
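Das Inline-Snippet zum Auslesen von `points_count` aus der Qdrant-Collection-Antwort lässt sich als robuste Hilfsfunktion skizzieren (0 als Fallback, wie das `|| echo "0"` oben):

```python
import json

def points_count(raw: str) -> int:
    """Extrahiert result.points_count aus der Qdrant-Antwort;
    liefert 0 bei ungültigem JSON oder fehlenden Feldern."""
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return 0
    if not isinstance(data, dict):
        return 0
    value = data.get("result", {}).get("points_count", 0)
    return int(value or 0)
```

Damit bricht die Pipeline trotz `set -e` nicht ab, wenn Qdrant eine Fehlerseite statt JSON liefert.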

scripts/security-scan.sh Executable file

@@ -0,0 +1,217 @@
#!/bin/bash
# BreakPilot DevSecOps Security Scanning Script
#
# Usage: ./scripts/security-scan.sh [options]
# --all Run all scans
# --secrets Run secrets detection (Gitleaks)
# --sast Run static analysis (Semgrep, Bandit)
# --sca Run dependency scanning (Trivy, Grype)
# --sbom Generate SBOM (Syft)
# --image Scan Docker images (Trivy)
# --ci CI mode (exit on critical findings)
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Default values
RUN_ALL=false
RUN_SECRETS=false
RUN_SAST=false
RUN_SCA=false
RUN_SBOM=false
RUN_IMAGE=false
CI_MODE=false
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--all) RUN_ALL=true; shift ;;
--secrets) RUN_SECRETS=true; shift ;;
--sast) RUN_SAST=true; shift ;;
--sca) RUN_SCA=true; shift ;;
--sbom) RUN_SBOM=true; shift ;;
--image) RUN_IMAGE=true; shift ;;
--ci) CI_MODE=true; shift ;;
*) echo "Unknown option: $1"; exit 1 ;;
esac
done
# If no specific option, run all
if ! $RUN_SECRETS && ! $RUN_SAST && ! $RUN_SCA && ! $RUN_SBOM && ! $RUN_IMAGE; then
RUN_ALL=true
fi
echo -e "${BLUE}╔════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ BreakPilot DevSecOps Security Scanner ║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════════╝${NC}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "$SCRIPT_DIR")"
cd "$PROJECT_DIR"
SCAN_RESULTS_DIR="$PROJECT_DIR/security-reports"
mkdir -p "$SCAN_RESULTS_DIR"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
CRITICAL_FOUND=false
# =============================================
# 1. Secrets Detection (Gitleaks)
# =============================================
run_secrets_scan() {
echo -e "\n${YELLOW}[1/5] Secrets Detection (Gitleaks)${NC}"
if command -v gitleaks &> /dev/null; then
echo "Running Gitleaks..."
if gitleaks detect --source . --config .gitleaks.toml --report-path "$SCAN_RESULTS_DIR/gitleaks-${TIMESTAMP}.json" --report-format json 2>&1; then
echo -e "${GREEN}✓ No secrets found${NC}"
else
echo -e "${RED}✗ Secrets detected! Check $SCAN_RESULTS_DIR/gitleaks-${TIMESTAMP}.json${NC}"
CRITICAL_FOUND=true
fi
else
echo -e "${YELLOW}⚠ Gitleaks not installed. Install: brew install gitleaks${NC}"
fi
}
# =============================================
# 2. Static Analysis (Semgrep + Bandit)
# =============================================
run_sast_scan() {
echo -e "\n${YELLOW}[2/5] Static Analysis (SAST)${NC}"
# Semgrep
if command -v semgrep &> /dev/null; then
echo "Running Semgrep..."
semgrep scan --config auto --config .semgrep.yml \
--json --output "$SCAN_RESULTS_DIR/semgrep-${TIMESTAMP}.json" \
--severity ERROR || true
echo -e "${GREEN}✓ Semgrep scan complete${NC}"
else
echo -e "${YELLOW}⚠ Semgrep not installed. Install: pip install semgrep${NC}"
fi
# Bandit (Python)
if command -v bandit &> /dev/null; then
echo "Running Bandit..."
bandit -r backend/ -ll -x backend/tests/* \
-f json -o "$SCAN_RESULTS_DIR/bandit-${TIMESTAMP}.json" 2>/dev/null || true
echo -e "${GREEN}✓ Bandit scan complete${NC}"
else
echo -e "${YELLOW}⚠ Bandit not installed. Install: pip install bandit${NC}"
fi
}
# =============================================
# 3. Dependency Scanning (Trivy + Grype)
# =============================================
run_sca_scan() {
echo -e "\n${YELLOW}[3/5] Dependency Scanning (SCA)${NC}"
# Trivy filesystem scan
if command -v trivy &> /dev/null; then
echo "Running Trivy filesystem scan..."
trivy fs . --config .trivy.yaml \
--format json --output "$SCAN_RESULTS_DIR/trivy-fs-${TIMESTAMP}.json" \
--severity HIGH,CRITICAL || true
echo -e "${GREEN}✓ Trivy filesystem scan complete${NC}"
else
echo -e "${YELLOW}⚠ Trivy not installed. Install: brew install trivy${NC}"
fi
# Grype
if command -v grype &> /dev/null; then
echo "Running Grype..."
grype dir:. -o json > "$SCAN_RESULTS_DIR/grype-${TIMESTAMP}.json" 2>/dev/null || true
echo -e "${GREEN}✓ Grype scan complete${NC}"
else
echo -e "${YELLOW}⚠ Grype not installed. Install: brew install grype${NC}"
fi
}
# =============================================
# 4. SBOM Generation (Syft)
# =============================================
run_sbom_generation() {
echo -e "\n${YELLOW}[4/5] SBOM Generation (Syft)${NC}"
if command -v syft &> /dev/null; then
echo "Generating SBOM..."
syft dir:. -o cyclonedx-json="$SCAN_RESULTS_DIR/sbom-${TIMESTAMP}.json" 2>/dev/null || true
syft dir:. -o spdx-json="$SCAN_RESULTS_DIR/sbom-spdx-${TIMESTAMP}.json" 2>/dev/null || true
echo -e "${GREEN}✓ SBOM generated (CycloneDX + SPDX)${NC}"
else
echo -e "${YELLOW}⚠ Syft not installed. Install: brew install syft${NC}"
fi
}
# =============================================
# 5. Container Image Scanning (Trivy)
# =============================================
run_image_scan() {
echo -e "\n${YELLOW}[5/5] Container Image Scanning${NC}"
if command -v trivy &> /dev/null; then
# Scan each built image
for IMAGE in breakpilot-pwa-backend breakpilot-pwa-consent-service breakpilot-pwa-school-service; do
if docker image inspect "$IMAGE" &> /dev/null; then
echo "Scanning $IMAGE..."
trivy image "$IMAGE" \
--format json --output "$SCAN_RESULTS_DIR/trivy-image-${IMAGE}-${TIMESTAMP}.json" \
--severity HIGH,CRITICAL || true
else
echo -e "${YELLOW}⚠ Image $IMAGE not found, skipping${NC}"
fi
done
echo -e "${GREEN}✓ Container image scans complete${NC}"
else
echo -e "${YELLOW}⚠ Trivy not installed. Install: brew install trivy${NC}"
fi
}
# =============================================
# Run selected scans
# =============================================
if $RUN_ALL || $RUN_SECRETS; then
run_secrets_scan
fi
if $RUN_ALL || $RUN_SAST; then
run_sast_scan
fi
if $RUN_ALL || $RUN_SCA; then
run_sca_scan
fi
if $RUN_ALL || $RUN_SBOM; then
run_sbom_generation
fi
if $RUN_ALL || $RUN_IMAGE; then
run_image_scan
fi
# =============================================
# Summary
# =============================================
echo -e "\n${BLUE}╔════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Scan Summary ║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════════╝${NC}"
echo -e "Reports saved to: ${GREEN}$SCAN_RESULTS_DIR${NC}"
ls -la "$SCAN_RESULTS_DIR"/*-${TIMESTAMP}.* 2>/dev/null || echo "No reports generated"
if $CRITICAL_FOUND && $CI_MODE; then
echo -e "\n${RED}✗ Critical security findings detected. CI pipeline should fail.${NC}"
exit 1
else
echo -e "\n${GREEN}✓ Security scan completed${NC}"
exit 0
fi
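The scan dispatcher above stores its flags as the literal commands `true`/`false` and evaluates them directly in `if`. A minimal self-contained sketch of that pattern (variable names chosen for illustration):

```shell
#!/bin/bash
# Booleans held as the commands `true`/`false`, as in the scan dispatcher.
RUN_ALL=false
RUN_SECRETS=true
RUN_SAST=false

# `$RUN_SECRETS` expands to the command `true`, which succeeds.
if $RUN_ALL || $RUN_SECRETS; then
    echo "secrets scan selected"
fi
if $RUN_ALL || $RUN_SAST; then
    echo "sast scan selected"
fi
```

This avoids string comparisons like `[ "$RUN_SECRETS" = "true" ]`, at the cost of executing variable contents, so the variables must only ever hold `true` or `false`.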

scripts/server-backup.sh Executable file
@@ -0,0 +1,78 @@
#!/bin/bash
# =============================================================================
# Server Backup Script (runs on the Mac Mini)
# =============================================================================
# This script runs on the Mac Mini and creates daily backups.
# The backups are stored locally on the Mac Mini.
#
# Installation on the Mac Mini:
#   1. Copy the script:    scp scripts/server-backup.sh macmini:~/scripts/
#   2. Make it executable: ssh macmini "chmod +x ~/scripts/server-backup.sh"
#   3. Set up cron:        ssh macmini "crontab -e"
#      Add this line:      0 2 * * * /Users/benjaminadmin/scripts/server-backup.sh
#
# Backups are stored in: ~/backups/
# =============================================================================
set -e
# Configuration
BACKUP_DIR="/Users/benjaminadmin/backups"
TIMESTAMP=$(date +%Y-%m-%d_%H-%M-%S)
RETENTION_DAYS=14
# Create the backup directory
mkdir -p "$BACKUP_DIR"
LOG_FILE="$BACKUP_DIR/backup-$TIMESTAMP.log"
echo "=== Server backup started: $TIMESTAMP ===" | tee "$LOG_FILE"
# 1. PostgreSQL backup
echo "[1/3] PostgreSQL backup..." | tee -a "$LOG_FILE"
PG_BACKUP="$BACKUP_DIR/postgres-$TIMESTAMP.sql.gz"
/usr/local/bin/docker exec breakpilot-pwa-postgres pg_dump -U breakpilot breakpilot_db 2>/dev/null | gzip > "$PG_BACKUP"
if [ -s "$PG_BACKUP" ]; then
SIZE=$(du -h "$PG_BACKUP" | cut -f1)
echo "  ✓ PostgreSQL: $PG_BACKUP ($SIZE)" | tee -a "$LOG_FILE"
else
echo "  ⚠ PostgreSQL backup empty or failed" | tee -a "$LOG_FILE"
fi
# 2. Gitea repositories backup (bare repos)
echo "[2/3] Gitea repositories..." | tee -a "$LOG_FILE"
GITEA_BACKUP="$BACKUP_DIR/gitea-repos-$TIMESTAMP.tar.gz"
/usr/local/bin/docker exec breakpilot-pwa-gitea tar czf - /var/lib/gitea/git/repositories 2>/dev/null > "$GITEA_BACKUP"
if [ -s "$GITEA_BACKUP" ]; then
SIZE=$(du -h "$GITEA_BACKUP" | cut -f1)
echo "  ✓ Gitea: $GITEA_BACKUP ($SIZE)" | tee -a "$LOG_FILE"
else
echo "  ⚠ Gitea backup empty or failed" | tee -a "$LOG_FILE"
fi
# 3. Vault backup (secrets)
echo "[3/3] Vault secrets..." | tee -a "$LOG_FILE"
VAULT_BACKUP="$BACKUP_DIR/vault-$TIMESTAMP.json"
# Note: the two responses are concatenated, so the file holds two JSON documents
curl -s -H "X-Vault-Token: breakpilot-dev-token" "http://localhost:8200/v1/secret/data/cicd/api-tokens" > "$VAULT_BACKUP" 2>/dev/null
curl -s -H "X-Vault-Token: breakpilot-dev-token" "http://localhost:8200/v1/secret/data/breakpilot" >> "$VAULT_BACKUP" 2>/dev/null
if [ -s "$VAULT_BACKUP" ]; then
echo "  ✓ Vault: $VAULT_BACKUP" | tee -a "$LOG_FILE"
else
echo "  ⚠ Vault backup empty" | tee -a "$LOG_FILE"
fi
# 4. Clean up old backups
echo "" | tee -a "$LOG_FILE"
echo "Removing backups older than $RETENTION_DAYS days..." | tee -a "$LOG_FILE"
find "$BACKUP_DIR" -name "postgres-*.sql.gz" -mtime +$RETENTION_DAYS -delete 2>/dev/null
find "$BACKUP_DIR" -name "gitea-repos-*.tar.gz" -mtime +$RETENTION_DAYS -delete 2>/dev/null
find "$BACKUP_DIR" -name "vault-*.json" -mtime +$RETENTION_DAYS -delete 2>/dev/null
find "$BACKUP_DIR" -name "backup-*.log" -mtime +$RETENTION_DAYS -delete 2>/dev/null
echo "  ✓ Old backups removed" | tee -a "$LOG_FILE"
# Summary
echo "" | tee -a "$LOG_FILE"
echo "=== Backup finished: $(date +%Y-%m-%d_%H-%M-%S) ===" | tee -a "$LOG_FILE"
echo "" | tee -a "$LOG_FILE"
echo "Backup directory: $BACKUP_DIR" | tee -a "$LOG_FILE"
ls -lh "$BACKUP_DIR"/*-$TIMESTAMP* 2>/dev/null | tee -a "$LOG_FILE"
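The retention step relies on `find -mtime +N -delete`. A self-contained demo of that logic on a throwaway directory (the backdating fallback covers both GNU and BSD `touch`/`date`):

```shell
#!/bin/bash
# Demonstrate the retention logic: backups older than RETENTION_DAYS are
# deleted, newer ones survive. Runs entirely inside a temp directory.
TMP=$(mktemp -d)
RETENTION_DAYS=14
touch "$TMP/postgres-new.sql.gz"
touch "$TMP/postgres-old.sql.gz"
# Backdate the "old" backup by 20 days (GNU touch -d; macOS needs date -v)
touch -d "20 days ago" "$TMP/postgres-old.sql.gz" 2>/dev/null || \
    touch -t "$(date -v-20d +%Y%m%d0000)" "$TMP/postgres-old.sql.gz"
find "$TMP" -name "postgres-*.sql.gz" -mtime +$RETENTION_DAYS -delete
REMAINING=$(ls "$TMP")
rm -rf "$TMP"
echo "remaining: $REMAINING"
```

Restoring a backup is the reverse of the dump step, e.g. `gunzip -c postgres-<timestamp>.sql.gz | docker exec -i breakpilot-pwa-postgres psql -U breakpilot breakpilot_db` (sketch, untested against the live container).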


@@ -0,0 +1,130 @@
#!/bin/bash
# =============================================================================
# Gitea Branch Protection Setup
# =============================================================================
# This script configures branch protection for the breakpilot-pwa repository.
#
# Prerequisites:
#   1. Create a Gitea API token at:
#      http://macmini:3003/user/settings/applications
#      → "Generate New Token" → Name: "branch-protection" → all permissions
#
#   2. Export the token as an environment variable:
#      export GITEA_TOKEN="your-token-here"
#
#   3. Run the script:
#      ./scripts/setup-branch-protection.sh
# =============================================================================
set -e
# Configuration
GITEA_URL="http://macmini:3003"
OWNER="pilotadmin"
REPO="breakpilot-pwa"
BRANCH="main"
# Check token
if [ -z "$GITEA_TOKEN" ]; then
echo "=============================================="
echo "ERROR: GITEA_TOKEN is not set!"
echo "=============================================="
echo ""
echo "Setup steps:"
echo ""
echo "1. Open: http://macmini:3003/user/settings/applications"
echo ""
echo "2. Click 'Generate New Token'"
echo "   - Name: branch-protection"
echo "   - Select all permissions (or at least 'repo')"
echo ""
echo "3. Copy the token and run:"
echo "   export GITEA_TOKEN=\"your-token-here\""
echo "   ./scripts/setup-branch-protection.sh"
echo ""
exit 1
fi
echo "=============================================="
echo "Gitea Branch Protection Setup"
echo "=============================================="
echo ""
echo "Repository: $OWNER/$REPO"
echo "Branch: $BRANCH"
echo ""
# Check API access
echo "[1/3] Checking API access..."
API_CHECK=$(curl -s -o /dev/null -w "%{http_code}" \
-H "Authorization: token $GITEA_TOKEN" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO")
if [ "$API_CHECK" != "200" ]; then
echo "ERROR: API access failed (HTTP $API_CHECK)"
echo "Check the token and repository name."
exit 1
fi
echo "  ✓ API access OK"
# Configure branch protection
echo "[2/3] Configuring branch protection..."
# Delete any existing protection first
curl -s -X DELETE \
-H "Authorization: token $GITEA_TOKEN" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/branch_protections/$BRANCH" \
>/dev/null 2>&1 || true
# Create the new branch protection
RESPONSE=$(curl -s -X POST \
-H "Authorization: token $GITEA_TOKEN" \
-H "Content-Type: application/json" \
"$GITEA_URL/api/v1/repos/$OWNER/$REPO/branch_protections" \
-d '{
"branch_name": "main",
"enable_push": false,
"enable_push_whitelist": true,
"push_whitelist_usernames": [],
"push_whitelist_deploy_keys": false,
"enable_merge_whitelist": false,
"enable_status_check": true,
"status_check_contexts": [],
"required_approvals": 1,
"enable_approvals_whitelist": false,
"block_on_rejected_reviews": true,
"block_on_outdated_branch": true,
"dismiss_stale_approvals": true,
"require_signed_commits": false,
"protected_file_patterns": "",
"unprotected_file_patterns": ""
}')
if echo "$RESPONSE" | grep -q "branch_name"; then
echo "  ✓ Branch protection enabled"
else
echo "  ⚠ Possibly already configured, or an error occurred:"
echo "$RESPONSE" | head -5
fi
# Summary
echo "[3/3] Done!"
echo ""
echo "=============================================="
echo "Branch protection for 'main' is active:"
echo "=============================================="
echo ""
echo "  ✓ Direct pushes to 'main' blocked"
echo "  ✓ Pull request required"
echo "  ✓ At least 1 approval required"
echo "  ✓ Stale approvals are dismissed"
echo "  ✓ Blocks on rejected reviews"
echo ""
echo "Team workflow:"
echo "  1. git checkout -b feature/my-feature"
echo "  2. git push -u origin feature/my-feature"
echo "  3. Create a pull request in Gitea"
echo "  4. Code review + approval"
echo "  5. Merge into main"
echo ""
echo "Gitea: http://macmini:3003/$OWNER/$REPO"
echo ""

scripts/setup-gitea.sh Executable file

@@ -0,0 +1,102 @@
#!/bin/bash
# =============================================================================
# Gitea Setup Script
# =============================================================================
# This script sets up Gitea and the Actions runner for the SBOM pipeline.
#
# Prerequisites:
#   - Docker and Docker Compose installed
#   - Access to the Mac Mini (ssh macmini)
#
# Usage:
#   ./scripts/setup-gitea.sh
# =============================================================================
set -e
echo "=========================================="
echo "Gitea setup for BreakPilot"
echo "=========================================="
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Check whether we are running on the Mac Mini or locally
if [ "$(hostname)" = "macmini" ] || [ "$(hostname)" = "Mac-mini" ]; then
DOCKER_CMD="docker"
COMPOSE_CMD="docker compose"
PROJECT_DIR="/Users/benjaminadmin/Projekte/breakpilot-pwa"
else
# Remote execution on the Mac Mini
DOCKER_CMD="ssh macmini /usr/local/bin/docker"
COMPOSE_CMD="ssh macmini /usr/local/bin/docker compose -f /Users/benjaminadmin/Projekte/breakpilot-pwa/docker-compose.yml"
PROJECT_DIR="."
fi
echo ""
echo -e "${YELLOW}1. Starting Gitea container...${NC}"
$COMPOSE_CMD up -d gitea
echo ""
echo -e "${YELLOW}2. Waiting for Gitea startup (30 seconds)...${NC}"
sleep 30
echo ""
echo -e "${YELLOW}3. Checking Gitea health...${NC}"
HEALTH_CHECK=$(curl -s -o /dev/null -w "%{http_code}" http://macmini:3003/api/healthz 2>/dev/null || echo "000")
if [ "$HEALTH_CHECK" = "200" ]; then
echo -e "${GREEN}✓ Gitea is reachable!${NC}"
else
echo -e "${RED}✗ Gitea not reachable (HTTP $HEALTH_CHECK)${NC}"
echo "  Please check: docker logs breakpilot-pwa-gitea"
exit 1
fi
echo ""
echo -e "${GREEN}=========================================="
echo "Gitea is ready!"
echo "==========================================${NC}"
echo ""
echo "Next steps (manual):"
echo ""
echo "1. Open Gitea in your browser:"
echo "   http://macmini:3003"
echo ""
echo "2. Create an admin account:"
echo "   - Username: admin"
echo "   - Email:    admin@breakpilot.de"
echo "   - Password: (choose a strong password)"
echo ""
echo "3. Create a repository:"
echo "   - Name: breakpilot-pwa"
echo "   - Visibility: Private"
echo ""
echo "4. Enable Gitea Actions:"
echo "   Repository Settings → Actions → Enable Repository Actions"
echo ""
echo "5. Create a runner token:"
echo "   Repository Settings → Actions → Runners → Create new Runner"
echo "   → copy the token"
echo ""
echo "6. Start the runner with the token:"
echo "   export GITEA_RUNNER_TOKEN=<your-token>"
echo "   docker compose up -d gitea-runner"
echo ""
echo "7. Push the repository to Gitea:"
echo "   git remote add gitea http://macmini:3003/admin/breakpilot-pwa.git"
echo "   git push gitea main"
echo ""
echo "8. The SBOM pipeline runs automatically on every push!"
echo ""
echo -e "${YELLOW}Note: the PostgreSQL database 'gitea' is created automatically.${NC}"
echo ""
# Optional: create the Gitea database in PostgreSQL
echo -e "${YELLOW}Creating the Gitea database in PostgreSQL (if missing)...${NC}"
$DOCKER_CMD exec -i breakpilot-pwa-postgres psql -U breakpilot -d postgres -c "CREATE DATABASE gitea;" 2>/dev/null || echo "  (database already exists)"
echo ""
echo -e "${GREEN}Setup complete!${NC}"

scripts/start-content-services.sh Executable file

@@ -0,0 +1,102 @@
#!/bin/bash
# BreakPilot Content Service - Startup Script
# Starts all Content Service components
set -e
echo "
╔════════════════════════════════════════════════════════╗
║ 🎓 BreakPilot Content Service - Startup ║
║ 📦 Starting Educational Content Platform... ║
╚════════════════════════════════════════════════════════╝
"
# Check Docker
if ! command -v docker &> /dev/null; then
echo "❌ Docker not found. Please install Docker first."
exit 1
fi
if ! command -v docker-compose &> /dev/null; then
echo "❌ docker-compose not found. Please install docker-compose first."
exit 1
fi
# Create network if not exists
if ! docker network inspect breakpilot-pwa-network &> /dev/null; then
echo "📡 Creating Docker network..."
docker network create breakpilot-pwa-network
fi
# Start services
echo "🚀 Starting Content Services..."
echo ""
docker-compose \
-f docker-compose.yml \
-f docker-compose.content.yml \
up -d
echo ""
echo "⏳ Waiting for services to be healthy..."
sleep 10
# Check service health
echo ""
echo "🔍 Checking service status..."
echo ""
# Content Service
if curl -f http://localhost:8002/health &> /dev/null; then
echo "✅ Content Service API - http://localhost:8002"
else
echo "⚠️ Content Service API - Starting..."
fi
# MinIO
if curl -f http://localhost:9000/minio/health/live &> /dev/null; then
echo "✅ MinIO Storage - http://localhost:9001 (UI)"
else
echo "⚠️ MinIO Storage - Starting..."
fi
# H5P Service
if curl -f http://localhost:8003/health &> /dev/null; then
echo "✅ H5P Service - http://localhost:8003"
else
echo "⚠️ H5P Service - Starting..."
fi
# Content DB
if docker exec breakpilot-pwa-content-db pg_isready -U breakpilot -d breakpilot_content &> /dev/null; then
echo "✅ Content Database - localhost:5433"
else
echo "⚠️ Content Database - Starting..."
fi
echo ""
echo "
╔════════════════════════════════════════════════════════╗
║ ✅ Content Services Started! ║
╠════════════════════════════════════════════════════════╣
║ ║
║ 📍 Content Service API: http://localhost:8002/docs ║
║ 📍 MinIO Console: http://localhost:9001 ║
║ 📍 H5P Editor: http://localhost:8003 ║
║ 📍 Content Database: localhost:5433 ║
║ ║
║ 📚 Setup Guide: CONTENT_SERVICE_SETUP.md ║
║ ║
╚════════════════════════════════════════════════════════╝
"
echo "💡 Quick Commands:"
echo ""
echo " View Logs: docker-compose -f docker-compose.content.yml logs -f"
echo " Stop Services: docker-compose -f docker-compose.content.yml down"
echo " Restart: docker-compose -f docker-compose.content.yml restart"
echo ""
echo "🎉 Ready to create educational content!"
echo ""
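The script above waits a fixed 10 seconds before probing each endpoint once. A polling loop is usually more robust; a minimal sketch, where `probe` stands in for a real `curl -f .../health` call:

```shell
#!/bin/bash
# Retry-based health check sketch: poll until the probe succeeds instead
# of sleeping a fixed interval and checking once.
ATTEMPTS=0
probe() { [ "$ATTEMPTS" -ge 3 ]; }   # stand-in: pretend the service is up on attempt 3
until probe; do
    ATTEMPTS=$((ATTEMPTS + 1))
    # sleep 2   # a real check would wait between attempts and cap the retries
done
echo "healthy after $ATTEMPTS attempts"
```

In the real script, `probe` would be `curl -f http://localhost:8002/health &> /dev/null`, with an upper bound on attempts so a dead service still fails fast.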

scripts/start.sh Executable file

@@ -0,0 +1,52 @@
#!/bin/bash
# ============================================
# BreakPilot Quick Start
# ============================================
# Usage: ./scripts/start.sh [dev|staging|prod] [services...]
#
# Examples:
# ./scripts/start.sh dev # Start all dev services
# ./scripts/start.sh dev backend postgres # Start only specific services
# ./scripts/start.sh staging # Start staging environment
# ============================================
set -e
ENV=${1:-dev}
shift 2>/dev/null || true
SERVICES="$@"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(dirname "$SCRIPT_DIR")"
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
cd "$ROOT_DIR"
# Switch environment
"$SCRIPT_DIR/env-switch.sh" "$ENV"
echo ""
echo -e "${YELLOW}Starting services...${NC}"
echo ""
case $ENV in
dev|development)
docker compose up -d $SERVICES
;;
staging)
docker compose -f docker-compose.yml -f docker-compose.staging.yml up -d $SERVICES
;;
prod|production)
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d $SERVICES
;;
esac
echo ""
echo -e "${GREEN}✓ Services started!${NC}"
echo ""
echo "Check status with: docker compose ps"
echo "View logs with: docker compose logs -f [service]"
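The argument handling in start.sh (first arg is the environment, defaulting to `dev`; remaining args pass through as service names) can be exercised in isolation:

```shell
#!/bin/bash
# Argument-handling pattern from start.sh: ENV from $1 with a default,
# everything after it collected as SERVICES.
ENV=${1:-dev}
shift 2>/dev/null || true
SERVICES="$@"
case $ENV in
    dev|development) FILES="docker-compose.yml" ;;
    staging)         FILES="docker-compose.yml docker-compose.staging.yml" ;;
    prod|production) FILES="docker-compose.yml docker-compose.prod.yml" ;;
    *)               echo "Unknown environment: $ENV" >&2; exit 1 ;;
esac
echo "env=$ENV files=$FILES services=${SERVICES:-all}"
```

Run with no arguments this selects the dev compose file and "all" services; `./start.sh staging backend` would select the staging overlay and only the `backend` service.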

scripts/status.sh Executable file

@@ -0,0 +1,132 @@
#!/bin/bash
# ============================================
# BreakPilot Status Overview
# ============================================
# Shows current environment, git branch, and service status
#
# Usage: ./scripts/status.sh
# ============================================
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(dirname "$SCRIPT_DIR")"
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
cd "$ROOT_DIR"
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE} BreakPilot Status${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
# Current Environment
echo -e "${CYAN}Environment:${NC}"
if [ -f .env ]; then
CURRENT_ENV=$(grep "^ENVIRONMENT=" .env 2>/dev/null | cut -d= -f2)
PROJECT_NAME=$(grep "^COMPOSE_PROJECT_NAME=" .env 2>/dev/null | cut -d= -f2)
case $CURRENT_ENV in
development)
echo -e " Active: ${GREEN}$CURRENT_ENV${NC}"
;;
staging)
echo -e " Active: ${YELLOW}$CURRENT_ENV${NC}"
;;
production)
echo -e " Active: ${RED}$CURRENT_ENV${NC}"
;;
*)
echo -e " Active: ${CURRENT_ENV:-unknown}"
;;
esac
echo " Project: ${PROJECT_NAME:-not set}"
else
echo -e " ${YELLOW}Not configured (.env missing)${NC}"
echo " Run: ./scripts/env-switch.sh dev"
fi
echo ""
# Git Status
echo -e "${CYAN}Git:${NC}"
if [ -d .git ]; then
BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
case $BRANCH in
develop)
echo -e " Branch: ${GREEN}$BRANCH${NC} (development)"
;;
staging)
echo -e " Branch: ${YELLOW}$BRANCH${NC} (staging)"
;;
main)
echo -e " Branch: ${RED}$BRANCH${NC} (production)"
;;
*)
echo " Branch: $BRANCH"
;;
esac
# Check for uncommitted changes
if git diff-index --quiet HEAD -- 2>/dev/null; then
echo -e " Status: ${GREEN}Clean${NC}"
else
CHANGES=$(git status --porcelain | wc -l | tr -d ' ')
echo -e " Status: ${YELLOW}$CHANGES uncommitted change(s)${NC}"
fi
else
echo -e " ${YELLOW}Not a git repository${NC}"
echo " Run: git init"
fi
echo ""
# Docker Status
echo -e "${CYAN}Services:${NC}"
if command -v docker &> /dev/null; then
RUNNING=$(docker compose ps --format "table {{.Name}}\t{{.Status}}" 2>/dev/null | tail -n +2 | wc -l | tr -d ' ')
if [ "$RUNNING" -gt 0 ]; then
echo " Running: $RUNNING container(s)"
echo ""
docker compose ps --format "table {{.Name}}\t{{.Status}}\t{{.Ports}}" 2>/dev/null | head -15
TOTAL=$(docker compose ps -a --format "{{.Name}}" 2>/dev/null | wc -l | tr -d ' ')
if [ "$TOTAL" -gt 14 ]; then
echo " ... and $((TOTAL - 14)) more"
fi
else
echo -e " ${YELLOW}No services running${NC}"
echo " Start with: ./scripts/start.sh dev"
fi
else
echo -e " ${RED}Docker not available${NC}"
fi
echo ""
# Database Volumes
echo -e "${CYAN}Database Volumes:${NC}"
VOLS=$(docker volume ls --format "{{.Name}}" 2>/dev/null | grep -E "(postgres|breakpilot.*postgres)" || true)
if [ -z "$VOLS" ]; then
echo "  No database volumes found"
else
echo "$VOLS" | while read vol; do
case $vol in
*staging*)
echo -e "  ${YELLOW}$vol${NC}"
;;
*prod*)
echo -e "  ${RED}$vol${NC}"
;;
*)
echo -e "  ${GREEN}$vol${NC}"
;;
esac
done
fi
echo ""
# Quick Commands
echo -e "${CYAN}Quick Commands:${NC}"
echo " Switch env: ./scripts/env-switch.sh [dev|staging]"
echo " Start: ./scripts/start.sh [dev|staging]"
echo " Stop: ./scripts/stop.sh [dev|staging]"
echo " Promote: ./scripts/promote.sh [dev-to-staging|staging-to-prod]"
echo " Logs: docker compose logs -f [service]"

scripts/stop.sh Executable file

@@ -0,0 +1,40 @@
#!/bin/bash
# ============================================
# BreakPilot Stop Services
# ============================================
# Usage: ./scripts/stop.sh [dev|staging|prod]
# ============================================
set -e
ENV=${1:-dev}
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(dirname "$SCRIPT_DIR")"
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
cd "$ROOT_DIR"
echo -e "${YELLOW}Stopping $ENV services...${NC}"
case $ENV in
dev|development)
docker compose down
;;
staging)
docker compose -f docker-compose.yml -f docker-compose.staging.yml down
;;
prod|production)
docker compose -f docker-compose.yml -f docker-compose.prod.yml down
;;
*)
echo "Unknown environment: $ENV"
echo "Usage: $0 [dev|staging|prod]"
exit 1
;;
esac
echo -e "${GREEN}✓ Services stopped.${NC}"


@@ -0,0 +1,167 @@
#!/bin/bash
# ============================================
# Woodpecker OAuth Credentials Sync Script
# ============================================
# This script synchronizes the Woodpecker OAuth credentials
# between Gitea, Vault, and the .env file.
#
# Usage:
#   ./scripts/sync-woodpecker-credentials.sh [--regenerate]
#
# Options:
#   --regenerate    Creates new OAuth credentials
#
# The script:
#   1. Reads the current credentials from Vault
#   2. Updates the .env file
#   3. Optionally: creates new credentials in Gitea
# ============================================
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "$SCRIPT_DIR")"
ENV_FILE="$PROJECT_DIR/.env"
# Vault configuration
VAULT_ADDR="${VAULT_ADDR:-http://localhost:8200}"
VAULT_TOKEN="${VAULT_DEV_TOKEN:-breakpilot-dev-token}"
VAULT_SECRET_PATH="secret/cicd/woodpecker"
# Gitea configuration
GITEA_URL="${GITEA_URL:-http://macmini:3003}"
GITEA_API_TOKEN_PATH="secret/cicd/api-tokens"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
log_info() { echo -e "${GREEN}[INFO]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
# Check that Docker is available
check_docker() {
if command -v docker &> /dev/null; then
DOCKER_CMD="docker"
elif [ -x "/usr/local/bin/docker" ]; then
DOCKER_CMD="/usr/local/bin/docker"
else
log_error "Docker not found"
exit 1
fi
}
# Fetch credentials from Vault
get_vault_credentials() {
log_info "Loading credentials from Vault..."
VAULT_RESPONSE=$($DOCKER_CMD exec -e VAULT_TOKEN="$VAULT_TOKEN" breakpilot-pwa-vault \
vault kv get -format=json "$VAULT_SECRET_PATH" 2>/dev/null || echo "{}")
if echo "$VAULT_RESPONSE" | grep -q "gitea_client_id"; then
GITEA_CLIENT_ID=$(echo "$VAULT_RESPONSE" | python3 -c "import sys,json; print(json.load(sys.stdin)['data']['data']['gitea_client_id'])")
GITEA_CLIENT_SECRET=$(echo "$VAULT_RESPONSE" | python3 -c "import sys,json; print(json.load(sys.stdin)['data']['data']['gitea_client_secret'])")
log_info "Credentials loaded from Vault"
return 0
else
log_warn "No credentials found in Vault"
return 1
fi
}
# Update the .env file
update_env_file() {
log_info "Updating .env file..."
if [ -z "$GITEA_CLIENT_ID" ] || [ -z "$GITEA_CLIENT_SECRET" ]; then
log_error "Credentials not available"
return 1
fi
# Create a backup
cp "$ENV_FILE" "$ENV_FILE.backup"
# Update or append WOODPECKER_GITEA_CLIENT
if grep -q "^WOODPECKER_GITEA_CLIENT=" "$ENV_FILE"; then
sed -i.tmp "s|^WOODPECKER_GITEA_CLIENT=.*|WOODPECKER_GITEA_CLIENT=$GITEA_CLIENT_ID|" "$ENV_FILE"
else
echo "WOODPECKER_GITEA_CLIENT=$GITEA_CLIENT_ID" >> "$ENV_FILE"
fi
# Update or append WOODPECKER_GITEA_SECRET
if grep -q "^WOODPECKER_GITEA_SECRET=" "$ENV_FILE"; then
sed -i.tmp "s|^WOODPECKER_GITEA_SECRET=.*|WOODPECKER_GITEA_SECRET=$GITEA_CLIENT_SECRET|" "$ENV_FILE"
else
echo "WOODPECKER_GITEA_SECRET=$GITEA_CLIENT_SECRET" >> "$ENV_FILE"
fi
rm -f "$ENV_FILE.tmp"
log_info ".env updated"
}
# Create new OAuth credentials in Gitea
regenerate_credentials() {
log_info "Creating new OAuth credentials..."
# Fetch the Gitea API token from Vault
API_TOKEN_RESPONSE=$($DOCKER_CMD exec -e VAULT_TOKEN="$VAULT_TOKEN" breakpilot-pwa-vault \
vault kv get -format=json "$GITEA_API_TOKEN_PATH" 2>/dev/null)
GITEA_API_TOKEN=$(echo "$API_TOKEN_RESPONSE" | python3 -c "import sys,json; print(json.load(sys.stdin)['data']['data']['gitea_token'])")
# Generate new credentials
NEW_CLIENT_ID=$(uuidgen | tr '[:upper:]' '[:lower:]')
NEW_CLIENT_SECRET=$(openssl rand -hex 32)
# Create a bcrypt hash of the secret
HASHED_SECRET=$($DOCKER_CMD run --rm alpine sh -c "apk add --no-cache apache2-utils >/dev/null 2>&1 && htpasswd -nbBC 10 '' '$NEW_CLIENT_SECRET' | cut -d: -f2 | sed 's/^\$2y/\$2a/'")
HASHED_SECRET_B64=$(echo -n "$HASHED_SECRET" | base64)
# Delete old OAuth apps
$DOCKER_CMD exec breakpilot-pwa-postgres psql -U breakpilot -d gitea -c \
"DELETE FROM oauth2_application WHERE name = 'Woodpecker CI';" >/dev/null
# Create the new OAuth app
TIMESTAMP=$(date +%s)
$DOCKER_CMD exec breakpilot-pwa-postgres psql -U breakpilot -d gitea -c \
"INSERT INTO oauth2_application (uid, name, client_id, confidential_client, redirect_uris, created_unix, updated_unix) VALUES (1, 'Woodpecker CI', '$NEW_CLIENT_ID', true, '[\"http://macmini:8090/authorize\"]', $TIMESTAMP, $TIMESTAMP);" >/dev/null
# Set the hashed secret (passed through base64 to avoid shell escaping issues)
DECODED_HASH=$(echo "$HASHED_SECRET_B64" | base64 -d)
$DOCKER_CMD exec breakpilot-pwa-postgres psql -U breakpilot -d gitea -c \
"UPDATE oauth2_application SET client_secret = '$DECODED_HASH' WHERE client_id = '$NEW_CLIENT_ID';" >/dev/null
# Store in Vault
$DOCKER_CMD exec -e VAULT_TOKEN="$VAULT_TOKEN" breakpilot-pwa-vault vault kv put "$VAULT_SECRET_PATH" \
gitea_client_id="$NEW_CLIENT_ID" \
gitea_client_secret="$NEW_CLIENT_SECRET" >/dev/null
GITEA_CLIENT_ID="$NEW_CLIENT_ID"
GITEA_CLIENT_SECRET="$NEW_CLIENT_SECRET"
log_info "New credentials created and stored in Vault"
}
# Main logic
main() {
check_docker
if [ "$1" == "--regenerate" ]; then
regenerate_credentials
else
get_vault_credentials || regenerate_credentials
fi
update_env_file
log_info "Credentials synchronized!"
echo ""
echo "Next steps:"
echo "1. Sync .env to macmini: rsync -av .env macmini:~/Projekte/breakpilot-pwa/"
echo "2. Restart Woodpecker: docker compose restart woodpecker-server"
echo "3. Test login: http://macmini:8090"
}
main "$@"
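The bcrypt hash handled above is full of `$`, `/`, and `.` characters that are awkward to pass through nested shell and SQL quoting, which is why the script round-trips it through base64. That trick can be verified in isolation (the hash here is a made-up sample string, not a real secret):

```shell
#!/bin/bash
# Base64 round-trip for strings full of shell-special characters, as used
# when passing the hashed OAuth secret into psql.
HASH='$2a$10$ab/cd.ef$ghi'   # sample hash-like string with $ . / characters
ENCODED=$(printf '%s' "$HASH" | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
[ "$DECODED" = "$HASH" ] && echo "round-trip intact"
```

`printf '%s'` avoids the trailing newline that `echo` would add, so the decoded value is byte-identical to the original.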

scripts/test-environment-setup.sh Executable file

@@ -0,0 +1,239 @@
#!/bin/bash
# ============================================
# BreakPilot Environment Setup Tests
# ============================================
# Tests for environment configuration and scripts
#
# Usage: ./scripts/test-environment-setup.sh
# ============================================
# Note: no `set -e` here -- failing checks must be counted by test_result, not abort the run
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(dirname "$SCRIPT_DIR")"
# Colors
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Test counters
TESTS_RUN=0
TESTS_PASSED=0
TESTS_FAILED=0
# Test result function
test_result() {
local name=$1
local result=$2
TESTS_RUN=$((TESTS_RUN + 1))
if [ "$result" -eq 0 ]; then
echo -e "${GREEN}${NC} $name"
TESTS_PASSED=$((TESTS_PASSED + 1))
else
echo -e "${RED}${NC} $name"
TESTS_FAILED=$((TESTS_FAILED + 1))
fi
}
cd "$ROOT_DIR"
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE} Environment Setup Tests${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
# ==========================================
# File Existence Tests
# ==========================================
echo -e "${YELLOW}Testing: File Existence${NC}"
# Test .gitignore
test -f ".gitignore"
test_result ".gitignore exists" $?
# Test .env files
test -f ".env.dev"
test_result ".env.dev exists" $?
test -f ".env.staging"
test_result ".env.staging exists" $?
test -f ".env.example"
test_result ".env.example exists" $?
# Test Docker Compose files
test -f "docker-compose.yml"
test_result "docker-compose.yml exists" $?
test -f "docker-compose.override.yml"
test_result "docker-compose.override.yml exists" $?
test -f "docker-compose.staging.yml"
test_result "docker-compose.staging.yml exists" $?
# Test Scripts
test -x "scripts/env-switch.sh"
test_result "scripts/env-switch.sh is executable" $?
test -x "scripts/start.sh"
test_result "scripts/start.sh is executable" $?
test -x "scripts/stop.sh"
test_result "scripts/stop.sh is executable" $?
test -x "scripts/promote.sh"
test_result "scripts/promote.sh is executable" $?
test -x "scripts/status.sh"
test_result "scripts/status.sh is executable" $?
# Test Documentation
test -f "docs/architecture/environments.md"
test_result "docs/architecture/environments.md exists" $?
test -f "docs/guides/environment-setup.md"
test_result "docs/guides/environment-setup.md exists" $?
echo ""
# ==========================================
# Environment File Content Tests
# ==========================================
echo -e "${YELLOW}Testing: Environment File Content${NC}"
# Test .env.dev content
grep -q "ENVIRONMENT=development" .env.dev 2>/dev/null
test_result ".env.dev contains ENVIRONMENT=development" $?
grep -q "COMPOSE_PROJECT_NAME=breakpilot-dev" .env.dev 2>/dev/null
test_result ".env.dev contains correct COMPOSE_PROJECT_NAME" $?
grep -q "POSTGRES_DB=breakpilot_dev" .env.dev 2>/dev/null
test_result ".env.dev contains correct POSTGRES_DB" $?
# Test .env.staging content
grep -q "ENVIRONMENT=staging" .env.staging 2>/dev/null
test_result ".env.staging contains ENVIRONMENT=staging" $?
grep -q "COMPOSE_PROJECT_NAME=breakpilot-staging" .env.staging 2>/dev/null
test_result ".env.staging contains correct COMPOSE_PROJECT_NAME" $?
grep -q "POSTGRES_DB=breakpilot_staging" .env.staging 2>/dev/null
test_result ".env.staging contains correct POSTGRES_DB" $?
grep -q "DEBUG=false" .env.staging 2>/dev/null
test_result ".env.staging has DEBUG=false" $?
echo ""
# ==========================================
# Docker Compose Syntax Tests
# ==========================================
echo -e "${YELLOW}Testing: Docker Compose Syntax${NC}"
# Test docker-compose.override.yml syntax
docker compose -f docker-compose.yml -f docker-compose.override.yml config > /dev/null 2>&1
test_result "docker-compose.override.yml syntax valid" $?
# Test docker-compose.staging.yml syntax
docker compose -f docker-compose.yml -f docker-compose.staging.yml config > /dev/null 2>&1
test_result "docker-compose.staging.yml syntax valid" $?
echo ""
# ==========================================
# Git Repository Tests
# ==========================================
echo -e "${YELLOW}Testing: Git Repository${NC}"
# Test git repo exists
test -d ".git"
test_result "Git repository initialized" $?
# Test branches exist
git branch --list develop | grep -q develop
test_result "develop branch exists" $?
git branch --list staging | grep -q staging
test_result "staging branch exists" $?
git branch --list main | grep -q main
test_result "main branch exists" $?
# Test current branch
CURRENT_BRANCH=$(git branch --show-current)
[ "$CURRENT_BRANCH" = "develop" ]
test_result "Current branch is develop" $?
echo ""
# ==========================================
# .gitignore Tests
# ==========================================
echo -e "${YELLOW}Testing: .gitignore Content${NC}"
# Test that .env is ignored but .env.dev is not
grep -q "^\.env$" .gitignore 2>/dev/null
test_result ".gitignore excludes .env" $?
grep -q "!\.env\.dev" .gitignore 2>/dev/null
test_result ".gitignore includes .env.dev" $?
grep -q "!\.env\.staging" .gitignore 2>/dev/null
test_result ".gitignore includes .env.staging" $?
grep -q "__pycache__" .gitignore 2>/dev/null
test_result ".gitignore excludes __pycache__" $?
grep -q "node_modules" .gitignore 2>/dev/null
test_result ".gitignore excludes node_modules" $?
echo ""
# ==========================================
# Script Functionality Tests
# ==========================================
echo -e "${YELLOW}Testing: Script Functionality${NC}"
# Test env-switch.sh creates .env
./scripts/env-switch.sh dev > /dev/null 2>&1
test -f ".env"
test_result "env-switch.sh creates .env file" $?
# Verify .env content matches .env.dev
grep -q "ENVIRONMENT=development" .env 2>/dev/null
test_result "env-switch.sh sets correct environment" $?
# Test switching to staging
./scripts/env-switch.sh staging > /dev/null 2>&1
grep -q "ENVIRONMENT=staging" .env 2>/dev/null
test_result "env-switch.sh can switch to staging" $?
# Switch back to dev
./scripts/env-switch.sh dev > /dev/null 2>&1
echo ""
# ==========================================
# Summary
# ==========================================
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE} Test Summary${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "Tests run: $TESTS_RUN"
echo -e "Tests passed: ${GREEN}$TESTS_PASSED${NC}"
echo -e "Tests failed: ${RED}$TESTS_FAILED${NC}"
echo ""
if [ $TESTS_FAILED -eq 0 ]; then
echo -e "${GREEN}All tests passed!${NC}"
exit 0
else
echo -e "${RED}Some tests failed.${NC}"
exit 1
fi
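The `test_result` harness can be exercised on its own; note the pattern only works when the shell is not running with `set -e`, since a failing probe command must be counted rather than abort the script:

```shell
#!/bin/bash
# Minimal version of the test_result harness: run a check, pass its exit
# status as $2, and tally the result.
TESTS_RUN=0; TESTS_PASSED=0; TESTS_FAILED=0
test_result() {
    TESTS_RUN=$((TESTS_RUN + 1))
    if [ "$2" -eq 0 ]; then
        TESTS_PASSED=$((TESTS_PASSED + 1))
    else
        TESTS_FAILED=$((TESTS_FAILED + 1))
    fi
}
true;  test_result "passing check" $?
false; test_result "failing check" $?
echo "run=$TESTS_RUN passed=$TESTS_PASSED failed=$TESTS_FAILED"
```

The final exit code of the real script (`exit 1` if `TESTS_FAILED` is nonzero) is what lets CI treat the whole suite as a single pass/fail gate.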