CI/CD Pipeline & Test System - Developer Guide

Last updated: 2026-02-04 | Status: Production | Maintainer: DevOps Team


Table of Contents

  1. Architecture Overview
  2. Woodpecker CI Pipeline
  3. Test Registry Backend
  4. Database Schema
  5. Backlog System
  6. Frontend Dashboard
  7. Service Overview
  8. Troubleshooting
  9. API Reference

1. Architecture Overview

System Components

┌─────────────────────────────────────────────────────────────────────────────┐
│                           DEVELOPER WORKFLOW                                 │
│                                                                              │
│   git push ──▶ Gitea (macmini:3003) ──▶ Webhook ──▶ Woodpecker CI           │
└─────────────────────────────────────────────────────────────────────────────┘
                                                            │
                                                            ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                         WOODPECKER CI PIPELINE                               │
│                                                                              │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  ┌──────────────────┐ │
│  │  Go Tests    │  │ Python Tests │  │  Node Tests  │  │ Report Results   │ │
│  │  (consent,   │  │  (backend,   │  │  (h5p)       │  │ (curl → Backend) │ │
│  │   billing,   │  │   voice,     │  │              │  │                  │ │
│  │   school)    │  │   klausur)   │  │              │  │                  │ │
│  └──────────────┘  └──────────────┘  └──────────────┘  └──────────────────┘ │
│                                                            │                 │
└────────────────────────────────────────────────────────────│─────────────────┘
                                                             │
                                                             ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                           BACKEND (FastAPI)                                  │
│                                                                              │
│  POST /api/tests/ci-result                                                   │
│       │                                                                      │
│       ├──▶ TestRunDB (test runs)                                            │
│       ├──▶ TestResultDB (individual tests)                                  │
│       ├──▶ TestServiceStatsDB (aggregated stats)                            │
│       └──▶ FailedTestBacklogDB (backlog on failures)                        │
│                                                                              │
└─────────────────────────────────────────────────────────────────────────────┘
                                                             │
                                                             ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                         FRONTEND (Next.js)                                   │
│                                                                              │
│  Test Dashboard: https://macmini:3002/infrastructure/tests                   │
│  CI/CD Dashboard: https://macmini:3002/infrastructure/ci-cd                  │
│                                                                              │
└─────────────────────────────────────────────────────────────────────────────┘

Technology Stack

| Component | Technology | Port | Description |
|---|---|---|---|
| CI/CD server | Woodpecker CI v3 | 4431 (HTTPS) | Pipeline orchestration |
| Git server | Gitea | 3003 | Repository hosting |
| Backend API | FastAPI (Python) | 8000 | Test registry, backlog management |
| Database | PostgreSQL | 5432 | Persistent storage |
| Frontend | Next.js | 3002 | Admin dashboard |
| Platform | ARM64 (Apple Silicon) | - | Mac Mini M2 |

2. Woodpecker CI Pipeline

Configuration File

Path: .woodpecker/main.yml

Pipeline Stages

# Stage 1: Lint (PRs only)
go-lint          # golangci-lint for Go services
python-lint      # ruff/black for Python

# Stage 2: Unit tests
test-go-consent      # consent-service (Go)
test-go-billing      # billing-service (Go)
test-go-school       # school-service (Go)
test-python-backend  # backend (Python/pytest)
test-python-voice    # voice-service incl. BQAS (Python/pytest)
test-python-klausur  # klausur-service (Python/pytest)
test-nodejs-h5p      # h5p-service (Node.js/Jest)

# Stage 3: Report
report-test-results  # sends results to the backend API

# Stage 4: Build (tags/manual only)
build-consent-service
build-backend
build-voice-service

# Stage 5: Deploy (manual only)
deploy-production

Test Step Structure (Go Example)

test-go-consent:
  image: golang:1.23-alpine
  environment:
    CGO_ENABLED: "0"
  commands:
    - |
      set -euo pipefail
      apk add --no-cache jq bash
      mkdir -p .ci-results

      # Directory check (in case the service does not exist)
      if [ ! -d "consent-service" ]; then
        echo '{"service":"consent-service","framework":"go","total":0,"passed":0,"failed":0,"skipped":0,"coverage":0}' > .ci-results/results-consent.json
        echo "WARNING: consent-service directory not found"
        exit 0
      fi

      cd consent-service
      set +e
      go test -v -json -coverprofile=coverage.out ./... 2>&1 | tee ../.ci-results/test-consent.json
      TEST_EXIT=$?
      set -e

      # jq for correct JSON parsing (count only test-level lines, not package lines)
      TOTAL=$(jq -s '[.[] | select(.Action=="run" and .Test != null)] | length' ../.ci-results/test-consent.json || echo 0)
      PASSED=$(jq -s '[.[] | select(.Action=="pass" and .Test != null)] | length' ../.ci-results/test-consent.json || echo 0)
      FAILED=$(jq -s '[.[] | select(.Action=="fail" and .Test != null)] | length' ../.ci-results/test-consent.json || echo 0)
      SKIPPED=$(jq -s '[.[] | select(.Action=="skip" and .Test != null)] | length' ../.ci-results/test-consent.json || echo 0)

      COVERAGE=$(go tool cover -func=coverage.out 2>/dev/null | tail -1 | awk '{print $3}' | tr -d '%' || echo "0")
      [ -z "$COVERAGE" ] && COVERAGE=0

      echo "{\"service\":\"consent-service\",\"framework\":\"go\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":$COVERAGE}" > ../.ci-results/results-consent.json
      cat ../.ci-results/results-consent.json

      if [ "$FAILED" -gt "0" ] || [ "$TEST_EXIT" -ne "0" ]; then exit 1; fi

Key changes compared to earlier versions:

  • set -euo pipefail for strict error handling
  • jq instead of grep -c for correct JSON parsing (prevents over-counting when a test emits multiple actions)
  • Directory check before the cd command
  • Separate checks of the TEST_EXIT code and the FAILED count

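The jq filters above count only events that carry a non-null Test field. For local debugging, the same counting logic can be sketched in Python (an illustrative helper, not part of the pipeline):

```python
import json

def count_go_test_events(path):
    """Count test-level events in `go test -json` output.

    Mirrors the jq filters above: only events whose Test field is
    present are counted, so package-level events are excluded."""
    counts = {"run": 0, "pass": 0, "fail": 0, "skip": 0}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # go test may interleave non-JSON build output
            if event.get("Test") and event.get("Action") in counts:
                counts[event["Action"]] += 1
    return counts
```

Running this on .ci-results/test-consent.json should reproduce the totals the pipeline writes into results-consent.json.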
Test Step Structure (Python Example)

test-python-backend:
  image: python:3.12-slim
  commands:
    - |
      mkdir -p .ci-results
      cd backend
      pip install --quiet -r requirements.txt
      pip install --quiet pytest pytest-cov pytest-asyncio pytest-json-report

      # Run the tests with a JSON report
      pytest tests/ -v --tb=short \
        --cov=. --cov-report=term-missing \
        --json-report --json-report-file=../.ci-results/test-backend.json || true

      # Extract statistics from the JSON report
      if [ -f ../.ci-results/test-backend.json ]; then
        TOTAL=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('total',0))")
        PASSED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('passed',0))")
        FAILED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('failed',0))")
        SKIPPED=$(python3 -c "import json; d=json.load(open('../.ci-results/test-backend.json')); print(d.get('summary',{}).get('skipped',0))")
      else
        TOTAL=0; PASSED=0; FAILED=0; SKIPPED=0
      fi

      # Save the JSON result
      echo "{\"service\":\"backend\",\"framework\":\"pytest\",\"total\":$TOTAL,\"passed\":$PASSED,\"failed\":$FAILED,\"skipped\":$SKIPPED,\"coverage\":0}" \
        > ../.ci-results/results-backend.json

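The four separate python3 -c calls above could be collapsed into a single helper. A sketch of that idea (the function name is hypothetical, not part of the pipeline):

```python
import json

def summarize_pytest_report(path, service, framework="pytest"):
    """Collapse a pytest-json-report file into the flat result object
    the pipeline writes to .ci-results/results-<service>.json.

    A missing or unreadable report yields all-zero counts, matching
    the else branch of the shell step above."""
    try:
        with open(path) as f:
            summary = json.load(f).get("summary", {})
    except (OSError, ValueError):
        summary = {}
    return {
        "service": service,
        "framework": framework,
        "total": summary.get("total", 0),
        "passed": summary.get("passed", 0),
        "failed": summary.get("failed", 0),
        "skipped": summary.get("skipped", 0),
        "coverage": 0,  # coverage is not extracted in the Python step
    }
```
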
Report Step

report-test-results:
  image: curlimages/curl:8.10.1
  commands:
    - |
      set -uo pipefail
      echo "=== Sending test results to the dashboard ==="
      echo "Pipeline Status: ${CI_PIPELINE_STATUS:-unknown}"
      ls -la .ci-results/ || echo "directory not found"

      PIPELINE_STATUS="${CI_PIPELINE_STATUS:-unknown}"

      # Loop over all result files
      for f in .ci-results/results-*.json; do
        [ -f "$f" ] || continue
        echo "Sending: $f"
        curl -f -sS -X POST "http://backend:8000/api/tests/ci-result" \
          -H "Content-Type: application/json" \
          -d "{
            \"pipeline_id\": \"${CI_PIPELINE_NUMBER}\",
            \"commit\": \"${CI_COMMIT_SHA}\",
            \"branch\": \"${CI_COMMIT_BRANCH}\",
            \"status\": \"${PIPELINE_STATUS}\",
            \"test_results\": $(cat "$f")
          }" || echo "WARNING: could not send $f"
      done

      echo "=== Test results sent ==="
  when:
    status: [success, failure]  # always runs, even when tests fail
  depends_on:
    - test-go-consent
    - test-go-billing
    - test-go-school
    - test-python-backend
    - test-python-voice
    - test-python-klausur
    - test-nodejs-h5p
    - integration-tests

Key improvements:

  • Complete depends_on: all test steps, including integration-tests, are listed
  • A loop instead of if blocks: reduces code duplication
  • Dynamic status: ${CI_PIPELINE_STATUS} instead of a hardcoded "success"
  • curl with the -f flag: surfaces HTTP errors
  • Pinned image version: curl:8.10.1 instead of latest

2.5 Integration Tests

After the unit tests, a full integration test step runs, exercising all services in a Docker Compose environment.

Docker Compose Services

| Service | Container name | Port (external) | Port (internal) |
|---|---|---|---|
| PostgreSQL | postgres-test | 55432 | 5432 |
| Valkey | valkey-test | 56379 | 6379 |
| Consent Service | consent-service-test | 58081 | 8081 |
| Backend | backend-test | 58000 | 8000 |
| Mailpit | mailpit-test | 58025/51025 | 8025/1025 |

Pipeline Step Configuration

integration-tests:
  image: docker:27-cli
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  commands:
    - |
      # 1. Start the Docker Compose environment
      docker compose -f docker-compose.test.yml up -d

      # 2. Wait for services to become healthy (timeout: 120s per service)
      for service in postgres-test valkey-test consent-service-test backend-test; do
        echo "Waiting for $service..."
        timeout=120
        elapsed=0
        while [ $elapsed -lt $timeout ]; do
          status=$(docker compose -f docker-compose.test.yml ps $service --format json | jq -r '.[0].Health')
          if [ "$status" = "healthy" ]; then break; fi
          sleep 5
          elapsed=$((elapsed + 5))
        done
      done

      # 3. Run the integration tests inside the backend container
      docker compose -f docker-compose.test.yml exec -T backend-test \
        pytest tests/test_integration/ -v --tb=short \
        --json-report --json-report-file=/tmp/integration-results.json

      # 4. Cleanup
      docker compose -f docker-compose.test.yml down -v
  when:
    - event: [push, pull_request]
      branch: [main, develop]
  depends_on:
    - test-python-backend

Environment Variables in Integration Mode

| Variable | Value |
|---|---|
| SKIP_INTEGRATION_TESTS | false |
| DATABASE_URL | postgresql://breakpilot:breakpilot_test@postgres-test:5432/breakpilot_test |
| CONSENT_SERVICE_URL | http://consent-service-test:8081 |
| VALKEY_URL / REDIS_URL | redis://valkey-test:6379 |
| SMTP_HOST / SMTP_PORT | mailpit-test / 1025 |

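Inside the test suite, the SKIP_INTEGRATION_TESTS switch can be read with a small helper; the exact wiring in the backend may differ, so treat this as a sketch:

```python
import os

def integration_enabled(env=None):
    """True only when SKIP_INTEGRATION_TESTS is explicitly set to
    "false", mirroring the switch from the table above."""
    env = os.environ if env is None else env
    return env.get("SKIP_INTEGRATION_TESTS", "true").strip().lower() == "false"
```

This could back a marker such as `pytest.mark.skipif(not integration_enabled(), reason="integration environment not running")` on the tests in tests/test_integration/.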
Running the Integration Tests Locally

# 1. Start the test environment
docker compose -f docker-compose.test.yml up -d

# 2. Wait until services are healthy
docker compose -f docker-compose.test.yml ps

# 3. Run the tests
cd backend
export SKIP_INTEGRATION_TESTS=false
pytest tests/test_integration/ -v

# 4. Clean up
docker compose -f docker-compose.test.yml down -v

Troubleshooting Integration Tests

| Problem | Solution |
|---|---|
| Service not healthy | docker compose -f docker-compose.test.yml logs <service> --tail=100 |
| Port already in use | lsof -i :<port> and stop the existing containers |
| Tests cannot reach the services | make sure SKIP_INTEGRATION_TESTS=false is set |
| Timeout while waiting | adjust the health check intervals in docker-compose.test.yml |

Further details: see the Integration Test Environment documentation


CI Result JSON Format

{
  "pipeline_id": "27",
  "commit": "abc12345",
  "branch": "main",
  "status": "success",
  "test_results": {
    "service": "consent-service",
    "framework": "go",
    "total": 57,
    "passed": 57,
    "failed": 0,
    "skipped": 0,
    "coverage": 75.5
  }
}

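On the backend side this payload is parsed into a CIResultRequest model (see section 3). A minimal dataclass equivalent, with field names taken directly from the JSON above (a sketch, not the actual model):

```python
from dataclasses import dataclass

@dataclass
class TestResults:
    service: str
    framework: str
    total: int
    passed: int
    failed: int
    skipped: int
    coverage: float

@dataclass
class CIResult:
    pipeline_id: str
    commit: str
    branch: str
    status: str
    test_results: TestResults

    @classmethod
    def from_dict(cls, d):
        # Raises KeyError/TypeError on malformed payloads, which is
        # the desired behavior for a validation sketch.
        return cls(
            pipeline_id=d["pipeline_id"],
            commit=d["commit"],
            branch=d["branch"],
            status=d["status"],
            test_results=TestResults(**d["test_results"]),
        )
```
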
3. Test Registry Backend

Files

| File | Description |
|---|---|
| backend/api/tests/registry.py | Main API router (~2200 lines) |
| backend/api/tests/models.py | Pydantic/dataclass models |
| backend/api/tests/db_models.py | SQLAlchemy DB models |
| backend/api/tests/repository.py | Database repository |
| backend/api/tests/database.py | DB session management |

Data Flow for a CI Result

@router.post("/ci-result")
async def receive_ci_result(result: CIResultRequest, background_tasks: BackgroundTasks):
    """
    1. Extract the service data from test_results
    2. Create a TestRunDB entry
    3. Update TestServiceStatsDB
    4. Update the in-memory cache (_persisted_results)
    5. On failures: create a FailedTestBacklogDB entry (background task)
    6. On zero failures: close open backlog entries (background task)
    """

In-Memory Cache

# Loaded from PostgreSQL at startup
_persisted_results: Dict[str, Dict] = {}

# Structure per service:
{
    "consent-service": {
        "total": 57,
        "passed": 57,
        "failed": 0,
        "last_run": "2026-02-02T18:46:50",
        "status": "passed",
        "failed_test_ids": []
    }
}

# Updated immediately on every CI result for real-time updates

4. Database Schema

TestRunDB (test_runs)

Stores each test run.

CREATE TABLE test_runs (
    id SERIAL PRIMARY KEY,
    run_id VARCHAR(50) UNIQUE NOT NULL,      -- e.g. "ci-27-consent-service"
    service VARCHAR(100) NOT NULL,
    framework VARCHAR(50) NOT NULL,           -- go, pytest, jest
    started_at TIMESTAMP NOT NULL,
    completed_at TIMESTAMP,
    status VARCHAR(20) NOT NULL,              -- queued, running, completed, failed
    total_tests INTEGER DEFAULT 0,
    passed_tests INTEGER DEFAULT 0,
    failed_tests INTEGER DEFAULT 0,
    skipped_tests INTEGER DEFAULT 0,
    duration_seconds FLOAT DEFAULT 0,
    git_commit VARCHAR(40),
    git_branch VARCHAR(100),
    triggered_by VARCHAR(50),                 -- manual, ci, schedule
    output TEXT,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_test_runs_service ON test_runs(service);
CREATE INDEX idx_test_runs_started_at ON test_runs(started_at);

TestResultDB (test_results)

Stores individual test results.

CREATE TABLE test_results (
    id SERIAL PRIMARY KEY,
    run_id VARCHAR(50) REFERENCES test_runs(run_id) ON DELETE CASCADE,
    test_name VARCHAR(500) NOT NULL,
    test_file VARCHAR(500),
    line_number INTEGER,
    status VARCHAR(20) NOT NULL,              -- passed, failed, skipped, error
    duration_ms FLOAT,
    error_message TEXT,
    error_type VARCHAR(100),
    output TEXT,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_test_results_run_id ON test_results(run_id);
CREATE INDEX idx_test_results_status ON test_results(status);

FailedTestBacklogDB (failed_tests_backlog)

Persistent backlog of failed tests.

CREATE TABLE failed_tests_backlog (
    id SERIAL PRIMARY KEY,
    test_name VARCHAR(500) NOT NULL,
    test_file VARCHAR(500),
    service VARCHAR(100) NOT NULL,
    framework VARCHAR(50),
    error_message TEXT,
    error_type VARCHAR(100),
    first_failed_at TIMESTAMP NOT NULL,
    last_failed_at TIMESTAMP NOT NULL,
    failure_count INTEGER DEFAULT 1,
    status VARCHAR(30) DEFAULT 'open',        -- open, in_progress, fixed, wont_fix, flaky
    priority VARCHAR(20) DEFAULT 'medium',    -- critical, high, medium, low
    assigned_to VARCHAR(100),
    fix_suggestion TEXT,
    notes TEXT,
    -- auto-close fields
    resolved_at TIMESTAMP,
    resolution_commit VARCHAR(50),
    resolution_notes TEXT,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW(),

    CONSTRAINT uq_backlog_test_service UNIQUE (test_name, service)
);

CREATE INDEX idx_backlog_service ON failed_tests_backlog(service);
CREATE INDEX idx_backlog_status ON failed_tests_backlog(status);
CREATE INDEX idx_backlog_priority ON failed_tests_backlog(priority);

TestServiceStatsDB (test_service_stats)

Aggregated per-service statistics for fast queries.

CREATE TABLE test_service_stats (
    id SERIAL PRIMARY KEY,
    service VARCHAR(100) UNIQUE NOT NULL,
    total_tests INTEGER DEFAULT 0,
    passed_tests INTEGER DEFAULT 0,
    failed_tests INTEGER DEFAULT 0,
    skipped_tests INTEGER DEFAULT 0,
    pass_rate FLOAT DEFAULT 0.0,
    last_run_id VARCHAR(50),
    last_run_at TIMESTAMP,
    last_status VARCHAR(20),
    updated_at TIMESTAMP DEFAULT NOW()
);

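The pass_rate column is derivable from the counters. Assuming the convention visible in the API responses (a percentage such as 100.0; rounding to one decimal place is an assumption), the computation is simply:

```python
def pass_rate(passed_tests, total_tests):
    """Pass rate as a percentage, as stored in test_service_stats.pass_rate.
    Zero total tests yields 0.0 instead of a division error."""
    if total_tests == 0:
        return 0.0
    return round(100.0 * passed_tests / total_tests, 1)
```
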
TestFixHistoryDB (test_fixes_history)

History of all fix attempts.

CREATE TABLE test_fixes_history (
    id SERIAL PRIMARY KEY,
    backlog_id INTEGER REFERENCES failed_tests_backlog(id) ON DELETE CASCADE,
    fix_type VARCHAR(50),                     -- manual, auto_claude, auto_script
    fix_description TEXT,
    commit_hash VARCHAR(40),
    success BOOLEAN,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_fixes_backlog_id ON test_fixes_history(backlog_id);

5. Backlog System

Status Workflow

                    ┌─────────────┐
                    │             │
     Test fails ───▶│    open     │◀── test fails again
                    │             │
                    └──────┬──────┘
                           │
                           │ developer starts a fix
                           ▼
                    ┌─────────────┐
                    │             │
                    │ in_progress │
                    │             │
                    └──────┬──────┘
                           │
           ┌───────────────┼───────────────┐
           │               │               │
           ▼               ▼               ▼
    ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
    │             │ │             │ │             │
    │   fixed     │ │  wont_fix   │ │    flaky    │
    │             │ │             │ │             │
    └─────────────┘ └─────────────┘ └─────────────┘
           ▲
           │
           │ automatically when all
           │ tests pass
    ┌──────┴──────┐
    │  resolved   │
    │ (auto-close)│
    └─────────────┘

Automatic Backlog Creation

When a CI result reports failed > 0:

async def _create_backlog_entry(service_name, framework, failed_count, pipeline_id, commit, branch):
    """
    1. Check whether an open entry for the service already exists
    2. If so: increment failure_count and update last_failed_at
    3. If not: create a new entry with status='open'
    """
    with get_db_session() as db:
        existing = db.query(FailedTestBacklogDB).filter(
            FailedTestBacklogDB.service == service_name,
            FailedTestBacklogDB.status == "open"
        ).first()

        if existing:
            existing.failure_count += failed_count
            existing.last_failed_at = datetime.utcnow()
        else:
            entry = FailedTestBacklogDB(
                test_name=f"Pipeline {pipeline_id} - {failed_count} Tests",
                service=service_name,
                framework=framework,
                status="open",
                priority="medium",
                first_failed_at=datetime.utcnow(),
                last_failed_at=datetime.utcnow(),
                failure_count=failed_count,
                fix_suggestion="Analyze the stack trace for details."
            )
            db.add(entry)
        db.commit()

Automatic Closing (Auto-Close)

When a CI result reports failed == 0:

async def _close_backlog_entry(service_name, pipeline_id, commit):
    """
    Closes all open backlog entries for a service
    once all of its tests have passed.
    """
    with get_db_session() as db:
        open_entries = db.query(FailedTestBacklogDB).filter(
            FailedTestBacklogDB.service == service_name,
            FailedTestBacklogDB.status == "open"
        ).all()

        for entry in open_entries:
            entry.status = "resolved"
            entry.resolved_at = datetime.utcnow()
            entry.resolution_commit = commit[:8] if commit else None
            entry.resolution_notes = f"Automatically closed - all tests in pipeline {pipeline_id} passed"

        db.commit()

Priority Rules

| Priority | Criterion |
|---|---|
| critical | > 10 failures or service-critical |
| high | 5-10 failures or frequent regressions |
| medium | 1-4 failures (default) |
| low | flaky tests or edge cases |

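These rules can be expressed as a small classifier. The service_critical and flaky flags below are illustrative assumptions, since the table does not specify how those conditions are detected:

```python
def suggest_priority(failed_count, service_critical=False, flaky=False):
    """Map a failure count to a backlog priority, following the
    thresholds in the table above."""
    if flaky:
        return "low"
    if service_critical or failed_count > 10:
        return "critical"
    if failed_count >= 5:
        return "high"
    return "medium"
```
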
6. Frontend Dashboard

URLs

| Dashboard | URL | Description |
|---|---|---|
| Test dashboard | https://macmini:3002/infrastructure/tests | Overview of all services |
| CI/CD dashboard | https://macmini:3002/infrastructure/ci-cd | Pipeline status |
| Backlog | https://macmini:3002/infrastructure/tests (tab) | Failed tests |

Components

Path: admin-v2/app/(admin)/infrastructure/tests/page.tsx

// Main components
<TestDashboard>
  <ServiceGrid />        // tiles for each service
  <TestRunsTable />      // recent test runs
  <BacklogPanel />       // open backlog entries
  <CoverageChart />      // coverage visualization
</TestDashboard>

API Calls

// Load the test registry
const registry = await fetch('/api/tests/registry').then(r => r.json());

// Load the backlog
const backlog = await fetch('/api/tests/backlog').then(r => r.json());

// Load test runs
const runs = await fetch('/api/tests/runs').then(r => r.json());

// Load coverage
const coverage = await fetch('/api/tests/coverage').then(r => r.json());

7. Service Overview

Registered Services

| Service | Language | Framework | Port | Tests | Status |
|---|---|---|---|---|---|
| consent-service | Go | go_test | 8081 | ~60 | active |
| billing-service | Go | go_test | 8082 | ~20 | active |
| school-service | Go | go_test | 8084 | ~15 | active |
| backend | Python | pytest | 8000 | ~200 | active |
| voice-service | Python | pytest | 8091 | ~30 | active (incl. BQAS) |
| klausur-service | Python | pytest | 8086 | ~150 | active |
| h5p-service | Node.js | jest | - | ~25 | active |
| edu-search-service | Go | go_test | 8088 | 0 | no tests |
| ai-compliance-sdk | Go | go_test | - | ~150 | not in pipeline |
| website | TypeScript | jest | 3000 | ~0 | not in pipeline |
| bqas-golden | Python | pytest | 8091 | ~10 | in voice-service |
| bqas-rag | Python | pytest | 8091 | ~10 | in voice-service |

Service-Definition (models.py)

SERVICE_DEFINITIONS = [
    {
        "service": "consent-service",
        "display_name": "Consent Service",
        "port": 8081,
        "language": "go",
        "base_path": "/consent-service",
        "test_pattern": "*_test.go",
        "framework": TestFramework.GO_TEST,
    },
    # ... further services
]

Tests per Service

consent-service/
├── internal/
│   ├── handlers/
│   │   └── handlers_test.go      # HTTP Handler Tests
│   ├── services/
│   │   ├── auth_service_test.go  # Auth Tests
│   │   └── consent_service_test.go
│   └── middleware/
│       └── middleware_test.go
└── cmd/
    └── server_test.go

backend (Python)

backend/tests/
├── test_consent_client.py
├── test_gdpr_api.py
├── test_documents.py
├── test_worksheets_api.py
├── test_auth.py
└── ...

voice-service (Python)

voice-service/tests/
├── test_encryption.py
├── test_intent_router.py
├── test_sessions.py
├── test_tasks.py
└── bqas/
    ├── test_golden.py        # BQAS Golden Suite
    └── test_rag.py           # BQAS RAG Tests

klausur-service (Python)

klausur-service/backend/tests/
├── test_advanced_rag.py
├── test_byoeh.py
├── test_mail_service.py
├── test_ocr_labeling.py
├── test_rag_admin.py
├── test_rbac.py
├── test_vocab_worksheet.py
└── test_worksheet_editor.py

h5p-service (Node.js)

h5p-service/tests/
├── server.test.js
├── setup.js
└── README.md

8. Troubleshooting

Problem: Tests show 0/0

Cause: The pipeline tests produce no results.

Solution:

  1. Check the Woodpecker agent logs:
    docker logs breakpilot-pwa-woodpecker-agent --tail=100

  2. Check that the service compiles:
    cd billing-service && go build ./...

  3. Check that the tests pass locally:
    cd billing-service && go test -v ./...


Problem: Backlog endpoint returns a 500 error

Cause: Missing DB columns.

Solution:

ALTER TABLE failed_tests_backlog ADD COLUMN resolved_at TIMESTAMP;
ALTER TABLE failed_tests_backlog ADD COLUMN resolution_commit VARCHAR(50);
ALTER TABLE failed_tests_backlog ADD COLUMN resolution_notes TEXT;

Problem: Frontend shows stale data

Cause: The in-memory cache was not refreshed.

Solution:

  1. Restart the backend:
    docker compose restart backend

  2. Or trigger a manual cache refresh via the API (if implemented)

Problem: Pipeline does not start

Cause: The webhook from Gitea was not received.

Solution:

  1. Check the Gitea webhook configuration
  2. Check the Woodpecker server logs:
    docker logs breakpilot-pwa-woodpecker-server --tail=50

  3. Trigger the pipeline manually via the Woodpecker UI

Problem: OOM kill in the pipeline

Cause: Too many tests running in parallel.

Solution:

  1. Run tests sequentially instead of in parallel
  2. Increase the memory limits in docker-compose
  3. Split large test suites

9. API Reference

Test Registry Endpoints

GET /api/tests/registry

Returns all registered services with their test statistics.

Response:

{
  "services": [
    {
      "service": "consent-service",
      "display_name": "Consent Service",
      "port": 8081,
      "language": "go",
      "total_tests": 57,
      "passed_tests": 57,
      "failed_tests": 0,
      "skipped_tests": 0,
      "pass_rate": 100.0,
      "coverage_percent": 75.5,
      "last_run": "2026-02-02T18:46:50",
      "status": "passed"
    }
  ],
  "stats": {
    "total_tests": 500,
    "total_passed": 480,
    "total_failed": 20,
    "services_count": 12,
    "overall_pass_rate": 96.0
  }
}

POST /api/tests/ci-result

Receives test results from the CI/CD pipeline.

Request:

{
  "pipeline_id": "27",
  "commit": "abc12345def67890",
  "branch": "main",
  "status": "success",
  "test_results": {
    "service": "consent-service",
    "framework": "go",
    "total": 57,
    "passed": 57,
    "failed": 0,
    "skipped": 0,
    "coverage": 75.5
  }
}

Response:

{
  "received": true,
  "run_id": "ci-27-consent-service",
  "service": "consent-service",
  "pipeline_id": "27",
  "status": "passed",
  "tests": {"total": 57, "passed": 57, "failed": 0},
  "stored_in": "postgres"
}

GET /api/tests/backlog

Returns all backlog entries.

Query parameters:

  • status: filter by status (open, in_progress, fixed, etc.)
  • service: filter by service
  • limit: number of results (default: 50)

Response:

{
  "total": 15,
  "items": [
    {
      "id": 1,
      "test_name": "Pipeline 27 - 3 Tests",
      "service": "backend",
      "status": "open",
      "priority": "medium",
      "failure_count": 3,
      "first_failed_at": "2026-02-02T10:00:00",
      "last_failed_at": "2026-02-02T18:00:00",
      "resolved_at": null
    }
  ]
}

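Client-side, the documented query parameters can be assembled like this (the base URL is an assumption; use whatever host serves the backend API):

```python
from urllib.parse import urlencode

def backlog_url(base="http://backend:8000", status=None, service=None, limit=50):
    """Build a GET /api/tests/backlog URL from the documented
    query parameters; unset filters are simply omitted."""
    params = {"limit": limit}
    if status:
        params["status"] = status
    if service:
        params["service"] = service
    return f"{base}/api/tests/backlog?{urlencode(params)}"
```
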
GET /api/tests/runs

Returns the most recent test runs.

Query parameters:

  • service: filter by service
  • limit: number of results (default: 20)

GET /api/tests/coverage

Returns coverage information.

POST /api/tests/run/{suite}

Starts a test run manually.

PUT /api/tests/backlog/{id}

Updates a backlog entry (status, priority, assignee).


Appendix

Environment Variables

# Woodpecker
WOODPECKER_URL=https://macmini:4431
WOODPECKER_TOKEN=<jwt-token>

# Backend
DATABASE_URL=postgresql://user:pass@postgres:5432/breakpilot
TEST_REGISTRY_ENABLED=true

Local CI Simulation

For local testing without Woodpecker CI, two helper files are available:

Makefile (project root)

# Run all tests locally
make ci

# Go tests only
make test-go

# Python tests only
make test-python

# Woodpecker agent logs
make logs-agent

# Backend logs (filtered for ci-result)
make logs-backend

# Delete test results
make clean

docker-compose.test.yml (project root)

# Start the test databases
docker compose -f docker-compose.test.yml up -d

# Stop the test databases and delete their data
docker compose -f docker-compose.test.yml down -v

| Service | Port | Credentials |
|---|---|---|
| PostgreSQL | 55432 | breakpilot_test/breakpilot/breakpilot |
| Redis | 56379 | (none) |

Useful Commands

# Woodpecker Logs
docker logs breakpilot-pwa-woodpecker-agent --tail=100
docker logs breakpilot-pwa-woodpecker-server --tail=100

# Backend Logs
docker compose logs backend --tail=100 | grep -E "(CI-RESULT|test|error)"

# Check the database
docker compose exec backend python3 -c "
from sqlalchemy.orm import Session
from classroom_engine.database import engine
from api.tests.db_models import TestRunDB, TestServiceStatsDB

with Session(engine) as db:
    runs = db.query(TestRunDB).order_by(TestRunDB.started_at.desc()).limit(5).all()
    for r in runs:
        print(f'{r.run_id}: {r.status} - {r.passed_tests}/{r.total_tests}')
"

# Trigger a pipeline manually
curl -X POST "https://macmini:4431/api/repos/pilotadmin/breakpilot-pwa/pipelines" \
  -H "Authorization: Bearer $WOODPECKER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"branch":"main"}'

Further Documentation


Generated on 2026-02-02 by Claude Code.
Updated 2026-02-02: jq parsing for Go tests, complete depends_on, Makefile & docker-compose.test.yml.
Updated 2026-02-04: added the Integration Test Environment with Docker Compose (section 2.5).