This repository has been archived on 2026-02-15. You can view files and clone it, but you cannot open issues or pull requests, or push commits.
breakpilot-pwa/backend/.env.example
BreakPilot Dev 19855efacc
feat: BreakPilot PWA - Full codebase (clean push without large binaries)
All services: admin-v2, studio-v2, website, ai-compliance-sdk,
consent-service, klausur-service, voice-service, and infrastructure.
Large PDFs and compiled binaries excluded via .gitignore.
2026-02-11 13:25:58 +01:00


# BreakPilot Backend Environment Variables
# Copy this file to .env and adjust the values
# =============================================================================
# LLM Gateway Configuration
# =============================================================================
# Server settings
LLM_GATEWAY_HOST=0.0.0.0
LLM_GATEWAY_PORT=8002
LLM_GATEWAY_DEBUG=false
# API keys for gateway access (comma-separated)
# Generate with: openssl rand -hex 32
LLM_API_KEYS=your-api-key-1,your-api-key-2
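As an illustration of the key format (a sketch, not part of the repository), the two placeholder keys above can also be generated in Python; `secrets.token_hex(32)` produces the same 64-hex-character output as `openssl rand -hex 32`:

```python
import secrets

# Generate two independent 64-character hex keys, equivalent to
# `openssl rand -hex 32`, joined comma-separated as LLM_API_KEYS expects.
keys = [secrets.token_hex(32) for _ in range(2)]
print("LLM_API_KEYS=" + ",".join(keys))
```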
# JWT secret (same value as the Consent Service, for SSO)
JWT_SECRET=your-jwt-secret
# Rate Limiting
LLM_RATE_LIMIT_RPM=60
LLM_RATE_LIMIT_TPM=100000
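As a rough sketch of what a requests-per-minute limit such as LLM_RATE_LIMIT_RPM implies (the gateway's actual implementation may differ), a sliding-window counter could look like this:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` events per `window` seconds (illustrative sketch)."""

    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self.events = deque()  # timestamps of accepted events

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Evict timestamps that have fallen out of the window.
        while self.events and now - self.events[0] >= self.window:
            self.events.popleft()
        if len(self.events) < self.limit:
            self.events.append(now)
            return True
        return False
```

With `limit=60, window=60.0` this approximates the RPM setting above; a token-per-minute (TPM) limit would count tokens instead of requests.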
# Logging
LLM_LOG_LEVEL=INFO
LLM_AUDIT_LOGGING=true
# Backend priority (fallback order)
LLM_BACKEND_PRIORITY=ollama,vllm,anthropic
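The fallback order above can be sketched as a loop over the parsed list; `call_backend` below is a hypothetical stand-in for the real backend clients:

```python
def parse_priority(value):
    """Split a comma-separated backend list, ignoring blanks and whitespace."""
    return [b.strip() for b in value.split(",") if b.strip()]

def complete_with_fallback(prompt, priority, call_backend):
    """Try each backend in the configured order; return the first success."""
    errors = {}
    for backend in parse_priority(priority):
        try:
            return call_backend(backend, prompt)
        except Exception as exc:  # real code would catch specific client errors
            errors[backend] = exc
    raise RuntimeError("all backends failed: %r" % errors)
```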
# =============================================================================
# Ollama Backend (local)
# =============================================================================
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_DEFAULT_MODEL=llama3.1:8b
OLLAMA_TIMEOUT=120
OLLAMA_ENABLED=true
# =============================================================================
# vLLM Backend (remote, e.g. vast.ai)
# =============================================================================
# VLLM_BASE_URL=http://gpu-server:8000
# VLLM_API_KEY=your-vllm-api-key
# VLLM_DEFAULT_MODEL=meta-llama/Meta-Llama-3.1-8B-Instruct
# VLLM_TIMEOUT=120
# VLLM_ENABLED=false
# =============================================================================
# Anthropic Claude API (Fallback)
# =============================================================================
ANTHROPIC_API_KEY=your-anthropic-api-key
ANTHROPIC_DEFAULT_MODEL=claude-3-5-sonnet-20241022
ANTHROPIC_TIMEOUT=120
ANTHROPIC_ENABLED=true
# =============================================================================
# Tool Gateway (web search with PII protection)
# =============================================================================
# Tavily API for web search
TAVILY_API_KEY=your-tavily-api-key
TAVILY_BASE_URL=https://api.tavily.com
TAVILY_TIMEOUT=30
TAVILY_MAX_RESULTS=5
TAVILY_SEARCH_DEPTH=basic
TAVILY_INCLUDE_ANSWER=true
TAVILY_INCLUDE_IMAGES=false
# PII redaction (automatic removal of personal data)
PII_REDACTION_ENABLED=true
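PII redaction is handled by the Tool Gateway; purely to illustrate the idea (this is not the project's actual implementation), masking email addresses before text leaves the system could look like this:

```python
import re

# Illustrative only: a pattern that matches typical email addresses.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

def redact_pii(text):
    """Replace anything that looks like an email address with a placeholder."""
    return EMAIL_RE.sub("[REDACTED]", text)
```

A production redactor would cover more PII classes (names, phone numbers, addresses), typically via an NER model rather than regexes alone.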
# =============================================================================
# Alerts Agent (Google Alerts relevance filtering)
# =============================================================================
# Enable the agent
ALERTS_AGENT_ENABLED=false
# Enable LLM-based scoring (otherwise keyword-based)
ALERTS_USE_LLM=false
# LLM Gateway URL for scoring (default: local server)
LLM_GATEWAY_URL=http://localhost:8000/llm
# Google Alerts RSS feed URLs (comma-separated)
# Create feeds at: https://www.google.com/alerts
# Choose "Deliver to: RSS feed"
# ALERTS_RSS_FEEDS=https://google.com/alerts/feeds/.../...,https://google.com/alerts/feeds/.../...
# Feed labels (comma-separated, same order as FEEDS)
# ALERTS_RSS_LABELS=Inklusion Bayern,Datenschutz Schule,Schulrecht
# Fetch interval in minutes
ALERTS_FETCH_INTERVAL=60
# =============================================================================
# vast.ai GPU Infrastructure
# =============================================================================
# vast.ai API key (from https://cloud.vast.ai/cli/)
# If set, GPU controls in the admin panel are enabled
# VAST_API_KEY=your-vast-api-key
# Instance ID (numeric ID of your vast.ai instance)
# VAST_INSTANCE_ID=123456
# Control API key (admin protection for the start/stop endpoints)
# Generate with: openssl rand -hex 32
# CONTROL_API_KEY=your-control-api-key
# Health check configuration
VAST_HEALTH_PORT=8001
VAST_HEALTH_PATH=/health
VAST_WAIT_TIMEOUT_S=600
# Auto-shutdown (stops the instance when idle)
VAST_AUTO_SHUTDOWN=true
VAST_AUTO_SHUTDOWN_MINUTES=30
# State persistence (paths for state and audit log)
# VAST_STATE_PATH=./vast_state.json
# VAST_AUDIT_PATH=./vast_audit.log
# =============================================================================
# Existing Backend
# =============================================================================
# Consent Service URL
CONSENT_SERVICE_URL=http://localhost:8081
# OpenAI API key (for other features such as ai_processor.py)
OPENAI_API_KEY=your-openai-api-key
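To sanity-check a filled-in copy of this file, a minimal parser (ignoring blank lines and `#` comments; many projects use python-dotenv instead) might look like this:

```python
def parse_env_file(path):
    """Parse KEY=VALUE lines from an env file, skipping blanks and # comments."""
    values = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, sep, value = line.partition("=")
            if sep:  # ignore malformed lines without "="
                values[key.strip()] = value.strip()
    return values
```

Note that this sketch does not handle quoting or `export` prefixes, which real dotenv loaders support.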