feat(pitch-deck): update Engineering + AI Pipeline slides with current data
Engineering slide:
- Woodpecker CI → Gitea Actions + Coolify
- Stats: 481K LOC, 10 containers, 48+ modules, 14 Dockerfiles
- Infrastructure: Hetzner + SysEleven (BSI) + OVH, no US providers
- Service architecture: compliance-only (Frontend, Backend, Infra)

AI Pipeline slide:
- 38+ indexed regulations, 6,259 extracted controls, 325+ obligations
- 6 Qdrant collections, 2,274+ chunks
- UCCA policy engine (45 rules, E0-E3 escalation)
- LLM: 120B on OVH + 1000B on SysEleven (BSI), via LiteLLM
- QA: PDF-QA pipeline, Gitea Actions CI, Coolify deploy

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -449,13 +449,13 @@ export const PRESENTER_SCRIPT: SlideScript[] = [
     duration: 40,
     paragraphs: [
       {
-        text_de: 'Engineering Deep Dive: 761.000 Zeilen Code, 45 Container, über 65 Module. Tech-Stack: Go, Python, TypeScript, Next.js. Alles Open Source mit kommerziell nutzbaren Lizenzen.',
-        text_en: 'Engineering deep dive: 761,000 lines of code, 45 containers, over 65 modules. Tech stack: Go, Python, TypeScript, Next.js. Everything open source with commercially usable licenses.',
+        text_de: 'Engineering Deep Dive: 481.000 Zeilen Code, 10 Container, 48 Compliance-Module. Tech-Stack: Go, Python, TypeScript mit Next.js. CI/CD über Gitea Actions, automatisches Deploy via Coolify auf Hetzner.',
+        text_en: 'Engineering deep dive: 481,000 lines of code, 10 containers, 48 compliance modules. Tech stack: Go, Python, TypeScript with Next.js. CI/CD via Gitea Actions, automatic deploy via Coolify on Hetzner.',
         pause_after: 2000,
       },
       {
-        text_de: 'Integriert: ERPNext als Open-Source-ERP, Matrix Chat, eigenes Jitsi, NVIDIA Meeting-Modul, Code-Assistent mit kundenspezifischen Namespaces.',
-        text_en: 'Integrated: ERPNext as open source ERP, Matrix chat, own Jitsi, NVIDIA meeting module, code assistant with customer-specific namespaces.',
+        text_de: 'Infrastruktur: 100 Prozent EU-Cloud. PostgreSQL und Qdrant auf Hetzner, 120-Milliarden-Parameter-LLM auf OVH, 1000-Milliarden-Parameter-LLM auf SysEleven — BSI-zertifiziert. Keine US-Anbieter.',
+        text_en: 'Infrastructure: 100 percent EU cloud. PostgreSQL and Qdrant on Hetzner, 120 billion parameter LLM on OVH, 1 trillion parameter LLM on SysEleven — BSI certified. No US providers.',
         pause_after: 1500,
       },
     ],
@@ -467,13 +467,13 @@ export const PRESENTER_SCRIPT: SlideScript[] = [
     duration: 40,
     paragraphs: [
       {
-        text_de: 'Die KI-Pipeline: RAG mit 2.274 Rechtstexten, Multi-Agent-System mit spezialisierten Compliance-Agenten, SAST- und DAST-Integration, automatische SBOM-Generierung und Jira-Integration mit Code-Fix-Vorschlägen.',
-        text_en: 'The AI pipeline: RAG with 2,274 legal texts, multi-agent system with specialized compliance agents, SAST and DAST integration, automatic SBOM generation and Jira integration with code fix suggestions.',
+        text_de: 'Die KI-Pipeline: 38 Verordnungen indexiert, 6.259 Controls extrahiert, 325 Pflichten aus 9 Regulierungen abgeleitet. RAG mit 6 Qdrant-Collections, BGE-M3 Embeddings, Hybrid Search mit Cross-Encoder Re-Ranking.',
+        text_en: 'The AI pipeline: 38 regulations indexed, 6,259 controls extracted, 325 obligations derived from 9 regulations. RAG with 6 Qdrant collections, BGE-M3 embeddings, hybrid search with cross-encoder re-ranking.',
         pause_after: 2500,
       },
       {
-        text_de: 'Vielen Dank für Ihre Aufmerksamkeit. Die Plattform läuft, die Module sind gebaut, die KI ist trainiert. Jetzt geht es um Skalierung.',
-        text_en: 'Thank you for your attention. The platform is running, the modules are built, the AI is trained. Now it is about scaling.',
+        text_de: 'Kernprinzip: Das LLM ist nicht die Wahrheitsquelle. Wahrheit gleich Regeln plus Evidenz. Das LLM ist Übersetzer und Subsumtions-Helfer. Deterministische Policy Engine mit 45 Regeln und Eskalationsstufen E0 bis E3.',
+        text_en: 'Core principle: The LLM is not the source of truth. Truth equals rules plus evidence. The LLM is translator and subsumption helper. Deterministic policy engine with 45 rules and escalation levels E0 to E3.',
         pause_after: 2000,
       },
     ],
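The hunks above edit entries of `PRESENTER_SCRIPT: SlideScript[]`. A minimal sketch of the data shape they imply — the diff confirms the `SlideScript` name and the `duration`, `paragraphs`, `text_de`, `text_en`, and `pause_after` fields, while the `ScriptParagraph` interface name and field comments are assumptions:

```typescript
// Hypothetical reconstruction of the slide-script types edited in this commit.
interface ScriptParagraph {
  text_de: string;     // German narration for the presenter
  text_en: string;     // English narration for the presenter
  pause_after: number; // pause after the paragraph, in milliseconds
}

interface SlideScript {
  duration: number; // slide duration, in seconds
  paragraphs: ScriptParagraph[];
}

// One entry as it reads after this commit (text shortened for illustration).
const engineeringSlide: SlideScript = {
  duration: 40,
  paragraphs: [
    {
      text_de: 'Engineering Deep Dive: 481.000 Zeilen Code, 10 Container, 48 Compliance-Module.',
      text_en: 'Engineering deep dive: 481,000 lines of code, 10 containers, 48 compliance modules.',
      pause_after: 2000,
    },
  ],
};
```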