breakpilot-lehrer/studio-v2/components/learn/AudioButton.tsx
Benjamin Admin 20a0585eb1
Some checks failed
CI / go-lint (push) Has been skipped
CI / python-lint (push) Has been skipped
CI / nodejs-lint (push) Has been skipped
CI / test-go-school (push) Successful in 44s
CI / test-go-edu-search (push) Successful in 51s
CI / test-python-klausur (push) Failing after 2m44s
CI / test-python-agent-core (push) Successful in 33s
CI / test-nodejs-website (push) Successful in 34s
Add interactive learning modules MVP (Phases 1-3.1)
New feature: After OCR vocabulary extraction, users can generate interactive
learning modules (flashcards, quiz, type trainer) with one click.

Frontend (studio-v2):
- Fortune Sheet spreadsheet editor tab in vocab-worksheet
- "Lernmodule generieren" ("Generate learning modules") button in ExportTab
- /learn page with unit overview and exercise type cards
- /learn/[unitId]/flashcards — Flip-card trainer with Leitner spaced repetition
- /learn/[unitId]/quiz — Multiple choice quiz with explanations
- /learn/[unitId]/type — Type-in trainer with Levenshtein distance feedback
- AudioButton component using Web Speech API for EN+DE TTS
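The type-in trainer's Levenshtein feedback could be implemented roughly as below. This is an illustrative sketch, not the actual component code; the `levenshtein` helper name and the feedback threshold are assumptions. It is the standard dynamic-programming edit distance, kept to two rolling rows:

```typescript
// Classic Levenshtein edit distance: the minimum number of single-character
// insertions, deletions, or substitutions turning string a into string b.
// Only two rows of the DP table are kept in memory at a time.
function levenshtein(a: string, b: string): number {
  const m = a.length
  const n = b.length
  let prev = new Array<number>(n + 1)
  let curr = new Array<number>(n + 1)
  for (let j = 0; j <= n; j++) prev[j] = j
  for (let i = 1; i <= m; i++) {
    curr[0] = i
    for (let j = 1; j <= n; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1
      curr[j] = Math.min(
        prev[j] + 1,      // delete from a
        curr[j - 1] + 1,  // insert into a
        prev[j - 1] + cost // substitute (or match)
      )
    }
    ;[prev, curr] = [curr, prev]
  }
  return prev[n]
}

console.log(levenshtein('kitten', 'sitting')) // 3
console.log(levenshtein('Haus', 'Hause'))     // 1
```

A trainer can then treat a distance of 1 as "almost correct" (e.g. a typo or missing plural ending) and show gentler feedback than for a complete miss.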

Backend (klausur-service):
- vocab_learn_bridge.py: Converts VocabularyEntry[] to analysis_data format
- POST /sessions/{id}/generate-learning-unit endpoint

Backend (backend-lehrer):
- generate-qa, generate-mc, generate-cloze endpoints on learning units
- get-qa/mc/cloze data retrieval endpoints
- Leitner progress update + next review items endpoints
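The Leitner bookkeeping behind those endpoints can be sketched as follows. The box count, interval table, and type names here are assumptions for illustration, not the service's actual schema: a correct answer promotes a card one box (longer review interval), a wrong answer demotes it to box 1.

```typescript
// Hypothetical Leitner-box model: 5 boxes, doubling review intervals.
// Index = box number; box 1 is reviewed daily, box 5 every 16 days.
const REVIEW_INTERVAL_DAYS = [0, 1, 2, 4, 8, 16]
const DAY_MS = 24 * 60 * 60 * 1000

interface CardProgress {
  box: number        // 1..5
  nextReview: number // epoch milliseconds
}

function updateProgress(p: CardProgress, correct: boolean, now: number): CardProgress {
  // Promote on success (capped at box 5), reset to box 1 on failure.
  const box = correct ? Math.min(p.box + 1, 5) : 1
  return { box, nextReview: now + REVIEW_INTERVAL_DAYS[box] * DAY_MS }
}

function nextReviewItems(cards: CardProgress[], now: number): CardProgress[] {
  // A card is due once its scheduled review time has passed.
  return cards.filter((c) => c.nextReview <= now)
}
```

The "next review items" endpoint then reduces to filtering a unit's cards with `nextReviewItems` against the current time.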

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 07:13:23 +02:00


'use client'

import React, { useCallback, useState } from 'react'

interface AudioButtonProps {
  text: string
  lang: 'en' | 'de'
  isDark: boolean
  size?: 'sm' | 'md' | 'lg'
}

export function AudioButton({ text, lang, isDark, size = 'md' }: AudioButtonProps) {
  const [isSpeaking, setIsSpeaking] = useState(false)

  const speak = useCallback(() => {
    if (!('speechSynthesis' in window)) return
    // Clicking while speaking acts as a stop button.
    if (isSpeaking) {
      window.speechSynthesis.cancel()
      setIsSpeaking(false)
      return
    }
    // Cancel anything queued by another AudioButton so playback starts
    // immediately instead of waiting behind the previous utterance.
    window.speechSynthesis.cancel()
    const utterance = new SpeechSynthesisUtterance(text)
    utterance.lang = lang === 'de' ? 'de-DE' : 'en-GB'
    utterance.rate = 0.9
    utterance.pitch = 1.0
    // Prefer a local voice for the target language, fall back to any match.
    // Note: getVoices() may return an empty list until the browser fires
    // 'voiceschanged'; in that case the engine's default voice is used.
    const voices = window.speechSynthesis.getVoices()
    const matchesLang = (v: SpeechSynthesisVoice) =>
      v.lang.startsWith(lang === 'de' ? 'de' : 'en')
    const preferred =
      voices.find((v) => matchesLang(v) && v.localService) ?? voices.find(matchesLang)
    if (preferred) utterance.voice = preferred
    utterance.onend = () => setIsSpeaking(false)
    utterance.onerror = () => setIsSpeaking(false)
    setIsSpeaking(true)
    window.speechSynthesis.speak(utterance)
  }, [text, lang, isSpeaking])

  const sizeClasses = {
    sm: 'w-7 h-7',
    md: 'w-9 h-9',
    lg: 'w-11 h-11',
  }
  const iconSizes = {
    sm: 'w-3.5 h-3.5',
    md: 'w-4 h-4',
    lg: 'w-5 h-5',
  }

  return (
    <button
      onClick={speak}
      className={`${sizeClasses[size]} rounded-full flex items-center justify-center transition-all ${
        isSpeaking
          ? 'bg-blue-500 text-white animate-pulse'
          : isDark
            ? 'bg-white/10 text-white/60 hover:bg-white/20 hover:text-white'
            : 'bg-slate-100 text-slate-500 hover:bg-slate-200 hover:text-slate-700'
      }`}
      title={isSpeaking ? 'Stop' : `${lang === 'de' ? 'Deutsch' : 'Englisch'} vorlesen`}
    >
      {/* Speaker icon while idle, pause-in-circle icon while speaking */}
      <svg className={iconSizes[size]} fill="none" stroke="currentColor" viewBox="0 0 24 24">
        {isSpeaking ? (
          <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M21 12a9 9 0 11-18 0 9 9 0 0118 0zM10 9v6m4-6v6" />
        ) : (
          <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M15.536 8.464a5 5 0 010 7.072m2.828-9.9a9 9 0 010 12.728M5.586 15H4a1 1 0 01-1-1v-4a1 1 0 011-1h1.586l4.707-4.707C10.923 3.663 12 4.109 12 5v14c0 .891-1.077 1.337-1.707.707L5.586 15z" />
        )}
      </svg>
    </button>
  )
}