fix: 5 regex bugs + text extraction scroll + GT update

Root cause: Spiegel DSI text was truncated (lazy-loading) — the
rights/DSB/complaints sections at the bottom were never extracted.

Fixes:
1. Text extraction: scroll to bottom before innerText (dsi_discovery.py)
2. V.i.S.d.P.: add "verantwortlicher i.s.v." + "§18 Abs. N MStV" pattern
3. USt-IdNr: add "umsatzsteuer-id" + "DE 212 442 423" (with spaces)
4. Profiler: remove generic "anwalt"/"praxis" (false positive on Spiegel
   "Redaktionsanwalt"), keep only "rechtsanwalt", "kanzlei" etc.
5. Section splitter: auto_fill_from_dsi() fills empty Cookie/Social-Media
   rows from sections found in the DSI text
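
Fix 1 above can be sketched as follows. This is a minimal sketch assuming a Playwright-style page object with `evaluate()` and `wait_for_timeout()`; the actual dsi_discovery.py code may differ. The loop keeps scrolling until the page height stops growing, so lazy-loaded sections at the bottom are rendered before the text is read:

```python
# Sketch of fix 1, assuming a Playwright-style page object; the real
# dsi_discovery.py implementation may differ.
def extract_full_text(page) -> str:
    """Scroll to the bottom until no new content loads, then read innerText."""
    prev_height = -1
    while True:
        height = page.evaluate("document.body.scrollHeight")
        if height == prev_height:
            break  # page stopped growing: lazy-loaded content has rendered
        page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
        page.wait_for_timeout(500)  # give lazy-loaded sections time to render
        prev_height = height
    return page.evaluate("document.body.innerText")


class FakePage:
    """Stand-in for a browser page: grows once after the first scroll."""
    def __init__(self):
        self._heights = [1000, 2000, 2000]
        self._idx = 0
        self.scrolls = 0

    def evaluate(self, script):
        if script == "document.body.scrollHeight":
            return self._heights[min(self._idx, len(self._heights) - 1)]
        if script.startswith("window.scrollTo"):
            self.scrolls += 1
            self._idx += 1
            return None
        return "full text including rights/DSB/complaints sections"

    def wait_for_timeout(self, ms):
        pass  # no real waiting needed for the fake


page = FakePage()
assert "DSB" in extract_full_text(page)
assert page.scrolls == 2  # scrolled until the height stabilised
```

The fake page illustrates why a single `innerText` read was not enough: the text only becomes complete after the height has stabilised.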

Ground Truth 06-spiegel.md fully rewritten with verified data from the
live website; 3 L1 false negatives identified (DSB, Beschwerderecht,
Betroffenenrechte are all present on the website but were missing from
the extracted text).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author: Benjamin Admin
Date: 2026-05-13 01:20:55 +02:00
parent 8bb90d73e5
commit c702260ec1
6 changed files with 194 additions and 78 deletions
@@ -174,9 +174,14 @@ async def _run_compliance_check(check_id: str, req: ComplianceCheckRequest):
        "word_count": len(text.split()) if text else 0,
    })
    # Step 1b: If same URL used for multiple doc_types, try section splitting
    from compliance.services.section_splitter import split_shared_texts
    # Step 1b: Section splitting — two cases:
    # 1. Same URL used for multiple doc_types → split by heading
    # 2. DSI text contains Cookie/Social-Media sections → auto-fill empty rows
    from compliance.services.section_splitter import (
        split_shared_texts, auto_fill_from_dsi,
    )
    split_shared_texts(doc_entries, url_text_cache)
    auto_fill_from_dsi(doc_entries)
    # Refresh doc_texts after splitting
    for entry in doc_entries:
        if entry.get("text"):
@@ -59,26 +59,34 @@ _NONPROFIT_KEYWORDS = [
]
_REGULATED_PROFESSIONS = {
    # Anwalt: only specific terms, not "anwalt" on its own
    # (it would also match Redaktionsanwalt, Justiziar, etc.)
    "rechtsanwalt": "anwalt",
    "anwalt": "anwalt",
    "anwaeltin": "anwalt",
    "anwältin": "anwalt",
    "rechtsanwaeltin": "anwalt",
    "rechtsanwältin": "anwalt",
    "kanzlei": "anwalt",
    "rechtsanwaltskammer": "anwalt",
    "arzt": "arzt",
    "ärztin": "arzt",
    "aerztin": "arzt",
    "praxis": "arzt",
    "zugelassener anwalt": "anwalt",
    # Arzt: "praxis" removed (it matches "in der Praxis")
    "arztpraxis": "arzt",
    "zahnarzt": "arzt",
    "facharzt": "arzt",
    "aerztekammer": "arzt",
    "ärztekammer": "arzt",
    "kassenärztlich": "arzt",
    "kassenaerztlich": "arzt",
    # Steuerberater
    "steuerberater": "steuerberater",
    "steuerberaterin": "steuerberater",
    "steuerberaterkammer": "steuerberater",
    # Architekt
    "architekt": "architekt",
    "architektin": "architekt",
    "architektenkammer": "architekt",
    # Notar
    "notar": "notar",
    "notariat": "notar",
    # Apotheker
    "apotheke": "apotheker",
    "apotheker": "apotheker",
}
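
The false positive behind fix 4 comes from substring matching: "anwalt" is contained in "Redaktionsanwalt" and "praxis" in "in der Praxis". A minimal sketch, assuming the profiler does a simple lowercased substring scan (the real matcher may differ):

```python
# Assumes a simple lowercased substring scan; the real profiler's
# matching logic may differ.
KEYWORDS_OLD = {"anwalt": "anwalt", "praxis": "arzt"}
KEYWORDS_NEW = {"rechtsanwalt": "anwalt", "kanzlei": "anwalt", "arztpraxis": "arzt"}

def detect(text: str, keywords: dict[str, str]) -> set[str]:
    """Return the set of professions whose keywords occur in the text."""
    lowered = text.lower()
    return {prof for kw, prof in keywords.items() if kw in lowered}

text = "Unser Redaktionsanwalt prüft Beiträge, wie in der Praxis üblich."
# Generic keywords misfire on "Redaktionsanwalt" and "in der Praxis":
assert detect(text, KEYWORDS_OLD) == {"anwalt", "arzt"}
# The specific keywords do not:
assert detect(text, KEYWORDS_NEW) == set()
```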
@@ -135,8 +135,9 @@ IMPRESSUM_CHECKLIST = [
        "label": "USt-IdNr.",
        "level": 1, "parent": None,
        "patterns": [
            r"ust.*id", r"umsatzsteuer.*identifikation",
            r"vat.*id", r"de\s*\d{9}",
            r"ust[\s.-]*id", r"umsatzsteuer[\s-]*id",
            r"umsatzsteuer.*identifikation",
            r"vat[\s.-]*id", r"de\s*\d{3}\s*\d{3}\s*\d{3}",
        ],
        "severity": "MEDIUM",
        "hint": "§5(1) Nr.6 TMG: Die USt-IdNr. muss angegeben werden, sofern vorhanden. Die Steuernummer ist KEIN Ersatz.",
@@ -146,7 +147,7 @@ IMPRESSUM_CHECKLIST = [
        "label": "USt-IdNr. im Format DE + 9 Ziffern",
        "level": 2, "parent": "vat",
        "patterns": [
            r"de\s*\d{9}",
            r"de\s*\d{3}\s*\d{3}\s*\d{3}",
        ],
        "severity": "LOW",
        "hint": "Deutsche USt-IdNr.: 'DE' + exakt 9 Ziffern (z.B. DE123456789). Validierung: https://evatr.bff-online.de/",
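
A quick check of fix 3: the widened pattern tolerates digit groups separated by spaces, as in "DE 212 442 423", which the old pattern missed. This assumes the checklist patterns are applied case-insensitively, as the lowercase spellings suggest:

```python
import re

# Old vs. new USt-IdNr. patterns from the diff above; case-insensitive
# matching is an assumption based on the lowercase pattern spellings.
OLD = r"de\s*\d{9}"
NEW = r"de\s*\d{3}\s*\d{3}\s*\d{3}"

spaced = "USt-IdNr.: DE 212 442 423"
compact = "USt-IdNr.: DE123456789"

# The old pattern required 9 consecutive digits and missed the spaced form:
assert not re.search(OLD, spaced, re.IGNORECASE)
# The new pattern matches both spellings:
assert re.search(NEW, spaced, re.IGNORECASE)
assert re.search(NEW, compact, re.IGNORECASE)
```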
@@ -187,7 +188,8 @@ IMPRESSUM_CHECKLIST = [
        "patterns": [
            r"v\.?\s*i\.?\s*s\.?\s*d\.?\s*p",
            r"(?:redaktionell|inhaltlich)\s+verantwortlich",
            r"§\s*18\s+m(?:edien)?st(?:aat)?v",
            r"§\s*18\s+(?:abs\.?\s*\d+\s+)?m(?:edien)?st(?:aat)?v",
            r"verantwortlich\w*\s+i\.?\s*s\.?\s*(?:d\.?\s*)?v\.?",
        ],
        "severity": "INFO",
        "hint": "Nur relevant wenn die Website journalistisch-redaktionelle Inhalte hat (Blog, Ratgeber, News, Fachartikel). Reine Unternehmensseiten ohne redaktionelle Inhalte benoetigen keinen V.i.S.d.P. Pruefen Sie, ob die Website einen Blog oder Ratgeber-Bereich hat.",
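
A quick check of fix 2: the new V.i.S.d.P. patterns cover both the "verantwortlicher i.S.v." wording and an "Abs. N" clause between "§ 18" and "MStV". The sample text is lowercased here, assuming the checklist matches against lowercased input:

```python
import re

# Patterns from the diff above; lowercased input is an assumption based on
# the lowercase pattern spellings.
PATTERNS = [
    r"v\.?\s*i\.?\s*s\.?\s*d\.?\s*p",
    r"§\s*18\s+(?:abs\.?\s*\d+\s+)?m(?:edien)?st(?:aat)?v",
    r"verantwortlich\w*\s+i\.?\s*s\.?\s*(?:d\.?\s*)?v\.?",
]

text = "verantwortlicher i.s.v. § 18 abs. 2 mstv"

# The new patterns catch this wording:
assert any(re.search(p, text) for p in PATTERNS)
# The old § 18 pattern had no room for "Abs. 2" between "18" and "MStV":
assert not re.search(r"§\s*18\s+m(?:edien)?st(?:aat)?v", text)
```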
@@ -168,3 +168,48 @@ def _find_section_for_type(sections: list[dict], doc_type: str) -> str | None:
            return section["text"]
    return None  # No match → keep full text


def auto_fill_from_dsi(doc_entries: list[dict]) -> None:
    """Auto-fill empty document rows from sections found in the DSI text.

    If the user only entered the DSI URL but left Cookie/Social-Media empty,
    and the DSI text contains those sections, auto-fill them.
    """
    # Find the DSI entry
    dsi_entry = None
    for entry in doc_entries:
        if entry["doc_type"] in ("dse", "datenschutz", "privacy") and entry.get("text"):
            dsi_entry = entry
            break
    if not dsi_entry:
        return
    dsi_text = dsi_entry["text"]
    if len(dsi_text) < 300:
        return
    # Split DSI into sections
    sections = _split_at_headings(dsi_text)
    if not sections:
        return
    # Find empty entries that could be filled from DSI sections
    filled = []
    for entry in doc_entries:
        if entry.get("text") or entry.get("url"):
            continue  # Already has content
        doc_type = entry["doc_type"]
        section_text = _find_section_for_type(sections, doc_type)
        if section_text and len(section_text.split()) >= 30:
            entry["text"] = section_text
            entry["word_count"] = len(section_text.split())
            entry["url"] = f"{dsi_entry.get('url', '')} (Abschnitt)"
            filled.append(doc_type)
    if filled:
        logger.info(
            "Auto-filled %d empty rows from DSI sections: %s",
            len(filled), ", ".join(filled),
        )
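
auto_fill_from_dsi() leans on _split_at_headings() and _find_section_for_type(), neither of which appears in this diff. For illustration, a hypothetical sketch of a heading-based splitter; the real helpers in section_splitter.py may use different heuristics:

```python
import re

# Hypothetical stand-in for _split_at_headings(); the real helper is not
# shown in this diff and may differ. Heuristic: a heading is a short line,
# optionally numbered, starting with a capital letter, not ending in a period.
_HEADING_RE = re.compile(
    r"^\s*(?:\d+\.?\s+)?([A-ZÄÖÜ][^\n.]{2,60})\s*$", re.MULTILINE
)

def split_at_headings(text: str) -> list[dict]:
    """Split text into {"heading", "text"} sections at detected headings."""
    matches = list(_HEADING_RE.finditer(text))
    sections = []
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        sections.append({
            "heading": m.group(1).strip(),
            "text": text[m.start():end].strip(),
        })
    return sections

dsi = (
    "1. Cookies\n"
    "Wir setzen Cookies ein, um die Reichweite zu messen.\n"
    "2. Social Media\n"
    "Wir binden Plugins von Drittanbietern ein.\n"
)
sections = split_at_headings(dsi)
assert [s["heading"] for s in sections] == ["Cookies", "Social Media"]
assert "Plugins" in sections[1]["text"]
```

With a split like this, an empty Cookie row can be filled from the "Cookies" section of the DSI text, which is the behaviour auto_fill_from_dsi() adds.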