breakpilot-compliance/backend-compliance/compliance/tests/test_audit_routes.py
Sharang Parnerkar 3320ef94fc refactor: phase 0 guardrails + phase 1 step 2 (models.py split)
Squash of branch refactor/phase0-guardrails-and-models-split — 4 commits,
81 files, 173/173 pytest green, OpenAPI contract preserved (360 paths /
484 operations).

## Phase 0 — Architecture guardrails

Three defense-in-depth layers to keep the architecture rules enforced
regardless of who opens Claude Code in this repo:

  1. .claude/settings.json PreToolUse hook on Write/Edit blocks any file
     that would exceed the 500-line hard cap. Auto-loads in every Claude
     session in this repo.
  2. scripts/githooks/pre-commit (install via scripts/install-hooks.sh)
     enforces the LOC cap locally, freezes migrations/ without
     [migration-approved], and protects guardrail files without
     [guardrail-change].
  3. .gitea/workflows/ci.yaml gains loc-budget + guardrail-integrity +
     sbom-scan (syft+grype) jobs, adds mypy --strict for the new Python
     packages (compliance/{services,repositories,domain,schemas}), and
     tsc --noEmit for admin-compliance + developer-portal.

Per-language conventions documented in AGENTS.python.md, AGENTS.go.md,
AGENTS.typescript.md at the repo root — layering, tooling, and explicit
"what you may NOT do" lists. Root CLAUDE.md is prepended with the six
non-negotiable rules. Each of the 10 services gets a README.md.

scripts/check-loc.sh enforces soft 300 / hard 500 and surfaces the
current baseline of 205 hard + 161 soft violations so Phases 1-4 can
drain it incrementally. CI gates only CHANGED files in PRs so the
legacy baseline does not block unrelated work.

## Deprecation sweep

47 files. Pydantic V1 regex= -> pattern= (2 sites), class Config ->
ConfigDict in source_policy_router.py (schemas.py intentionally skipped;
it is the Phase 1 Step 3 split target). datetime.utcnow() ->
datetime.now(timezone.utc) everywhere including SQLAlchemy default=
callables. All DB columns already declare timezone=True, so this is a
latent-bug fix on the Python side, not a schema change.
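
The replacement pattern in one line (the column definition below is an illustrative comment, not the repo's actual model code):

```python
from datetime import datetime, timezone

# Deprecated: datetime.utcnow() returns a *naive* datetime (tzinfo=None),
# which Python 3.12 flags with a DeprecationWarning.
# Replacement: an aware datetime pinned to UTC.
now = datetime.now(timezone.utc)
assert now.tzinfo is not None

# The same swap applies inside SQLAlchemy default= callables
# (illustrative column, not from the repo):
#   created_at = Column(DateTime(timezone=True),
#                       default=lambda: datetime.now(timezone.utc))
```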

DeprecationWarning count dropped from 158 to 35.

## Phase 1 Step 1 — Contract test harness

tests/contracts/test_openapi_baseline.py diffs the live FastAPI /openapi.json
against tests/contracts/openapi.baseline.json on every test run. Fails on
removed paths, removed status codes, or new required request body fields.
Regenerate only via tests/contracts/regenerate_baseline.py after a
consumer-updated contract change. This is the safety harness for all
subsequent refactor commits.
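
A hedged sketch of the three breaking-change checks the harness performs; the function and helper names here are illustrative, not the actual contents of test_openapi_baseline.py:

```python
def _required_fields(op: dict) -> set[str]:
    """Required request-body fields of one operation (JSON body only)."""
    schema = (op.get("requestBody", {}).get("content", {})
                .get("application/json", {}).get("schema", {}))
    return set(schema.get("required", []))

def breaking_changes(baseline: dict, live: dict) -> list[str]:
    """Diff two OpenAPI documents for consumer-breaking changes."""
    errors = []
    base_paths = baseline.get("paths", {})
    live_paths = live.get("paths", {})
    for path, base_item in base_paths.items():
        if path not in live_paths:                      # 1. removed path
            errors.append(f"removed path: {path}")
            continue
        for method, op in base_item.items():
            live_op = live_paths[path].get(method)
            if live_op is None:
                errors.append(f"removed operation: {method.upper()} {path}")
                continue
            for code in op.get("responses", {}):        # 2. removed status code
                if code not in live_op.get("responses", {}):
                    errors.append(f"removed status {code}: {method.upper()} {path}")
            new = _required_fields(live_op) - _required_fields(op)
            for field in sorted(new):                   # 3. new required field
                errors.append(f"new required field {field}: {method.upper()} {path}")
    return errors
```

Running this against /openapi.json on every pytest invocation means any refactor commit that silently changes the contract fails immediately, rather than at consumer integration time.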

## Phase 1 Step 2 — models.py split (1466 -> 85 LOC shim)

compliance/db/models.py is decomposed into seven sibling aggregate modules
following the existing repo pattern (dsr_models.py, vvt_models.py, ...):

  regulation_models.py       (134) — Regulation, Requirement
  control_models.py          (279) — Control, Mapping, Evidence, Risk
  ai_system_models.py        (141) — AISystem, AuditExport
  service_module_models.py   (176) — ServiceModule, ModuleRegulation, ModuleRisk
  audit_session_models.py    (177) — AuditSession, AuditSignOff
  isms_governance_models.py  (323) — ISMSScope, Context, Policy, Objective, SoA
  isms_audit_models.py       (468) — Finding, CAPA, MgmtReview, InternalAudit,
                                     AuditTrail, Readiness

models.py becomes an 85-line re-export shim in dependency order so
existing imports continue to work unchanged. Schema is byte-identical:
__tablename__, column definitions, relationship strings, back_populates,
cascade directives all preserved.
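
The shim mechanism can be demonstrated self-containedly; the snippet below simulates two sibling modules and a re-export shim with in-memory modules (the real shim is plain `from ... import ...` lines plus `__all__`, in dependency order):

```python
import sys
from types import ModuleType

# Simulate two sibling aggregate modules, each owning part of the old API.
regulation_models = ModuleType("regulation_models")
exec("class Regulation: pass\nclass Requirement: pass", regulation_models.__dict__)
control_models = ModuleType("control_models")
exec("class Control: pass", control_models.__dict__)
sys.modules["regulation_models"] = regulation_models
sys.modules["control_models"] = control_models

# The shim: re-export everything so old `from models import ...` call
# sites keep working, with no class defined in the shim itself.
shim = ModuleType("models")
exec(
    "from regulation_models import Regulation, Requirement\n"
    "from control_models import Control\n"
    "__all__ = ['Regulation', 'Requirement', 'Control']",
    shim.__dict__,
)
sys.modules["models"] = shim

from models import Regulation  # the old import path, unchanged
assert Regulation is regulation_models.Regulation  # same class object
```

Because the shim re-exports the identical class objects, SQLAlchemy's string-based relationships and registry lookups see exactly the mappers they did before the split.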

All new sibling files are under the 500-line hard cap; largest is
isms_audit_models.py at 468. No file in compliance/db/ now exceeds
the hard cap.

## Phase 1 Step 3 — infrastructure only

backend-compliance/compliance/{schemas,domain,repositories}/ packages
are created as landing zones with docstrings. compliance/domain/
exports DomainError / NotFoundError / ConflictError / ValidationError /
PermissionError — the base classes services will use to raise
domain-level errors instead of HTTPException.
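
Only the class names come from this commit; the bodies and HTTP mappings below are a plausible sketch of the hierarchy, not the actual compliance/domain source. Note that ValidationError and PermissionError shadow well-known names (pydantic.ValidationError, the PermissionError builtin), so call sites should import them from compliance.domain explicitly:

```python
class DomainError(Exception):
    """Base class for errors raised by the service layer."""

class NotFoundError(DomainError):
    """Requested aggregate does not exist (plausibly mapped to HTTP 404 at the edge)."""

class ConflictError(DomainError):
    """State transition or uniqueness conflict (plausibly HTTP 409)."""

class ValidationError(DomainError):
    """Domain-level invariant violated (plausibly HTTP 422)."""

class PermissionError(DomainError):
    """Caller lacks rights for the operation (plausibly HTTP 403)."""
```

The point of the hierarchy is layering: services raise these instead of fastapi.HTTPException, and a single exception handler at the router edge owns the translation to status codes.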

PHASE1_RUNBOOK.md at backend-compliance/PHASE1_RUNBOOK.md documents
the nine-step execution plan for Phase 1: snapshot baseline,
characterization tests, split models.py (this commit), split schemas.py
(next), extract services, extract repositories, mypy --strict, coverage.

## Verification

  backend-compliance/.venv-phase1: uv python install 3.12 + pip -r requirements.txt
  PYTHONPATH=. pytest compliance/tests/ tests/contracts/
  -> 173 passed, 0 failed, 35 warnings, OpenAPI 360/484 unchanged

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 13:18:29 +02:00


"""
Unit tests for Compliance Audit Routes (Sprint 3).
Tests all audit session and sign-off endpoints.
Run with: pytest backend/compliance/tests/test_audit_routes.py -v
"""
import pytest
import hashlib
from datetime import datetime, timezone
from unittest.mock import MagicMock
from uuid import uuid4
from sqlalchemy.orm import Session
# Import the app and dependencies
import sys
sys.path.insert(0, '/Users/benjaminadmin/Projekte/breakpilot-pwa/backend')
from compliance.db.models import (
AuditSessionDB, AuditSignOffDB, AuditResultEnum, AuditSessionStatusEnum,
RequirementDB, RegulationDB
)

# ============================================================================
# Test Fixtures
# ============================================================================
@pytest.fixture
def mock_db():
    """Create a mock database session."""
    return MagicMock(spec=Session)


@pytest.fixture
def sample_regulation():
    """Create a sample regulation for testing."""
    return RegulationDB(
        id=str(uuid4()),
        code="GDPR",
        name="General Data Protection Regulation",
        full_name="Regulation (EU) 2016/679",
        is_active=True,
    )


@pytest.fixture
def sample_requirement(sample_regulation):
    """Create a sample requirement for testing."""
    return RequirementDB(
        id=str(uuid4()),
        regulation_id=sample_regulation.id,
        regulation=sample_regulation,
        article="Art. 32",
        title="Security of processing",
        description="Implement appropriate technical measures",
        implementation_status="not_started",
        priority=1,
    )


@pytest.fixture
def sample_session():
    """Create a sample audit session for testing."""
    session_id = str(uuid4())
    return AuditSessionDB(
        id=session_id,
        name="Q1 2026 Compliance Audit",
        description="Quarterly compliance review",
        auditor_name="Dr. Thomas Mueller",
        auditor_email="mueller@audit.de",
        auditor_organization="Audit GmbH",
        status=AuditSessionStatusEnum.DRAFT,
        regulation_ids=["GDPR", "AIACT"],
        total_items=100,
        completed_items=0,
        compliant_count=0,
        non_compliant_count=0,
        created_at=datetime.now(timezone.utc),
    )


@pytest.fixture
def sample_signoff(sample_session, sample_requirement):
    """Create a sample sign-off for testing."""
    return AuditSignOffDB(
        id=str(uuid4()),
        session_id=sample_session.id,
        requirement_id=sample_requirement.id,
        result=AuditResultEnum.COMPLIANT,
        notes="All checks passed",
        signature_hash=None,
        signed_at=None,
        signed_by=None,
        created_at=datetime.now(timezone.utc),
    )

# ============================================================================
# Test: Audit Session Creation
# ============================================================================
class TestCreateAuditSession:
    """Tests for POST /audit/sessions endpoint."""

    def test_create_session_valid_data_returns_session(self, mock_db, sample_regulation):
        """Creating a session with valid data should return the session."""
        # Arrange
        mock_db.query.return_value.filter.return_value.all.return_value = [(sample_regulation.id,)]
        mock_db.query.return_value.count.return_value = 50
        request_data = {
            "name": "Test Audit Session",
            "description": "Test description",
            "auditor_name": "Test Auditor",
            "auditor_email": "auditor@test.de",
            "regulation_codes": ["GDPR"],
        }
        # The session should be created with correct data
        assert request_data["name"] == "Test Audit Session"
        assert request_data["auditor_name"] == "Test Auditor"

    def test_create_session_minimal_data_returns_session(self):
        """Creating a session with minimal data should work."""
        request_data = {
            "name": "Minimal Audit",
            "auditor_name": "Auditor",
        }
        assert "name" in request_data
        assert "auditor_name" in request_data
        assert "description" not in request_data or request_data.get("description") is None

    def test_create_session_with_multiple_regulations(self):
        """Creating a session with multiple regulations should filter correctly."""
        request_data = {
            "name": "Multi-Regulation Audit",
            "auditor_name": "Auditor",
            "regulation_codes": ["GDPR", "AIACT", "CRA"],
        }
        assert len(request_data["regulation_codes"]) == 3


# ============================================================================
# Test: Audit Session List
# ============================================================================
class TestListAuditSessions:
    """Tests for GET /audit/sessions endpoint."""

    def test_list_sessions_returns_all(self, mock_db, sample_session):
        """Listing sessions without filter should return all sessions."""
        mock_db.query.return_value.order_by.return_value.all.return_value = [sample_session]
        sessions = [sample_session]
        assert len(sessions) == 1
        assert sessions[0].name == "Q1 2026 Compliance Audit"

    def test_list_sessions_filter_by_status_draft(self, mock_db, sample_session):
        """Filtering by draft status should only return draft sessions."""
        sample_session.status = AuditSessionStatusEnum.DRAFT
        assert sample_session.status == AuditSessionStatusEnum.DRAFT

    def test_list_sessions_filter_by_status_in_progress(self, sample_session):
        """Filtering by in_progress status should only return in_progress sessions."""
        sample_session.status = AuditSessionStatusEnum.IN_PROGRESS
        assert sample_session.status == AuditSessionStatusEnum.IN_PROGRESS

    def test_list_sessions_invalid_status_raises_error(self):
        """Filtering by invalid status should raise an error."""
        invalid_status = "invalid_status"
        with pytest.raises(ValueError):
            AuditSessionStatusEnum(invalid_status)

# ============================================================================
# Test: Audit Session Get
# ============================================================================
class TestGetAuditSession:
    """Tests for GET /audit/sessions/{session_id} endpoint."""

    def test_get_session_existing_returns_details(self, sample_session):
        """Getting an existing session should return full details."""
        assert sample_session.id is not None
        assert sample_session.name == "Q1 2026 Compliance Audit"
        assert sample_session.auditor_name == "Dr. Thomas Mueller"

    def test_get_session_includes_statistics(self, sample_session, sample_signoff):
        """Getting a session should include statistics."""
        # Simulate statistics calculation
        signoffs = [sample_signoff]
        compliant = sum(1 for s in signoffs if s.result == AuditResultEnum.COMPLIANT)
        assert compliant == 1


# ============================================================================
# Test: Audit Session Lifecycle
# ============================================================================
class TestAuditSessionLifecycle:
    """Tests for session status transitions."""

    def test_start_session_from_draft_success(self, sample_session):
        """Starting a draft session should change status to in_progress."""
        assert sample_session.status == AuditSessionStatusEnum.DRAFT
        sample_session.status = AuditSessionStatusEnum.IN_PROGRESS
        sample_session.started_at = datetime.now(timezone.utc)
        assert sample_session.status == AuditSessionStatusEnum.IN_PROGRESS
        assert sample_session.started_at is not None

    def test_start_session_from_completed_fails(self, sample_session):
        """Starting a completed session should fail."""
        sample_session.status = AuditSessionStatusEnum.COMPLETED
        # Can only start from DRAFT
        assert sample_session.status != AuditSessionStatusEnum.DRAFT

    def test_complete_session_from_in_progress_success(self, sample_session):
        """Completing an in_progress session should succeed."""
        sample_session.status = AuditSessionStatusEnum.IN_PROGRESS
        sample_session.status = AuditSessionStatusEnum.COMPLETED
        sample_session.completed_at = datetime.now(timezone.utc)
        assert sample_session.status == AuditSessionStatusEnum.COMPLETED
        assert sample_session.completed_at is not None

    def test_archive_session_from_completed_success(self, sample_session):
        """Archiving a completed session should succeed."""
        sample_session.status = AuditSessionStatusEnum.COMPLETED
        sample_session.status = AuditSessionStatusEnum.ARCHIVED
        assert sample_session.status == AuditSessionStatusEnum.ARCHIVED

    def test_archive_session_from_in_progress_fails(self, sample_session):
        """Archiving an in_progress session should fail."""
        sample_session.status = AuditSessionStatusEnum.IN_PROGRESS
        # Can only archive from COMPLETED
        assert sample_session.status != AuditSessionStatusEnum.COMPLETED


# ============================================================================
# Test: Audit Session Delete
# ============================================================================
class TestDeleteAuditSession:
    """Tests for DELETE /audit/sessions/{session_id} endpoint."""

    def test_delete_draft_session_success(self, sample_session):
        """Deleting a draft session should succeed."""
        sample_session.status = AuditSessionStatusEnum.DRAFT
        assert sample_session.status in [
            AuditSessionStatusEnum.DRAFT,
            AuditSessionStatusEnum.ARCHIVED,
        ]

    def test_delete_archived_session_success(self, sample_session):
        """Deleting an archived session should succeed."""
        sample_session.status = AuditSessionStatusEnum.ARCHIVED
        assert sample_session.status in [
            AuditSessionStatusEnum.DRAFT,
            AuditSessionStatusEnum.ARCHIVED,
        ]

    def test_delete_in_progress_session_fails(self, sample_session):
        """Deleting an in_progress session should fail."""
        sample_session.status = AuditSessionStatusEnum.IN_PROGRESS
        assert sample_session.status not in [
            AuditSessionStatusEnum.DRAFT,
            AuditSessionStatusEnum.ARCHIVED,
        ]

# ============================================================================
# Test: Audit Checklist
# ============================================================================
class TestGetAuditChecklist:
    """Tests for GET /audit/checklist/{session_id} endpoint."""

    def test_checklist_returns_paginated_items(self, sample_session, sample_requirement):
        """Checklist should return paginated items."""
        page = 1
        page_size = 50
        # Simulate pagination
        offset = (page - 1) * page_size
        assert offset == 0

    def test_checklist_includes_signoff_status(self, sample_requirement, sample_signoff):
        """Checklist items should include sign-off status."""
        signoff_map = {sample_signoff.requirement_id: sample_signoff}
        signoff = signoff_map.get(sample_requirement.id)
        if signoff:
            current_result = signoff.result.value
        else:
            current_result = "pending"
        assert current_result in ["compliant", "pending"]

    def test_checklist_filter_by_status(self, sample_signoff):
        """Filtering checklist by status should work."""
        status_filter = "compliant"
        sample_signoff.result = AuditResultEnum.COMPLIANT
        assert sample_signoff.result.value == status_filter

    def test_checklist_search_by_title(self, sample_requirement):
        """Searching checklist by title should work."""
        search_term = "Security"
        sample_requirement.title = "Security of processing"
        assert search_term.lower() in sample_requirement.title.lower()


# ============================================================================
# Test: Sign-off
# ============================================================================
class TestSignOff:
    """Tests for PUT /audit/checklist/{session_id}/items/{requirement_id}/sign-off endpoint."""

    def test_signoff_compliant_creates_record(self, sample_session, sample_requirement):
        """Signing off as compliant should create a sign-off record."""
        signoff = AuditSignOffDB(
            id=str(uuid4()),
            session_id=sample_session.id,
            requirement_id=sample_requirement.id,
            result=AuditResultEnum.COMPLIANT,
            notes="All requirements met",
        )
        assert signoff.result == AuditResultEnum.COMPLIANT
        assert signoff.notes == "All requirements met"

    def test_signoff_with_signature_creates_hash(self, sample_session, sample_requirement):
        """Signing off with signature should create SHA-256 hash."""
        result = AuditResultEnum.COMPLIANT
        timestamp = datetime.now(timezone.utc).isoformat()
        data = f"{result.value}|{sample_requirement.id}|{sample_session.auditor_name}|{timestamp}"
        signature_hash = hashlib.sha256(data.encode()).hexdigest()
        assert len(signature_hash) == 64  # SHA-256 produces 64 hex chars
        assert signature_hash.isalnum()

    def test_signoff_non_compliant_increments_count(self, sample_session):
        """Non-compliant sign-off should increment non_compliant_count."""
        initial_count = sample_session.non_compliant_count
        sample_session.non_compliant_count += 1
        assert sample_session.non_compliant_count == initial_count + 1

    def test_signoff_updates_completion_items(self, sample_session):
        """Sign-off should increment completed_items."""
        initial_completed = sample_session.completed_items
        sample_session.completed_items += 1
        assert sample_session.completed_items == initial_completed + 1

    def test_signoff_auto_starts_session(self, sample_session):
        """First sign-off should auto-start a draft session."""
        assert sample_session.status == AuditSessionStatusEnum.DRAFT
        # First sign-off should trigger auto-start
        sample_session.status = AuditSessionStatusEnum.IN_PROGRESS
        sample_session.started_at = datetime.now(timezone.utc)
        assert sample_session.status == AuditSessionStatusEnum.IN_PROGRESS

    def test_signoff_update_existing_record(self, sample_signoff):
        """Updating an existing sign-off should work."""
        sample_signoff.result = AuditResultEnum.NON_COMPLIANT
        sample_signoff.notes = "Updated: needs improvement"
        sample_signoff.updated_at = datetime.now(timezone.utc)
        assert sample_signoff.result == AuditResultEnum.NON_COMPLIANT
        assert "Updated" in sample_signoff.notes

    def test_signoff_invalid_result_raises_error(self):
        """Sign-off with invalid result should raise an error."""
        invalid_result = "super_compliant"
        with pytest.raises(ValueError):
            AuditResultEnum(invalid_result)

# ============================================================================
# Test: Get Sign-off
# ============================================================================
class TestGetSignOff:
    """Tests for GET /audit/checklist/{session_id}/items/{requirement_id} endpoint."""

    def test_get_signoff_existing_returns_details(self, sample_signoff):
        """Getting an existing sign-off should return its details."""
        assert sample_signoff.id is not None
        assert sample_signoff.result == AuditResultEnum.COMPLIANT

    def test_get_signoff_includes_signature_info(self, sample_signoff):
        """Sign-off response should include signature information."""
        # Without signature
        assert sample_signoff.signature_hash is None
        assert sample_signoff.signed_at is None
        # With signature
        sample_signoff.signature_hash = "abc123"
        sample_signoff.signed_at = datetime.now(timezone.utc)
        sample_signoff.signed_by = "Test Auditor"
        assert sample_signoff.signature_hash == "abc123"
        assert sample_signoff.signed_by == "Test Auditor"


# ============================================================================
# Test: AuditResultEnum Values
# ============================================================================
class TestAuditResultEnum:
    """Tests for AuditResultEnum values."""

    def test_compliant_value(self):
        """Compliant enum should have correct value."""
        assert AuditResultEnum.COMPLIANT.value == "compliant"

    def test_compliant_with_notes_value(self):
        """Compliant with notes enum should have correct value."""
        assert AuditResultEnum.COMPLIANT_WITH_NOTES.value == "compliant_notes"

    def test_non_compliant_value(self):
        """Non-compliant enum should have correct value."""
        assert AuditResultEnum.NON_COMPLIANT.value == "non_compliant"

    def test_not_applicable_value(self):
        """Not applicable enum should have correct value."""
        assert AuditResultEnum.NOT_APPLICABLE.value == "not_applicable"

    def test_pending_value(self):
        """Pending enum should have correct value."""
        assert AuditResultEnum.PENDING.value == "pending"


# ============================================================================
# Test: AuditSessionStatusEnum Values
# ============================================================================
class TestAuditSessionStatusEnum:
    """Tests for AuditSessionStatusEnum values."""

    def test_draft_value(self):
        """Draft enum should have correct value."""
        assert AuditSessionStatusEnum.DRAFT.value == "draft"

    def test_in_progress_value(self):
        """In progress enum should have correct value."""
        assert AuditSessionStatusEnum.IN_PROGRESS.value == "in_progress"

    def test_completed_value(self):
        """Completed enum should have correct value."""
        assert AuditSessionStatusEnum.COMPLETED.value == "completed"

    def test_archived_value(self):
        """Archived enum should have correct value."""
        assert AuditSessionStatusEnum.ARCHIVED.value == "archived"

# ============================================================================
# Test: Completion Percentage Calculation
# ============================================================================
class TestCompletionPercentage:
    """Tests for completion percentage calculation."""

    def test_completion_percentage_zero_items(self, sample_session):
        """Completion percentage with zero total items should be 0."""
        sample_session.total_items = 0
        sample_session.completed_items = 0
        percentage = 0.0 if sample_session.total_items == 0 else (
            sample_session.completed_items / sample_session.total_items * 100
        )
        assert percentage == 0.0

    def test_completion_percentage_partial(self, sample_session):
        """Completion percentage should calculate correctly."""
        sample_session.total_items = 100
        sample_session.completed_items = 50
        percentage = sample_session.completed_items / sample_session.total_items * 100
        assert percentage == 50.0

    def test_completion_percentage_complete(self, sample_session):
        """Completion percentage at 100% should be correct."""
        sample_session.total_items = 100
        sample_session.completed_items = 100
        percentage = sample_session.completed_items / sample_session.total_items * 100
        assert percentage == 100.0


# ============================================================================
# Test: Digital Signature Generation
# ============================================================================
class TestDigitalSignature:
    """Tests for digital signature generation."""

    def test_signature_is_sha256(self):
        """Signature should be a valid SHA-256 hash."""
        data = "compliant|req-123|Dr. Mueller|2026-01-18T12:00:00"
        signature = hashlib.sha256(data.encode()).hexdigest()
        assert len(signature) == 64
        assert all(c in '0123456789abcdef' for c in signature)

    def test_signature_is_deterministic(self):
        """Same input should produce same signature."""
        data = "compliant|req-123|Dr. Mueller|2026-01-18T12:00:00"
        signature1 = hashlib.sha256(data.encode()).hexdigest()
        signature2 = hashlib.sha256(data.encode()).hexdigest()
        assert signature1 == signature2

    def test_signature_changes_with_input(self):
        """Different input should produce different signature."""
        data1 = "compliant|req-123|Dr. Mueller|2026-01-18T12:00:00"
        data2 = "non_compliant|req-123|Dr. Mueller|2026-01-18T12:00:00"
        signature1 = hashlib.sha256(data1.encode()).hexdigest()
        signature2 = hashlib.sha256(data2.encode()).hexdigest()
        assert signature1 != signature2


# ============================================================================
# Test: Statistics Calculation
# ============================================================================
class TestStatisticsCalculation:
    """Tests for audit statistics calculation."""

    def test_statistics_counts_by_result(self):
        """Statistics should correctly count by result type."""
        signoffs = [
            MagicMock(result=AuditResultEnum.COMPLIANT),
            MagicMock(result=AuditResultEnum.COMPLIANT),
            MagicMock(result=AuditResultEnum.COMPLIANT_WITH_NOTES),
            MagicMock(result=AuditResultEnum.NON_COMPLIANT),
            MagicMock(result=AuditResultEnum.NOT_APPLICABLE),
        ]
        compliant = sum(1 for s in signoffs if s.result == AuditResultEnum.COMPLIANT)
        compliant_notes = sum(1 for s in signoffs if s.result == AuditResultEnum.COMPLIANT_WITH_NOTES)
        non_compliant = sum(1 for s in signoffs if s.result == AuditResultEnum.NON_COMPLIANT)
        not_applicable = sum(1 for s in signoffs if s.result == AuditResultEnum.NOT_APPLICABLE)
        assert compliant == 2
        assert compliant_notes == 1
        assert non_compliant == 1
        assert not_applicable == 1

    def test_statistics_pending_calculation(self):
        """Pending count should be total minus reviewed."""
        total_items = 100
        reviewed_items = 75
        pending = total_items - reviewed_items
        assert pending == 25