breakpilot-compliance/backend-compliance/compliance/tests/test_auto_risk_updater.py
Sharang Parnerkar 3320ef94fc refactor: phase 0 guardrails + phase 1 step 2 (models.py split)
Squash of branch refactor/phase0-guardrails-and-models-split — 4 commits,
81 files, 173/173 pytest green, OpenAPI contract preserved (360 paths /
484 operations).

## Phase 0 — Architecture guardrails

Three defense-in-depth layers to keep the architecture rules enforced
regardless of who opens Claude Code in this repo:

  1. .claude/settings.json PreToolUse hook on Write/Edit blocks any file
     that would exceed the 500-line hard cap. Auto-loads in every Claude
     session in this repo.
  2. scripts/githooks/pre-commit (install via scripts/install-hooks.sh)
     enforces the LOC cap locally, freezes migrations/ without
     [migration-approved], and protects guardrail files without
     [guardrail-change].
  3. .gitea/workflows/ci.yaml gains loc-budget + guardrail-integrity +
     sbom-scan (syft+grype) jobs, adds mypy --strict for the new Python
     packages (compliance/{services,repositories,domain,schemas}), and
     tsc --noEmit for admin-compliance + developer-portal.
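All three layers make the same core decision: given the content a file would have after a write, block if it would break the hard cap. A minimal sketch of that shared check (illustrative only — `would_exceed_cap` and its signature are assumptions, not the actual hook code, and the real layers operate on Claude tool calls, staged files, and CI diffs respectively):

```python
# Shared guardrail decision, sketched: deny the Write/Edit, commit, or CI job
# when the resulting file would exceed the 500-line hard cap.
HARD_CAP = 500

def would_exceed_cap(new_content: str, cap: int = HARD_CAP) -> bool:
    """True if writing this content would break the hard LOC cap."""
    return len(new_content.splitlines()) > cap
```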

Per-language conventions documented in AGENTS.python.md, AGENTS.go.md,
AGENTS.typescript.md at the repo root — layering, tooling, and explicit
"what you may NOT do" lists. Root CLAUDE.md is prepended with the six
non-negotiable rules. Each of the 10 services gets a README.md.

scripts/check-loc.sh enforces soft 300 / hard 500 and surfaces the
current baseline of 205 hard + 161 soft violations so Phases 1-4 can
drain it incrementally. CI gates only CHANGED files in PRs so the
legacy baseline does not block unrelated work.
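The soft/hard bucketing check-loc.sh reports can be sketched as follows (a simplified illustration, not the script itself — the real script also diffs against the committed baseline and restricts PR gating to changed files):

```python
SOFT_CAP, HARD_CAP = 300, 500

def classify(line_counts: dict[str, int]) -> tuple[list[str], list[str]]:
    """Split files into (hard, soft) cap violators by line count."""
    hard = [p for p, n in line_counts.items() if n > HARD_CAP]
    soft = [p for p, n in line_counts.items() if SOFT_CAP < n <= HARD_CAP]
    return hard, soft
```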

## Deprecation sweep

47 files. Pydantic V1 regex= -> pattern= (2 sites), class Config ->
ConfigDict in source_policy_router.py (schemas.py intentionally skipped;
it is the Phase 1 Step 3 split target). datetime.utcnow() ->
datetime.now(timezone.utc) everywhere including SQLAlchemy default=
callables. All DB columns already declare timezone=True, so this is a
latent-bug fix on the Python side, not a schema change.
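The mechanical shape of the utcnow() migration, including the SQLAlchemy default= case:

```python
from datetime import datetime, timezone

# Before (deprecated, returns a naive datetime):
#   created_at = datetime.utcnow()
# After (timezone-aware, matching the timezone=True columns):
created_at = datetime.now(timezone.utc)
assert created_at.tzinfo is timezone.utc

# In SQLAlchemy column defaults the replacement must remain a callable,
# otherwise the timestamp is frozen at import time:
#   Column(DateTime(timezone=True), default=lambda: datetime.now(timezone.utc))
```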

DeprecationWarning count dropped from 158 to 35.

## Phase 1 Step 1 — Contract test harness

tests/contracts/test_openapi_baseline.py diffs the live FastAPI /openapi.json
against tests/contracts/openapi.baseline.json on every test run. Fails on
removed paths, removed status codes, or new required request body fields.
Regenerate only via tests/contracts/regenerate_baseline.py after a
consumer-updated contract change. This is the safety harness for all
subsequent refactor commits.
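The breaking-change detection reduces to a structural diff along these lines (a simplified sketch under assumed OpenAPI dict shapes; the actual harness also inspects request-body schemas for new required fields):

```python
def breaking_changes(baseline: dict, live: dict) -> list[str]:
    """Report removed paths and removed response status codes (simplified)."""
    problems = []
    live_paths = live.get("paths", {})
    for path, ops in baseline.get("paths", {}).items():
        if path not in live_paths:
            problems.append(f"removed path: {path}")
            continue
        for method, op in ops.items():
            live_codes = live_paths[path].get(method, {}).get("responses", {})
            for code in op.get("responses", {}):
                if code not in live_codes:
                    problems.append(f"removed {code} on {method.upper()} {path}")
    return problems
```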

## Phase 1 Step 2 — models.py split (1466 -> 85 LOC shim)

compliance/db/models.py is decomposed into seven sibling aggregate modules
following the existing repo pattern (dsr_models.py, vvt_models.py, ...):

  regulation_models.py       (134) — Regulation, Requirement
  control_models.py          (279) — Control, Mapping, Evidence, Risk
  ai_system_models.py        (141) — AISystem, AuditExport
  service_module_models.py   (176) — ServiceModule, ModuleRegulation, ModuleRisk
  audit_session_models.py    (177) — AuditSession, AuditSignOff
  isms_governance_models.py  (323) — ISMSScope, Context, Policy, Objective, SoA
  isms_audit_models.py       (468) — Finding, CAPA, MgmtReview, InternalAudit,
                                     AuditTrail, Readiness

models.py becomes an 85-line re-export shim in dependency order so
existing imports continue to work unchanged. Schema is byte-identical:
__tablename__, column definitions, relationship strings, back_populates,
cascade directives all preserved.

All new sibling files are under the 500-line hard cap; largest is
isms_audit_models.py at 468. No file in compliance/db/ now exceeds
the hard cap.

## Phase 1 Step 3 — infrastructure only

backend-compliance/compliance/{schemas,domain,repositories}/ packages
are created as landing zones with docstrings. compliance/domain/
exports DomainError / NotFoundError / ConflictError / ValidationError /
PermissionError — the base classes services will use to raise
domain-level errors instead of HTTPException.
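A minimal sketch of that hierarchy (class names come from the export list above; the bodies and docstrings are assumptions — note that exporting a name spelled PermissionError shadows the builtin inside the package):

```python
class DomainError(Exception):
    """Base for errors services raise instead of HTTPException."""

class NotFoundError(DomainError):
    """Requested aggregate does not exist."""

class ConflictError(DomainError):
    """State conflict, e.g. duplicate or stale update."""

class ValidationError(DomainError):
    """Domain-rule violation in the input."""

class PermissionError(DomainError):  # shadows the builtin, per the export list
    """Caller lacks rights for this operation."""

# Routers can then map DomainError subclasses to HTTP codes in one place,
# keeping the service layer framework-free.
```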

PHASE1_RUNBOOK.md at backend-compliance/PHASE1_RUNBOOK.md documents
the nine-step execution plan for Phase 1: snapshot baseline,
characterization tests, split models.py (this commit), split schemas.py
(next), extract services, extract repositories, mypy --strict, coverage.

## Verification

  backend-compliance/.venv-phase1: uv python install 3.12 + pip install -r requirements.txt
  PYTHONPATH=. pytest compliance/tests/ tests/contracts/
  -> 173 passed, 0 failed, 35 warnings, OpenAPI 360/484 unchanged

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 13:18:29 +02:00


"""
Tests for the AutoRiskUpdater Service.
Sprint 6: CI/CD Evidence Collection & Automatic Risk Updates (2026-01-18)
"""
from datetime import datetime, timezone
from unittest.mock import MagicMock
from ..services.auto_risk_updater import (
    AutoRiskUpdater,
    ScanType,
    FindingSeverity,
    ScanResult,
    RiskUpdateResult,
    CONTROL_SCAN_MAPPING,
)
from ..db.models import (
    ControlStatusEnum,
)


class TestDetermineControlStatus:
    """Tests for _determine_control_status method."""

    def test_critical_findings_return_fail(self):
        """Any critical finding should result in FAIL status."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        findings = {"critical": 1, "high": 0, "medium": 0, "low": 0}
        result = updater._determine_control_status(findings)
        assert result == ControlStatusEnum.FAIL.value

    def test_multiple_critical_findings_return_fail(self):
        """Multiple critical findings should result in FAIL status."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        findings = {"critical": 5, "high": 2, "medium": 10, "low": 50}
        result = updater._determine_control_status(findings)
        assert result == ControlStatusEnum.FAIL.value

    def test_more_than_5_high_findings_return_fail(self):
        """More than 5 HIGH findings should result in FAIL status."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        findings = {"critical": 0, "high": 6, "medium": 0, "low": 0}
        result = updater._determine_control_status(findings)
        assert result == ControlStatusEnum.FAIL.value

    def test_exactly_5_high_findings_return_partial(self):
        """Exactly 5 HIGH findings should result in PARTIAL status."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        findings = {"critical": 0, "high": 5, "medium": 0, "low": 0}
        result = updater._determine_control_status(findings)
        assert result == ControlStatusEnum.PARTIAL.value

    def test_1_to_5_high_findings_return_partial(self):
        """1-5 HIGH findings should result in PARTIAL status."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        for high_count in [1, 2, 3, 4, 5]:
            findings = {"critical": 0, "high": high_count, "medium": 0, "low": 0}
            result = updater._determine_control_status(findings)
            assert result == ControlStatusEnum.PARTIAL.value, f"Failed for {high_count} HIGH findings"

    def test_more_than_10_medium_findings_return_partial(self):
        """More than 10 MEDIUM findings should result in PARTIAL status."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        findings = {"critical": 0, "high": 0, "medium": 11, "low": 0}
        result = updater._determine_control_status(findings)
        assert result == ControlStatusEnum.PARTIAL.value

    def test_only_medium_and_low_findings_return_pass(self):
        """Only MEDIUM (<=10) and LOW findings should result in PASS status."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        findings = {"critical": 0, "high": 0, "medium": 5, "low": 100}
        result = updater._determine_control_status(findings)
        assert result == ControlStatusEnum.PASS.value

    def test_no_findings_return_pass(self):
        """No findings should result in PASS status."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        findings = {"critical": 0, "high": 0, "medium": 0, "low": 0}
        result = updater._determine_control_status(findings)
        assert result == ControlStatusEnum.PASS.value

    def test_empty_findings_return_pass(self):
        """Empty findings dict should result in PASS status."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        findings = {}
        result = updater._determine_control_status(findings)
        assert result == ControlStatusEnum.PASS.value


class TestGenerateStatusNotes:
    """Tests for _generate_status_notes method."""

    def test_notes_include_tool_name(self):
        """Status notes should include the scan tool name."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        scan_result = ScanResult(
            scan_type=ScanType.SAST,
            tool="Semgrep",
            timestamp=datetime(2026, 1, 18, 14, 30),
            commit_sha="abc123",
            branch="main",
            control_id="SDLC-001",
            findings={"critical": 1, "high": 2, "medium": 0, "low": 0},
        )
        notes = updater._generate_status_notes(scan_result)
        assert "Semgrep" in notes
        assert "1 CRITICAL" in notes
        assert "2 HIGH" in notes

    def test_notes_include_timestamp(self):
        """Status notes should include scan timestamp."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        scan_result = ScanResult(
            scan_type=ScanType.DEPENDENCY,
            tool="Trivy",
            timestamp=datetime(2026, 1, 18, 10, 0),
            commit_sha="def456",
            branch="develop",
            control_id="SDLC-002",
            findings={"critical": 0, "high": 3, "medium": 5, "low": 10},
        )
        notes = updater._generate_status_notes(scan_result)
        assert "2026-01-18 10:00" in notes

    def test_notes_for_no_findings(self):
        """Status notes for no findings should indicate clean scan."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        scan_result = ScanResult(
            scan_type=ScanType.SECRET,
            tool="Gitleaks",
            timestamp=datetime(2026, 1, 18, 12, 0),
            commit_sha="ghi789",
            branch="main",
            control_id="SDLC-003",
            findings={"critical": 0, "high": 0, "medium": 0, "low": 0},
        )
        notes = updater._generate_status_notes(scan_result)
        assert "No significant findings" in notes


class TestGenerateAlerts:
    """Tests for _generate_alerts method."""

    def test_alert_for_critical_findings(self):
        """Critical findings should generate an alert."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        scan_result = ScanResult(
            scan_type=ScanType.DEPENDENCY,
            tool="Trivy",
            timestamp=datetime.now(timezone.utc),
            commit_sha="abc123",
            branch="main",
            control_id="SDLC-002",
            findings={"critical": 2, "high": 0, "medium": 0, "low": 0},
        )
        alerts = updater._generate_alerts(scan_result, ControlStatusEnum.FAIL.value)
        assert len(alerts) >= 1
        assert any("CRITICAL" in alert for alert in alerts)
        assert any("2 critical" in alert.lower() for alert in alerts)

    def test_alert_for_fail_status(self):
        """Control status change to FAIL should generate an alert."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        scan_result = ScanResult(
            scan_type=ScanType.SAST,
            tool="Semgrep",
            timestamp=datetime.now(timezone.utc),
            commit_sha="def456",
            branch="main",
            control_id="SDLC-001",
            findings={"critical": 0, "high": 10, "medium": 0, "low": 0},
        )
        alerts = updater._generate_alerts(scan_result, ControlStatusEnum.FAIL.value)
        assert any("FAIL" in alert for alert in alerts)

    def test_alert_for_many_high_findings(self):
        """More than 10 HIGH findings should generate an alert."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        scan_result = ScanResult(
            scan_type=ScanType.CONTAINER,
            tool="Trivy",
            timestamp=datetime.now(timezone.utc),
            commit_sha="ghi789",
            branch="main",
            control_id="SDLC-006",
            findings={"critical": 0, "high": 15, "medium": 0, "low": 0},
        )
        alerts = updater._generate_alerts(scan_result, ControlStatusEnum.FAIL.value)
        assert any("HIGH" in alert and "15" in alert for alert in alerts)

    def test_no_alert_for_pass_with_low_findings(self):
        """No alert should be generated for PASS status with only low findings."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        scan_result = ScanResult(
            scan_type=ScanType.SAST,
            tool="Semgrep",
            timestamp=datetime.now(timezone.utc),
            commit_sha="jkl012",
            branch="main",
            control_id="SDLC-001",
            findings={"critical": 0, "high": 0, "medium": 5, "low": 20},
        )
        alerts = updater._generate_alerts(scan_result, ControlStatusEnum.PASS.value)
        assert len(alerts) == 0


class TestControlScanMapping:
    """Tests for CONTROL_SCAN_MAPPING constant."""

    def test_sdlc_001_maps_to_sast(self):
        """SDLC-001 should map to SAST scan type."""
        assert CONTROL_SCAN_MAPPING["SDLC-001"] == ScanType.SAST

    def test_sdlc_002_maps_to_dependency(self):
        """SDLC-002 should map to DEPENDENCY scan type."""
        assert CONTROL_SCAN_MAPPING["SDLC-002"] == ScanType.DEPENDENCY

    def test_sdlc_003_maps_to_secret(self):
        """SDLC-003 should map to SECRET scan type."""
        assert CONTROL_SCAN_MAPPING["SDLC-003"] == ScanType.SECRET

    def test_sdlc_006_maps_to_container(self):
        """SDLC-006 should map to CONTAINER scan type."""
        assert CONTROL_SCAN_MAPPING["SDLC-006"] == ScanType.CONTAINER

    def test_cra_001_maps_to_sbom(self):
        """CRA-001 should map to SBOM scan type."""
        assert CONTROL_SCAN_MAPPING["CRA-001"] == ScanType.SBOM


class TestProcessEvidenceCollectRequest:
    """Tests for process_evidence_collect_request method."""

    def test_parses_iso_timestamp(self):
        """Should correctly parse ISO format timestamps."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        # Mock the control repo to return None (control not found)
        updater.control_repo.get_by_control_id = MagicMock(return_value=None)
        result = updater.process_evidence_collect_request(
            tool="Semgrep",
            control_id="SDLC-001",
            evidence_type="ci_semgrep",
            timestamp="2026-01-18T14:30:00Z",
            commit_sha="abc123",
            findings={"critical": 0, "high": 0, "medium": 0, "low": 0},
        )
        # Control not found, so control_updated should be False
        assert result.control_updated is False

    def test_handles_invalid_timestamp(self):
        """Should handle invalid timestamps gracefully."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        updater.control_repo.get_by_control_id = MagicMock(return_value=None)
        # Should not raise exception
        result = updater.process_evidence_collect_request(
            tool="Trivy",
            control_id="SDLC-002",
            evidence_type="ci_trivy",
            timestamp="invalid-timestamp",
            commit_sha="def456",
            findings={"critical": 0, "high": 0, "medium": 0, "low": 0},
        )
        assert result is not None

    def test_control_not_found_returns_result(self):
        """Should return appropriate result when control is not found."""
        db = MagicMock()
        updater = AutoRiskUpdater(db)
        updater.control_repo.get_by_control_id = MagicMock(return_value=None)
        result = updater.process_evidence_collect_request(
            tool="Gitleaks",
            control_id="UNKNOWN-001",
            evidence_type="ci_gitleaks",
            timestamp="2026-01-18T10:00:00Z",
            commit_sha="ghi789",
            findings={"critical": 0, "high": 0, "medium": 0, "low": 0},
        )
        assert result.control_id == "UNKNOWN-001"
        assert result.control_updated is False
        assert "not found" in result.message


class TestScanResult:
    """Tests for ScanResult dataclass."""

    def test_scan_result_creation(self):
        """Should create ScanResult with all required fields."""
        result = ScanResult(
            scan_type=ScanType.SAST,
            tool="Semgrep",
            timestamp=datetime(2026, 1, 18, 14, 0),
            commit_sha="abc123def456",
            branch="main",
            control_id="SDLC-001",
            findings={"critical": 0, "high": 2, "medium": 5, "low": 10},
        )
        assert result.scan_type == ScanType.SAST
        assert result.tool == "Semgrep"
        assert result.control_id == "SDLC-001"
        assert result.findings["high"] == 2

    def test_scan_result_optional_fields(self):
        """Should handle optional fields correctly."""
        result = ScanResult(
            scan_type=ScanType.DEPENDENCY,
            tool="Trivy",
            timestamp=datetime.now(timezone.utc),
            commit_sha="xyz789",
            branch="develop",
            control_id="SDLC-002",
            findings={"critical": 1},
            raw_report={"vulnerabilities": []},
            ci_job_id="github-actions-12345",
        )
        assert result.raw_report is not None
        assert result.ci_job_id == "github-actions-12345"


class TestRiskUpdateResult:
    """Tests for RiskUpdateResult dataclass."""

    def test_risk_update_result_creation(self):
        """Should create RiskUpdateResult with all fields."""
        result = RiskUpdateResult(
            control_id="SDLC-001",
            control_updated=True,
            old_status="pass",
            new_status="fail",
            evidence_created=True,
            evidence_id="ev-12345",
            risks_affected=["RISK-001", "RISK-002"],
            alerts_generated=["Critical vulnerability found"],
            message="Processed successfully",
        )
        assert result.control_updated is True
        assert result.old_status == "pass"
        assert result.new_status == "fail"
        assert len(result.risks_affected) == 2
        assert len(result.alerts_generated) == 1


class TestFindingSeverity:
    """Tests for FindingSeverity enum."""

    def test_severity_levels(self):
        """Should have all expected severity levels."""
        assert FindingSeverity.CRITICAL.value == "critical"
        assert FindingSeverity.HIGH.value == "high"
        assert FindingSeverity.MEDIUM.value == "medium"
        assert FindingSeverity.LOW.value == "low"
        assert FindingSeverity.INFO.value == "info"


class TestScanType:
    """Tests for ScanType enum."""

    def test_scan_types(self):
        """Should have all expected scan types."""
        assert ScanType.SAST.value == "sast"
        assert ScanType.DEPENDENCY.value == "dependency"
        assert ScanType.SECRET.value == "secret"
        assert ScanType.CONTAINER.value == "container"
        assert ScanType.SBOM.value == "sbom"