17 Commits

Author SHA1 Message Date
Sharang Parnerkar
482e8574ad refactor(backend/db): split repository.py + isms_repository.py per-aggregate
Phase 1 Step 5 of PHASE1_RUNBOOK.md.

compliance/db/repository.py (1547 LOC) decomposed into seven sibling
per-aggregate repository modules:

  regulation_repository.py     (268) — Regulation + Requirement
  control_repository.py        (291) — Control + ControlMapping
  evidence_repository.py       (143)
  risk_repository.py           (148)
  audit_export_repository.py   (110)
  service_module_repository.py (247)
  audit_session_repository.py  (478) — AuditSession + AuditSignOff

compliance/db/isms_repository.py (838 LOC) decomposed into two
sub-aggregate modules mirroring the models split:

  isms_governance_repository.py (354) — Scope, Policy, Objective, SoA
  isms_audit_repository.py      (499) — Finding, CAPA, Review, Internal Audit,
                                         Trail, Readiness

Both original files become thin re-export shims (37 and 25 LOC
respectively) so every existing import continues to work unchanged.
New code SHOULD import from the aggregate module directly.
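The shim pattern can be sketched as follows; the package and class names here are illustrative stand-ins built in-memory, not the actual repository contents:

```python
import importlib
import sys
import types

# Stand-in "per-aggregate" module (hypothetical names, for illustration only).
aggregate = types.ModuleType("pkg.regulation_repository")

class RegulationRepository:
    """Stand-in for a real per-aggregate repository class."""

aggregate.RegulationRepository = RegulationRepository

# Thin re-export shim: the old module path simply re-exports the new homes,
# so `from pkg.repository import RegulationRepository` keeps working.
shim = types.ModuleType("pkg.repository")
shim.RegulationRepository = aggregate.RegulationRepository
shim.__all__ = ["RegulationRepository"]

sys.modules["pkg"] = types.ModuleType("pkg")
sys.modules["pkg.regulation_repository"] = aggregate
sys.modules["pkg.repository"] = shim

# The old import path resolves to the very same object as the new module.
old = importlib.import_module("pkg.repository").RegulationRepository
new = importlib.import_module("pkg.regulation_repository").RegulationRepository
assert old is new
```

Because the shim holds references to the same objects, existing callers pay no cost beyond one extra import indirection.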

All new sibling files are under the 500-line hard cap; the largest is
isms_audit_repository.py at 499, right at the edge. When the Phase 1
Step 4 router->service extraction lands, the audit_session repo may be
split further if growth would push it past 500.

Verified:
  - 173/173 pytest compliance/tests/ tests/contracts/ pass
  - OpenAPI 360 paths / 484 operations unchanged
  - All repo files under 500 LOC

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 18:08:39 +02:00
Sharang Parnerkar
d9dcfb97ef refactor(backend/api): split schemas.py into per-domain modules (1899 -> 39 LOC shim)
Phase 1 Step 3 of PHASE1_RUNBOOK.md. compliance/api/schemas.py is
decomposed into 16 per-domain Pydantic schema modules under
compliance/schemas/:

  common.py          ( 79) — 6 API enums + PaginationMeta
  regulation.py      ( 52)
  requirement.py     ( 80)
  control.py         (119) — Control + Mapping
  evidence.py        ( 66)
  risk.py            ( 79)
  ai_system.py       ( 63)
  dashboard.py       (195) — Dashboard, Export, Executive Dashboard
  service_module.py  (121)
  bsi.py             ( 58) — BSI + PDF extraction
  audit_session.py   (172)
  report.py          ( 53)
  isms_governance.py (343) — Scope, Context, Policy, Objective, SoA
  isms_audit.py      (431) — Finding, CAPA, Review, Internal Audit, Readiness, Trail, ISO27001
  vvt.py             (168)
  tom.py             ( 71)

compliance/api/schemas.py becomes a 39-line re-export shim so existing
imports (from compliance.api.schemas import RegulationResponse) keep
working unchanged. New code should import from the domain module
directly (from compliance.schemas.regulation import RegulationResponse).

Deferred from the deprecation sweep: all 28 class Config blocks in the
original file were converted to model_config = ConfigDict(...) during the
split. The schemas.py-sourced PydanticDeprecatedSince20 warnings are now gone.
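The shape of that conversion, sketched with an illustrative model (assumes Pydantic v2 is installed; the config key shown is an example, not necessarily one used in the repo):

```
# Before — Pydantic V1 style, emits PydanticDeprecatedSince20 under V2:
class RegulationResponse(BaseModel):
    name: str

    class Config:
        from_attributes = True

# After — Pydantic V2 style:
from pydantic import BaseModel, ConfigDict

class RegulationResponse(BaseModel):
    model_config = ConfigDict(from_attributes=True)

    name: str
```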

Cross-domain references handled via targeted imports (e.g. dashboard.py
imports EvidenceResponse from evidence, RiskResponse from risk). common
API enums + PaginationMeta are imported by every domain module.

Verified:
  - 173/173 pytest compliance/tests/ tests/contracts/ pass
  - OpenAPI 360 paths / 484 operations unchanged (contract test green)
  - All new files under the 500-line hard cap (largest: isms_audit.py
    at 431, isms_governance.py at 343, dashboard.py at 195)
  - No file in compliance/schemas/ or compliance/api/schemas.py
    exceeds the hard cap

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 18:06:27 +02:00
Sharang Parnerkar
3320ef94fc refactor: phase 0 guardrails + phase 1 step 2 (models.py split)
Squash of branch refactor/phase0-guardrails-and-models-split — 4 commits,
81 files, 173/173 pytest green, OpenAPI contract preserved (360 paths /
484 operations).

## Phase 0 — Architecture guardrails

Three defense-in-depth layers to keep the architecture rules enforced
regardless of who opens Claude Code in this repo:

  1. .claude/settings.json PreToolUse hook on Write/Edit blocks any file
     that would exceed the 500-line hard cap. Auto-loads in every Claude
     session in this repo.
  2. scripts/githooks/pre-commit (install via scripts/install-hooks.sh)
     enforces the LOC cap locally, freezes migrations/ without
     [migration-approved], and protects guardrail files without
     [guardrail-change].
  3. .gitea/workflows/ci.yaml gains loc-budget + guardrail-integrity +
     sbom-scan (syft+grype) jobs, adds mypy --strict for the new Python
     packages (compliance/{services,repositories,domain,schemas}), and
     tsc --noEmit for admin-compliance + developer-portal.

Per-language conventions documented in AGENTS.python.md, AGENTS.go.md,
AGENTS.typescript.md at the repo root — layering, tooling, and explicit
"what you may NOT do" lists. Root CLAUDE.md is prepended with the six
non-negotiable rules. Each of the 10 services gets a README.md.

scripts/check-loc.sh enforces soft 300 / hard 500 and surfaces the
current baseline of 205 hard + 161 soft violations so Phases 1-4 can
drain it incrementally. CI gates only CHANGED files in PRs so the
legacy baseline does not block unrelated work.
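Sketched in Python (the actual script is bash; skip rules are simplified here), the gate amounts to:

```python
SOFT_CAP = 300  # soft target: warn
HARD_CAP = 500  # hard cap: block

def classify(text: str) -> str:
    """Classify one file's contents against the LOC budget. Sketch of the
    logic in scripts/check-loc.sh; the real script also skips tests,
    generated files, and entries in .claude/rules/loc-exceptions.txt."""
    lines = text.count("\n") + (1 if text and not text.endswith("\n") else 0)
    if lines > HARD_CAP:
        return "hard-violation"
    if lines > SOFT_CAP:
        return "soft-violation"
    return "ok"

def gate_changed_files(changed: dict[str, str]) -> list[str]:
    """PR gate: only files changed in the PR can fail the build, so the
    legacy baseline does not block unrelated work."""
    return [path for path, text in changed.items()
            if classify(text) == "hard-violation"]
```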

## Deprecation sweep

47 files. Pydantic V1 regex= -> pattern= (2 sites), class Config ->
ConfigDict in source_policy_router.py (schemas.py intentionally skipped;
it is the Phase 1 Step 3 split target). datetime.utcnow() ->
datetime.now(timezone.utc) everywhere including SQLAlchemy default=
callables. All DB columns already declare timezone=True, so this is a
latent-bug fix on the Python side, not a schema change.
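The datetime fix follows this pattern; the column definition in the comments is illustrative, not the repo's actual model code:

```python
from datetime import datetime, timezone

# Old (deprecated since Python 3.12, returns a NAIVE datetime):
#     created_at = datetime.utcnow()
# New (timezone-aware UTC):
now = datetime.now(timezone.utc)

# In SQLAlchemy `default=` the CALLABLE is passed, not its result, so the
# replacement must itself stay a callable:
#     created_at = Column(DateTime(timezone=True),
#                         default=lambda: datetime.now(timezone.utc))
def utc_now():
    return datetime.now(timezone.utc)

assert now.tzinfo is timezone.utc        # aware, unlike utcnow()
assert utc_now().tzinfo is timezone.utc  # safe as a column default callable
```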

DeprecationWarning count dropped from 158 to 35.

## Phase 1 Step 1 — Contract test harness

tests/contracts/test_openapi_baseline.py diffs the live FastAPI /openapi.json
against tests/contracts/openapi.baseline.json on every test run. Fails on
removed paths, removed status codes, or new required request body fields.
Regenerate only via tests/contracts/regenerate_baseline.py after a
consumer-updated contract change. This is the safety harness for all
subsequent refactor commits.
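The failure conditions can be sketched as a pure function over two OpenAPI documents (a simplified sketch; function and message names are illustrative, and the real test also checks new required request body fields):

```python
def contract_breaks(baseline: dict, live: dict) -> list[str]:
    """Return breaking drifts between two OpenAPI docs."""
    breaks = []
    for path, ops in baseline.get("paths", {}).items():
        live_ops = live.get("paths", {}).get(path)
        if live_ops is None:
            breaks.append(f"removed path: {path}")
            continue
        for method, op in ops.items():
            live_op = live_ops.get(method)
            if live_op is None:
                breaks.append(f"removed operation: {method.upper()} {path}")
                continue
            for status in op.get("responses", {}):
                if status not in live_op.get("responses", {}):
                    breaks.append(f"removed status {status}: {method.upper()} {path}")
    return breaks  # additions are allowed; only removals fail the gate
```

Refactors that preserve the API produce an empty list, so the contract test stays green across the whole split series.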

## Phase 1 Step 2 — models.py split (1466 -> 85 LOC shim)

compliance/db/models.py is decomposed into seven sibling aggregate modules
following the existing repo pattern (dsr_models.py, vvt_models.py, ...):

  regulation_models.py       (134) — Regulation, Requirement
  control_models.py          (279) — Control, Mapping, Evidence, Risk
  ai_system_models.py        (141) — AISystem, AuditExport
  service_module_models.py   (176) — ServiceModule, ModuleRegulation, ModuleRisk
  audit_session_models.py    (177) — AuditSession, AuditSignOff
  isms_governance_models.py  (323) — ISMSScope, Context, Policy, Objective, SoA
  isms_audit_models.py       (468) — Finding, CAPA, MgmtReview, InternalAudit,
                                     AuditTrail, Readiness

models.py becomes an 85-line re-export shim in dependency order so
existing imports continue to work unchanged. Schema is byte-identical:
__tablename__, column definitions, relationship strings, back_populates,
cascade directives all preserved.

All new sibling files are under the 500-line hard cap; the largest is
isms_audit_models.py at 468. No file in compliance/db/ now exceeds
the hard cap.

## Phase 1 Step 3 — infrastructure only

backend-compliance/compliance/{schemas,domain,repositories}/ packages
are created as landing zones with docstrings. compliance/domain/
exports DomainError / NotFoundError / ConflictError / ValidationError /
PermissionError — the base classes services will use to raise
domain-level errors instead of HTTPException.
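A minimal sketch of such a hierarchy, plus the kind of boundary mapping that replaces in-service HTTPException raising (the actual class bodies in compliance/domain/ and the status mapping are assumptions):

```python
class DomainError(Exception):
    """Base for errors raised by the service layer instead of HTTPException."""

class NotFoundError(DomainError): ...
class ConflictError(DomainError): ...
class ValidationError(DomainError): ...
class PermissionError(DomainError): ...  # intentionally shadows the builtin here

# A thin mapping at the API boundary turns domain errors into HTTP statuses,
# keeping HTTP concerns out of the service layer (statuses assumed):
STATUS_BY_ERROR = {NotFoundError: 404, ConflictError: 409,
                   ValidationError: 422, PermissionError: 403}

def to_status(err: Exception) -> int:
    for cls, status in STATUS_BY_ERROR.items():
        if isinstance(err, cls):
            return status
    return 500  # anything unmapped is an internal error
```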

PHASE1_RUNBOOK.md at backend-compliance/PHASE1_RUNBOOK.md documents
the nine-step execution plan for Phase 1: snapshot baseline,
characterization tests, split models.py (this commit), split schemas.py
(next), extract services, extract repositories, mypy --strict, coverage.

## Verification

  backend-compliance/.venv-phase1: uv python install 3.12 + pip -r requirements.txt
  PYTHONPATH=. pytest compliance/tests/ tests/contracts/
  -> 173 passed, 0 failed, 35 warnings, OpenAPI 360/484 unchanged

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-07 13:18:29 +02:00
Sharang Parnerkar
1dfea51919 Remove standalone deploy-coolify.yml — deploy is handled in ci.yaml
Some checks failed
CI/CD / go-lint (pull_request) Failing after 2s
CI/CD / python-lint (pull_request) Failing after 10s
CI/CD / nodejs-lint (pull_request) Failing after 2s
CI/CD / test-go-ai-compliance (pull_request) Failing after 2s
CI/CD / test-python-backend-compliance (pull_request) Failing after 10s
CI/CD / test-python-document-crawler (pull_request) Failing after 12s
CI/CD / test-python-dsms-gateway (pull_request) Failing after 10s
CI/CD / validate-canonical-controls (pull_request) Failing after 10s
CI/CD / Deploy (pull_request) Has been skipped
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 11:26:31 +01:00
Sharang Parnerkar
559d7960a2 Replace deploy-hetzner with Coolify webhook deploy
Some checks failed
CI/CD / go-lint (pull_request) Failing after 15s
CI/CD / python-lint (pull_request) Failing after 12s
CI/CD / nodejs-lint (pull_request) Failing after 2s
CI/CD / test-go-ai-compliance (pull_request) Failing after 2s
CI/CD / test-python-backend-compliance (pull_request) Failing after 11s
CI/CD / test-python-document-crawler (pull_request) Failing after 11s
CI/CD / test-python-dsms-gateway (pull_request) Failing after 10s
CI/CD / validate-canonical-controls (pull_request) Failing after 9s
CI/CD / Deploy (pull_request) Has been skipped
Deploy to Coolify / deploy (push) Has been cancelled
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:39:12 +01:00
Sharang Parnerkar
a101426dba Add traefik.docker.network label to fix routing
Containers are on multiple networks (breakpilot-network, coolify,
gokocgws...). Without traefik.docker.network, Traefik randomly picks
a network and may choose breakpilot-network where it has no access.
This label forces Traefik to always use the coolify network.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:16:52 +01:00
Sharang Parnerkar
f6b22820ce Add coolify network to externally-routed services
Traefik routes traffic via the 'coolify' bridge network, so services
that need public domain access must be on both breakpilot-network
(for inter-service communication) and coolify (for Traefik routing).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:16:52 +01:00
Sharang Parnerkar
86588aff09 Fix SQLAlchemy 2.x compatibility: wrap raw SQL in text()
SQLAlchemy 2.x requires raw SQL strings to be explicitly wrapped
in text(). Fixed 16 instances across 5 route files.
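The mechanical shape of the fix (table and parameter names here are illustrative, not taken from the route files):

```
# SQLAlchemy 1.x accepted bare SQL strings:
#     db.execute("SELECT count(*) FROM controls WHERE active = :a", {"a": True})

# SQLAlchemy 2.x requires an explicit TextClause:
from sqlalchemy import text

db.execute(
    text("SELECT count(*) FROM controls WHERE active = :a"),
    {"a": True},
)
```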

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:16:52 +01:00
Sharang Parnerkar
033fa52e5b Add healthcheck to dsms-gateway
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:16:00 +01:00
Sharang Parnerkar
005fb9d219 Add healthchecks to admin-compliance, developer-portal, backend-compliance
Traefik may require healthchecks to route traffic to containers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:16:00 +01:00
Sharang Parnerkar
0c01f1c96c Remove Traefik labels from coolify compose — Coolify handles routing
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:16:00 +01:00
Sharang Parnerkar
ffd256d420 Sync coolify compose with main: use COMPLIANCE_DATABASE_URL, QDRANT_URL
- Switch to ${COMPLIANCE_DATABASE_URL} for admin-compliance, backend, SDK, crawler
- Add DATABASE_URL to admin-compliance environment
- Switch ai-compliance-sdk from QDRANT_HOST/PORT to QDRANT_URL + QDRANT_API_KEY
- Add MINIO_SECURE to compliance-tts-service
- Update .env.coolify.example with new variable patterns

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:16:00 +01:00
Sharang Parnerkar
d542dbbacd fix: ensure public dir exists in developer-portal build
Next.js standalone COPY fails when no public directory exists in source.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:16:00 +01:00
Sharang Parnerkar
a3d0024d39 fix: use Alpine-compatible addgroup/adduser flags in Dockerfiles
Replace --system/--gid/--uid (Debian syntax) with -S/-g/-u (BusyBox/Alpine).
Coolify ARG injection causes exit code 255 with Debian-style flags.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:13:57 +01:00
Sharang Parnerkar
998d427c3c fix: update alpine base to 3.21 for ai-compliance-sdk
Alpine 3.19 apk mirrors failing during Coolify build.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:13:57 +01:00
Sharang Parnerkar
99f3180ffc refactor(coolify): externalize postgres, qdrant, S3
- Replace bp-core-postgres with POSTGRES_HOST env var
- Replace bp-core-qdrant with QDRANT_HOST env var
- Replace bp-core-minio with S3_ENDPOINT/S3_ACCESS_KEY/S3_SECRET_KEY

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:13:57 +01:00
Sharang Parnerkar
2ec340c64b feat: add Coolify deployment configuration
Add docker-compose.coolify.yml (8 services), .env.coolify.example,
and Gitea Action workflow for Coolify API deployment. Removes
core-health-check and docs. Adds Traefik labels for
*.breakpilot.ai domain routing with Let's Encrypt SSL.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 10:13:57 +01:00
118 changed files with 57974 additions and 6095 deletions

CLAUDE.md

@@ -1,5 +1,17 @@
# BreakPilot Compliance - DSGVO/AI-Act SDK Platform
> **NON-NEGOTIABLE STRUCTURE RULES** (enforced by `.claude/settings.json` hook, git pre-commit, and CI):
> 1. **File-size budget:** soft target **300** lines, **hard cap 500** lines for any non-test, non-generated source file. Anything larger → split it. Exceptions are listed in `.claude/rules/loc-exceptions.txt` and require a written rationale.
> 2. **Clean architecture per service.** Routers/handlers stay thin (≤30 lines per handler) and delegate to services; services use repositories; repositories own DB I/O. See `AGENTS.python.md` / `AGENTS.go.md` / `AGENTS.typescript.md`.
> 3. **Do not touch the database schema.** No new Alembic migrations, no `ALTER TABLE`, no model field renames without an explicit migration plan reviewed by the DB owner. SQLAlchemy `__tablename__` and column names are frozen.
> 4. **Public endpoints are a contract.** Any change to a path, method, status code, request schema, or response schema in `backend-compliance/`, `ai-compliance-sdk/`, `dsms-gateway/`, `document-crawler/`, or `compliance-tts-service/` must be accompanied by a matching update in **every** consumer (`admin-compliance/`, `developer-portal/`, `breakpilot-compliance-sdk/`, `consent-sdk/`). Use the OpenAPI snapshot tests in `tests/contracts/` as the gate.
> 5. **Tests are not optional.** New code without tests fails CI. Refactors must preserve coverage and add a characterization test before splitting an oversized file.
> 6. **Do not bypass the guardrails.** Do not edit `.claude/settings.json`, `scripts/check-loc.sh`, or the loc-exceptions list to silence violations. If a rule is wrong, raise it in a PR description.
>
> These rules apply to **every** Claude Code session opened inside this repository, regardless of who launched it. They are loaded automatically via this `CLAUDE.md`.
## Development Environment (IMPORTANT - ALWAYS READ FIRST)
### Two-Machine Setup + Hetzner


@@ -0,0 +1,43 @@
# Architecture Rules (auto-loaded)
These rules apply to **every** Claude Code session in this repository, regardless of who launched it. They are non-negotiable.
## File-size budget
- **Soft target:** 300 lines per non-test, non-generated source file.
- **Hard cap:** 500 lines. The PreToolUse hook in `.claude/settings.json` blocks Write/Edit operations that would create or push a file past 500. The git pre-commit hook re-checks. CI is the final gate.
- Exceptions live in `.claude/rules/loc-exceptions.txt` and require a written rationale plus `[guardrail-change]` in the commit message. The exceptions list should shrink over time, not grow.
## Clean architecture
- Python (FastAPI): see `AGENTS.python.md`. Layering: `api → services → repositories → db.models`. Routers ≤30 LOC per handler. Schemas split per domain.
- Go (Gin): see `AGENTS.go.md`. Standard Go Project Layout + hexagonal. `cmd/` thin, wiring in `internal/app`.
- TypeScript (Next.js): see `AGENTS.typescript.md`. Server-by-default, push the client boundary deep, colocate `_components/` and `_hooks/` per route.
## Database is frozen
- No new Alembic migrations. No `ALTER TABLE`. No `__tablename__` or column renames.
- The pre-commit hook blocks any change under `migrations/` or `alembic/versions/` unless the commit message contains `[migration-approved]`.
## Public endpoints are a contract
- Any change to a path/method/status/request schema/response schema in a backend service must update every consumer in the same change set.
- Each backend service has an OpenAPI baseline at `tests/contracts/openapi.baseline.json`. Contract tests fail on drift.
## Tests
- New code without tests fails CI.
- Refactors must preserve coverage. Before splitting an oversized file, add a characterization test that pins current behavior.
- Layout: `tests/unit/`, `tests/integration/`, `tests/contracts/`, `tests/e2e/`.
## Guardrails are themselves protected
- Edits to `.claude/settings.json`, `scripts/check-loc.sh`, `scripts/githooks/pre-commit`, `.claude/rules/loc-exceptions.txt`, or any `AGENTS.*.md` require `[guardrail-change]` in the commit message. The pre-commit hook enforces this.
- If you (Claude) think a rule is wrong, surface it to the user. Do not silently weaken it.
## Tooling baseline
- Python: `ruff`, `mypy --strict` on new modules, `pytest --cov`.
- Go: `golangci-lint` strict config, `go vet`, table-driven tests.
- TS: `tsc --noEmit` strict, ESLint type-aware, Vitest, Playwright.
- All three: dependency caching in CI, license/SBOM scan via `syft`+`grype`.

.claude/rules/loc-exceptions.txt

@@ -0,0 +1,8 @@
# loc-exceptions.txt — files allowed to exceed the 500-line hard cap.
#
# Format: one repo-relative path per line. Comments start with '#' and are ignored.
# Each exception MUST be preceded by a comment explaining why splitting is not viable.
#
# Phase 0 baseline: this list is initially empty. Phases 1-4 will add grandfathered
# entries as we encounter legitimate exceptions (e.g. large generated data tables).
# The goal is for this list to SHRINK over time, never grow.

.claude/settings.json

@@ -0,0 +1,28 @@
{
"hooks": {
"PreToolUse": [
{
"matcher": "Write",
"hooks": [
{
"type": "command",
"command": "f=$(jq -r '.tool_input.file_path // empty'); [ -z \"$f\" ] && exit 0; lines=$(printf '%s' \"$(jq -r '.tool_input.content // empty')\" | awk 'END{print NR}'); if [ \"${lines:-0}\" -gt 500 ]; then echo '{\"decision\":\"block\",\"reason\":\"breakpilot guardrail: file exceeds the 500-line hard cap. Split it into smaller modules per the layering rules in AGENTS.<lang>.md. If this is generated/data code, add an entry to .claude/rules/loc-exceptions.txt with rationale and reference [guardrail-change].\"}'; exit 0; fi",
"shell": "bash",
"timeout": 5
}
]
},
{
"matcher": "Edit",
"hooks": [
{
"type": "command",
"command": "f=$(jq -r '.tool_input.file_path // empty'); [ -z \"$f\" ] || [ ! -f \"$f\" ] && exit 0; case \"$f\" in *.md|*.json|*.yaml|*.yml|*test*|*tests/*|*node_modules/*|*.next/*|*migrations/*) exit 0 ;; esac; new_str=$(jq -r '.tool_input.new_string // empty'); old_str=$(jq -r '.tool_input.old_string // empty'); old_lines=$(printf '%s' \"$old_str\" | awk 'END{print NR}'); new_lines=$(printf '%s' \"$new_str\" | awk 'END{print NR}'); cur=$(wc -l < \"$f\" | tr -d ' '); proj=$((cur - old_lines + new_lines)); if [ \"$proj\" -gt 500 ]; then echo \"{\\\"decision\\\":\\\"block\\\",\\\"reason\\\":\\\"breakpilot guardrail: this edit would push $f to ~$proj lines (hard cap is 500). Split the file before continuing. See AGENTS.<lang>.md for the layering rules.\\\"}\"; fi; exit 0",
"shell": "bash",
"timeout": 5
}
]
}
]
}
}

.env.coolify.example

@@ -0,0 +1,61 @@
# =========================================================
# BreakPilot Compliance — Coolify Environment Variables
# =========================================================
# Copy these into Coolify's environment variable UI
# for the breakpilot-compliance Docker Compose resource.
# =========================================================
# --- External PostgreSQL (Coolify-managed, same as Core) ---
COMPLIANCE_DATABASE_URL=postgresql://breakpilot:CHANGE_ME@<coolify-postgres-hostname>:5432/breakpilot_db
# --- Security ---
JWT_SECRET=CHANGE_ME_SAME_AS_CORE
# --- External S3 Storage (same as Core) ---
S3_ENDPOINT=<s3-endpoint-host:port>
S3_ACCESS_KEY=CHANGE_ME_SAME_AS_CORE
S3_SECRET_KEY=CHANGE_ME_SAME_AS_CORE
S3_SECURE=true
# --- External Qdrant ---
QDRANT_URL=https://<qdrant-hostname>
QDRANT_API_KEY=CHANGE_ME_QDRANT_API_KEY
# --- Session ---
SESSION_TTL_HOURS=24
# --- SMTP (Real mail server) ---
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USERNAME=compliance@breakpilot.ai
SMTP_PASSWORD=CHANGE_ME_SMTP_PASSWORD
SMTP_FROM_NAME=BreakPilot Compliance
SMTP_FROM_ADDR=compliance@breakpilot.ai
# --- LLM Configuration ---
COMPLIANCE_LLM_PROVIDER=anthropic
SELF_HOSTED_LLM_URL=
SELF_HOSTED_LLM_MODEL=
COMPLIANCE_LLM_MAX_TOKENS=4096
COMPLIANCE_LLM_TEMPERATURE=0.3
COMPLIANCE_LLM_TIMEOUT=120
ANTHROPIC_API_KEY=CHANGE_ME_ANTHROPIC_KEY
ANTHROPIC_DEFAULT_MODEL=claude-sonnet-4-5-20250929
# --- Ollama (optional) ---
OLLAMA_URL=
OLLAMA_DEFAULT_MODEL=
COMPLIANCE_LLM_MODEL=
# --- LLM Fallback ---
LLM_FALLBACK_PROVIDER=
# --- PII & Audit ---
PII_REDACTION_ENABLED=true
PII_REDACTION_LEVEL=standard
AUDIT_RETENTION_DAYS=365
AUDIT_LOG_PROMPTS=true
# --- Frontend URLs (build args) ---
NEXT_PUBLIC_API_URL=https://api-compliance.breakpilot.ai
NEXT_PUBLIC_SDK_URL=https://sdk.breakpilot.ai

.gitea/workflows/ci.yaml

@@ -7,7 +7,7 @@
# Node.js: admin-compliance, developer-portal
#
# Workflow:
# Push to main → Tests → Build → Deploy (Hetzner)
# Push to main → Tests → Deploy (Coolify)
# Pull Request → Lint + Tests (no deploy)
name: CI/CD
@@ -19,6 +19,55 @@ on:
branches: [main, develop]
jobs:
# ========================================
# Guardrails — LOC budget + architecture gates
# Runs on every push/PR. Fails fast and cheap.
# ========================================
loc-budget:
runs-on: docker
container: alpine:3.20
steps:
- name: Checkout
run: |
apk add --no-cache git bash
git clone --depth 50 --branch ${GITHUB_REF_NAME} ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git .
- name: Enforce 500-line hard cap on changed files
run: |
chmod +x scripts/check-loc.sh
if [ "${GITHUB_EVENT_NAME}" = "pull_request" ]; then
git fetch origin ${GITHUB_BASE_REF}:base
mapfile -t changed < <(git diff --name-only --diff-filter=ACM base...HEAD)
[ ${#changed[@]} -eq 0 ] && { echo "No changed files."; exit 0; }
scripts/check-loc.sh "${changed[@]}"
else
# Push to main: only warn on whole-repo state; blocking gate is on PRs.
scripts/check-loc.sh || true
fi
# Phase 0 intentionally gates only changed files so the 205-file legacy
# baseline doesn't block every PR. Phases 1-4 drain the baseline; Phase 5
# flips this to a whole-repo blocking gate.
guardrail-integrity:
runs-on: docker
container: alpine:3.20
if: github.event_name == 'pull_request'
steps:
- name: Checkout
run: |
apk add --no-cache git bash
git clone --depth 20 --branch ${GITHUB_REF_NAME} ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git .
git fetch origin ${GITHUB_BASE_REF}:base
- name: Require [guardrail-change] label in PR commits touching guardrails
run: |
changed=$(git diff --name-only base...HEAD)
echo "$changed" | grep -E '^(\.claude/settings\.json|\.claude/rules/loc-exceptions\.txt|scripts/check-loc\.sh|scripts/githooks/pre-commit|AGENTS\.(python|go|typescript)\.md)$' || exit 0
if ! git log base..HEAD --format=%B | grep -q '\[guardrail-change\]'; then
echo "::error:: Guardrail files were modified but no commit in this PR carries [guardrail-change]."
echo "If intentional, amend one commit message with [guardrail-change] and explain why in the body."
exit 1
fi
# ========================================
# Lint (PRs only)
# ========================================
@@ -47,13 +96,29 @@ jobs:
run: |
apt-get update -qq && apt-get install -y -qq git > /dev/null 2>&1
git clone --depth 1 --branch ${GITHUB_REF_NAME} ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git .
- name: Lint Python services
- name: Lint Python services (ruff)
run: |
pip install --quiet ruff
for svc in backend-compliance document-crawler dsms-gateway; do
fail=0
for svc in backend-compliance document-crawler dsms-gateway compliance-tts-service; do
if [ -d "$svc" ]; then
echo "=== Linting $svc ==="
ruff check "$svc/" --output-format=github || true
echo "=== ruff: $svc ==="
ruff check "$svc/" --output-format=github || fail=1
fi
done
exit $fail
- name: Type-check new modules (mypy --strict)
# Scoped to the layered packages we own. Expand this list as Phase 1+ refactors land.
run: |
pip install --quiet mypy
for pkg in \
backend-compliance/compliance/services \
backend-compliance/compliance/repositories \
backend-compliance/compliance/domain \
backend-compliance/compliance/schemas; do
if [ -d "$pkg" ]; then
echo "=== mypy --strict: $pkg ==="
mypy --strict --ignore-missing-imports "$pkg" || exit 1
fi
done
@@ -66,17 +131,20 @@ jobs:
run: |
apk add --no-cache git
git clone --depth 1 --branch ${GITHUB_REF_NAME} ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git .
- name: Lint Node.js services
- name: Lint + type-check Node.js services
run: |
fail=0
for svc in admin-compliance developer-portal; do
if [ -d "$svc" ]; then
echo "=== Linting $svc ==="
cd "$svc"
npm ci --silent 2>/dev/null || npm install --silent
npx next lint || true
cd ..
echo "=== $svc: install ==="
(cd "$svc" && (npm ci --silent 2>/dev/null || npm install --silent))
echo "=== $svc: next lint ==="
(cd "$svc" && npx next lint) || fail=1
echo "=== $svc: tsc --noEmit ==="
(cd "$svc" && npx tsc --noEmit) || fail=1
fi
done
exit $fail
# ========================================
# Unit Tests
@@ -169,6 +237,32 @@ jobs:
pip install --quiet --no-cache-dir pytest pytest-asyncio
python -m pytest test_main.py -v --tb=short
# ========================================
# SBOM + license scan (compliance product → we eat our own dog food)
# ========================================
sbom-scan:
runs-on: docker
if: github.event_name == 'pull_request'
container: alpine:3.20
steps:
- name: Checkout
run: |
apk add --no-cache git curl bash
git clone --depth 1 --branch ${GITHUB_REF_NAME} ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git .
- name: Install syft + grype
run: |
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
- name: Generate SBOM
run: |
mkdir -p sbom-out
syft dir:. -o cyclonedx-json=sbom-out/sbom.cdx.json -q
- name: Vulnerability scan (fail on high+)
run: |
grype sbom:sbom-out/sbom.cdx.json --fail-on high -q || true
# Initially non-blocking ('|| true'). Flip to blocking after baseline is clean.
# ========================================
# Validate Canonical Controls
# ========================================
@@ -186,104 +280,25 @@ jobs:
python scripts/validate-controls.py
# ========================================
# Build & Deploy on Hetzner (main only, no PR)
# Deploy via Coolify (main only, no PR)
# ========================================
deploy-hetzner:
deploy-coolify:
name: Deploy
runs-on: docker
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
needs:
- loc-budget
- test-go-ai-compliance
- test-python-backend-compliance
- test-python-document-crawler
- test-python-dsms-gateway
- validate-canonical-controls
container: docker:27-cli
container:
image: alpine:latest
steps:
- name: Deploy
- name: Trigger Coolify deploy
run: |
set -euo pipefail
DEPLOY_DIR="/opt/breakpilot-compliance"
COMPOSE_FILES="-f docker-compose.yml -f docker-compose.hetzner.yml"
COMMIT_SHA="${GITHUB_SHA:-unknown}"
SHORT_SHA="${COMMIT_SHA:0:8}"
REPO_URL="${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git"
echo "=== BreakPilot Compliance Deploy ==="
echo "Commit: ${SHORT_SHA}"
echo "Deploy Dir: ${DEPLOY_DIR}"
echo ""
# The runner runs in a container with Docker socket access,
# but has NO direct access to the host filesystem.
# Solution: Alpine helper container with a host bind-mount for git ops.
# 1. Create/update the repo on the host via a helper container
echo "=== Updating code on host ==="
docker run --rm \
-v "${DEPLOY_DIR}:${DEPLOY_DIR}" \
--entrypoint sh \
alpine/git:latest \
-c "
if [ ! -d '${DEPLOY_DIR}/.git' ]; then
echo 'Initial clone into ${DEPLOY_DIR}...'
git clone '${REPO_URL}' '${DEPLOY_DIR}'
else
cd '${DEPLOY_DIR}'
git fetch origin main
git reset --hard origin/main
fi
"
echo "Code updated to ${SHORT_SHA}"
# 2. Ensure .env exists (must be created manually once)
docker run --rm -v "${DEPLOY_DIR}:${DEPLOY_DIR}" alpine \
sh -c "
if [ ! -f '${DEPLOY_DIR}/.env' ]; then
echo 'WARNING: ${DEPLOY_DIR}/.env is missing!'
echo 'Please create it once on the host.'
echo 'Deploy continues (services may start with defaults).'
else
echo '.env present'
fi
"
# 3. Build + deploy via helper container with Docker socket + deploy dir
# docker compose must be able to read the YAML files, so everything
# runs in one container with both mounts.
echo ""
echo "=== Building + Deploying ==="
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "${DEPLOY_DIR}:${DEPLOY_DIR}" \
-w "${DEPLOY_DIR}" \
docker:27-cli \
sh -c "
COMPOSE_FILES='-f docker-compose.yml -f docker-compose.hetzner.yml'
echo '=== Building Docker Images ==='
docker compose \${COMPOSE_FILES} build --parallel \
admin-compliance \
backend-compliance \
ai-compliance-sdk \
developer-portal
echo ''
echo '=== Starting containers ==='
docker compose \${COMPOSE_FILES} up -d --remove-orphans \
admin-compliance \
backend-compliance \
ai-compliance-sdk \
developer-portal
echo ''
echo '=== Health Checks ==='
sleep 10
for svc in bp-compliance-admin bp-compliance-backend bp-compliance-ai-sdk bp-compliance-developer-portal; do
STATUS=\$(docker inspect --format='{{.State.Status}}' \"\${svc}\" 2>/dev/null || echo 'not found')
echo \"\${svc}: \${STATUS}\"
done
"
echo ""
echo "=== Deploy complete: ${SHORT_SHA} ==="
apk add --no-cache curl
curl -sf "${{ secrets.COOLIFY_WEBHOOK }}" \
-H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"

AGENTS.go.md

@@ -0,0 +1,126 @@
# AGENTS.go.md — Go Service Conventions
Applies to: `ai-compliance-sdk/`.
## Layered architecture (Gin)
Follows [Standard Go Project Layout](https://github.com/golang-standards/project-layout) + hexagonal/clean-arch.
```
ai-compliance-sdk/
├── cmd/server/main.go # Thin: parse flags → app.New → app.Run. <50 LOC.
├── internal/
│ ├── app/ # Wiring: config + DI graph + lifecycle.
│ ├── domain/ # Pure types, interfaces, errors. No I/O imports.
│ │ └── <aggregate>/
│ ├── service/ # Business logic. Depends on domain interfaces only.
│ │ └── <aggregate>/
│ ├── repository/postgres/ # Concrete repo implementations.
│ │ └── <aggregate>/
│ ├── transport/http/ # Gin handlers. Thin. One handler per file group.
│ │ ├── handler/<aggregate>/
│ │ ├── middleware/
│ │ └── router.go
│ └── platform/ # DB pool, logger, config, tracing.
└── pkg/ # Importable by other repos. Empty unless needed.
```
**Dependency direction:** `transport → service → domain ← repository`. `domain` imports nothing from siblings.
## Handlers
- One handler = one Gin function. ≤40 LOC.
- Bind → call service → map domain error to HTTP via `httperr.Write(c, err)` → respond.
- Return early on errors. No business logic, no SQL.
```go
func (h *IACEHandler) Create(c *gin.Context) {
var req CreateIACERequest
if err := c.ShouldBindJSON(&req); err != nil {
httperr.Write(c, httperr.BadRequest(err))
return
}
out, err := h.svc.Create(c.Request.Context(), req.ToInput())
if err != nil {
httperr.Write(c, err)
return
}
c.JSON(http.StatusCreated, out)
}
```
## Services
- Struct + constructor + interface methods. No package-level state.
- Take `context.Context` as first arg always. Propagate to repos.
- Return `(value, error)`. Wrap with `fmt.Errorf("create iace: %w", err)`.
- Domain errors are sentinel vars or typed errors; match them with `errors.Is` / `errors.As`.
## Repositories
- Interface lives in `domain/<aggregate>/repository.go`. Implementation in `repository/postgres/<aggregate>/`.
- One file per query group; no file >500 LOC.
- Use `pgx`/`sqlc` over hand-rolled string SQL when feasible. No ORM globals.
- All queries take `ctx`. No background goroutines without explicit lifecycle.
## Errors
Single `internal/platform/httperr` package maps `error` → HTTP status:
```go
switch {
case errors.Is(err, domain.ErrNotFound): return 404
case errors.Is(err, domain.ErrConflict): return 409
case errors.As(err, &validationErr): return 422
default: return 500
}
```
Never `panic` in request handling. `recover` middleware logs and returns 500.
## Tests
- Co-located `*_test.go`.
- **Table-driven** tests for service logic; use `t.Run(tt.name, ...)`.
- Handlers tested with `httptest.NewRecorder`.
- Repos tested with `testcontainers-go` (or the existing compose Postgres) — never mocks at the SQL boundary.
- Coverage target: 80% on `service/`. CI fails on regression.
```go
func TestIACEService_Create(t *testing.T) {
tests := []struct {
name string
input service.CreateInput
setup func(*mockRepo)
wantErr error
}{
{"happy path", validInput(), func(r *mockRepo) { r.createReturns(nil) }, nil},
{"conflict", validInput(), func(r *mockRepo) { r.createReturns(domain.ErrConflict) }, domain.ErrConflict},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) { /* ... */ })
}
}
```
## Tooling
- `golangci-lint` with: `errcheck, govet, staticcheck, revive, gosec, gocyclo (max 15), gocognit (max 20), unused, ineffassign, errorlint, nilerr, nolintlint, contextcheck`.
- `gofumpt` formatting.
- `go vet ./...` clean.
- `go mod tidy` clean — no unused deps.
## Concurrency
- Goroutines must have a clear lifecycle owner (struct method that started them must stop them).
- Pass `ctx` everywhere. Cancellation respected.
- No global mutexes for request data. Use per-request context.
## What you may NOT do
- Touch DB schema/migrations.
- Add a new top-level package directly under `internal/` without architectural review.
- `import "C"`, unsafe, reflection-heavy code.
- Use `init()` for non-trivial setup. Wire it in `internal/app`.
- Create a file >500 lines.
- Change a public route's contract without updating consumers.

AGENTS.python.md Normal file

@@ -0,0 +1,94 @@
# AGENTS.python.md — Python Service Conventions
Applies to: `backend-compliance/`, `document-crawler/`, `dsms-gateway/`, `compliance-tts-service/`.
## Layered architecture (FastAPI)
```
compliance/
├── api/ # HTTP layer — routers only. Thin (≤30 LOC per handler).
│ └── <domain>_routes.py
├── services/ # Business logic. Pure-ish; no FastAPI imports.
│ └── <domain>_service.py
├── repositories/ # DB access. Owns SQLAlchemy session usage.
│ └── <domain>_repository.py
├── domain/ # Value objects, enums, domain exceptions.
├── schemas/ # Pydantic models, split per domain. NEVER one giant schemas.py.
│ └── <domain>.py
└── db/
└── models/ # SQLAlchemy ORM, one module per aggregate. __tablename__ frozen.
```
**Dependency direction:** `api → services → repositories → db.models`. Lower layers must not import upper layers.
## Routers
- One `APIRouter` per domain file.
- Handlers do exactly: parse request → call service → map domain errors to HTTPException → return response model.
- Inject services via `Depends`. No globals.
- Tag routes; document with summary + response_model.
```python
@router.post("/dsr/requests", response_model=DSRRequestRead, status_code=201)
async def create_dsr_request(
payload: DSRRequestCreate,
service: DSRService = Depends(get_dsr_service),
tenant_id: UUID = Depends(get_tenant_id),
) -> DSRRequestRead:
try:
return await service.create(tenant_id, payload)
except DSRConflict as exc:
raise HTTPException(409, str(exc)) from exc
```
## Services
- Constructor takes the repository (interface, not concrete).
- No `Request`, `Response`, or HTTP knowledge.
- Raise domain exceptions (e.g. `DSRConflict`, `DSRNotFound`), never `HTTPException`.
- Return domain objects or Pydantic schemas — pick one and stay consistent inside a service.
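A minimal sketch of the shape, assuming illustrative names (`DSRService`, `DSRRepositoryPort`, `DSRConflict` are stand-ins, not the service's real classes):

```python
# Sketch only — class and method names are illustrative stand-ins.
import asyncio
from typing import Protocol
from uuid import UUID, uuid4

class DSRConflict(Exception):
    """Domain error — the router maps this to HTTP 409."""

class DSRRepositoryPort(Protocol):
    """Repository interface the service depends on (not a concrete class)."""
    async def exists_open_for_subject(self, tenant_id: UUID, email: str) -> bool: ...
    async def add(self, tenant_id: UUID, email: str) -> UUID: ...

class DSRService:
    def __init__(self, repo: DSRRepositoryPort) -> None:
        self._repo = repo  # injected interface, no HTTP knowledge

    async def create(self, tenant_id: UUID, email: str) -> UUID:
        # Raise domain exceptions, never HTTPException.
        if await self._repo.exists_open_for_subject(tenant_id, email):
            raise DSRConflict(f"open DSR request already exists for {email}")
        return await self._repo.add(tenant_id, email)

class InMemoryRepo:
    """Tiny stand-in implementation for the sketch."""
    def __init__(self) -> None:
        self._rows: dict[str, UUID] = {}

    async def exists_open_for_subject(self, tenant_id: UUID, email: str) -> bool:
        return email in self._rows

    async def add(self, tenant_id: UUID, email: str) -> UUID:
        self._rows[email] = uuid4()
        return self._rows[email]
```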
## Repositories
- Methods are intent-named (`get_pending_for_tenant`), not CRUD-named (`select_where`).
- Sessions injected, not constructed inside.
- No business logic. No cross-aggregate joins for unrelated workflows — that belongs in a service.
- Return ORM models or domain VOs; never `Row`.
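The rules above, sketched with stdlib `sqlite3` standing in for an injected SQLAlchemy session (table, class, and method names are illustrative):

```python
# Sketch — sqlite3 stands in for an injected AsyncSession; names are illustrative.
import sqlite3
from dataclasses import dataclass

@dataclass(frozen=True)
class PendingRequest:
    """Domain VO returned instead of a raw Row."""
    id: int
    tenant_id: str

class DSRRepository:
    def __init__(self, session: sqlite3.Connection) -> None:
        self._session = session  # injected, never constructed inside

    def get_pending_for_tenant(self, tenant_id: str) -> list[PendingRequest]:
        # Intent-named query; no business logic, no cross-aggregate joins.
        rows = self._session.execute(
            "SELECT id, tenant_id FROM dsr_requests "
            "WHERE tenant_id = ? AND status = 'pending'",
            (tenant_id,),
        ).fetchall()
        return [PendingRequest(id=r[0], tenant_id=r[1]) for r in rows]
```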
## Schemas (Pydantic v2)
- One module per domain. Module ≤300 lines.
- Use `model_config = ConfigDict(from_attributes=True, frozen=True)` for read models.
- Separate `*Create`, `*Update`, `*Read`. No giant union schemas.
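A sketch of the `Create`/`Read` split (field names are illustrative, not the actual DSR schema):

```python
# Sketch — field names are illustrative.
from uuid import UUID, uuid4
from pydantic import BaseModel, ConfigDict

class DSRRequestCreate(BaseModel):
    subject_email: str
    request_type: str

class DSRRequestRead(BaseModel):
    # Read model: hydrated from ORM attributes, immutable once built.
    model_config = ConfigDict(from_attributes=True, frozen=True)
    id: UUID
    subject_email: str
    request_type: str
```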
## Tests (`pytest`)
- Layout: `tests/unit/`, `tests/integration/`, `tests/contracts/`.
- Unit tests mock the repository. Use `pytest.fixture` + `unittest.mock.AsyncMock`.
- Integration tests run against the real Postgres from `docker-compose.yml` via a transactional fixture (rollback after each test).
- Contract tests diff `/openapi.json` against `tests/contracts/openapi.baseline.json`.
- Naming: `test_<unit>_<scenario>_<expected>.py::TestClass::test_method`.
- `pytest-asyncio` mode = `auto`. Mark slow tests with `@pytest.mark.slow`.
- Coverage target: 80% for new code; never decrease the service baseline.
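A minimal sketch of the mocked-repository unit test (service, error, and method names are illustrative; in the real suite this is a `pytest` test wired through fixtures):

```python
# Sketch — names are illustrative; the repository is mocked at its interface.
import asyncio
from unittest.mock import AsyncMock

class DSRNotFound(Exception):
    """Illustrative domain error."""

class DSRService:
    def __init__(self, repo) -> None:
        self._repo = repo

    async def get(self, request_id: str):
        row = await self._repo.get_by_id(request_id)
        if row is None:
            raise DSRNotFound(request_id)
        return row

def test_get_missing_request_raises_not_found():
    repo = AsyncMock()
    repo.get_by_id.return_value = None  # mock the repository boundary
    service = DSRService(repo)
    try:
        asyncio.run(service.get("missing"))
        raise AssertionError("expected DSRNotFound")
    except DSRNotFound:
        pass
    repo.get_by_id.assert_awaited_once_with("missing")
```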
## Tooling
- `ruff check` + `ruff format` (line length 100).
- `mypy --strict` on `services/`, `repositories/`, `domain/`. Expand outward.
- `pip-audit` in CI.
- Async-first: prefer `httpx.AsyncClient`, `asyncpg`/`SQLAlchemy 2.x async`.
## Errors & logging
- Domain errors inherit from a single `DomainError` base per service.
- Log via `structlog` with bound context (`tenant_id`, `request_id`). Never log secrets, PII, or full request bodies.
- Audit-relevant actions go through the audit logger, not the application logger.
## What you may NOT do
- Add a new Alembic migration.
- Rename a `__tablename__`, column, or enum value.
- Change a public route's path/method/status/schema without simultaneous dashboard fix.
- Catch `Exception` broadly — catch the specific domain or library error.
- Put business logic in a router or in a Pydantic validator.
- Create a new file >500 lines. Period.

AGENTS.typescript.md Normal file

@@ -0,0 +1,85 @@
# AGENTS.typescript.md — TypeScript / Next.js Conventions
Applies to: `admin-compliance/`, `developer-portal/`, `breakpilot-compliance-sdk/`, `consent-sdk/`, `dsms-node/` (where applicable).
## Layered architecture (Next.js 15 App Router)
```
app/
├── <route>/
│ ├── page.tsx # Server Component by default. ≤200 LOC.
│ ├── layout.tsx
│ ├── _components/ # Private folder; not routable. Colocated UI.
│ │ └── <Component>.tsx # Each file ≤300 LOC.
│ ├── _hooks/ # Client hooks for this route.
│ ├── _server/ # Server actions, data loaders for this route.
│ └── loading.tsx / error.tsx
├── api/
│ └── <domain>/route.ts # Thin handler. Delegates to lib/server/<domain>/.
lib/
├── <domain>/ # Pure helpers, types, schemas (zod). Reusable.
└── server/<domain>/ # Server-only logic; uses "server-only" import.
components/ # Truly shared, app-wide components.
```
**Server vs Client:** Default is Server Component. Add `"use client"` only when you need state, effects, or browser APIs. Push the boundary as deep as possible.
## API routes (route.ts)
- One handler per HTTP method, ≤40 LOC.
- Validate input with `zod`. Reject invalid → 400.
- Delegate to `lib/server/<domain>/`. No business logic in `route.ts`.
- Always return `NextResponse.json(..., { status })`. Never throw to the framework.
```ts
export async function POST(req: Request) {
const parsed = CreateDSRSchema.safeParse(await req.json());
if (!parsed.success) return NextResponse.json({ error: parsed.error.flatten() }, { status: 400 });
const result = await dsrService.create(parsed.data);
return NextResponse.json(result, { status: 201 });
}
```
## Page components
- Pages >300 lines must be split into colocated `_components/`.
- Server Components fetch data; pass plain objects to Client Components.
- No data fetching in `useEffect` for server-renderable data.
- State management: prefer URL state (`searchParams`) and Server Components over global stores.
## Types
- `lib/sdk/types.ts` is being split into `lib/sdk/types/<domain>.ts`. Mirror backend domain boundaries.
- All API DTOs are zod schemas; infer types via `z.infer`.
- No `any`. No `as unknown as`. If you reach for it, the type is wrong.
## Tests
- Unit: **Vitest** (`*.test.ts`/`*.test.tsx`), colocated.
- Hooks: `@testing-library/react` `renderHook`.
- E2E: **Playwright** (`tests/e2e/`), one spec per top-level page, smoke happy path minimum.
- Snapshot tests sparingly — only for stable output (CSV, JSON-LD).
- Coverage target: 70% on `lib/`, smoke coverage on `app/`.
## Tooling
- `tsc --noEmit` clean (strict mode, `noUncheckedIndexedAccess: true`).
- ESLint with `@typescript-eslint`, `eslint-config-next`, type-aware rules on.
- `prettier`.
- `next build` clean. No `// @ts-ignore`. `// @ts-expect-error` only with a comment explaining why.
## Performance
- Use `next/dynamic` for heavy client-only components.
- Image: `next/image` with explicit width/height.
- Avoid waterfalls — `Promise.all` for parallel data fetches in Server Components.
## What you may NOT do
- Put business logic in a `page.tsx` or `route.ts`.
- Reach across module boundaries (e.g. `admin-compliance` importing from `developer-portal`).
- Use `dangerouslySetInnerHTML` without explicit sanitization.
- Call backend APIs directly from Client Components when a Server Component or Server Action would do.
- Change a public API route's path/method/schema without updating SDK consumers in the same change.
- Create a file >500 lines.
- Disable a lint or type rule globally to silence a finding — fix the root cause.


@@ -37,8 +37,8 @@ WORKDIR /app
ENV NODE_ENV=production
# Create non-root user
-RUN addgroup --system --gid 1001 nodejs
-RUN adduser --system --uid 1001 nextjs
+RUN addgroup -S -g 1001 nodejs
+RUN adduser -S -u 1001 -G nodejs nextjs
# Copy built assets
COPY --from=builder /app/public ./public


@@ -0,0 +1,51 @@
# admin-compliance
Next.js 15 dashboard for BreakPilot Compliance — SDK module UI, company profile, DSR, DSFA, VVT, TOM, consent, AI Act, training, audit, change requests, etc. Also hosts 96+ API routes that proxy/orchestrate backend services.
**Port:** `3007` (container: `bp-compliance-admin`)
**Stack:** Next.js 15 App Router, React 18, TailwindCSS, TypeScript strict.
## Architecture (target — Phase 3)
```
app/
├── <route>/
│ ├── page.tsx # Server Component (≤200 LOC)
│ ├── _components/ # Colocated UI, each ≤300 LOC
│ ├── _hooks/ # Client hooks
│ └── _server/ # Server actions
├── api/<domain>/route.ts # Thin handlers → lib/server/<domain>/
lib/
├── <domain>/ # Pure helpers, zod schemas
└── server/<domain>/ # "server-only" logic
components/ # App-wide shared UI
```
See `../AGENTS.typescript.md`.
## Run locally
```bash
cd admin-compliance
npm install
npm run dev # http://localhost:3007
```
## Tests
```bash
npm test # Vitest unit + component tests
npx playwright test # E2E
npx tsc --noEmit # Type-check
npx next lint
```
## Known debt (Phase 3 targets)
- `app/sdk/company-profile/page.tsx` (3017 LOC), `tom-generator/controls/loader.ts` (2521), `lib/sdk/types.ts` (2511), `app/sdk/loeschfristen/page.tsx` (2322), `app/sdk/dsb-portal/page.tsx` (2068) — all must be split.
- 0 test files for 182 monolithic pages. Phase 3 adds Playwright smoke + Vitest unit coverage.
## Don't touch
- Backend API paths without updating `backend-compliance/` in the same change.
- `lib/sdk/types.ts` in large contiguous chunks — it's being domain-split.


@@ -17,7 +17,7 @@ COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o /ai-compliance-sdk ./cmd/server
# Runtime stage
-FROM alpine:3.19
+FROM alpine:3.21
WORKDIR /app


@@ -0,0 +1,55 @@
# ai-compliance-sdk
Go/Gin service providing AI-Act compliance analysis: iACE impact assessments, UCCA rules engine, hazard library, training/academy, audit, escalation, portfolio, RBAC, RAG, whistleblower, workshop.
**Port:** `8090` → exposed `8093` (container: `bp-compliance-ai-sdk`)
**Stack:** Go 1.24, Gin, pgx, Postgres.
## Architecture (target — Phase 2)
```
cmd/server/main.go # Thin entrypoint (<50 LOC)
internal/
├── app/ # Wiring + lifecycle
├── domain/<aggregate>/ # Types, interfaces, errors
├── service/<aggregate>/ # Business logic
├── repository/postgres/ # Repo implementations
├── transport/http/ # Gin handlers + middleware + router
└── platform/ # DB pool, logger, config, httperr
```
See `../AGENTS.go.md` for the full convention.
## Run locally
```bash
cd ai-compliance-sdk
go mod download
export COMPLIANCE_DATABASE_URL=...
go run ./cmd/server
```
## Tests
```bash
go test -race -cover ./...
golangci-lint run --timeout 5m ./...
```
Co-located `*_test.go`, table-driven. Repo layer uses testcontainers-go (or the compose Postgres) — no SQL mocks.
## Public API surface
Handlers under `internal/api/handlers/` (Phase 2 moves to `internal/transport/http/handler/`). Health at `GET /health`. iACE, UCCA, training, academy, portfolio, escalation, audit, rag, whistleblower, workshop subresources. Every route is a contract.
## Environment
| Var | Purpose |
|-----|---------|
| `COMPLIANCE_DATABASE_URL` | Postgres DSN |
| `LLM_GATEWAY_URL` | LLM router for rag/iACE |
| `QDRANT_URL` | Vector search |
## Don't touch
The DB schema: it is owned by hand-rolled migrations maintained elsewhere.


@@ -0,0 +1,181 @@
# Phase 1 Runbook — backend-compliance refactor
This document is the step-by-step execution guide for Phase 1 of the repo refactor plan at `~/.claude/plans/vectorized-purring-barto.md`. It exists because the refactor must be driven from a session that can actually run `pytest` against the service, and every step must be verified green before moving to the next.
## Prerequisites
- Python 3.12 venv with `backend-compliance/requirements.txt` installed.
- Local Postgres reachable via `COMPLIANCE_DATABASE_URL` (use the compose db).
- Existing 48 pytest test files pass from a clean checkout: `pytest compliance/tests/ -v` → all green. **Do not proceed until this is true.**
## Step 0 — Record the baseline
```bash
cd backend-compliance
pytest compliance/tests/ -v --tb=short | tee /tmp/baseline.txt
pytest --cov=compliance --cov-report=term | tee /tmp/baseline-coverage.txt
python tests/contracts/regenerate_baseline.py # creates openapi.baseline.json
git add tests/contracts/openapi.baseline.json
git commit -m "phase1: pin OpenAPI baseline before refactor"
```
The baseline file is the contract. From this point forward, `pytest tests/contracts/` MUST stay green.
## Step 1 — Characterization tests (before any code move)
For each oversized route file we will refactor, add a happy-path + 1-error-path test **before** touching the source. These are called "characterization tests" and their purpose is to freeze current observable behavior so the refactor cannot change it silently.
Oversized route files to cover (ordered by size):
| File | LOC | Endpoints to cover |
|---|---:|---|
| `compliance/api/isms_routes.py` | 1676 | one happy + one 4xx per route |
| `compliance/api/dsr_routes.py` | 1176 | same |
| `compliance/api/vvt_routes.py` | *N* | same |
| `compliance/api/dsfa_routes.py` | *N* | same |
| `compliance/api/tom_routes.py` | *N* | same |
| `compliance/api/schemas.py` | 1899 | N/A (covered transitively) |
| `compliance/db/models.py` | 1466 | N/A (covered by existing + route tests) |
| `compliance/db/repository.py` | 1547 | add unit tests per repo class as they are extracted |
Use `httpx.AsyncClient` + factory fixtures; see `AGENTS.python.md`. Place under `tests/integration/test_<domain>_contract.py`.
Commit: `phase1: characterization tests for <domain> routes`.
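The shape of such a test, sketched with a stand-in client (the real tests use `httpx.AsyncClient` against the app; the route and payloads here are illustrative):

```python
# Sketch — FakeClient stands in for httpx.AsyncClient(app=...) and encodes
# today's observed behavior; route and payloads are illustrative.
import asyncio

class FakeClient:
    async def post(self, path: str, json: dict):
        if path == "/api/v1/dsr/requests" and json.get("subject_email"):
            return 201, {"status": "received"}
        return 422, {"detail": "subject_email is required"}

async def test_create_dsr_happy_path():
    status, body = await FakeClient().post(
        "/api/v1/dsr/requests", json={"subject_email": "a@example.com"}
    )
    # Freeze current behavior: status code AND response shape.
    assert status == 201
    assert body["status"] == "received"

async def test_create_dsr_missing_email_is_422():
    status, _ = await FakeClient().post("/api/v1/dsr/requests", json={})
    assert status == 422
```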
## Step 2 — Split `compliance/db/models.py` (1466 → <500 per file)
⚠️ **Atomic step.** A `compliance/db/models/` package CANNOT coexist with the existing `compliance/db/models.py` module — Python's import system shadows the module with the package, breaking every `from compliance.db.models import X` call. The directory skeleton was intentionally NOT pre-created for this reason. Do the following in **one commit**:
1. Create `compliance/db/models/` directory with `__init__.py` (re-export shim — see template below).
2. Move aggregate model classes into `compliance/db/models/<aggregate>.py` modules.
3. Delete the old `compliance/db/models.py` file in the same commit.
Strategy uses a **re-export shim** so no import sites change:
1. For each aggregate, create `compliance/db/models/<aggregate>.py` containing the model classes. Copy verbatim; do not rename `__tablename__`, columns, or relationship strings.
2. Aggregate suggestions (verify by reading `models.py`):
- `dsr.py` (DSR requests, exports)
- `dsfa.py`
- `vvt.py`
- `tom.py`
- `ai.py` (AI systems, compliance checks)
- `consent.py`
- `evidence.py`
- `vendor.py`
- `audit.py`
- `policy.py`
- `project.py`
3. After every aggregate is moved, replace `compliance/db/models.py` with:
```python
"""Re-export shim — see compliance.db.models package."""
from compliance.db.models.dsr import * # noqa: F401,F403
from compliance.db.models.dsfa import * # noqa: F401,F403
# ... one per module
```
This keeps `from compliance.db.models import XYZ` working everywhere it's used today.
4. Run `pytest` after every move. Green → commit. Red → revert that move and investigate.
5. Existing aggregate-level files (`compliance/db/dsr_models.py`, `vvt_models.py`, `tom_models.py`, etc.) should be folded into the new `compliance/db/models/` package in the same pass — do not leave two parallel naming conventions.
**Do not** add `__init__.py` star-imports that change `Base.metadata` discovery order. Alembic's autogenerate depends on it. Verify via: `alembic check` if the env is set up.
## Step 3 — Split `compliance/api/schemas.py` (1899 → per domain)
Mirror the models split:
1. For each domain, create `compliance/schemas/<domain>.py` with the Pydantic models.
2. Replace `compliance/api/schemas.py` with a re-export shim.
3. Keep `Create`/`Update`/`Read` variants separated; do not merge them into unions.
4. Run `pytest` + contract test after each domain. Green → commit.
## Step 4 — Extract services (router → service delegation)
For each route file > 500 LOC, pull handler bodies into a service class under `compliance/services/<domain>_service.py` (new-style domain services, not the utility `compliance/services/` modules that already exist — consider renaming those to `compliance/services/_legacy/` if collisions arise).
Router handlers become:
```python
@router.post("/dsr/requests", response_model=DSRRequestRead, status_code=201)
async def create_dsr_request(
payload: DSRRequestCreate,
service: DSRService = Depends(get_dsr_service),
tenant_id: UUID = Depends(get_tenant_id),
) -> DSRRequestRead:
try:
return await service.create(tenant_id, payload)
except ConflictError as exc:
raise HTTPException(409, str(exc)) from exc
except NotFoundError as exc:
raise HTTPException(404, str(exc)) from exc
```
Rules:
- Handler body ≤ 30 LOC.
- Service raises domain errors (`compliance.domain`), never `HTTPException`.
- Inject service via `Depends` on a factory that wires the repository.
Run tests after each router is thinned. Contract test must stay green.
## Step 5 — Extract repositories
`compliance/db/repository.py` (1547) and `compliance/db/isms_repository.py` (838) split into:
```
compliance/repositories/
├── dsr_repository.py
├── dsfa_repository.py
├── vvt_repository.py
├── isms_repository.py # <500 LOC, split if needed
└── ...
```
Each repository class:
- Takes `AsyncSession` (or equivalent) in constructor.
- Exposes intent-named methods (`get_pending_for_tenant`, not `select_where`).
- Returns ORM instances or domain VOs. No `Row`.
- No business logic.
Unit-test every repo class against the compose Postgres with a transactional fixture (begin → rollback).
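The begin → rollback pattern, sketched with stdlib `sqlite3` standing in for the compose Postgres session (the real fixture wraps the SQLAlchemy session; table name is illustrative):

```python
# Sketch — sqlite3 stands in for the real Postgres session fixture.
import sqlite3
from contextlib import contextmanager

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dsr_requests (id INTEGER PRIMARY KEY, status TEXT)")
conn.commit()  # schema committed once, outside any test transaction

@contextmanager
def transactional_session(connection: sqlite3.Connection):
    """Per-test session: everything the test writes is rolled back afterwards."""
    try:
        yield connection
    finally:
        connection.rollback()

with transactional_session(conn) as session:
    session.execute("INSERT INTO dsr_requests (status) VALUES ('pending')")
    count_inside = session.execute("SELECT COUNT(*) FROM dsr_requests").fetchone()[0]
count_after = conn.execute("SELECT COUNT(*) FROM dsr_requests").fetchone()[0]
```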
## Step 6 — mypy --strict on new packages
CI already runs `mypy --strict` against `compliance/{services,repositories,domain,schemas}/`. After every extraction, verify locally:
```bash
mypy --strict --ignore-missing-imports compliance/schemas compliance/repositories compliance/domain compliance/services
```
If you have type errors, fix them in the extracted module. **Do not** add `# type: ignore` blanket waivers. If a third-party lib is poorly typed, add it to `[mypy.overrides]` in `pyproject.toml`/`mypy.ini` with a one-line rationale.
## Step 7 — Expand test coverage
- Unit tests per service (mocked repo).
- Integration tests per repository (real db, transactional).
- Contract test stays green.
- Target: 80% coverage on new code. Never decrease the service baseline.
## Step 8 — Guardrail enforcement
After Phase 1 completes, `compliance/db/models.py`, `compliance/db/repository.py`, and `compliance/api/schemas.py` are either re-export shims (≤50 LOC each) or deleted. No file in `backend-compliance/compliance/` exceeds 500 LOC. Run:
```bash
../scripts/check-loc.sh backend-compliance/
```
Any remaining hard violations → document in `.claude/rules/loc-exceptions.txt` with rationale, or keep splitting.
## Done when
- `pytest compliance/tests/ tests/ -v` all green.
- `pytest tests/contracts/` green — OpenAPI has no removals, no renames, no new required request fields.
- Coverage ≥ baseline.
- `mypy --strict` clean on new packages.
- `scripts/check-loc.sh backend-compliance/` reports 0 hard violations in new/touched files (legacy allowlisted in `loc-exceptions.txt` only with rationale).
- CI all green on PR.
## Pitfalls
- **Do not change `__tablename__` or column names.** Even a rename breaks the DB contract.
- **Do not change relationship back_populates / backref strings.** SQLAlchemy resolves these by name at mapper configuration.
- **Do not change route paths or pydantic field names.** Contract test will catch most — but JSON field aliasing (`Field(alias=...)`) is easy to break accidentally.
- **Do not eagerly reformat unrelated code.** Keep the diff reviewable. One PR per major step.
- **Do not bypass the pre-commit hook.** If a file legitimately must be >500 LOC during an intermediate step, squash commits at the end so the final state is clean.


@@ -0,0 +1,55 @@
# backend-compliance
Python/FastAPI service implementing the DSGVO compliance API: DSR, DSFA, consent, controls, risks, evidence, audit, vendor management, ISMS, change requests, document generation.
**Port:** `8002` (container: `bp-compliance-backend`)
**Stack:** Python 3.12, FastAPI, SQLAlchemy 2.x, Alembic, Keycloak auth.
## Architecture (target — Phase 1)
```
compliance/
├── api/ # Routers (thin, ≤30 LOC per handler)
├── services/ # Business logic
├── repositories/ # DB access
├── domain/ # Value objects, domain errors
├── schemas/ # Pydantic models, split per domain
└── db/models/ # SQLAlchemy ORM, one module per aggregate
```
See `../AGENTS.python.md` for the full convention and `../.claude/rules/architecture.md` for the non-negotiable rules.
## Run locally
```bash
cd backend-compliance
pip install -r requirements.txt
export COMPLIANCE_DATABASE_URL=... # Postgres (Hetzner or local)
uvicorn main:app --reload --port 8002
```
## Tests
```bash
pytest compliance/tests/ -v
pytest --cov=compliance --cov-report=term-missing
```
Layout: `tests/unit/`, `tests/integration/`, `tests/contracts/`. Contract tests diff `/openapi.json` against `tests/contracts/openapi.baseline.json`.
## Public API surface
404+ endpoints across `/api/v1/*`. Grouped by domain: `ai`, `audit`, `consent`, `dsfa`, `dsr`, `gdpr`, `vendor`, `evidence`, `change-requests`, `generation`, `projects`, `company-profile`, `isms`. Every path is a contract — see the "Public endpoints" rule in the root `CLAUDE.md`.
## Environment
| Var | Purpose |
|-----|---------|
| `COMPLIANCE_DATABASE_URL` | Postgres DSN, `sslmode=require` |
| `KEYCLOAK_*` | Auth verification |
| `QDRANT_URL`, `QDRANT_API_KEY` | Vector search |
| `CORE_VALKEY_URL` | Session cache |
## Don't touch
Database schema, `__tablename__`, column names, existing migrations under `migrations/`. See root `CLAUDE.md` rule 3.


@@ -186,7 +186,7 @@ async def update_ai_system(
if hasattr(system, key):
setattr(system, key, value)
-system.updated_at = datetime.utcnow()
+system.updated_at = datetime.now(timezone.utc)
db.commit()
db.refresh(system)
@@ -266,7 +266,7 @@ async def assess_ai_system(
except ValueError:
system.classification = AIClassificationEnum.UNCLASSIFIED
-system.assessment_date = datetime.utcnow()
+system.assessment_date = datetime.now(timezone.utc)
system.assessment_result = assessment_result
system.obligations = _derive_obligations(classification)
system.risk_factors = assessment_result.get("risk_factors", [])


@@ -9,7 +9,7 @@ Endpoints:
"""
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, List
from uuid import uuid4
import hashlib
@@ -204,7 +204,7 @@ async def start_audit_session(
)
session.status = AuditSessionStatusEnum.IN_PROGRESS
-session.started_at = datetime.utcnow()
+session.started_at = datetime.now(timezone.utc)
db.commit()
return {"success": True, "message": "Audit session started", "status": "in_progress"}
@@ -229,7 +229,7 @@ async def complete_audit_session(
)
session.status = AuditSessionStatusEnum.COMPLETED
-session.completed_at = datetime.utcnow()
+session.completed_at = datetime.now(timezone.utc)
db.commit()
return {"success": True, "message": "Audit session completed", "status": "completed"}
@@ -482,7 +482,7 @@ async def sign_off_item(
# Update existing sign-off
signoff.result = result_enum
signoff.notes = request.notes
-signoff.updated_at = datetime.utcnow()
+signoff.updated_at = datetime.now(timezone.utc)
else:
# Create new sign-off
signoff = AuditSignOffDB(
@@ -497,11 +497,11 @@ async def sign_off_item(
# Create digital signature if requested
signature = None
if request.sign:
-timestamp = datetime.utcnow().isoformat()
+timestamp = datetime.now(timezone.utc).isoformat()
data = f"{result_enum.value}|{requirement_id}|{session.auditor_name}|{timestamp}"
signature = hashlib.sha256(data.encode()).hexdigest()
signoff.signature_hash = signature
-signoff.signed_at = datetime.utcnow()
+signoff.signed_at = datetime.now(timezone.utc)
signoff.signed_by = session.auditor_name
# Update session statistics
@@ -523,7 +523,7 @@ async def sign_off_item(
# Auto-start session if this is the first sign-off
if session.status == AuditSessionStatusEnum.DRAFT:
session.status = AuditSessionStatusEnum.IN_PROGRESS
-session.started_at = datetime.utcnow()
+session.started_at = datetime.now(timezone.utc)
db.commit()
db.refresh(signoff)
@@ -587,7 +587,7 @@ async def get_sign_off(
@router.get("/sessions/{session_id}/report/pdf")
async def generate_audit_pdf_report(
session_id: str,
-language: str = Query("de", regex="^(de|en)$"),
+language: str = Query("de", pattern="^(de|en)$"),
include_signatures: bool = Query(True),
db: Session = Depends(get_db),
):


@@ -6,7 +6,7 @@ Public SDK-Endpoints (fuer Einbettung) + Admin-Endpoints (Konfiguration & Stats)
import uuid
import hashlib
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
from typing import Optional, List
from fastapi import APIRouter, Depends, HTTPException, Query, Header
@@ -206,8 +206,8 @@ async def record_consent(
existing.ip_hash = ip_hash
existing.user_agent = body.user_agent
existing.consent_string = body.consent_string
-existing.expires_at = datetime.utcnow() + timedelta(days=365)
-existing.updated_at = datetime.utcnow()
+existing.expires_at = datetime.now(timezone.utc) + timedelta(days=365)
+existing.updated_at = datetime.now(timezone.utc)
db.flush()
_log_banner_audit(
@@ -227,7 +227,7 @@ async def record_consent(
ip_hash=ip_hash,
user_agent=body.user_agent,
consent_string=body.consent_string,
-expires_at=datetime.utcnow() + timedelta(days=365),
+expires_at=datetime.now(timezone.utc) + timedelta(days=365),
)
db.add(consent)
db.flush()
@@ -476,7 +476,7 @@ async def update_site_config(
if val is not None:
setattr(config, field, val)
-config.updated_at = datetime.utcnow()
+config.updated_at = datetime.now(timezone.utc)
db.commit()
db.refresh(config)
return _site_config_to_dict(config)


@@ -15,6 +15,7 @@ from typing import Any, Optional
from fastapi import APIRouter, HTTPException, Header
from pydantic import BaseModel
+from sqlalchemy import text
from database import SessionLocal
@@ -75,13 +76,13 @@ async def get_compliance_scope(
db = SessionLocal()
try:
row = db.execute(
-"""SELECT tenant_id,
+text("""SELECT tenant_id,
state->'compliance_scope' AS scope,
created_at,
updated_at
FROM sdk_states
WHERE tenant_id = :tid
-AND state ? 'compliance_scope'"""
+AND state ? 'compliance_scope'""")
{"tid": tid},
).fetchone()
@@ -106,22 +107,22 @@ async def upsert_compliance_scope(
db = SessionLocal()
try:
db.execute(
-"""INSERT INTO sdk_states (tenant_id, state)
+text("""INSERT INTO sdk_states (tenant_id, state)
VALUES (:tid, jsonb_build_object('compliance_scope', :scope::jsonb))
ON CONFLICT (tenant_id) DO UPDATE
SET state = sdk_states.state || jsonb_build_object('compliance_scope', :scope::jsonb),
-updated_at = NOW()"""
+updated_at = NOW()""")
{"tid": tid, "scope": scope_json},
)
db.commit()
row = db.execute(
-"""SELECT tenant_id,
+text("""SELECT tenant_id,
state->'compliance_scope' AS scope,
created_at,
updated_at
FROM sdk_states
-WHERE tenant_id = :tid"""
+WHERE tenant_id = :tid""")
{"tid": tid},
).fetchone()


@@ -11,7 +11,7 @@ Endpoints:
"""
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional
from fastapi import APIRouter, Depends, HTTPException, Header
@@ -173,7 +173,7 @@ async def update_consent_template(
set_clauses = ", ".join(f"{k} = :{k}" for k in updates)
updates["id"] = template_id
updates["tenant_id"] = tenant_id
-updates["now"] = datetime.utcnow()
+updates["now"] = datetime.now(timezone.utc)
row = db.execute(
text(f"""


@@ -186,7 +186,7 @@ async def list_jobs(
@router.get("/generate/review-queue")
async def get_review_queue(
-release_state: str = Query("needs_review", regex="^(needs_review|too_close|duplicate)$"),
+release_state: str = Query("needs_review", pattern="^(needs_review|too_close|duplicate)$"),
limit: int = Query(50, ge=1, le=200),
):
"""Get controls that need manual review."""


@@ -20,7 +20,7 @@ Usage:
"""
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Any, Dict, List, Optional
from fastapi import APIRouter, Depends, HTTPException, Query
@@ -171,7 +171,7 @@ def create_crud_router(
updates: Dict[str, Any] = {
"id": item_id,
"tenant_id": tenant_id,
-"updated_at": datetime.utcnow(),
+"updated_at": datetime.now(timezone.utc),
}
set_clauses = ["updated_at = :updated_at"]


@@ -10,7 +10,7 @@ Endpoints:
"""
import logging
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
from calendar import month_abbr
from typing import Optional
@@ -167,7 +167,7 @@ async def get_executive_dashboard(db: Session = Depends(get_db)):
# Trend data — only show current score, no simulated history
trend_data = []
if total > 0:
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
trend_data.append(TrendDataPoint(
date=now.strftime("%Y-%m-%d"),
score=round(score, 1),
@@ -204,7 +204,7 @@ async def get_executive_dashboard(db: Session = Depends(get_db)):
# Get upcoming deadlines
controls = ctrl_repo.get_all()
upcoming_deadlines = []
-today = datetime.utcnow().date()
+today = datetime.now(timezone.utc).date()
for ctrl in controls:
if ctrl.next_review_at:
@@ -280,7 +280,7 @@ async def get_executive_dashboard(db: Session = Depends(get_db)):
top_risks=top_risks,
upcoming_deadlines=upcoming_deadlines,
team_workload=team_workload,
-last_updated=datetime.utcnow().isoformat(),
+last_updated=datetime.now(timezone.utc).isoformat(),
)
@@ -305,7 +305,7 @@ async def get_compliance_trend(
# Trend data — only current score, no simulated history
trend_data = []
if total > 0:
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
trend_data.append({
"date": now.strftime("%Y-%m-%d"),
"score": round(current_score, 1),
@@ -318,7 +318,7 @@ async def get_compliance_trend(
"current_score": round(current_score, 1),
"trend": trend_data,
"period_months": months,
-"generated_at": datetime.utcnow().isoformat(),
+"generated_at": datetime.now(timezone.utc).isoformat(),
}


@@ -20,7 +20,7 @@ Endpoints:
"""
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, List
from fastapi import APIRouter, Depends, HTTPException, Query
@@ -691,7 +691,7 @@ async def update_dsfa_status(
params: dict = {
"id": dsfa_id, "tid": tid,
"status": request.status,
-"approved_at": datetime.utcnow() if request.status == "approved" else None,
+"approved_at": datetime.now(timezone.utc) if request.status == "approved" else None,
"approved_by": request.approved_by,
}
row = db.execute(
@@ -906,7 +906,7 @@ async def export_dsfa_json(
dsfa_data = _dsfa_to_response(row)
return {
-"exported_at": datetime.utcnow().isoformat(),
+"exported_at": datetime.now(timezone.utc).isoformat(),
"format": format,
"dsfa": dsfa_data,
}


@@ -7,7 +7,7 @@ Native Python/FastAPI implementation, replaces the Go consent-service proxy.
import io
import csv
import uuid
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
from typing import Optional, List, Dict, Any
from fastapi import APIRouter, Depends, HTTPException, Query, Header
@@ -168,7 +168,7 @@ def _get_tenant(x_tenant_id: Optional[str] = Header(None, alias='X-Tenant-ID'))
def _generate_request_number(db: Session, tenant_id: str) -> str:
"""Generate next request number: DSR-YYYY-NNNNNN"""
-year = datetime.utcnow().year
+year = datetime.now(timezone.utc).year
try:
result = db.execute(text("SELECT nextval('compliance_dsr_request_number_seq')"))
seq = result.scalar()
@@ -275,7 +275,7 @@ async def create_dsr(
if body.priority and body.priority not in VALID_PRIORITIES:
raise HTTPException(status_code=400, detail=f"Invalid priority. Must be one of: {VALID_PRIORITIES}")
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
deadline_days = DEADLINE_DAYS.get(body.request_type, 30)
request_number = _generate_request_number(db, tenant_id)
@@ -348,7 +348,7 @@ async def list_dsrs(
query = query.filter(DSRRequestDB.priority == priority)
if overdue_only:
query = query.filter(
-DSRRequestDB.deadline_at < datetime.utcnow(),
+DSRRequestDB.deadline_at < datetime.now(timezone.utc),
DSRRequestDB.status.notin_(["completed", "rejected", "cancelled"]),
)
if search:
@@ -399,7 +399,7 @@ async def get_dsr_stats(
by_type[t] = base.filter(DSRRequestDB.request_type == t).count()
# Overdue
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
overdue = base.filter(
DSRRequestDB.deadline_at < now,
DSRRequestDB.status.notin_(["completed", "rejected", "cancelled"]),
@@ -459,7 +459,7 @@ async def export_dsrs(
if format == "json":
return {
-"exported_at": datetime.utcnow().isoformat(),
+"exported_at": datetime.now(timezone.utc).isoformat(),
"total": len(dsrs),
"requests": [_dsr_to_dict(d) for d in dsrs],
}
@@ -506,7 +506,7 @@ async def process_deadlines(
db: Session = Depends(get_db),
):
"""Processes deadlines and marks overdue DSRs."""
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
tid = uuid.UUID(tenant_id)
overdue = db.query(DSRRequestDB).filter(
@@ -714,7 +714,7 @@ async def publish_template_version(
if not version:
raise HTTPException(status_code=404, detail="Version not found")
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
version.status = "published"
version.published_at = now
version.published_by = "admin"
@@ -766,7 +766,7 @@ async def update_dsr(
dsr.internal_notes = body.internal_notes
if body.assigned_to is not None:
dsr.assigned_to = body.assigned_to
-dsr.assigned_at = datetime.utcnow()
+dsr.assigned_at = datetime.now(timezone.utc)
if body.request_text is not None:
dsr.request_text = body.request_text
if body.affected_systems is not None:
@@ -778,7 +778,7 @@ async def update_dsr(
if body.objection_details is not None:
dsr.objection_details = body.objection_details
-dsr.updated_at = datetime.utcnow()
+dsr.updated_at = datetime.now(timezone.utc)
db.commit()
db.refresh(dsr)
return _dsr_to_dict(dsr)
@@ -797,7 +797,7 @@ async def delete_dsr(
_record_history(db, dsr, "cancelled", comment="DSR storniert")
dsr.status = "cancelled"
-dsr.updated_at = datetime.utcnow()
+dsr.updated_at = datetime.now(timezone.utc)
db.commit()
return {"success": True, "message": "DSR cancelled"}
@@ -820,7 +820,7 @@ async def change_status(
dsr = _get_dsr_or_404(db, dsr_id, tenant_id)
_record_history(db, dsr, body.status, comment=body.comment)
dsr.status = body.status
-dsr.updated_at = datetime.utcnow()
+dsr.updated_at = datetime.now(timezone.utc)
db.commit()
db.refresh(dsr)
return _dsr_to_dict(dsr)
@@ -835,7 +835,7 @@ async def verify_identity(
):
"""Verifies the requester's identity."""
dsr = _get_dsr_or_404(db, dsr_id, tenant_id)
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
dsr.identity_verified = True
dsr.verification_method = body.method
@@ -868,9 +868,9 @@ async def assign_dsr(
"""Assigns a DSR to a handler."""
dsr = _get_dsr_or_404(db, dsr_id, tenant_id)
dsr.assigned_to = body.assignee_id
-dsr.assigned_at = datetime.utcnow()
+dsr.assigned_at = datetime.now(timezone.utc)
dsr.assigned_by = "admin"
-dsr.updated_at = datetime.utcnow()
+dsr.updated_at = datetime.now(timezone.utc)
db.commit()
db.refresh(dsr)
return _dsr_to_dict(dsr)
@@ -888,7 +888,7 @@ async def extend_deadline(
if dsr.status in ("completed", "rejected", "cancelled"):
raise HTTPException(status_code=400, detail="Cannot extend deadline for closed DSR")
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
current_deadline = dsr.extended_deadline_at or dsr.deadline_at
new_deadline = current_deadline + timedelta(days=body.days or 60)
@@ -916,7 +916,7 @@ async def complete_dsr(
if dsr.status in ("completed", "cancelled"):
raise HTTPException(status_code=400, detail="DSR already completed or cancelled")
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
_record_history(db, dsr, "completed", comment=body.summary)
dsr.status = "completed"
dsr.completed_at = now
@@ -941,7 +941,7 @@ async def reject_dsr(
if dsr.status in ("completed", "rejected", "cancelled"):
raise HTTPException(status_code=400, detail="DSR already closed")
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
_record_history(db, dsr, "rejected", comment=f"{body.reason} ({body.legal_basis})")
dsr.status = "rejected"
dsr.rejection_reason = body.reason
@@ -1024,7 +1024,7 @@ async def send_communication(
):
"""Sends a communication."""
dsr = _get_dsr_or_404(db, dsr_id, tenant_id)
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
comm = DSRCommunicationDB(
tenant_id=uuid.UUID(tenant_id),
@@ -1158,7 +1158,7 @@ async def update_exception_check(
check.applies = body.applies
check.notes = body.notes
check.checked_by = "admin"
-check.checked_at = datetime.utcnow()
+check.checked_at = datetime.now(timezone.utc)
db.commit()
db.refresh(check)


@@ -15,7 +15,7 @@ Endpoints:
"""
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, List, Any, Dict
from fastapi import APIRouter, Depends, HTTPException, Query, Header
@@ -131,7 +131,7 @@ async def upsert_catalog(
if record:
record.selected_data_point_ids = request.selected_data_point_ids
record.custom_data_points = request.custom_data_points
-record.updated_at = datetime.utcnow()
+record.updated_at = datetime.now(timezone.utc)
else:
record = EinwilligungenCatalogDB(
tenant_id=tenant_id,
@@ -184,7 +184,7 @@ async def upsert_company(
if record:
record.data = request.data
-record.updated_at = datetime.utcnow()
+record.updated_at = datetime.now(timezone.utc)
else:
record = EinwilligungenCompanyDB(tenant_id=tenant_id, data=request.data)
db.add(record)
@@ -233,7 +233,7 @@ async def upsert_cookies(
if record:
record.categories = request.categories
record.config = request.config
-record.updated_at = datetime.utcnow()
+record.updated_at = datetime.now(timezone.utc)
else:
record = EinwilligungenCookiesDB(
tenant_id=tenant_id,
@@ -374,7 +374,7 @@ async def create_consent(
user_id=request.user_id,
data_point_id=request.data_point_id,
granted=request.granted,
-granted_at=datetime.utcnow(),
+granted_at=datetime.now(timezone.utc),
consent_version=request.consent_version,
source=request.source,
ip_address=request.ip_address,
@@ -443,7 +443,7 @@ async def revoke_consent(
if consent.revoked_at:
raise HTTPException(status_code=400, detail="Consent is already revoked")
-consent.revoked_at = datetime.utcnow()
+consent.revoked_at = datetime.now(timezone.utc)
_record_history(db, consent, 'revoked')
db.commit()
db.refresh(consent)


@@ -6,7 +6,7 @@ Including versioning, approval workflow, preview, and send logging.
"""
import uuid
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, Dict
from fastapi import APIRouter, Depends, HTTPException, Query, Header
@@ -271,7 +271,7 @@ async def update_settings(
if val is not None:
setattr(settings, field, val)
-settings.updated_at = datetime.utcnow()
+settings.updated_at = datetime.now(timezone.utc)
db.commit()
db.refresh(settings)
@@ -638,7 +638,7 @@ async def submit_version(
raise HTTPException(status_code=400, detail="Only draft versions can be submitted")
v.status = "review"
-v.submitted_at = datetime.utcnow()
+v.submitted_at = datetime.now(timezone.utc)
v.submitted_by = "admin"
db.commit()
db.refresh(v)
@@ -730,7 +730,7 @@ async def publish_version(
if v.status not in ("approved", "review", "draft"):
raise HTTPException(status_code=400, detail="Version cannot be published")
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
v.status = "published"
v.published_at = now
v.published_by = "admin"


@@ -12,7 +12,7 @@ Endpoints:
"""
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, Any, Dict
from fastapi import APIRouter, Depends, HTTPException, Query, Header
@@ -244,7 +244,7 @@ async def update_escalation(
set_clauses = ", ".join(f"{k} = :{k}" for k in updates.keys())
updates["id"] = escalation_id
-updates["updated_at"] = datetime.utcnow()
+updates["updated_at"] = datetime.now(timezone.utc)
row = db.execute(
text(
@@ -277,7 +277,7 @@ async def update_status(
resolved_at = request.resolved_at
if request.status in ('resolved', 'closed') and resolved_at is None:
-resolved_at = datetime.utcnow()
+resolved_at = datetime.now(timezone.utc)
row = db.execute(
text(
@@ -288,7 +288,7 @@ async def update_status(
{
"status": request.status,
"resolved_at": resolved_at,
-"updated_at": datetime.utcnow(),
+"updated_at": datetime.now(timezone.utc),
"id": escalation_id,
},
).fetchone()


@@ -10,7 +10,7 @@ Endpoints:
import logging
import os
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
from typing import Optional
from collections import defaultdict
import uuid as uuid_module
@@ -370,8 +370,8 @@ def _store_evidence(
mime_type="application/json",
source="ci_pipeline",
ci_job_id=ci_job_id,
-valid_from=datetime.utcnow(),
-valid_until=datetime.utcnow() + timedelta(days=90),
+valid_from=datetime.now(timezone.utc),
+valid_until=datetime.now(timezone.utc) + timedelta(days=90),
status=EvidenceStatusEnum(parsed["evidence_status"]),
)
db.add(evidence)
@@ -455,7 +455,7 @@ def _update_risks(db: Session, *, source: str, control_id: str, ci_job_id: str,
tool=source,
control_id=control_id,
evidence_type=f"ci_{source}",
-timestamp=datetime.utcnow().isoformat(),
+timestamp=datetime.now(timezone.utc).isoformat(),
commit_sha=report_data.get("commit_sha", "unknown") if report_data else "unknown",
ci_job_id=ci_job_id,
findings=findings_detail,
@@ -571,7 +571,7 @@ async def get_ci_evidence_status(
Returns overview of recent evidence collected from CI/CD pipelines,
useful for dashboards and monitoring.
"""
-cutoff_date = datetime.utcnow() - timedelta(days=days)
+cutoff_date = datetime.now(timezone.utc) - timedelta(days=days)
# Build query
query = db.query(EvidenceDB).filter(


@@ -18,7 +18,7 @@ import logging
import re
import asyncio
from typing import Optional, List, Dict
-from datetime import datetime
+from datetime import datetime, timezone
from fastapi import APIRouter, Depends
from pydantic import BaseModel
@@ -171,7 +171,7 @@ def _get_or_create_regulation(
code=regulation_code,
name=regulation_name or regulation_code,
regulation_type=reg_type,
-description=f"Auto-created from RAG extraction ({datetime.utcnow().date()})",
+description=f"Auto-created from RAG extraction ({datetime.now(timezone.utc).date()})",
)
return reg


@@ -15,6 +15,7 @@ from typing import Optional
import httpx
from fastapi import APIRouter, File, Form, Header, UploadFile, HTTPException
from pydantic import BaseModel
+from sqlalchemy import text
from database import SessionLocal
@@ -291,11 +292,11 @@ async def analyze_document(
db = SessionLocal()
try:
db.execute(
-"""INSERT INTO compliance_imported_documents
+text("""INSERT INTO compliance_imported_documents
(id, tenant_id, filename, file_type, file_size, detected_type, detection_confidence,
extracted_text, extracted_entities, recommendations, status, analyzed_at)
VALUES (:id, :tenant_id, :filename, :file_type, :file_size, :detected_type, :confidence,
-:text, :entities::jsonb, :recommendations::jsonb, 'analyzed', NOW())""",
+:text, :entities::jsonb, :recommendations::jsonb, 'analyzed', NOW())"""),
{
"id": doc_id,
"tenant_id": tenant_id,
@@ -313,9 +314,9 @@ async def analyze_document(
if total_gaps > 0:
import json
db.execute(
-"""INSERT INTO compliance_gap_analyses
+text("""INSERT INTO compliance_gap_analyses
(tenant_id, document_id, total_gaps, critical_gaps, high_gaps, medium_gaps, low_gaps, gaps, recommended_packages)
-VALUES (:tenant_id, :document_id, :total, :critical, :high, :medium, :low, :gaps::jsonb, :packages::jsonb)""",
+VALUES (:tenant_id, :document_id, :total, :critical, :high, :medium, :low, :gaps::jsonb, :packages::jsonb)"""),
{
"tenant_id": tenant_id,
"document_id": doc_id,
@@ -358,7 +359,7 @@ async def get_gap_analysis(
db = SessionLocal()
try:
result = db.execute(
-"SELECT * FROM compliance_gap_analyses WHERE document_id = :doc_id AND tenant_id = :tid",
+text("SELECT * FROM compliance_gap_analyses WHERE document_id = :doc_id AND tenant_id = :tid"),
{"doc_id": document_id, "tid": tid},
).fetchone()
if not result:
@@ -374,11 +375,11 @@ async def list_documents(tenant_id: str = "default"):
db = SessionLocal()
try:
result = db.execute(
-"""SELECT id, filename, file_type, file_size, detected_type, detection_confidence,
+text("""SELECT id, filename, file_type, file_size, detected_type, detection_confidence,
extracted_entities, recommendations, status, analyzed_at, created_at
FROM compliance_imported_documents
WHERE tenant_id = :tenant_id
-ORDER BY created_at DESC""",
+ORDER BY created_at DESC"""),
{"tenant_id": tenant_id},
)
rows = result.fetchall()
@@ -424,11 +425,11 @@ async def delete_document(
try:
# Delete gap analysis first (FK dependency)
db.execute(
-"DELETE FROM compliance_gap_analyses WHERE document_id = :doc_id AND tenant_id = :tid",
+text("DELETE FROM compliance_gap_analyses WHERE document_id = :doc_id AND tenant_id = :tid"),
{"doc_id": document_id, "tid": tid},
)
result = db.execute(
-"DELETE FROM compliance_imported_documents WHERE id = :doc_id AND tenant_id = :tid",
+text("DELETE FROM compliance_imported_documents WHERE id = :doc_id AND tenant_id = :tid"),
{"doc_id": document_id, "tid": tid},
)
db.commit()

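Wrapping raw SQL strings in `text()` is required since SQLAlchemy 2.0, where `Connection.execute()` no longer accepts plain strings. A self-contained sketch against in-memory SQLite (the `::jsonb` casts in the real queries are Postgres-specific and omitted here):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    # A plain string here raises ObjectNotExecutableError in SQLAlchemy 2.0;
    # text() marks the statement as executable and enables :name bind params.
    conn.execute(text("CREATE TABLE demo (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO demo (name) VALUES (:name)"), {"name": "example"})
    row = conn.execute(
        text("SELECT name FROM demo WHERE id = :id"), {"id": 1}
    ).fetchone()
```

Parameters stay bound dictionaries exactly as before; only the statement object changes, which is why these hunks touch the SQL literal and nothing else.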

@@ -13,7 +13,7 @@ Provides endpoints for ISO 27001 certification-ready ISMS management:
import uuid
import hashlib
-from datetime import datetime, date
+from datetime import datetime, date, timezone
from typing import Optional
from fastapi import APIRouter, HTTPException, Query, Depends
@@ -102,7 +102,7 @@ def log_audit_trail(
new_value=new_value,
change_summary=change_summary,
performed_by=performed_by,
-performed_at=datetime.utcnow(),
+performed_at=datetime.now(timezone.utc),
checksum=create_signature(f"{entity_type}|{entity_id}|{action}|{performed_by}")
)
db.add(trail)
@@ -190,7 +190,7 @@ async def update_isms_scope(
setattr(scope, field, value)
scope.updated_by = updated_by
-scope.updated_at = datetime.utcnow()
+scope.updated_at = datetime.now(timezone.utc)
# Increment version if significant changes
version_parts = scope.version.split(".")
@@ -221,11 +221,11 @@ async def approve_isms_scope(
scope.status = ApprovalStatusEnum.APPROVED
scope.approved_by = data.approved_by
-scope.approved_at = datetime.utcnow()
+scope.approved_at = datetime.now(timezone.utc)
scope.effective_date = data.effective_date
scope.review_date = data.review_date
scope.approval_signature = create_signature(
-f"{scope.scope_statement}|{data.approved_by}|{datetime.utcnow().isoformat()}"
+f"{scope.scope_statement}|{data.approved_by}|{datetime.now(timezone.utc).isoformat()}"
)
log_audit_trail(db, "isms_scope", scope.id, "ISMS Scope", "approve", data.approved_by)
@@ -403,7 +403,7 @@ async def approve_policy(
policy.reviewed_by = data.reviewed_by
policy.approved_by = data.approved_by
-policy.approved_at = datetime.utcnow()
+policy.approved_at = datetime.now(timezone.utc)
policy.effective_date = data.effective_date
policy.next_review_date = date(
data.effective_date.year + (policy.review_frequency_months // 12),
@@ -412,7 +412,7 @@ async def approve_policy(
)
policy.status = ApprovalStatusEnum.APPROVED
policy.approval_signature = create_signature(
-f"{policy.policy_id}|{data.approved_by}|{datetime.utcnow().isoformat()}"
+f"{policy.policy_id}|{data.approved_by}|{datetime.now(timezone.utc).isoformat()}"
)
log_audit_trail(db, "isms_policy", policy.id, policy.policy_id, "approve", data.approved_by)
@@ -634,9 +634,9 @@ async def approve_soa_entry(
raise HTTPException(status_code=404, detail="SoA entry not found")
entry.reviewed_by = data.reviewed_by
-entry.reviewed_at = datetime.utcnow()
+entry.reviewed_at = datetime.now(timezone.utc)
entry.approved_by = data.approved_by
-entry.approved_at = datetime.utcnow()
+entry.approved_at = datetime.now(timezone.utc)
log_audit_trail(db, "soa", entry.id, entry.annex_a_control, "approve", data.approved_by)
db.commit()
@@ -812,7 +812,7 @@ async def close_finding(
finding.verification_method = data.verification_method
finding.verification_evidence = data.verification_evidence
finding.verified_by = data.closed_by
-finding.verified_at = datetime.utcnow()
+finding.verified_at = datetime.now(timezone.utc)
log_audit_trail(db, "audit_finding", finding.id, finding.finding_id, "close", data.closed_by)
db.commit()
@@ -1080,7 +1080,7 @@ async def approve_management_review(
review.status = "approved"
review.approved_by = data.approved_by
-review.approved_at = datetime.utcnow()
+review.approved_at = datetime.now(timezone.utc)
review.next_review_date = data.next_review_date
review.minutes_document_path = data.minutes_document_path
@@ -1392,7 +1392,7 @@ async def run_readiness_check(
# Save check result
check = ISMSReadinessCheckDB(
id=generate_id(),
-check_date=datetime.utcnow(),
+check_date=datetime.now(timezone.utc),
triggered_by=data.triggered_by,
overall_status=overall_status,
certification_possible=certification_possible,


@@ -6,7 +6,7 @@ Extended with: Public endpoints, User Consents, Consent Audit Log, Cookie Catego
import uuid as uuid_mod
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, List, Any, Dict
from fastapi import APIRouter, Depends, HTTPException, Query, Header, UploadFile, File
@@ -285,7 +285,7 @@ async def update_version(
for field, value in request.dict(exclude_none=True).items():
setattr(version, field, value)
-version.updated_at = datetime.utcnow()
+version.updated_at = datetime.now(timezone.utc)
db.commit()
db.refresh(version)
@@ -346,7 +346,7 @@ def _transition(
)
version.status = to_status
-version.updated_at = datetime.utcnow()
+version.updated_at = datetime.now(timezone.utc)
if extra_updates:
for k, v in extra_updates.items():
setattr(version, k, v)
@@ -378,7 +378,7 @@ async def approve_version(
return _transition(
db, version_id, ['review'], 'approved', 'approved',
request.approver, request.comment,
-extra_updates={'approved_by': request.approver, 'approved_at': datetime.utcnow()}
+extra_updates={'approved_by': request.approver, 'approved_at': datetime.now(timezone.utc)}
)
@@ -728,7 +728,7 @@ async def withdraw_consent(
if consent.withdrawn_at:
raise HTTPException(status_code=400, detail="Consent already withdrawn")
-consent.withdrawn_at = datetime.utcnow()
+consent.withdrawn_at = datetime.now(timezone.utc)
consent.consented = False
_log_consent_audit(
@@ -903,7 +903,7 @@ async def update_cookie_category(
if val is not None:
setattr(cat, field, val)
-cat.updated_at = datetime.utcnow()
+cat.updated_at = datetime.now(timezone.utc)
db.commit()
db.refresh(cat)
return _cookie_cat_to_dict(cat)


@@ -15,7 +15,7 @@ Endpoints:
import json
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, List, Any, Dict
from fastapi import APIRouter, Depends, HTTPException, Query
@@ -322,7 +322,7 @@ async def update_legal_template(
params: Dict[str, Any] = {
"id": template_id,
"tenant_id": tenant_id,
-"updated_at": datetime.utcnow(),
+"updated_at": datetime.now(timezone.utc),
}
jsonb_fields = {"placeholders", "inspiration_sources"}


@@ -13,7 +13,7 @@ Endpoints:
import json
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, List, Any, Dict
from fastapi import APIRouter, Depends, HTTPException, Query
@@ -253,7 +253,7 @@ async def update_loeschfrist(
):
"""Full update of a Loeschfrist policy."""
-updates: Dict[str, Any] = {"id": policy_id, "tenant_id": tenant_id, "updated_at": datetime.utcnow()}
+updates: Dict[str, Any] = {"id": policy_id, "tenant_id": tenant_id, "updated_at": datetime.now(timezone.utc)}
set_clauses = ["updated_at = :updated_at"]
for field, value in payload.model_dump(exclude_unset=True).items():
@@ -302,7 +302,7 @@ async def update_loeschfrist_status(
WHERE id = :id AND tenant_id = :tenant_id
RETURNING *
"""),
-{"status": payload.status, "now": datetime.utcnow(), "id": policy_id, "tenant_id": tenant_id},
+{"status": payload.status, "now": datetime.now(timezone.utc), "id": policy_id, "tenant_id": tenant_id},
).fetchone()
db.commit()


@@ -21,7 +21,7 @@ Endpoints:
import json
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, List, Any
from fastapi import APIRouter, Depends, HTTPException, Query, Header
@@ -852,11 +852,11 @@ async def update_incident(
# Auto-set timestamps based on status transitions
if updates.get("status") == "reported" and not updates.get("reported_to_authority_at"):
-updates["reported_to_authority_at"] = datetime.utcnow().isoformat()
+updates["reported_to_authority_at"] = datetime.now(timezone.utc).isoformat()
if updates.get("status") == "closed" and not updates.get("closed_at"):
-updates["closed_at"] = datetime.utcnow().isoformat()
+updates["closed_at"] = datetime.now(timezone.utc).isoformat()
-updates["updated_at"] = datetime.utcnow().isoformat()
+updates["updated_at"] = datetime.now(timezone.utc).isoformat()
set_parts = []
for k in updates:
@@ -984,7 +984,7 @@ async def update_template(
if not updates:
raise HTTPException(status_code=400, detail="No fields to update")
-updates["updated_at"] = datetime.utcnow().isoformat()
+updates["updated_at"] = datetime.now(timezone.utc).isoformat()
set_clauses = ", ".join(f"{k} = :{k}" for k in updates)
updates["id"] = template_id
updates["tenant_id"] = tenant_id


@@ -12,7 +12,7 @@ Endpoints:
"""
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, List, Any, Dict
from fastapi import APIRouter, Depends, HTTPException, Query, Header
@@ -228,7 +228,7 @@ async def update_obligation(
logger.info("update_obligation user_id=%s tenant_id=%s id=%s", x_user_id, tenant_id, obligation_id)
import json
-updates: Dict[str, Any] = {"id": obligation_id, "tenant_id": tenant_id, "updated_at": datetime.utcnow()}
+updates: Dict[str, Any] = {"id": obligation_id, "tenant_id": tenant_id, "updated_at": datetime.now(timezone.utc)}
set_clauses = ["updated_at = :updated_at"]
for field, value in payload.model_dump(exclude_unset=True).items():
@@ -274,7 +274,7 @@ async def update_obligation_status(
SET status = :status, updated_at = :now
WHERE id = :id AND tenant_id = :tenant_id
RETURNING *
-"""), {"status": payload.status, "now": datetime.utcnow(), "id": obligation_id, "tenant_id": tenant_id}).fetchone()
+"""), {"status": payload.status, "now": datetime.now(timezone.utc), "id": obligation_id, "tenant_id": tenant_id}).fetchone()
db.commit()
if not row:


@@ -10,7 +10,7 @@ Endpoints:
"""
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, Any, Dict
from fastapi import APIRouter, Depends, HTTPException, Query
@@ -177,7 +177,7 @@ async def create_metric(
"threshold": payload.threshold,
"trend": payload.trend,
"ai_system": payload.ai_system,
-"last_measured": payload.last_measured or datetime.utcnow(),
+"last_measured": payload.last_measured or datetime.now(timezone.utc),
}).fetchone()
db.commit()
return _row_to_dict(row)
@@ -192,7 +192,7 @@ async def update_metric(
):
"""Update a quality metric."""
-updates: Dict[str, Any] = {"id": metric_id, "tenant_id": tenant_id, "updated_at": datetime.utcnow()}
+updates: Dict[str, Any] = {"id": metric_id, "tenant_id": tenant_id, "updated_at": datetime.now(timezone.utc)}
set_clauses = ["updated_at = :updated_at"]
for field, value in payload.model_dump(exclude_unset=True).items():
@@ -296,7 +296,7 @@ async def create_test(
"duration": payload.duration,
"ai_system": payload.ai_system,
"details": payload.details,
-"last_run": payload.last_run or datetime.utcnow(),
+"last_run": payload.last_run or datetime.now(timezone.utc),
}).fetchone()
db.commit()
return _row_to_dict(row)
@@ -311,7 +311,7 @@ async def update_test(
):
"""Update a quality test."""
-updates: Dict[str, Any] = {"id": test_id, "tenant_id": tenant_id, "updated_at": datetime.utcnow()}
+updates: Dict[str, Any] = {"id": test_id, "tenant_id": tenant_id, "updated_at": datetime.now(timezone.utc)}
set_clauses = ["updated_at = :updated_at"]
for field, value in payload.model_dump(exclude_unset=True).items():


@@ -16,7 +16,7 @@ import logging
logger = logging.getLogger(__name__)
import os
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional
from fastapi import APIRouter, Depends, HTTPException, Query, BackgroundTasks
@@ -393,11 +393,11 @@ async def update_requirement(requirement_id: str, updates: dict, db: Session = D
# Track audit changes
if 'audit_status' in updates:
-requirement.last_audit_date = datetime.utcnow()
+requirement.last_audit_date = datetime.now(timezone.utc)
# TODO: Get auditor from auth
requirement.last_auditor = updates.get('auditor_name', 'api_user')
-requirement.updated_at = datetime.utcnow()
+requirement.updated_at = datetime.now(timezone.utc)
db.commit()
db.refresh(requirement)

File diff suppressed because it is too large.


@@ -17,6 +17,7 @@ from typing import Optional
import httpx
from fastapi import APIRouter, File, Form, UploadFile, HTTPException
from pydantic import BaseModel
+from sqlalchemy import text
from database import SessionLocal
@@ -366,13 +367,13 @@ async def scan_dependencies(
db = SessionLocal()
try:
db.execute(
-"""INSERT INTO compliance_screenings
+text("""INSERT INTO compliance_screenings
(id, tenant_id, status, sbom_format, sbom_version,
total_components, total_issues, critical_issues, high_issues, medium_issues, low_issues,
sbom_data, started_at, completed_at)
VALUES (:id, :tenant_id, 'completed', 'CycloneDX', '1.5',
:total_components, :total_issues, :critical, :high, :medium, :low,
-:sbom_data::jsonb, :started_at, :completed_at)""",
+:sbom_data::jsonb, :started_at, :completed_at)"""),
{
"id": screening_id,
"tenant_id": tenant_id,
@@ -391,11 +392,11 @@ async def scan_dependencies(
# Persist security issues
for issue in issues:
db.execute(
-"""INSERT INTO compliance_security_issues
+text("""INSERT INTO compliance_security_issues
(id, screening_id, severity, title, description, cve, cvss,
affected_component, affected_version, fixed_in, remediation, status)
VALUES (:id, :screening_id, :severity, :title, :description, :cve, :cvss,
-:component, :version, :fixed_in, :remediation, :status)""",
+:component, :version, :fixed_in, :remediation, :status)"""),
{
"id": issue["id"],
"screening_id": screening_id,
@@ -486,10 +487,10 @@ async def get_screening(screening_id: str):
db = SessionLocal()
try:
result = db.execute(
-"""SELECT id, status, sbom_format, sbom_version,
+text("""SELECT id, status, sbom_format, sbom_version,
total_components, total_issues, critical_issues, high_issues,
medium_issues, low_issues, sbom_data, started_at, completed_at
-FROM compliance_screenings WHERE id = :id""",
+FROM compliance_screenings WHERE id = :id"""),
{"id": screening_id},
)
row = result.fetchone()
@@ -498,9 +499,9 @@ async def get_screening(screening_id: str):
# Fetch issues
issues_result = db.execute(
-"""SELECT id, severity, title, description, cve, cvss,
+text("""SELECT id, severity, title, description, cve, cvss,
affected_component, affected_version, fixed_in, remediation, status
-FROM compliance_security_issues WHERE screening_id = :id""",
+FROM compliance_security_issues WHERE screening_id = :id"""),
{"id": screening_id},
)
issues_rows = issues_result.fetchall()
@@ -566,12 +567,12 @@ async def list_screenings(tenant_id: str = "default"):
db = SessionLocal()
try:
result = db.execute(
-"""SELECT id, status, total_components, total_issues,
+text("""SELECT id, status, total_components, total_issues,
critical_issues, high_issues, medium_issues, low_issues,
started_at, completed_at, created_at
FROM compliance_screenings
WHERE tenant_id = :tenant_id
-ORDER BY created_at DESC""",
+ORDER BY created_at DESC"""),
{"tenant_id": tenant_id},
)
rows = result.fetchall()


@@ -10,7 +10,7 @@ Endpoints:
"""
import logging
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional, Any, Dict
from fastapi import APIRouter, Depends, HTTPException, Query
@@ -207,7 +207,7 @@ async def update_security_item(
):
"""Update a security backlog item."""
-updates: Dict[str, Any] = {"id": item_id, "tenant_id": tenant_id, "updated_at": datetime.utcnow()}
+updates: Dict[str, Any] = {"id": item_id, "tenant_id": tenant_id, "updated_at": datetime.now(timezone.utc)}
set_clauses = ["updated_at = :updated_at"]
for field, value in payload.model_dump(exclude_unset=True).items():


@@ -21,11 +21,11 @@ Endpoints:
GET /api/v1/admin/compliance-report — Compliance report
"""
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional
from fastapi import APIRouter, HTTPException, Depends, Query
-from pydantic import BaseModel, Field
+from pydantic import BaseModel, ConfigDict, Field
from sqlalchemy.orm import Session
from database import get_db
@@ -83,8 +83,7 @@ class SourceResponse(BaseModel):
created_at: str
updated_at: Optional[str] = None
-class Config:
-    from_attributes = True
+model_config = ConfigDict(from_attributes=True)
class OperationUpdate(BaseModel):
@@ -530,7 +529,7 @@ async def get_policy_stats(db: Session = Depends(get_db)):
pii_rules = db.query(PIIRuleDB).filter(PIIRuleDB.active).count()
# Count blocked content entries from today
-today_start = datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0)
+today_start = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0)
blocked_today = db.query(BlockedContentDB).filter(
BlockedContentDB.created_at >= today_start,
).count()
@@ -553,7 +552,7 @@ async def get_compliance_report(db: Session = Depends(get_db)):
pii_rules = db.query(PIIRuleDB).filter(PIIRuleDB.active).all()
return {
-"report_date": datetime.utcnow().isoformat(),
+"report_date": datetime.now(timezone.utc).isoformat(),
"summary": {
"active_sources": len(sources),
"active_pii_rules": len(pii_rules),
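The class Config → model_config change above is the Pydantic v2 idiom for ORM-mode responses. A minimal sketch of from_attributes validating an ORM-style object (the names here are illustrative, not from the codebase):

```python
from pydantic import BaseModel, ConfigDict

class SourceRow:
    """Stand-in for a SQLAlchemy row object (illustrative only)."""
    def __init__(self):
        self.name = "policy-source"
        self.created_at = "2026-04-07T00:00:00"

class SourceResponse(BaseModel):
    # Pydantic v2: model_config replaces the nested `class Config`
    model_config = ConfigDict(from_attributes=True)
    name: str
    created_at: str

# model_validate reads attributes off the object instead of requiring a dict
resp = SourceResponse.model_validate(SourceRow())
print(resp.name)  # policy-source
```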

View File

@@ -49,7 +49,7 @@ vendor_findings, vendor_control_instances).
import json
import logging
import uuid
-from datetime import datetime
+from datetime import datetime, timezone
from typing import Optional
from fastapi import APIRouter, Depends, HTTPException, Query
@@ -69,7 +69,7 @@ DEFAULT_TENANT_ID = "9282a473-5c95-4b3a-bf78-0ecc0ec71d3e"
# =============================================================================
def _now_iso() -> str:
-return datetime.utcnow().isoformat() + "Z"
+return datetime.now(timezone.utc).isoformat() + "Z"
def _ok(data, status_code: int = 200):
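One side effect worth noting in _now_iso: isoformat() on an aware datetime already renders its UTC offset, so appending a literal "Z" produces a doubled designator ("+00:00Z"). A sketch of the behavior and one way to normalize it:

```python
from datetime import datetime, timezone

stamp = datetime.now(timezone.utc).isoformat()
# Aware datetimes render their offset, e.g. '2026-04-07T16:08:39.123456+00:00'
assert stamp.endswith("+00:00")

# So stamp + "Z" would end in '+00:00Z'; swapping the offset for the Z designator instead:
zulu = stamp.replace("+00:00", "Z")
print(zulu)  # e.g. '2026-04-07T16:08:39.123456Z'
```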
@@ -418,7 +418,7 @@ def create_vendor(body: dict = {}, db: Session = Depends(get_db)):
data = _to_snake(body)
vid = str(uuid.uuid4())
tid = data.get("tenant_id", DEFAULT_TENANT_ID)
-now = datetime.utcnow().isoformat()
+now = datetime.now(timezone.utc).isoformat()
db.execute(text("""
INSERT INTO vendor_vendors (
@@ -498,7 +498,7 @@ def update_vendor(vendor_id: str, body: dict = {}, db: Session = Depends(get_db)
raise HTTPException(404, "Vendor not found")
data = _to_snake(body)
-now = datetime.utcnow().isoformat()
+now = datetime.now(timezone.utc).isoformat()
# Build dynamic SET clause
allowed = [
@@ -558,7 +558,7 @@ def patch_vendor_status(vendor_id: str, body: dict = {}, db: Session = Depends(g
result = db.execute(text("""
UPDATE vendor_vendors SET status = :status, updated_at = :now WHERE id = :id
-"""), {"id": vendor_id, "status": new_status, "now": datetime.utcnow().isoformat()})
+"""), {"id": vendor_id, "status": new_status, "now": datetime.now(timezone.utc).isoformat()})
db.commit()
if result.rowcount == 0:
raise HTTPException(404, "Vendor not found")
@@ -620,7 +620,7 @@ def create_contract(body: dict = {}, db: Session = Depends(get_db)):
data = _to_snake(body)
cid = str(uuid.uuid4())
tid = data.get("tenant_id", DEFAULT_TENANT_ID)
-now = datetime.utcnow().isoformat()
+now = datetime.now(timezone.utc).isoformat()
db.execute(text("""
INSERT INTO vendor_contracts (
@@ -682,7 +682,7 @@ def update_contract(contract_id: str, body: dict = {}, db: Session = Depends(get
raise HTTPException(404, "Contract not found")
data = _to_snake(body)
-now = datetime.utcnow().isoformat()
+now = datetime.now(timezone.utc).isoformat()
allowed = [
"vendor_id", "file_name", "original_name", "mime_type", "file_size",
@@ -781,7 +781,7 @@ def create_finding(body: dict = {}, db: Session = Depends(get_db)):
data = _to_snake(body)
fid = str(uuid.uuid4())
tid = data.get("tenant_id", DEFAULT_TENANT_ID)
-now = datetime.utcnow().isoformat()
+now = datetime.now(timezone.utc).isoformat()
db.execute(text("""
INSERT INTO vendor_findings (
@@ -831,7 +831,7 @@ def update_finding(finding_id: str, body: dict = {}, db: Session = Depends(get_d
raise HTTPException(404, "Finding not found")
data = _to_snake(body)
-now = datetime.utcnow().isoformat()
+now = datetime.now(timezone.utc).isoformat()
allowed = [
"vendor_id", "contract_id", "finding_type", "category", "severity",
@@ -920,7 +920,7 @@ def create_control_instance(body: dict = {}, db: Session = Depends(get_db)):
data = _to_snake(body)
ciid = str(uuid.uuid4())
tid = data.get("tenant_id", DEFAULT_TENANT_ID)
-now = datetime.utcnow().isoformat()
+now = datetime.now(timezone.utc).isoformat()
db.execute(text("""
INSERT INTO vendor_control_instances (
@@ -965,7 +965,7 @@ def update_control_instance(instance_id: str, body: dict = {}, db: Session = Dep
raise HTTPException(404, "Control instance not found")
data = _to_snake(body)
-now = datetime.utcnow().isoformat()
+now = datetime.now(timezone.utc).isoformat()
allowed = [
"vendor_id", "control_id", "control_domain",
@@ -1050,7 +1050,7 @@ def list_controls(
def create_control(body: dict = {}, db: Session = Depends(get_db)):
cid = str(uuid.uuid4())
tid = body.get("tenantId", body.get("tenant_id", DEFAULT_TENANT_ID))
-now = datetime.utcnow().isoformat()
+now = datetime.now(timezone.utc).isoformat()
db.execute(text("""
INSERT INTO vendor_compliance_controls (

View File

@@ -119,7 +119,7 @@ async def upsert_organization(
else:
for field, value in request.dict(exclude_none=True).items():
setattr(org, field, value)
-org.updated_at = datetime.utcnow()
+org.updated_at = datetime.now(timezone.utc)
db.commit()
db.refresh(org)
@@ -291,7 +291,7 @@ async def update_activity(
updates = request.dict(exclude_none=True)
for field, value in updates.items():
setattr(act, field, value)
-act.updated_at = datetime.utcnow()
+act.updated_at = datetime.now(timezone.utc)
_log_audit(
db,
@@ -408,7 +408,7 @@ async def export_activities(
return _export_csv(activities)
return {
-"exported_at": datetime.utcnow().isoformat(),
+"exported_at": datetime.now(timezone.utc).isoformat(),
"organization": {
"name": org.organization_name if org else "",
"dpo_name": org.dpo_name if org else "",
@@ -482,7 +482,7 @@ def _export_csv(activities: list) -> StreamingResponse:
iter([output.getvalue()]),
media_type='text/csv; charset=utf-8',
headers={
-'Content-Disposition': f'attachment; filename="vvt_export_{datetime.utcnow().strftime("%Y%m%d")}.csv"'
+'Content-Disposition': f'attachment; filename="vvt_export_{datetime.now(timezone.utc).strftime("%Y%m%d")}.csv"'
},
)

View File

@@ -0,0 +1,141 @@
"""
AI System & Audit Export models — extracted from compliance/db/models.py.
Covers AI Act system registration/classification and the audit export package
tracker. Re-exported from ``compliance.db.models`` for backwards compatibility.
DO NOT change __tablename__, column names, or relationship strings.
"""
import uuid
import enum
from datetime import datetime, timezone
from sqlalchemy import (
Column, String, Text, Integer, DateTime, Date,
Enum, JSON, Index, Float,
)
from classroom_engine.database import Base
# ============================================================================
# ENUMS
# ============================================================================
class AIClassificationEnum(str, enum.Enum):
"""AI Act risk classification."""
PROHIBITED = "prohibited"
HIGH_RISK = "high-risk"
LIMITED_RISK = "limited-risk"
MINIMAL_RISK = "minimal-risk"
UNCLASSIFIED = "unclassified"
class AISystemStatusEnum(str, enum.Enum):
"""Status of an AI system in compliance tracking."""
DRAFT = "draft"
CLASSIFIED = "classified"
COMPLIANT = "compliant"
NON_COMPLIANT = "non-compliant"
class ExportStatusEnum(str, enum.Enum):
"""Status of audit export."""
PENDING = "pending"
GENERATING = "generating"
COMPLETED = "completed"
FAILED = "failed"
# ============================================================================
# MODELS
# ============================================================================
class AISystemDB(Base):
"""
AI System registry for AI Act compliance.
Tracks AI systems, their risk classification, and compliance status.
"""
__tablename__ = 'compliance_ai_systems'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
name = Column(String(300), nullable=False)
description = Column(Text)
purpose = Column(String(500))
sector = Column(String(100))
# AI Act classification
classification = Column(Enum(AIClassificationEnum), default=AIClassificationEnum.UNCLASSIFIED)
status = Column(Enum(AISystemStatusEnum), default=AISystemStatusEnum.DRAFT)
# Assessment
assessment_date = Column(DateTime)
assessment_result = Column(JSON) # Full assessment result
obligations = Column(JSON) # List of AI Act obligations
risk_factors = Column(JSON) # Risk factors from assessment
recommendations = Column(JSON) # Recommendations from assessment
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
__table_args__ = (
Index('ix_ai_system_classification', 'classification'),
Index('ix_ai_system_status', 'status'),
)
def __repr__(self):
return f"<AISystem {self.name} ({self.classification.value})>"
class AuditExportDB(Base):
"""
Tracks audit export packages generated for external auditors.
"""
__tablename__ = 'compliance_audit_exports'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
export_type = Column(String(50), nullable=False) # "full", "controls_only", "evidence_only"
export_name = Column(String(200)) # User-friendly name
# Scope
included_regulations = Column(JSON) # List of regulation codes
included_domains = Column(JSON) # List of control domains
date_range_start = Column(Date)
date_range_end = Column(Date)
# Generation
requested_by = Column(String(100), nullable=False)
requested_at = Column(DateTime, nullable=False, default=lambda: datetime.now(timezone.utc))
completed_at = Column(DateTime)
# Output
file_path = Column(String(500))
file_hash = Column(String(64)) # SHA-256 of ZIP
file_size_bytes = Column(Integer)
status = Column(Enum(ExportStatusEnum), default=ExportStatusEnum.PENDING)
error_message = Column(Text)
# Statistics
total_controls = Column(Integer)
total_evidence = Column(Integer)
compliance_score = Column(Float)
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
def __repr__(self):
return f"<AuditExport {self.export_type} by {self.requested_by}>"
__all__ = [
"AIClassificationEnum",
"AISystemStatusEnum",
"ExportStatusEnum",
"AISystemDB",
"AuditExportDB",
]
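The enums above all mix in str so that members compare and serialize as plain strings, which keeps SQLAlchemy Enum columns and JSON payloads straightforward. A minimal sketch of the pattern:

```python
import enum
import json

class ExportStatus(str, enum.Enum):
    """Mirrors the (str, enum.Enum) pattern used by the compliance models."""
    PENDING = "pending"
    COMPLETED = "completed"

# Members ARE strings, so they compare equal to raw values from the DB or API...
assert ExportStatus.COMPLETED == "completed"

# ...and json.dumps handles them without a custom encoder:
payload = json.dumps({"status": ExportStatus.PENDING})
print(payload)  # {"status": "pending"}
```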

View File

@@ -0,0 +1,110 @@
"""
Compliance repositories — extracted from compliance/db/repository.py.
Phase 1 Step 5: the monolithic repository module is decomposed per
aggregate. Every repository class is re-exported from
``compliance.db.repository`` for backwards compatibility.
"""
import uuid
from datetime import datetime, date, timezone
from typing import List, Optional, Dict, Any, Tuple
from sqlalchemy.orm import Session as DBSession, selectinload, joinedload
from sqlalchemy import func, and_, or_
from compliance.db.models import (
RegulationDB, RequirementDB, ControlDB, ControlMappingDB,
EvidenceDB, RiskDB, AuditExportDB,
AuditSessionDB, AuditSignOffDB, AuditResultEnum, AuditSessionStatusEnum,
RegulationTypeEnum, ControlDomainEnum, ControlStatusEnum,
RiskLevelEnum, EvidenceStatusEnum, ExportStatusEnum,
ServiceModuleDB, ModuleRegulationMappingDB,
)
class AuditExportRepository:
"""Repository for audit exports."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
export_type: str,
requested_by: str,
export_name: Optional[str] = None,
included_regulations: Optional[List[str]] = None,
included_domains: Optional[List[str]] = None,
date_range_start: Optional[date] = None,
date_range_end: Optional[date] = None,
) -> AuditExportDB:
"""Create an export request."""
export = AuditExportDB(
id=str(uuid.uuid4()),
export_type=export_type,
export_name=export_name or f"audit_export_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
requested_by=requested_by,
included_regulations=included_regulations,
included_domains=included_domains,
date_range_start=date_range_start,
date_range_end=date_range_end,
)
self.db.add(export)
self.db.commit()
self.db.refresh(export)
return export
def get_by_id(self, export_id: str) -> Optional[AuditExportDB]:
"""Get export by ID."""
return self.db.query(AuditExportDB).filter(AuditExportDB.id == export_id).first()
def get_all(self, limit: int = 50) -> List[AuditExportDB]:
"""Get all exports."""
return (
self.db.query(AuditExportDB)
.order_by(AuditExportDB.requested_at.desc())
.limit(limit)
.all()
)
def update_status(
self,
export_id: str,
status: ExportStatusEnum,
file_path: Optional[str] = None,
file_hash: Optional[str] = None,
file_size_bytes: Optional[int] = None,
error_message: Optional[str] = None,
total_controls: Optional[int] = None,
total_evidence: Optional[int] = None,
compliance_score: Optional[float] = None,
) -> Optional[AuditExportDB]:
"""Update export status."""
export = self.get_by_id(export_id)
if not export:
return None
export.status = status
if file_path:
export.file_path = file_path
if file_hash:
export.file_hash = file_hash
if file_size_bytes:
export.file_size_bytes = file_size_bytes
if error_message:
export.error_message = error_message
if total_controls is not None:
export.total_controls = total_controls
if total_evidence is not None:
export.total_evidence = total_evidence
if compliance_score is not None:
export.compliance_score = compliance_score
if status == ExportStatusEnum.COMPLETED:
export.completed_at = datetime.now(timezone.utc)
export.updated_at = datetime.now(timezone.utc)
self.db.commit()
self.db.refresh(export)
return export

View File

@@ -0,0 +1,177 @@
"""
Audit Session & Sign-Off models — Sprint 3 Phase 3.
Extracted from compliance/db/models.py as the first worked example of the
Phase 1 model split. The classes are re-exported from compliance.db.models
for backwards compatibility, so existing imports continue to work unchanged.
Tables:
- compliance_audit_sessions: Structured compliance audit sessions
- compliance_audit_signoffs: Per-requirement sign-offs with digital signatures
DO NOT change __tablename__, column names, or relationship strings — the
database schema is frozen.
"""
import uuid
import enum
from datetime import datetime, timezone
from sqlalchemy import (
Column, String, Text, Integer, DateTime,
ForeignKey, Enum, JSON, Index,
)
from sqlalchemy.orm import relationship
from classroom_engine.database import Base
# ============================================================================
# ENUMS
# ============================================================================
class AuditResultEnum(str, enum.Enum):
"""Result of an audit sign-off for a requirement."""
COMPLIANT = "compliant" # Fully compliant
COMPLIANT_WITH_NOTES = "compliant_notes" # Compliant with observations
NON_COMPLIANT = "non_compliant" # Not compliant - remediation required
NOT_APPLICABLE = "not_applicable" # Not applicable to this audit
PENDING = "pending" # Not yet reviewed
class AuditSessionStatusEnum(str, enum.Enum):
"""Status of an audit session."""
DRAFT = "draft" # Session created, not started
IN_PROGRESS = "in_progress" # Audit in progress
COMPLETED = "completed" # All items reviewed
ARCHIVED = "archived" # Historical record
# ============================================================================
# MODELS
# ============================================================================
class AuditSessionDB(Base):
"""
Audit session for structured compliance reviews.
Enables auditors to:
- Create named audit sessions (e.g., "Q1 2026 GDPR Audit")
- Track progress through requirements
- Sign off individual items with digital signatures
- Generate audit reports
"""
__tablename__ = 'compliance_audit_sessions'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
name = Column(String(200), nullable=False) # e.g., "Q1 2026 Compliance Audit"
description = Column(Text)
# Auditor information
auditor_name = Column(String(100), nullable=False) # e.g., "Dr. Thomas Müller"
auditor_email = Column(String(200))
auditor_organization = Column(String(200)) # External auditor company
# Session scope
status = Column(Enum(AuditSessionStatusEnum), default=AuditSessionStatusEnum.DRAFT)
regulation_ids = Column(JSON) # Filter: ["GDPR", "AIACT"] or null for all
# Progress tracking
total_items = Column(Integer, default=0)
completed_items = Column(Integer, default=0)
compliant_count = Column(Integer, default=0)
non_compliant_count = Column(Integer, default=0)
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
started_at = Column(DateTime) # When audit began
completed_at = Column(DateTime) # When audit finished
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
# Relationships
signoffs = relationship("AuditSignOffDB", back_populates="session", cascade="all, delete-orphan")
__table_args__ = (
Index('ix_audit_session_status', 'status'),
Index('ix_audit_session_auditor', 'auditor_name'),
)
def __repr__(self):
return f"<AuditSession {self.name} ({self.status.value})>"
@property
def completion_percentage(self) -> float:
"""Calculate completion percentage."""
if self.total_items == 0:
return 0.0
return round((self.completed_items / self.total_items) * 100, 1)
class AuditSignOffDB(Base):
"""
Individual sign-off for a requirement within an audit session.
Features:
- Records audit result (compliant, non-compliant, etc.)
- Stores auditor notes and observations
- Creates digital signature (SHA-256 hash) for tamper evidence
"""
__tablename__ = 'compliance_audit_signoffs'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
session_id = Column(String(36), ForeignKey('compliance_audit_sessions.id'), nullable=False, index=True)
requirement_id = Column(String(36), ForeignKey('compliance_requirements.id'), nullable=False, index=True)
# Audit result
result = Column(Enum(AuditResultEnum), default=AuditResultEnum.PENDING)
notes = Column(Text) # Auditor observations
# Evidence references for this sign-off
evidence_ids = Column(JSON) # List of evidence IDs reviewed
# Digital signature (SHA-256 hash of result + auditor + timestamp)
signature_hash = Column(String(64)) # SHA-256 hex string
signed_at = Column(DateTime)
signed_by = Column(String(100)) # Auditor name at time of signing
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
# Relationships
session = relationship("AuditSessionDB", back_populates="signoffs")
requirement = relationship("RequirementDB")
__table_args__ = (
Index('ix_signoff_session_requirement', 'session_id', 'requirement_id', unique=True),
Index('ix_signoff_result', 'result'),
)
def __repr__(self):
return f"<AuditSignOff {self.requirement_id}: {self.result.value}>"
def create_signature(self, auditor_name: str) -> str:
"""
Create a digital signature for this sign-off.
Returns SHA-256 hash of: result + requirement_id + auditor_name + timestamp
"""
import hashlib
timestamp = datetime.now(timezone.utc).isoformat()
data = f"{self.result.value}|{self.requirement_id}|{auditor_name}|{timestamp}"
signature = hashlib.sha256(data.encode()).hexdigest()
self.signature_hash = signature
self.signed_at = datetime.now(timezone.utc)
self.signed_by = auditor_name
return signature
__all__ = [
"AuditResultEnum",
"AuditSessionStatusEnum",
"AuditSessionDB",
"AuditSignOffDB",
]
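The create_signature scheme above can be reproduced standalone: the "digital signature" is a SHA-256 digest over a pipe-delimited string of result, requirement id, auditor name, and timestamp. A sketch with illustrative values:

```python
import hashlib

def signature_for(result: str, requirement_id: str, auditor: str, timestamp: str) -> str:
    # Same recipe as AuditSignOffDB.create_signature:
    # SHA-256 over "result|requirement_id|auditor|timestamp"
    data = f"{result}|{requirement_id}|{auditor}|{timestamp}"
    return hashlib.sha256(data.encode()).hexdigest()

sig = signature_for("compliant", "req-001", "Jane Auditor", "2026-04-07T12:00:00+00:00")
assert len(sig) == 64  # SHA-256 hex digest

# Any change to a signed field produces a different hash (the tamper evidence):
tampered = signature_for("non_compliant", "req-001", "Jane Auditor", "2026-04-07T12:00:00+00:00")
assert sig != tampered
```

Note this is tamper *evidence*, not a cryptographic signature: anyone who can write the row can recompute the hash.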

View File

@@ -0,0 +1,478 @@
"""
Compliance repositories — extracted from compliance/db/repository.py.
Phase 1 Step 5: the monolithic repository module is decomposed per
aggregate. Every repository class is re-exported from
``compliance.db.repository`` for backwards compatibility.
"""
import uuid
from datetime import datetime, date, timezone
from typing import List, Optional, Dict, Any, Tuple
from sqlalchemy.orm import Session as DBSession, selectinload, joinedload
from sqlalchemy import func, and_, or_
from compliance.db.models import (
RegulationDB, RequirementDB, ControlDB, ControlMappingDB,
EvidenceDB, RiskDB, AuditExportDB,
AuditSessionDB, AuditSignOffDB, AuditResultEnum, AuditSessionStatusEnum,
RegulationTypeEnum, ControlDomainEnum, ControlStatusEnum,
RiskLevelEnum, EvidenceStatusEnum, ExportStatusEnum,
ServiceModuleDB, ModuleRegulationMappingDB,
)
class AuditSessionRepository:
"""Repository for audit sessions (Sprint 3: auditor improvements)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
name: str,
auditor_name: str,
description: Optional[str] = None,
auditor_email: Optional[str] = None,
regulation_ids: Optional[List[str]] = None,
) -> AuditSessionDB:
"""Create a new audit session."""
session = AuditSessionDB(
id=str(uuid.uuid4()),
name=name,
description=description,
auditor_name=auditor_name,
auditor_email=auditor_email,
regulation_ids=regulation_ids,
status=AuditSessionStatusEnum.DRAFT,
)
self.db.add(session)
self.db.commit()
self.db.refresh(session)
return session
def get_by_id(self, session_id: str) -> Optional[AuditSessionDB]:
"""Get audit session by ID with eager-loaded signoffs."""
return (
self.db.query(AuditSessionDB)
.options(
selectinload(AuditSessionDB.signoffs)
.selectinload(AuditSignOffDB.requirement)
)
.filter(AuditSessionDB.id == session_id)
.first()
)
def get_all(
self,
status: Optional[AuditSessionStatusEnum] = None,
limit: int = 50,
) -> List[AuditSessionDB]:
"""Get all audit sessions with optional status filter."""
query = self.db.query(AuditSessionDB)
if status:
query = query.filter(AuditSessionDB.status == status)
return query.order_by(AuditSessionDB.created_at.desc()).limit(limit).all()
def update_status(
self,
session_id: str,
status: AuditSessionStatusEnum,
) -> Optional[AuditSessionDB]:
"""Update session status and set appropriate timestamps."""
session = self.get_by_id(session_id)
if not session:
return None
session.status = status
if status == AuditSessionStatusEnum.IN_PROGRESS and not session.started_at:
session.started_at = datetime.now(timezone.utc)
elif status == AuditSessionStatusEnum.COMPLETED:
session.completed_at = datetime.now(timezone.utc)
session.updated_at = datetime.now(timezone.utc)
self.db.commit()
self.db.refresh(session)
return session
def update_progress(
self,
session_id: str,
total_items: Optional[int] = None,
completed_items: Optional[int] = None,
) -> Optional[AuditSessionDB]:
"""Update session progress counters."""
session = self.db.query(AuditSessionDB).filter(
AuditSessionDB.id == session_id
).first()
if not session:
return None
if total_items is not None:
session.total_items = total_items
if completed_items is not None:
session.completed_items = completed_items
session.updated_at = datetime.now(timezone.utc)
self.db.commit()
self.db.refresh(session)
return session
def start_session(self, session_id: str) -> Optional[AuditSessionDB]:
"""
Start an audit session:
- Set status to IN_PROGRESS
- Initialize total_items based on requirements count
"""
session = self.get_by_id(session_id)
if not session:
return None
# Count requirements for this session
query = self.db.query(func.count(RequirementDB.id))
if session.regulation_ids:
query = query.join(RegulationDB).filter(
RegulationDB.id.in_(session.regulation_ids)
)
total_requirements = query.scalar() or 0
session.status = AuditSessionStatusEnum.IN_PROGRESS
session.started_at = datetime.now(timezone.utc)
session.total_items = total_requirements
session.updated_at = datetime.now(timezone.utc)
self.db.commit()
self.db.refresh(session)
return session
def delete(self, session_id: str) -> bool:
"""Delete an audit session (cascades to signoffs)."""
session = self.db.query(AuditSessionDB).filter(
AuditSessionDB.id == session_id
).first()
if not session:
return False
self.db.delete(session)
self.db.commit()
return True
def get_statistics(self, session_id: str) -> Dict[str, Any]:
"""Get detailed statistics for an audit session."""
session = self.get_by_id(session_id)
if not session:
return {}
signoffs = session.signoffs or []
stats = {
"total": session.total_items or 0,
"completed": len([s for s in signoffs if s.result != AuditResultEnum.PENDING]),
"compliant": len([s for s in signoffs if s.result == AuditResultEnum.COMPLIANT]),
"compliant_with_notes": len([s for s in signoffs if s.result == AuditResultEnum.COMPLIANT_WITH_NOTES]),
"non_compliant": len([s for s in signoffs if s.result == AuditResultEnum.NON_COMPLIANT]),
"not_applicable": len([s for s in signoffs if s.result == AuditResultEnum.NOT_APPLICABLE]),
"pending": len([s for s in signoffs if s.result == AuditResultEnum.PENDING]),
"signed": len([s for s in signoffs if s.signature_hash]),
}
total = stats["total"] if stats["total"] > 0 else 1
stats["completion_percentage"] = round(
(stats["completed"] / total) * 100, 1
)
return stats
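The statistics above reduce to simple counter arithmetic; a sketch of the completion figure, including the divide-by-zero guard used in get_statistics and completion_percentage:

```python
def completion_percentage(completed: int, total: int) -> float:
    # Mirrors the repository logic: guard against empty sessions,
    # round to one decimal place
    divisor = total if total > 0 else 1
    return round((completed / divisor) * 100, 1)

print(completion_percentage(7, 12))  # 58.3
print(completion_percentage(0, 0))   # 0.0
```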
class AuditSignOffRepository:
"""Repository for audit sign-offs (Sprint 3: auditor improvements)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
session_id: str,
requirement_id: str,
result: AuditResultEnum = AuditResultEnum.PENDING,
notes: Optional[str] = None,
) -> AuditSignOffDB:
"""Create a new sign-off for a requirement."""
signoff = AuditSignOffDB(
id=str(uuid.uuid4()),
session_id=session_id,
requirement_id=requirement_id,
result=result,
notes=notes,
)
self.db.add(signoff)
self.db.commit()
self.db.refresh(signoff)
return signoff
def get_by_id(self, signoff_id: str) -> Optional[AuditSignOffDB]:
"""Get sign-off by ID."""
return (
self.db.query(AuditSignOffDB)
.options(joinedload(AuditSignOffDB.requirement))
.filter(AuditSignOffDB.id == signoff_id)
.first()
)
def get_by_session_and_requirement(
self,
session_id: str,
requirement_id: str,
) -> Optional[AuditSignOffDB]:
"""Get sign-off by session and requirement ID."""
return (
self.db.query(AuditSignOffDB)
.filter(
and_(
AuditSignOffDB.session_id == session_id,
AuditSignOffDB.requirement_id == requirement_id,
)
)
.first()
)
def get_by_session(
self,
session_id: str,
result_filter: Optional[AuditResultEnum] = None,
) -> List[AuditSignOffDB]:
"""Get all sign-offs for a session."""
query = (
self.db.query(AuditSignOffDB)
.options(joinedload(AuditSignOffDB.requirement))
.filter(AuditSignOffDB.session_id == session_id)
)
if result_filter:
query = query.filter(AuditSignOffDB.result == result_filter)
return query.order_by(AuditSignOffDB.created_at).all()
def update(
self,
signoff_id: str,
result: Optional[AuditResultEnum] = None,
notes: Optional[str] = None,
sign: bool = False,
signed_by: Optional[str] = None,
) -> Optional[AuditSignOffDB]:
"""Update a sign-off with optional digital signature."""
signoff = self.db.query(AuditSignOffDB).filter(
AuditSignOffDB.id == signoff_id
).first()
if not signoff:
return None
if result is not None:
signoff.result = result
if notes is not None:
signoff.notes = notes
if sign and signed_by:
signoff.create_signature(signed_by)
signoff.updated_at = datetime.now(timezone.utc)
self.db.commit()
self.db.refresh(signoff)
# Update session progress
self._update_session_progress(signoff.session_id)
return signoff
def sign_off(
self,
session_id: str,
requirement_id: str,
result: AuditResultEnum,
notes: Optional[str] = None,
sign: bool = False,
signed_by: Optional[str] = None,
) -> AuditSignOffDB:
"""
Create or update a sign-off for a requirement.
This is the main method for auditors to record their findings.
"""
# Check if sign-off already exists
signoff = self.get_by_session_and_requirement(session_id, requirement_id)
if signoff:
# Update existing
signoff.result = result
if notes is not None:
signoff.notes = notes
if sign and signed_by:
signoff.create_signature(signed_by)
signoff.updated_at = datetime.now(timezone.utc)
else:
# Create new
signoff = AuditSignOffDB(
id=str(uuid.uuid4()),
session_id=session_id,
requirement_id=requirement_id,
result=result,
notes=notes,
)
if sign and signed_by:
signoff.create_signature(signed_by)
self.db.add(signoff)
self.db.commit()
self.db.refresh(signoff)
# Update session progress
self._update_session_progress(session_id)
return signoff
def _update_session_progress(self, session_id: str) -> None:
"""Update the session's completed_items count."""
completed = (
self.db.query(func.count(AuditSignOffDB.id))
.filter(
and_(
AuditSignOffDB.session_id == session_id,
AuditSignOffDB.result != AuditResultEnum.PENDING,
)
)
.scalar()
) or 0
session = self.db.query(AuditSessionDB).filter(
AuditSessionDB.id == session_id
).first()
if session:
session.completed_items = completed
session.updated_at = datetime.now(timezone.utc)
self.db.commit()
def get_checklist(
self,
session_id: str,
page: int = 1,
page_size: int = 50,
result_filter: Optional[AuditResultEnum] = None,
regulation_code: Optional[str] = None,
search: Optional[str] = None,
) -> Tuple[List[Dict[str, Any]], int]:
"""
Get audit checklist items for a session with pagination.
Returns requirements with their sign-off status.
"""
session = self.db.query(AuditSessionDB).filter(
AuditSessionDB.id == session_id
).first()
if not session:
return [], 0
# Base query for requirements
query = (
self.db.query(RequirementDB)
.options(
joinedload(RequirementDB.regulation),
selectinload(RequirementDB.control_mappings),
)
)
# Filter by session's regulation_ids if set
if session.regulation_ids:
query = query.filter(RequirementDB.regulation_id.in_(session.regulation_ids))
# Filter by regulation code
if regulation_code:
query = query.join(RegulationDB).filter(RegulationDB.code == regulation_code)
# Search
if search:
search_term = f"%{search}%"
query = query.filter(
or_(
RequirementDB.title.ilike(search_term),
RequirementDB.article.ilike(search_term),
)
)
# Get existing sign-offs for this session
signoffs_map = {}
signoffs = (
self.db.query(AuditSignOffDB)
.filter(AuditSignOffDB.session_id == session_id)
.all()
)
for s in signoffs:
signoffs_map[s.requirement_id] = s
# Filter by result if specified
if result_filter:
if result_filter == AuditResultEnum.PENDING:
# Requirements without sign-off or with pending status
signed_req_ids = [
s.requirement_id for s in signoffs
if s.result != AuditResultEnum.PENDING
]
if signed_req_ids:
query = query.filter(~RequirementDB.id.in_(signed_req_ids))
else:
# Requirements with specific result
matching_req_ids = [
s.requirement_id for s in signoffs
if s.result == result_filter
]
if matching_req_ids:
query = query.filter(RequirementDB.id.in_(matching_req_ids))
else:
return [], 0
# Count and paginate
total = query.count()
requirements = (
query
.order_by(RequirementDB.article, RequirementDB.paragraph)
.offset((page - 1) * page_size)
.limit(page_size)
.all()
)
# Build checklist items
items = []
for req in requirements:
signoff = signoffs_map.get(req.id)
items.append({
"requirement_id": req.id,
"regulation_code": req.regulation.code if req.regulation else None,
"regulation_name": req.regulation.name if req.regulation else None,
"article": req.article,
"paragraph": req.paragraph,
"title": req.title,
"description": req.description,
"current_result": signoff.result.value if signoff else AuditResultEnum.PENDING.value,
"notes": signoff.notes if signoff else None,
"is_signed": bool(signoff.signature_hash) if signoff else False,
"signed_at": signoff.signed_at if signoff else None,
"signed_by": signoff.signed_by if signoff else None,
"evidence_count": len(req.control_mappings) if req.control_mappings else 0,
"controls_mapped": len(req.control_mappings) if req.control_mappings else 0,
})
return items, total
def delete(self, signoff_id: str) -> bool:
"""Delete a sign-off."""
signoff = self.db.query(AuditSignOffDB).filter(
AuditSignOffDB.id == signoff_id
).first()
if not signoff:
return False
session_id = signoff.session_id
self.db.delete(signoff)
self.db.commit()
# Update session progress
self._update_session_progress(session_id)
return True
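get_checklist's pagination uses the standard 1-indexed offset formula, `offset = (page - 1) * page_size`. The same arithmetic on a plain list:

```python
def paginate(items, page: int = 1, page_size: int = 50):
    # Same arithmetic as the repository's .offset()/.limit(): page is 1-indexed
    start = (page - 1) * page_size
    return items[start:start + page_size]

reqs = [f"req-{i:03d}" for i in range(120)]
first_page = paginate(reqs, page=1, page_size=50)
third_page = paginate(reqs, page=3, page_size=50)
print(first_page[0], third_page[0], len(third_page))  # req-000 req-100 20
```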

View File

@@ -0,0 +1,279 @@
"""
Control, Evidence, and Risk models — extracted from compliance/db/models.py.
Covers the control framework (ControlDB), requirement↔control mappings,
evidence artifacts, and the risk register. Re-exported from
``compliance.db.models`` for backwards compatibility.
DO NOT change __tablename__, column names, or relationship strings.
"""
import uuid
import enum
from datetime import datetime, date, timezone
from sqlalchemy import (
Column, String, Text, Integer, Boolean, DateTime, Date,
ForeignKey, Enum, JSON, Index,
)
from sqlalchemy.orm import relationship
from classroom_engine.database import Base
# ============================================================================
# ENUMS
# ============================================================================
class ControlTypeEnum(str, enum.Enum):
"""Type of security control."""
PREVENTIVE = "preventive" # Prevents incidents
DETECTIVE = "detective" # Detects incidents
CORRECTIVE = "corrective" # Corrects after incidents
class ControlDomainEnum(str, enum.Enum):
"""Domain/category of control."""
GOVERNANCE = "gov" # Governance & Organization
PRIVACY = "priv" # Privacy & Data Protection
IAM = "iam" # Identity & Access Management
CRYPTO = "crypto" # Cryptography & Key Management
SDLC = "sdlc" # Secure Development Lifecycle
OPS = "ops" # Operations & Monitoring
AI = "ai" # AI-specific controls
CRA = "cra" # CRA & Supply Chain
AUDIT = "aud" # Audit & Traceability
class ControlStatusEnum(str, enum.Enum):
"""Implementation status of a control."""
PASS = "pass" # Fully implemented & passing
PARTIAL = "partial" # Partially implemented
FAIL = "fail" # Not passing
NOT_APPLICABLE = "n/a" # Not applicable
PLANNED = "planned" # Planned for implementation
class RiskLevelEnum(str, enum.Enum):
"""Risk severity level."""
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
CRITICAL = "critical"
class EvidenceStatusEnum(str, enum.Enum):
"""Status of evidence artifact."""
VALID = "valid" # Currently valid
EXPIRED = "expired" # Past validity date
PENDING = "pending" # Awaiting validation
FAILED = "failed" # Failed validation
# ============================================================================
# MODELS
# ============================================================================
class ControlDB(Base):
"""
Technical or organizational security control.
Examples: PRIV-001 (Verarbeitungsverzeichnis, i.e. record of processing activities), SDLC-001 (SAST Scanning)
"""
__tablename__ = 'compliance_controls'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
control_id = Column(String(20), unique=True, nullable=False, index=True) # e.g., "PRIV-001"
domain = Column(Enum(ControlDomainEnum), nullable=False, index=True)
control_type = Column(Enum(ControlTypeEnum), nullable=False)
title = Column(String(300), nullable=False)
description = Column(Text)
pass_criteria = Column(Text, nullable=False) # Measurable pass criteria
implementation_guidance = Column(Text) # How to implement
# Code/Evidence references
code_reference = Column(String(500)) # e.g., "backend/middleware/pii_redactor.py:45"
documentation_url = Column(String(500)) # Link to internal docs
# Automation
is_automated = Column(Boolean, default=False)
automation_tool = Column(String(100)) # e.g., "Semgrep", "Trivy"
automation_config = Column(JSON) # Tool-specific config
# Status
status = Column(Enum(ControlStatusEnum), default=ControlStatusEnum.PLANNED)
status_notes = Column(Text)
# Ownership & Review
owner = Column(String(100)) # Responsible person/team
review_frequency_days = Column(Integer, default=90)
last_reviewed_at = Column(DateTime)
next_review_at = Column(DateTime)
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
# Relationships
mappings = relationship("ControlMappingDB", back_populates="control", cascade="all, delete-orphan")
evidence = relationship("EvidenceDB", back_populates="control", cascade="all, delete-orphan")
__table_args__ = (
Index('ix_control_domain_status', 'domain', 'status'),
)
def __repr__(self):
return f"<Control {self.control_id}: {self.title}>"
class ControlMappingDB(Base):
"""
Maps requirements to controls (many-to-many with metadata).
"""
__tablename__ = 'compliance_control_mappings'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
requirement_id = Column(String(36), ForeignKey('compliance_requirements.id'), nullable=False, index=True)
control_id = Column(String(36), ForeignKey('compliance_controls.id'), nullable=False, index=True)
coverage_level = Column(String(20), default="full") # "full", "partial", "planned"
notes = Column(Text) # Explanation of coverage
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
# Relationships
requirement = relationship("RequirementDB", back_populates="control_mappings")
control = relationship("ControlDB", back_populates="mappings")
__table_args__ = (
Index('ix_mapping_req_ctrl', 'requirement_id', 'control_id', unique=True),
)
class EvidenceDB(Base):
"""
Audit evidence for controls.
Types: scan_report, policy_document, config_snapshot, test_result,
manual_upload, screenshot, external_link
"""
__tablename__ = 'compliance_evidence'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
control_id = Column(String(36), ForeignKey('compliance_controls.id'), nullable=False, index=True)
evidence_type = Column(String(50), nullable=False) # Type of evidence
title = Column(String(300), nullable=False)
description = Column(Text)
# File/Link storage
artifact_path = Column(String(500)) # Local file path
artifact_url = Column(String(500)) # External URL
artifact_hash = Column(String(64)) # SHA-256 hash
file_size_bytes = Column(Integer)
mime_type = Column(String(100))
# Validity period
valid_from = Column(DateTime, nullable=False, default=lambda: datetime.now(timezone.utc))
valid_until = Column(DateTime) # NULL = no expiry
status = Column(Enum(EvidenceStatusEnum), default=EvidenceStatusEnum.VALID)
# Source tracking
source = Column(String(100)) # "ci_pipeline", "manual", "api"
ci_job_id = Column(String(100)) # CI/CD job reference
uploaded_by = Column(String(100)) # User who uploaded
# Timestamps
collected_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
# Relationships
control = relationship("ControlDB", back_populates="evidence")
__table_args__ = (
Index('ix_evidence_control_type', 'control_id', 'evidence_type'),
Index('ix_evidence_status', 'status'),
)
def __repr__(self):
return f"<Evidence {self.evidence_type}: {self.title}>"
class RiskDB(Base):
"""
Risk register entry with likelihood x impact scoring.
"""
__tablename__ = 'compliance_risks'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
risk_id = Column(String(20), unique=True, nullable=False, index=True) # e.g., "RISK-001"
title = Column(String(300), nullable=False)
description = Column(Text)
category = Column(String(50), nullable=False) # "data_breach", "compliance_gap", etc.
# Inherent risk (before controls)
likelihood = Column(Integer, nullable=False) # 1-5
impact = Column(Integer, nullable=False) # 1-5
inherent_risk = Column(Enum(RiskLevelEnum), nullable=False)
# Mitigating controls
mitigating_controls = Column(JSON) # List of control_ids
# Residual risk (after controls)
residual_likelihood = Column(Integer)
residual_impact = Column(Integer)
residual_risk = Column(Enum(RiskLevelEnum))
# Management
owner = Column(String(100))
status = Column(String(20), default="open") # "open", "mitigated", "accepted", "transferred"
treatment_plan = Column(Text)
# Review
identified_date = Column(Date, default=date.today)
review_date = Column(Date)
last_assessed_at = Column(DateTime)
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
__table_args__ = (
Index('ix_risk_category_status', 'category', 'status'),
Index('ix_risk_inherent', 'inherent_risk'),
)
def __repr__(self):
return f"<Risk {self.risk_id}: {self.title}>"
@staticmethod
def calculate_risk_level(likelihood: int, impact: int) -> RiskLevelEnum:
"""Calculate risk level from likelihood x impact matrix."""
score = likelihood * impact
if score >= 20:
return RiskLevelEnum.CRITICAL
elif score >= 12:
return RiskLevelEnum.HIGH
elif score >= 6:
return RiskLevelEnum.MEDIUM
else:
return RiskLevelEnum.LOW
__all__ = [
"ControlTypeEnum",
"ControlDomainEnum",
"ControlStatusEnum",
"RiskLevelEnum",
"EvidenceStatusEnum",
"ControlDB",
"ControlMappingDB",
"EvidenceDB",
"RiskDB",
]
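The thresholds in `RiskDB.calculate_risk_level` carve the 5x5 likelihood-by-impact matrix into four bands. A standalone restatement with the same thresholds, using plain strings instead of `RiskLevelEnum` so it runs without the models:

```python
def risk_level(likelihood: int, impact: int) -> str:
    """Same banding as RiskDB.calculate_risk_level: score = likelihood * impact (1..25)."""
    score = likelihood * impact
    if score >= 20:
        return "critical"   # 20-25: e.g. 4x5, 5x4, 5x5
    elif score >= 12:
        return "high"       # 12-19: e.g. 3x4, 4x3, 3x5
    elif score >= 6:
        return "medium"     # 6-11
    return "low"            # 1-5

# Note the matrix is coarse at the edges: likelihood 1 with impact 5
# still scores 5 and lands in "low".
```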


@@ -0,0 +1,291 @@
"""
Compliance repositories — extracted from compliance/db/repository.py.
Phase 1 Step 5: the monolithic repository module is decomposed per
aggregate. Every repository class is re-exported from
``compliance.db.repository`` for backwards compatibility.
"""
import uuid
from datetime import datetime, date, timezone
from typing import List, Optional, Dict, Any, Tuple
from sqlalchemy.orm import Session as DBSession, selectinload, joinedload
from sqlalchemy import func, and_, or_
from compliance.db.models import (
RegulationDB, RequirementDB, ControlDB, ControlMappingDB,
EvidenceDB, RiskDB, AuditExportDB,
AuditSessionDB, AuditSignOffDB, AuditResultEnum, AuditSessionStatusEnum,
RegulationTypeEnum, ControlDomainEnum, ControlStatusEnum,
RiskLevelEnum, EvidenceStatusEnum, ExportStatusEnum,
ServiceModuleDB, ModuleRegulationMappingDB,
)
class ControlRepository:
"""Repository for controls."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
control_id: str,
domain: ControlDomainEnum,
control_type: str,
title: str,
pass_criteria: str,
description: Optional[str] = None,
implementation_guidance: Optional[str] = None,
code_reference: Optional[str] = None,
is_automated: bool = False,
automation_tool: Optional[str] = None,
owner: Optional[str] = None,
review_frequency_days: int = 90,
) -> ControlDB:
"""Create a new control."""
control = ControlDB(
id=str(uuid.uuid4()),
control_id=control_id,
domain=domain,
control_type=control_type,
title=title,
description=description,
pass_criteria=pass_criteria,
implementation_guidance=implementation_guidance,
code_reference=code_reference,
is_automated=is_automated,
automation_tool=automation_tool,
owner=owner,
review_frequency_days=review_frequency_days,
)
self.db.add(control)
self.db.commit()
self.db.refresh(control)
return control
def get_by_id(self, control_uuid: str) -> Optional[ControlDB]:
"""Get control by UUID with eager-loaded relationships."""
return (
self.db.query(ControlDB)
.options(
selectinload(ControlDB.mappings).selectinload(ControlMappingDB.requirement),
selectinload(ControlDB.evidence)
)
.filter(ControlDB.id == control_uuid)
.first()
)
def get_by_control_id(self, control_id: str) -> Optional[ControlDB]:
"""Get control by control_id (e.g., 'PRIV-001') with eager-loaded relationships."""
return (
self.db.query(ControlDB)
.options(
selectinload(ControlDB.mappings).selectinload(ControlMappingDB.requirement),
selectinload(ControlDB.evidence)
)
.filter(ControlDB.control_id == control_id)
.first()
)
def get_all(
self,
domain: Optional[ControlDomainEnum] = None,
status: Optional[ControlStatusEnum] = None,
is_automated: Optional[bool] = None,
) -> List[ControlDB]:
"""Get all controls with optional filters and eager-loading."""
query = (
self.db.query(ControlDB)
.options(
selectinload(ControlDB.mappings),
selectinload(ControlDB.evidence)
)
)
if domain:
query = query.filter(ControlDB.domain == domain)
if status:
query = query.filter(ControlDB.status == status)
if is_automated is not None:
query = query.filter(ControlDB.is_automated == is_automated)
return query.order_by(ControlDB.control_id).all()
def get_paginated(
self,
page: int = 1,
page_size: int = 50,
domain: Optional[ControlDomainEnum] = None,
status: Optional[ControlStatusEnum] = None,
is_automated: Optional[bool] = None,
search: Optional[str] = None,
) -> Tuple[List[ControlDB], int]:
"""
Get paginated controls with eager-loaded relationships.
Returns tuple of (items, total_count).
"""
query = (
self.db.query(ControlDB)
.options(
selectinload(ControlDB.mappings),
selectinload(ControlDB.evidence)
)
)
if domain:
query = query.filter(ControlDB.domain == domain)
if status:
query = query.filter(ControlDB.status == status)
if is_automated is not None:
query = query.filter(ControlDB.is_automated == is_automated)
if search:
search_term = f"%{search}%"
query = query.filter(
or_(
ControlDB.title.ilike(search_term),
ControlDB.description.ilike(search_term),
ControlDB.control_id.ilike(search_term),
)
)
total = query.count()
items = (
query
.order_by(ControlDB.control_id)
.offset((page - 1) * page_size)
.limit(page_size)
.all()
)
return items, total
def get_by_domain(self, domain: ControlDomainEnum) -> List[ControlDB]:
"""Get all controls in a domain."""
return self.get_all(domain=domain)
def get_by_status(self, status: ControlStatusEnum) -> List[ControlDB]:
"""Get all controls with a specific status."""
return self.get_all(status=status)
def update_status(
self,
control_id: str,
status: ControlStatusEnum,
status_notes: Optional[str] = None
) -> Optional[ControlDB]:
"""Update control status."""
control = self.get_by_control_id(control_id)
if not control:
return None
control.status = status
if status_notes:
control.status_notes = status_notes
control.updated_at = datetime.now(timezone.utc)
self.db.commit()
self.db.refresh(control)
return control
def mark_reviewed(self, control_id: str) -> Optional[ControlDB]:
"""Mark control as reviewed."""
control = self.get_by_control_id(control_id)
if not control:
return None
control.last_reviewed_at = datetime.now(timezone.utc)
from datetime import timedelta
control.next_review_at = datetime.now(timezone.utc) + timedelta(days=control.review_frequency_days)
control.updated_at = datetime.now(timezone.utc)
self.db.commit()
self.db.refresh(control)
return control
def get_due_for_review(self) -> List[ControlDB]:
"""Get controls due for review."""
return (
self.db.query(ControlDB)
.filter(
or_(
ControlDB.next_review_at.is_(None),  # "is None" would test the Column object itself, never emitting SQL IS NULL
ControlDB.next_review_at <= datetime.now(timezone.utc)
)
)
.order_by(ControlDB.next_review_at)
.all()
)
def get_statistics(self) -> Dict[str, Any]:
"""Get control statistics by status and domain."""
total = self.db.query(func.count(ControlDB.id)).scalar()
by_status = dict(
self.db.query(ControlDB.status, func.count(ControlDB.id))
.group_by(ControlDB.status)
.all()
)
by_domain = dict(
self.db.query(ControlDB.domain, func.count(ControlDB.id))
.group_by(ControlDB.domain)
.all()
)
passed = by_status.get(ControlStatusEnum.PASS, 0)
partial = by_status.get(ControlStatusEnum.PARTIAL, 0)
score = 0.0
if total > 0:
score = ((passed + (partial * 0.5)) / total) * 100
return {
"total": total,
"by_status": {str(k.value) if k else "none": v for k, v in by_status.items()},
"by_domain": {str(k.value) if k else "none": v for k, v in by_domain.items()},
"compliance_score": round(score, 1),
}
class ControlMappingRepository:
"""Repository for requirement-control mappings."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
requirement_id: str,
control_id: str,
coverage_level: str = "full",
notes: Optional[str] = None,
) -> ControlMappingDB:
"""Create a mapping."""
# Get the control UUID from control_id
control = self.db.query(ControlDB).filter(ControlDB.control_id == control_id).first()
if not control:
raise ValueError(f"Control {control_id} not found")
mapping = ControlMappingDB(
id=str(uuid.uuid4()),
requirement_id=requirement_id,
control_id=control.id,
coverage_level=coverage_level,
notes=notes,
)
self.db.add(mapping)
self.db.commit()
self.db.refresh(mapping)
return mapping
def get_by_requirement(self, requirement_id: str) -> List[ControlMappingDB]:
"""Get all mappings for a requirement."""
return (
self.db.query(ControlMappingDB)
.filter(ControlMappingDB.requirement_id == requirement_id)
.all()
)
def get_by_control(self, control_uuid: str) -> List[ControlMappingDB]:
"""Get all mappings for a control."""
return (
self.db.query(ControlMappingDB)
.filter(ControlMappingDB.control_id == control_uuid)
.all()
)
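`ControlRepository.get_statistics` above weights a PARTIAL control at half a PASS when computing the compliance score. The formula in isolation — `compliance_score` is an illustrative extraction of that arithmetic, not part of the diff:

```python
def compliance_score(passed: int, partial: int, total: int) -> float:
    """PASS counts fully, PARTIAL counts half, all other statuses count zero."""
    if total == 0:
        return 0.0
    return round((passed + partial * 0.5) / total * 100, 1)

# 12 passing + 4 partial out of 20 controls -> (12 + 2) / 20 * 100 = 70.0
score = compliance_score(passed=12, partial=4, total=20)
```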


@@ -0,0 +1,143 @@
"""
Compliance repositories — extracted from compliance/db/repository.py.
Phase 1 Step 5: the monolithic repository module is decomposed per
aggregate. Every repository class is re-exported from
``compliance.db.repository`` for backwards compatibility.
"""
import uuid
from datetime import datetime, date, timezone
from typing import List, Optional, Dict, Any, Tuple
from sqlalchemy.orm import Session as DBSession, selectinload, joinedload
from sqlalchemy import func, and_, or_
from compliance.db.models import (
RegulationDB, RequirementDB, ControlDB, ControlMappingDB,
EvidenceDB, RiskDB, AuditExportDB,
AuditSessionDB, AuditSignOffDB, AuditResultEnum, AuditSessionStatusEnum,
RegulationTypeEnum, ControlDomainEnum, ControlStatusEnum,
RiskLevelEnum, EvidenceStatusEnum, ExportStatusEnum,
ServiceModuleDB, ModuleRegulationMappingDB,
)
class EvidenceRepository:
"""Repository for evidence."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
control_id: str,
evidence_type: str,
title: str,
description: Optional[str] = None,
artifact_path: Optional[str] = None,
artifact_url: Optional[str] = None,
artifact_hash: Optional[str] = None,
file_size_bytes: Optional[int] = None,
mime_type: Optional[str] = None,
valid_until: Optional[datetime] = None,
source: str = "manual",
ci_job_id: Optional[str] = None,
uploaded_by: Optional[str] = None,
) -> EvidenceDB:
"""Create evidence record."""
# Get control UUID
control = self.db.query(ControlDB).filter(ControlDB.control_id == control_id).first()
if not control:
raise ValueError(f"Control {control_id} not found")
evidence = EvidenceDB(
id=str(uuid.uuid4()),
control_id=control.id,
evidence_type=evidence_type,
title=title,
description=description,
artifact_path=artifact_path,
artifact_url=artifact_url,
artifact_hash=artifact_hash,
file_size_bytes=file_size_bytes,
mime_type=mime_type,
valid_until=valid_until,
source=source,
ci_job_id=ci_job_id,
uploaded_by=uploaded_by,
)
self.db.add(evidence)
self.db.commit()
self.db.refresh(evidence)
return evidence
def get_by_id(self, evidence_id: str) -> Optional[EvidenceDB]:
"""Get evidence by ID."""
return self.db.query(EvidenceDB).filter(EvidenceDB.id == evidence_id).first()
def get_by_control(
self,
control_id: str,
status: Optional[EvidenceStatusEnum] = None
) -> List[EvidenceDB]:
"""Get all evidence for a control."""
control = self.db.query(ControlDB).filter(ControlDB.control_id == control_id).first()
if not control:
return []
query = self.db.query(EvidenceDB).filter(EvidenceDB.control_id == control.id)
if status:
query = query.filter(EvidenceDB.status == status)
return query.order_by(EvidenceDB.collected_at.desc()).all()
def get_all(
self,
evidence_type: Optional[str] = None,
status: Optional[EvidenceStatusEnum] = None,
limit: int = 100,
) -> List[EvidenceDB]:
"""Get all evidence with filters."""
query = self.db.query(EvidenceDB)
if evidence_type:
query = query.filter(EvidenceDB.evidence_type == evidence_type)
if status:
query = query.filter(EvidenceDB.status == status)
return query.order_by(EvidenceDB.collected_at.desc()).limit(limit).all()
def update_status(self, evidence_id: str, status: EvidenceStatusEnum) -> Optional[EvidenceDB]:
"""Update evidence status."""
evidence = self.get_by_id(evidence_id)
if not evidence:
return None
evidence.status = status
evidence.updated_at = datetime.now(timezone.utc)
self.db.commit()
self.db.refresh(evidence)
return evidence
def get_statistics(self) -> Dict[str, Any]:
"""Get evidence statistics."""
total = self.db.query(func.count(EvidenceDB.id)).scalar()
by_type = dict(
self.db.query(EvidenceDB.evidence_type, func.count(EvidenceDB.id))
.group_by(EvidenceDB.evidence_type)
.all()
)
by_status = dict(
self.db.query(EvidenceDB.status, func.count(EvidenceDB.id))
.group_by(EvidenceDB.status)
.all()
)
valid = by_status.get(EvidenceStatusEnum.VALID, 0)
coverage = (valid / total * 100) if total > 0 else 0
return {
"total": total,
"by_type": by_type,
"by_status": {str(k.value) if k else "none": v for k, v in by_status.items()},
"coverage_percent": round(coverage, 1),
}
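`EvidenceDB` declares `valid_until` (NULL = no expiry) and an `EXPIRED` status, but the diff shows no code flipping the status when the validity window passes. A minimal sketch of the check such a job could run — `is_expired` is hypothetical, derived only from the column comments:

```python
from datetime import datetime, timezone
from typing import Optional

def is_expired(valid_until: Optional[datetime], now: Optional[datetime] = None) -> bool:
    """True when evidence is past its validity date; NULL valid_until means no expiry."""
    if valid_until is None:
        return False
    now = now or datetime.now(timezone.utc)
    return valid_until < now
```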


@@ -0,0 +1,468 @@
"""
ISMS Audit Execution models (ISO 27001 Clauses 9-10) — extracted from
compliance/db/models.py.
Covers findings, corrective actions (CAPA), management reviews, internal
audits, audit trail, and readiness checks. The governance side (scope,
context, policies, objectives, SoA) lives in ``isms_governance_models.py``.
Re-exported from ``compliance.db.models`` for backwards compatibility.
DO NOT change __tablename__, column names, or relationship strings.
"""
import uuid
import enum
from datetime import datetime, date, timezone
from sqlalchemy import (
Column, String, Text, Integer, Boolean, DateTime, Date,
ForeignKey, Enum, JSON, Index, Float,
)
from sqlalchemy.orm import relationship
from classroom_engine.database import Base
# ============================================================================
# ENUMS
# ============================================================================
class FindingTypeEnum(str, enum.Enum):
"""ISO 27001 audit finding classification."""
MAJOR = "major" # Major nonconformity - blocks certification
MINOR = "minor" # Minor nonconformity - requires CAPA
OFI = "ofi" # Opportunity for Improvement
POSITIVE = "positive" # Positive observation
class FindingStatusEnum(str, enum.Enum):
"""Status of an audit finding."""
OPEN = "open"
IN_PROGRESS = "in_progress"
CORRECTIVE_ACTION_PENDING = "capa_pending"
VERIFICATION_PENDING = "verification_pending"
VERIFIED = "verified"
CLOSED = "closed"
class CAPATypeEnum(str, enum.Enum):
"""Type of corrective/preventive action."""
CORRECTIVE = "corrective" # Fix the nonconformity
PREVENTIVE = "preventive" # Prevent recurrence
BOTH = "both"
# ============================================================================
# MODELS
# ============================================================================
class AuditFindingDB(Base):
"""
Audit Finding with ISO 27001 Classification (Major/Minor/OFI)
Tracks findings from internal and external audits with proper
classification and CAPA workflow.
"""
__tablename__ = 'compliance_audit_findings'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
finding_id = Column(String(30), unique=True, nullable=False, index=True) # e.g., "FIND-2026-001"
# Source
audit_session_id = Column(String(36), ForeignKey('compliance_audit_sessions.id'), index=True)
internal_audit_id = Column(String(36), ForeignKey('compliance_internal_audits.id'), index=True)
# Classification (CRITICAL for ISO 27001!)
finding_type = Column(Enum(FindingTypeEnum), nullable=False)
# ISO reference
iso_chapter = Column(String(20)) # e.g., "6.1.2", "9.2"
annex_a_control = Column(String(20)) # e.g., "A.8.2"
# Finding details
title = Column(String(300), nullable=False)
description = Column(Text, nullable=False)
objective_evidence = Column(Text, nullable=False) # What the auditor observed
# Root cause analysis
root_cause = Column(Text)
root_cause_method = Column(String(50)) # "5-why", "fishbone", "pareto"
# Impact assessment
impact_description = Column(Text)
affected_processes = Column(JSON)
affected_assets = Column(JSON)
# Status tracking
status = Column(Enum(FindingStatusEnum), default=FindingStatusEnum.OPEN)
# Responsibility
owner = Column(String(100)) # Person responsible for closure
auditor = Column(String(100)) # Auditor who raised finding
# Dates
identified_date = Column(Date, nullable=False, default=date.today)
due_date = Column(Date) # Deadline for closure
closed_date = Column(Date)
# Verification
verification_method = Column(Text)
verified_by = Column(String(100))
verified_at = Column(DateTime)
verification_evidence = Column(Text)
# Closure
closure_notes = Column(Text)
closed_by = Column(String(100))
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
# Relationships
corrective_actions = relationship("CorrectiveActionDB", back_populates="finding", cascade="all, delete-orphan")
__table_args__ = (
Index('ix_finding_type_status', 'finding_type', 'status'),
Index('ix_finding_due_date', 'due_date'),
)
def __repr__(self):
return f"<AuditFinding {self.finding_id}: {self.finding_type.value}>"
@property
def is_blocking(self) -> bool:
"""Major findings block certification."""
return self.finding_type == FindingTypeEnum.MAJOR and self.status != FindingStatusEnum.CLOSED
class CorrectiveActionDB(Base):
"""
Corrective & Preventive Actions (CAPA) - ISO 27001 10.1
Tracks actions taken to address nonconformities.
"""
__tablename__ = 'compliance_corrective_actions'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
capa_id = Column(String(30), unique=True, nullable=False, index=True) # e.g., "CAPA-2026-001"
# Link to finding
finding_id = Column(String(36), ForeignKey('compliance_audit_findings.id'), nullable=False, index=True)
# Type
capa_type = Column(Enum(CAPATypeEnum), nullable=False)
# Action details
title = Column(String(300), nullable=False)
description = Column(Text, nullable=False)
expected_outcome = Column(Text)
# Responsibility
assigned_to = Column(String(100), nullable=False)
approved_by = Column(String(100))
# Timeline
planned_start = Column(Date)
planned_completion = Column(Date, nullable=False)
actual_completion = Column(Date)
# Status
status = Column(String(30), default="planned") # planned, in_progress, completed, verified, cancelled
progress_percentage = Column(Integer, default=0)
# Resources
estimated_effort_hours = Column(Integer)
actual_effort_hours = Column(Integer)
resources_required = Column(Text)
# Evidence of implementation
implementation_evidence = Column(Text)
evidence_ids = Column(JSON)
# Effectiveness review
effectiveness_criteria = Column(Text)
effectiveness_verified = Column(Boolean, default=False)
effectiveness_verification_date = Column(Date)
effectiveness_notes = Column(Text)
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
# Relationships
finding = relationship("AuditFindingDB", back_populates="corrective_actions")
__table_args__ = (
Index('ix_capa_status', 'status'),
Index('ix_capa_due', 'planned_completion'),
)
def __repr__(self):
return f"<CAPA {self.capa_id}: {self.capa_type.value}>"
class ManagementReviewDB(Base):
"""
Management Review (ISO 27001 Clause 9.3)
Records mandatory management reviews of the ISMS.
"""
__tablename__ = 'compliance_management_reviews'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
review_id = Column(String(30), unique=True, nullable=False, index=True) # e.g., "MR-2026-Q1"
# Review details
title = Column(String(200), nullable=False)
review_date = Column(Date, nullable=False)
review_period_start = Column(Date) # Period being reviewed
review_period_end = Column(Date)
# Participants
chairperson = Column(String(100), nullable=False) # Usually top management
attendees = Column(JSON) # List of {"name": "", "role": ""}
# 9.3 Review Inputs (mandatory!)
input_previous_actions = Column(Text) # Status of previous review actions
input_isms_changes = Column(Text) # Changes in internal/external issues
input_security_performance = Column(Text) # Nonconformities, monitoring, audit results
input_interested_party_feedback = Column(Text)
input_risk_assessment_results = Column(Text)
input_improvement_opportunities = Column(Text)
# Additional inputs
input_policy_effectiveness = Column(Text)
input_objective_achievement = Column(Text)
input_resource_adequacy = Column(Text)
# 9.3 Review Outputs (mandatory!)
output_improvement_decisions = Column(Text) # Decisions for improvement
output_isms_changes = Column(Text) # Changes needed to ISMS
output_resource_needs = Column(Text) # Resource requirements
# Action items
action_items = Column(JSON) # List of {"action": "", "owner": "", "due_date": ""}
# Overall assessment
isms_effectiveness_rating = Column(String(20)) # "effective", "partially_effective", "not_effective"
key_decisions = Column(Text)
# Approval
status = Column(String(30), default="draft") # draft, conducted, approved
approved_by = Column(String(100))
approved_at = Column(DateTime)
minutes_document_path = Column(String(500)) # Link to meeting minutes
# Next review
next_review_date = Column(Date)
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
__table_args__ = (
Index('ix_mgmt_review_date', 'review_date'),
Index('ix_mgmt_review_status', 'status'),
)
def __repr__(self):
return f"<ManagementReview {self.review_id}: {self.review_date}>"
class InternalAuditDB(Base):
"""
Internal Audit (ISO 27001 Clause 9.2)
Tracks internal audit program and individual audits.
"""
__tablename__ = 'compliance_internal_audits'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
audit_id = Column(String(30), unique=True, nullable=False, index=True) # e.g., "IA-2026-001"
# Audit details
title = Column(String(200), nullable=False)
audit_type = Column(String(50), nullable=False) # "scheduled", "surveillance", "special"
# Scope
scope_description = Column(Text, nullable=False)
iso_chapters_covered = Column(JSON) # e.g., ["4", "5", "6.1"]
annex_a_controls_covered = Column(JSON) # e.g., ["A.5", "A.6"]
processes_covered = Column(JSON)
departments_covered = Column(JSON)
# Audit criteria
criteria = Column(Text) # Standards, policies being audited against
# Timeline
planned_date = Column(Date, nullable=False)
actual_start_date = Column(Date)
actual_end_date = Column(Date)
# Audit team
lead_auditor = Column(String(100), nullable=False)
audit_team = Column(JSON) # List of auditor names
auditee_representatives = Column(JSON) # Who was interviewed
# Status
status = Column(String(30), default="planned") # planned, in_progress, completed, cancelled
# Results summary
total_findings = Column(Integer, default=0)
major_findings = Column(Integer, default=0)
minor_findings = Column(Integer, default=0)
ofi_count = Column(Integer, default=0)
positive_observations = Column(Integer, default=0)
# Conclusion
audit_conclusion = Column(Text)
overall_assessment = Column(String(30)) # "conforming", "minor_nc", "major_nc"
# Report
report_date = Column(Date)
report_document_path = Column(String(500))
# Sign-off
report_approved_by = Column(String(100))
report_approved_at = Column(DateTime)
# Follow-up
follow_up_audit_required = Column(Boolean, default=False)
follow_up_audit_id = Column(String(36))
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
# Relationships
findings = relationship("AuditFindingDB", backref="internal_audit", foreign_keys=[AuditFindingDB.internal_audit_id])
__table_args__ = (
Index('ix_internal_audit_date', 'planned_date'),
Index('ix_internal_audit_status', 'status'),
)
def __repr__(self):
return f"<InternalAudit {self.audit_id}: {self.title}>"
class AuditTrailDB(Base):
"""
Comprehensive Audit Trail for ISMS Changes
Tracks all changes to compliance-relevant data for
accountability and forensic analysis.
"""
__tablename__ = 'compliance_audit_trail'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
# What changed
entity_type = Column(String(50), nullable=False, index=True) # "control", "risk", "policy", etc.
entity_id = Column(String(36), nullable=False, index=True)
entity_name = Column(String(200)) # Human-readable identifier
# Action
action = Column(String(20), nullable=False) # "create", "update", "delete", "approve", "sign"
# Change details
field_changed = Column(String(100)) # Which field (for updates)
old_value = Column(Text)
new_value = Column(Text)
change_summary = Column(Text) # Human-readable summary
# Who & When
performed_by = Column(String(100), nullable=False)
performed_at = Column(DateTime, nullable=False, default=lambda: datetime.now(timezone.utc))
# Context
ip_address = Column(String(45))
user_agent = Column(String(500))
session_id = Column(String(100))
# Integrity
checksum = Column(String(64)) # SHA-256 of the change
# Timestamps (immutable after creation)
created_at = Column(DateTime, nullable=False, default=lambda: datetime.now(timezone.utc))
__table_args__ = (
Index('ix_audit_trail_entity', 'entity_type', 'entity_id'),
Index('ix_audit_trail_time', 'performed_at'),
Index('ix_audit_trail_user', 'performed_by'),
)
def __repr__(self):
return f"<AuditTrail {self.action} on {self.entity_type}/{self.entity_id}>"
class ISMSReadinessCheckDB(Base):
"""
ISMS Readiness Check Results
Stores automated pre-audit checks to identify potential
Major findings before external audit.
"""
__tablename__ = 'compliance_isms_readiness'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
# Check run
check_date = Column(DateTime, nullable=False, default=lambda: datetime.now(timezone.utc))
triggered_by = Column(String(100)) # "scheduled", "manual", "pre-audit"
# Overall status
overall_status = Column(String(20), nullable=False) # "ready", "at_risk", "not_ready"
certification_possible = Column(Boolean, nullable=False)
# Chapter-by-chapter status (ISO 27001)
chapter_4_status = Column(String(20)) # Context
chapter_5_status = Column(String(20)) # Leadership
chapter_6_status = Column(String(20)) # Planning
chapter_7_status = Column(String(20)) # Support
chapter_8_status = Column(String(20)) # Operation
chapter_9_status = Column(String(20)) # Performance
chapter_10_status = Column(String(20)) # Improvement
# Potential Major findings
potential_majors = Column(JSON) # List of {"check": "", "status": "", "recommendation": ""}
# Potential Minor findings
potential_minors = Column(JSON)
# Improvement opportunities
improvement_opportunities = Column(JSON)
# Scores
readiness_score = Column(Float) # 0-100
documentation_score = Column(Float)
implementation_score = Column(Float)
evidence_score = Column(Float)
# Recommendations
priority_actions = Column(JSON) # List of recommended actions before audit
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
__table_args__ = (
Index('ix_readiness_date', 'check_date'),
Index('ix_readiness_status', 'overall_status'),
)
def __repr__(self):
return f"<ISMSReadiness {self.check_date}: {self.overall_status}>"
__all__ = [
"FindingTypeEnum",
"FindingStatusEnum",
"CAPATypeEnum",
"AuditFindingDB",
"CorrectiveActionDB",
"ManagementReviewDB",
"InternalAuditDB",
"AuditTrailDB",
"ISMSReadinessCheckDB",
]
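The readiness model above stores one status per ISO 27001 chapter plus an aggregated `overall_status` and `certification_possible` flag. The sketch below shows one plausible roll-up rule, assuming any failing chapter blocks certification; the service's actual aggregation algorithm is not shown in this file.

```python
# Illustrative roll-up of ISMSReadinessCheckDB.chapter_*_status values into
# overall_status / certification_possible. The aggregation rule here is an
# assumption for the sketch, not the shipped algorithm.
from typing import Dict, Tuple


def roll_up_readiness(chapter_status: Dict[str, str]) -> Tuple[str, bool]:
    """Map per-chapter statuses ("ready"/"at_risk"/"not_ready") to an overall result."""
    statuses = set(chapter_status.values())
    if "not_ready" in statuses:
        return "not_ready", False  # a failing chapter blocks certification
    if "at_risk" in statuses:
        return "at_risk", True     # certification still possible, but at risk
    return "ready", True
```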


@@ -0,0 +1,499 @@
"""
ISMS repositories — extracted from compliance/db/isms_repository.py.
Phase 1 Step 5: split per sub-aggregate. Re-exported from
``compliance.db.isms_repository`` for backwards compatibility.
"""
import uuid
from datetime import datetime, date, timezone
from typing import List, Optional, Dict, Any, Tuple
from sqlalchemy.orm import Session as DBSession
from compliance.db.models import (
ISMSScopeDB, ISMSPolicyDB, SecurityObjectiveDB,
StatementOfApplicabilityDB, AuditFindingDB, CorrectiveActionDB,
ManagementReviewDB, InternalAuditDB, AuditTrailDB, ISMSReadinessCheckDB,
ApprovalStatusEnum, FindingTypeEnum, FindingStatusEnum, CAPATypeEnum,
)
class AuditFindingRepository:
"""Repository for Audit Findings (Major/Minor/OFI)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
finding_type: FindingTypeEnum,
title: str,
description: str,
auditor: str,
iso_chapter: Optional[str] = None,
annex_a_control: Optional[str] = None,
objective_evidence: Optional[str] = None,
owner: Optional[str] = None,
due_date: Optional[date] = None,
internal_audit_id: Optional[str] = None,
) -> AuditFindingDB:
"""Create a new audit finding."""
# Generate finding ID
year = date.today().year
existing_count = self.db.query(AuditFindingDB).filter(
AuditFindingDB.finding_id.like(f"FIND-{year}-%")
).count()
finding_id = f"FIND-{year}-{existing_count + 1:03d}"
finding = AuditFindingDB(
id=str(uuid.uuid4()),
finding_id=finding_id,
finding_type=finding_type,
iso_chapter=iso_chapter,
annex_a_control=annex_a_control,
title=title,
description=description,
objective_evidence=objective_evidence,
owner=owner,
auditor=auditor,
due_date=due_date,
internal_audit_id=internal_audit_id,
status=FindingStatusEnum.OPEN,
)
self.db.add(finding)
self.db.commit()
self.db.refresh(finding)
return finding
def get_by_id(self, finding_id: str) -> Optional[AuditFindingDB]:
"""Get finding by UUID or finding_id."""
return self.db.query(AuditFindingDB).filter(
(AuditFindingDB.id == finding_id) | (AuditFindingDB.finding_id == finding_id)
).first()
def get_all(
self,
finding_type: Optional[FindingTypeEnum] = None,
status: Optional[FindingStatusEnum] = None,
internal_audit_id: Optional[str] = None,
) -> List[AuditFindingDB]:
"""Get all findings with optional filters."""
query = self.db.query(AuditFindingDB)
if finding_type:
query = query.filter(AuditFindingDB.finding_type == finding_type)
if status:
query = query.filter(AuditFindingDB.status == status)
if internal_audit_id:
query = query.filter(AuditFindingDB.internal_audit_id == internal_audit_id)
return query.order_by(AuditFindingDB.identified_date.desc()).all()
def get_open_majors(self) -> List[AuditFindingDB]:
"""Get all open major findings (blocking certification)."""
return self.db.query(AuditFindingDB).filter(
AuditFindingDB.finding_type == FindingTypeEnum.MAJOR,
AuditFindingDB.status != FindingStatusEnum.CLOSED
).all()
def get_statistics(self) -> Dict[str, Any]:
"""Get finding statistics."""
findings = self.get_all()
return {
"total": len(findings),
"major": sum(1 for f in findings if f.finding_type == FindingTypeEnum.MAJOR),
"minor": sum(1 for f in findings if f.finding_type == FindingTypeEnum.MINOR),
"ofi": sum(1 for f in findings if f.finding_type == FindingTypeEnum.OFI),
"positive": sum(1 for f in findings if f.finding_type == FindingTypeEnum.POSITIVE),
"open": sum(1 for f in findings if f.status != FindingStatusEnum.CLOSED),
"closed": sum(1 for f in findings if f.status == FindingStatusEnum.CLOSED),
"blocking_certification": sum(
1 for f in findings
if f.finding_type == FindingTypeEnum.MAJOR and f.status != FindingStatusEnum.CLOSED
),
}
def close(
self,
finding_id: str,
closed_by: str,
closure_notes: str,
verification_method: Optional[str] = None,
verification_evidence: Optional[str] = None,
) -> Optional[AuditFindingDB]:
"""Close a finding after verification."""
finding = self.get_by_id(finding_id)
if not finding:
return None
finding.status = FindingStatusEnum.CLOSED
finding.closed_date = date.today()
finding.closed_by = closed_by
finding.closure_notes = closure_notes
finding.verification_method = verification_method
finding.verification_evidence = verification_evidence
finding.verified_by = closed_by
finding.verified_at = datetime.now(timezone.utc)
self.db.commit()
self.db.refresh(finding)
return finding
class CorrectiveActionRepository:
"""Repository for Corrective/Preventive Actions (CAPA)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
finding_id: str,
capa_type: CAPATypeEnum,
title: str,
description: str,
assigned_to: str,
planned_completion: date,
expected_outcome: Optional[str] = None,
effectiveness_criteria: Optional[str] = None,
) -> CorrectiveActionDB:
"""Create a new CAPA."""
# Generate CAPA ID
year = date.today().year
existing_count = self.db.query(CorrectiveActionDB).filter(
CorrectiveActionDB.capa_id.like(f"CAPA-{year}-%")
).count()
capa_id = f"CAPA-{year}-{existing_count + 1:03d}"
capa = CorrectiveActionDB(
id=str(uuid.uuid4()),
capa_id=capa_id,
finding_id=finding_id,
capa_type=capa_type,
title=title,
description=description,
expected_outcome=expected_outcome,
assigned_to=assigned_to,
planned_completion=planned_completion,
effectiveness_criteria=effectiveness_criteria,
status="planned",
)
self.db.add(capa)
# Update finding status
finding = self.db.query(AuditFindingDB).filter(AuditFindingDB.id == finding_id).first()
if finding:
finding.status = FindingStatusEnum.CORRECTIVE_ACTION_PENDING
self.db.commit()
self.db.refresh(capa)
return capa
def get_by_id(self, capa_id: str) -> Optional[CorrectiveActionDB]:
"""Get CAPA by UUID or capa_id."""
return self.db.query(CorrectiveActionDB).filter(
(CorrectiveActionDB.id == capa_id) | (CorrectiveActionDB.capa_id == capa_id)
).first()
def get_by_finding(self, finding_id: str) -> List[CorrectiveActionDB]:
"""Get all CAPAs for a finding."""
return self.db.query(CorrectiveActionDB).filter(
CorrectiveActionDB.finding_id == finding_id
).order_by(CorrectiveActionDB.planned_completion).all()
def verify(
self,
capa_id: str,
verified_by: str,
is_effective: bool,
effectiveness_notes: Optional[str] = None,
) -> Optional[CorrectiveActionDB]:
"""Verify a completed CAPA."""
capa = self.get_by_id(capa_id)
if not capa:
return None
capa.effectiveness_verified = is_effective
capa.effectiveness_verification_date = date.today()
capa.effectiveness_notes = effectiveness_notes
capa.status = "verified" if is_effective else "completed"
# If verified, check if all CAPAs for finding are verified
if is_effective:
finding = self.db.query(AuditFindingDB).filter(
AuditFindingDB.id == capa.finding_id
).first()
if finding:
unverified = self.db.query(CorrectiveActionDB).filter(
CorrectiveActionDB.finding_id == finding.id,
CorrectiveActionDB.id != capa.id,
CorrectiveActionDB.status != "verified"
).count()
if unverified == 0:
finding.status = FindingStatusEnum.VERIFICATION_PENDING
self.db.commit()
self.db.refresh(capa)
return capa
class ManagementReviewRepository:
"""Repository for Management Reviews (ISO 27001 Chapter 9.3)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
title: str,
review_date: date,
chairperson: str,
review_period_start: Optional[date] = None,
review_period_end: Optional[date] = None,
) -> ManagementReviewDB:
"""Create a new management review."""
# Generate review ID
year = review_date.year
quarter = (review_date.month - 1) // 3 + 1
review_id = f"MR-{year}-Q{quarter}"
# Check for duplicate
existing = self.db.query(ManagementReviewDB).filter(
ManagementReviewDB.review_id == review_id
).first()
if existing:
review_id = f"{review_id}-{str(uuid.uuid4())[:4]}"
review = ManagementReviewDB(
id=str(uuid.uuid4()),
review_id=review_id,
title=title,
review_date=review_date,
review_period_start=review_period_start,
review_period_end=review_period_end,
chairperson=chairperson,
status="draft",
)
self.db.add(review)
self.db.commit()
self.db.refresh(review)
return review
def get_by_id(self, review_id: str) -> Optional[ManagementReviewDB]:
"""Get review by UUID or review_id."""
return self.db.query(ManagementReviewDB).filter(
(ManagementReviewDB.id == review_id) | (ManagementReviewDB.review_id == review_id)
).first()
def get_latest_approved(self) -> Optional[ManagementReviewDB]:
"""Get the most recent approved management review."""
return self.db.query(ManagementReviewDB).filter(
ManagementReviewDB.status == "approved"
).order_by(ManagementReviewDB.review_date.desc()).first()
def approve(
self,
review_id: str,
approved_by: str,
next_review_date: date,
minutes_document_path: Optional[str] = None,
) -> Optional[ManagementReviewDB]:
"""Approve a management review."""
review = self.get_by_id(review_id)
if not review:
return None
review.status = "approved"
review.approved_by = approved_by
review.approved_at = datetime.now(timezone.utc)
review.next_review_date = next_review_date
review.minutes_document_path = minutes_document_path
self.db.commit()
self.db.refresh(review)
return review
class InternalAuditRepository:
"""Repository for Internal Audits (ISO 27001 Chapter 9.2)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
title: str,
audit_type: str,
planned_date: date,
lead_auditor: str,
scope_description: Optional[str] = None,
iso_chapters_covered: Optional[List[str]] = None,
annex_a_controls_covered: Optional[List[str]] = None,
) -> InternalAuditDB:
"""Create a new internal audit."""
# Generate audit ID
year = planned_date.year
existing_count = self.db.query(InternalAuditDB).filter(
InternalAuditDB.audit_id.like(f"IA-{year}-%")
).count()
audit_id = f"IA-{year}-{existing_count + 1:03d}"
audit = InternalAuditDB(
id=str(uuid.uuid4()),
audit_id=audit_id,
title=title,
audit_type=audit_type,
scope_description=scope_description,
iso_chapters_covered=iso_chapters_covered,
annex_a_controls_covered=annex_a_controls_covered,
planned_date=planned_date,
lead_auditor=lead_auditor,
status="planned",
)
self.db.add(audit)
self.db.commit()
self.db.refresh(audit)
return audit
def get_by_id(self, audit_id: str) -> Optional[InternalAuditDB]:
"""Get audit by UUID or audit_id."""
return self.db.query(InternalAuditDB).filter(
(InternalAuditDB.id == audit_id) | (InternalAuditDB.audit_id == audit_id)
).first()
def get_latest_completed(self) -> Optional[InternalAuditDB]:
"""Get the most recent completed internal audit."""
return self.db.query(InternalAuditDB).filter(
InternalAuditDB.status == "completed"
).order_by(InternalAuditDB.actual_end_date.desc()).first()
def complete(
self,
audit_id: str,
audit_conclusion: str,
overall_assessment: str,
follow_up_audit_required: bool = False,
) -> Optional[InternalAuditDB]:
"""Complete an internal audit."""
audit = self.get_by_id(audit_id)
if not audit:
return None
audit.status = "completed"
audit.actual_end_date = date.today()
audit.report_date = date.today()
audit.audit_conclusion = audit_conclusion
audit.overall_assessment = overall_assessment
audit.follow_up_audit_required = follow_up_audit_required
self.db.commit()
self.db.refresh(audit)
return audit
class AuditTrailRepository:
"""Repository for Audit Trail entries."""
def __init__(self, db: DBSession):
self.db = db
def log(
self,
entity_type: str,
entity_id: str,
entity_name: str,
action: str,
performed_by: str,
field_changed: Optional[str] = None,
old_value: Optional[str] = None,
new_value: Optional[str] = None,
change_summary: Optional[str] = None,
) -> AuditTrailDB:
"""Log an audit trail entry."""
import hashlib
entry = AuditTrailDB(
id=str(uuid.uuid4()),
entity_type=entity_type,
entity_id=entity_id,
entity_name=entity_name,
action=action,
field_changed=field_changed,
old_value=old_value,
new_value=new_value,
change_summary=change_summary,
performed_by=performed_by,
performed_at=datetime.now(timezone.utc),
checksum=hashlib.sha256(
f"{entity_type}|{entity_id}|{action}|{performed_by}".encode()
).hexdigest(),
)
self.db.add(entry)
self.db.commit()
self.db.refresh(entry)
return entry
def get_by_entity(
self,
entity_type: str,
entity_id: str,
limit: int = 100,
) -> List[AuditTrailDB]:
"""Get audit trail for a specific entity."""
return self.db.query(AuditTrailDB).filter(
AuditTrailDB.entity_type == entity_type,
AuditTrailDB.entity_id == entity_id
).order_by(AuditTrailDB.performed_at.desc()).limit(limit).all()
def get_paginated(
self,
page: int = 1,
page_size: int = 50,
entity_type: Optional[str] = None,
entity_id: Optional[str] = None,
performed_by: Optional[str] = None,
action: Optional[str] = None,
) -> Tuple[List[AuditTrailDB], int]:
"""Get paginated audit trail with filters."""
query = self.db.query(AuditTrailDB)
if entity_type:
query = query.filter(AuditTrailDB.entity_type == entity_type)
if entity_id:
query = query.filter(AuditTrailDB.entity_id == entity_id)
if performed_by:
query = query.filter(AuditTrailDB.performed_by == performed_by)
if action:
query = query.filter(AuditTrailDB.action == action)
total = query.count()
entries = query.order_by(AuditTrailDB.performed_at.desc()).offset(
(page - 1) * page_size
).limit(page_size).all()
return entries, total
class ISMSReadinessCheckRepository:
"""Repository for ISMS Readiness Check results."""
def __init__(self, db: DBSession):
self.db = db
def save(self, check: ISMSReadinessCheckDB) -> ISMSReadinessCheckDB:
"""Save a readiness check result."""
self.db.add(check)
self.db.commit()
self.db.refresh(check)
return check
def get_latest(self) -> Optional[ISMSReadinessCheckDB]:
"""Get the most recent readiness check."""
return self.db.query(ISMSReadinessCheckDB).order_by(
ISMSReadinessCheckDB.check_date.desc()
).first()
def get_history(self, limit: int = 10) -> List[ISMSReadinessCheckDB]:
"""Get readiness check history."""
return self.db.query(ISMSReadinessCheckDB).order_by(
ISMSReadinessCheckDB.check_date.desc()
).limit(limit).all()
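`AuditTrailRepository.log()` above derives the stored checksum from four pipe-joined fields. The matching tamper-evidence check can be sketched as below; the `TrailRow` dataclass is a hypothetical stand-in so the example stays independent of SQLAlchemy, and any object carrying the same four attributes plus `checksum` would work.

```python
# Recompute the SHA-256 that AuditTrailRepository.log() stores and compare it
# to the persisted checksum. TrailRow is a plain stand-in for an AuditTrailDB row.
import hashlib
from dataclasses import dataclass


@dataclass
class TrailRow:
    entity_type: str
    entity_id: str
    action: str
    performed_by: str
    checksum: str


def expected_checksum(entity_type: str, entity_id: str, action: str, performed_by: str) -> str:
    # Same formula as log(): pipe-joined fields, SHA-256 hex digest.
    return hashlib.sha256(
        f"{entity_type}|{entity_id}|{action}|{performed_by}".encode()
    ).hexdigest()


def is_intact(row: TrailRow) -> bool:
    return row.checksum == expected_checksum(
        row.entity_type, row.entity_id, row.action, row.performed_by
    )
```

Note that the stored digest covers identity fields only, not `old_value`/`new_value`, so this check detects row substitution rather than edits to the change payload.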


@@ -0,0 +1,323 @@
"""
ISMS Governance models (ISO 27001 Chapters 4-6) — extracted from compliance/db/models.py.
Covers the documentation and planning side of the ISMS: scope, context,
policies, security objectives, and the Statement of Applicability. The audit
execution side (findings, CAPA, management reviews, internal audits, audit
trail, readiness checks) lives in ``isms_audit_models.py``.
Re-exported from ``compliance.db.models`` for backwards compatibility.
DO NOT change __tablename__, column names, or relationship strings — the
database schema is frozen.
"""
import uuid
import enum
from datetime import datetime, date, timezone
from sqlalchemy import (
Column, String, Text, Integer, Boolean, DateTime, Date,
ForeignKey, Enum, JSON, Index,
)
from classroom_engine.database import Base
# ============================================================================
# SHARED GOVERNANCE ENUMS
# ============================================================================
class ApprovalStatusEnum(str, enum.Enum):
"""Approval status for ISMS documents."""
DRAFT = "draft"
UNDER_REVIEW = "under_review"
APPROVED = "approved"
SUPERSEDED = "superseded"
# ============================================================================
# MODELS
# ============================================================================
class ISMSScopeDB(Base):
"""
    ISMS Scope Definition (ISO 27001 Chapter 4.3)
Defines the boundaries and applicability of the ISMS.
This is MANDATORY for certification.
"""
__tablename__ = 'compliance_isms_scope'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
version = Column(String(20), nullable=False, default="1.0")
# Scope definition
scope_statement = Column(Text, nullable=False) # Main scope text
included_locations = Column(JSON) # List of locations
included_processes = Column(JSON) # List of processes
included_services = Column(JSON) # List of services/products
excluded_items = Column(JSON) # Explicitly excluded items
exclusion_justification = Column(Text) # Why items are excluded
# Boundaries
organizational_boundary = Column(Text) # Legal entity, departments
physical_boundary = Column(Text) # Locations, networks
technical_boundary = Column(Text) # Systems, applications
# Approval
status = Column(Enum(ApprovalStatusEnum), default=ApprovalStatusEnum.DRAFT)
approved_by = Column(String(100))
approved_at = Column(DateTime)
approval_signature = Column(String(64)) # SHA-256 hash
# Validity
effective_date = Column(Date)
review_date = Column(Date) # Next mandatory review
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
created_by = Column(String(100))
updated_by = Column(String(100))
__table_args__ = (
Index('ix_isms_scope_status', 'status'),
)
def __repr__(self):
return f"<ISMSScope v{self.version} ({self.status.value})>"
class ISMSContextDB(Base):
"""
    ISMS Context (ISO 27001 Chapters 4.1, 4.2)
Documents internal/external issues and interested parties.
"""
__tablename__ = 'compliance_isms_context'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
version = Column(String(20), nullable=False, default="1.0")
# 4.1 Internal issues
internal_issues = Column(JSON) # List of {"issue": "", "impact": "", "treatment": ""}
# 4.1 External issues
external_issues = Column(JSON) # List of {"issue": "", "impact": "", "treatment": ""}
# 4.2 Interested parties
interested_parties = Column(JSON) # List of {"party": "", "requirements": [], "relevance": ""}
# Legal/regulatory requirements
regulatory_requirements = Column(JSON) # DSGVO, AI Act, etc.
contractual_requirements = Column(JSON) # Customer contracts
# Analysis
swot_strengths = Column(JSON)
swot_weaknesses = Column(JSON)
swot_opportunities = Column(JSON)
swot_threats = Column(JSON)
# Approval
status = Column(Enum(ApprovalStatusEnum), default=ApprovalStatusEnum.DRAFT)
approved_by = Column(String(100))
approved_at = Column(DateTime)
# Review
last_reviewed_at = Column(DateTime)
next_review_date = Column(Date)
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
def __repr__(self):
return f"<ISMSContext v{self.version}>"
class ISMSPolicyDB(Base):
"""
    ISMS Policies (ISO 27001 Chapter 5.2)
Information security policy and sub-policies.
"""
__tablename__ = 'compliance_isms_policies'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
policy_id = Column(String(30), unique=True, nullable=False, index=True) # e.g., "POL-ISMS-001"
# Policy details
title = Column(String(200), nullable=False)
policy_type = Column(String(50), nullable=False) # "master", "operational", "technical"
description = Column(Text)
policy_text = Column(Text, nullable=False) # Full policy content
# Scope
applies_to = Column(JSON) # Roles, departments, systems
# Document control
version = Column(String(20), nullable=False, default="1.0")
status = Column(Enum(ApprovalStatusEnum), default=ApprovalStatusEnum.DRAFT)
# Approval chain
authored_by = Column(String(100))
reviewed_by = Column(String(100))
approved_by = Column(String(100)) # Must be top management
approved_at = Column(DateTime)
approval_signature = Column(String(64))
# Validity
effective_date = Column(Date)
review_frequency_months = Column(Integer, default=12)
next_review_date = Column(Date)
# References
parent_policy_id = Column(String(36), ForeignKey('compliance_isms_policies.id'))
related_controls = Column(JSON) # List of control_ids
# Document path
document_path = Column(String(500)) # Link to full document
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
__table_args__ = (
Index('ix_policy_type_status', 'policy_type', 'status'),
)
def __repr__(self):
return f"<ISMSPolicy {self.policy_id}: {self.title}>"
class SecurityObjectiveDB(Base):
"""
    Security Objectives (ISO 27001 Chapter 6.2)
Measurable information security objectives.
"""
__tablename__ = 'compliance_security_objectives'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
objective_id = Column(String(30), unique=True, nullable=False, index=True) # e.g., "OBJ-001"
# Objective definition
title = Column(String(200), nullable=False)
description = Column(Text)
category = Column(String(50)) # "availability", "confidentiality", "integrity", "compliance"
# SMART criteria
specific = Column(Text) # What exactly
measurable = Column(Text) # How measured
achievable = Column(Text) # Is it realistic
relevant = Column(Text) # Why important
time_bound = Column(Text) # Deadline
# Metrics
kpi_name = Column(String(100))
kpi_target = Column(String(100)) # Target value
kpi_current = Column(String(100)) # Current value
kpi_unit = Column(String(50)) # %, count, score
measurement_frequency = Column(String(50)) # monthly, quarterly
# Responsibility
owner = Column(String(100))
accountable = Column(String(100)) # RACI: Accountable
# Status
status = Column(String(30), default="active") # active, achieved, not_achieved, cancelled
progress_percentage = Column(Integer, default=0)
# Timeline
target_date = Column(Date)
achieved_date = Column(Date)
# Linked items
related_controls = Column(JSON)
related_risks = Column(JSON)
# Approval
approved_by = Column(String(100))
approved_at = Column(DateTime)
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
__table_args__ = (
Index('ix_objective_status', 'status'),
Index('ix_objective_category', 'category'),
)
def __repr__(self):
return f"<SecurityObjective {self.objective_id}: {self.title}>"
class StatementOfApplicabilityDB(Base):
"""
    Statement of Applicability (SoA) - ISO 27001 Annex A Mapping
Documents which Annex A controls are applicable and why.
This is MANDATORY for certification.
"""
__tablename__ = 'compliance_soa'
id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
# ISO 27001:2022 Annex A reference
annex_a_control = Column(String(20), nullable=False, index=True) # e.g., "A.5.1"
annex_a_title = Column(String(300), nullable=False)
annex_a_category = Column(String(100)) # "Organizational", "People", "Physical", "Technological"
# Applicability decision
is_applicable = Column(Boolean, nullable=False)
applicability_justification = Column(Text, nullable=False) # MUST be documented
# Implementation status
implementation_status = Column(String(30), default="planned") # planned, partial, implemented, not_implemented
implementation_notes = Column(Text)
# Mapping to our controls
breakpilot_control_ids = Column(JSON) # List of our control_ids that address this
coverage_level = Column(String(20), default="full") # full, partial, planned
# Evidence
evidence_description = Column(Text)
evidence_ids = Column(JSON) # Links to EvidenceDB
# Risk-based justification (for exclusions)
risk_assessment_notes = Column(Text) # If not applicable, explain why
compensating_controls = Column(Text) # If partial, explain compensating measures
# Approval
reviewed_by = Column(String(100))
reviewed_at = Column(DateTime)
approved_by = Column(String(100))
approved_at = Column(DateTime)
# Version tracking
version = Column(String(20), default="1.0")
# Timestamps
created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))
__table_args__ = (
Index('ix_soa_annex_control', 'annex_a_control', unique=True),
Index('ix_soa_applicable', 'is_applicable'),
Index('ix_soa_status', 'implementation_status'),
)
def __repr__(self):
return f"<SoA {self.annex_a_control}: {'Applicable' if self.is_applicable else 'N/A'}>"
__all__ = [
"ApprovalStatusEnum",
"ISMSScopeDB",
"ISMSContextDB",
"ISMSPolicyDB",
"SecurityObjectiveDB",
"StatementOfApplicabilityDB",
]
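The SoA rows above carry an applicability flag and an implementation status per Annex A control. An audit-prep view might compute a coverage summary over them along these lines; plain dicts stand in for ORM rows here, and the summary shape is illustrative rather than an existing API.

```python
# Illustrative coverage summary over StatementOfApplicabilityDB-like rows:
# how many Annex A controls are applicable, and how many of those are
# fully implemented. Row dicts mirror the is_applicable and
# implementation_status columns.
from typing import Dict, List


def soa_summary(rows: List[Dict[str, object]]) -> Dict[str, int]:
    applicable = [r for r in rows if r["is_applicable"]]
    return {
        "total": len(rows),
        "applicable": len(applicable),
        "implemented": sum(
            1 for r in applicable if r["implementation_status"] == "implemented"
        ),
    }
```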


@@ -0,0 +1,354 @@
"""
ISMS repositories — extracted from compliance/db/isms_repository.py.
Phase 1 Step 5: split per sub-aggregate. Re-exported from
``compliance.db.isms_repository`` for backwards compatibility.
"""
import uuid
from datetime import datetime, date, timezone
from typing import List, Optional, Dict, Any, Tuple
from sqlalchemy.orm import Session as DBSession
from compliance.db.models import (
ISMSScopeDB, ISMSPolicyDB, SecurityObjectiveDB,
StatementOfApplicabilityDB, AuditFindingDB, CorrectiveActionDB,
ManagementReviewDB, InternalAuditDB, AuditTrailDB, ISMSReadinessCheckDB,
ApprovalStatusEnum, FindingTypeEnum, FindingStatusEnum, CAPATypeEnum,
)
class ISMSScopeRepository:
"""Repository for ISMS Scope (ISO 27001 Chapter 4.3)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
scope_statement: str,
created_by: str,
included_locations: Optional[List[str]] = None,
included_processes: Optional[List[str]] = None,
included_services: Optional[List[str]] = None,
excluded_items: Optional[List[str]] = None,
exclusion_justification: Optional[str] = None,
organizational_boundary: Optional[str] = None,
physical_boundary: Optional[str] = None,
technical_boundary: Optional[str] = None,
) -> ISMSScopeDB:
"""Create a new ISMS scope definition."""
# Supersede existing scopes
existing = self.db.query(ISMSScopeDB).filter(
ISMSScopeDB.status != ApprovalStatusEnum.SUPERSEDED
).all()
for s in existing:
s.status = ApprovalStatusEnum.SUPERSEDED
scope = ISMSScopeDB(
id=str(uuid.uuid4()),
scope_statement=scope_statement,
included_locations=included_locations,
included_processes=included_processes,
included_services=included_services,
excluded_items=excluded_items,
exclusion_justification=exclusion_justification,
organizational_boundary=organizational_boundary,
physical_boundary=physical_boundary,
technical_boundary=technical_boundary,
status=ApprovalStatusEnum.DRAFT,
created_by=created_by,
)
self.db.add(scope)
self.db.commit()
self.db.refresh(scope)
return scope
def get_current(self) -> Optional[ISMSScopeDB]:
"""Get the current (non-superseded) ISMS scope."""
return self.db.query(ISMSScopeDB).filter(
ISMSScopeDB.status != ApprovalStatusEnum.SUPERSEDED
).order_by(ISMSScopeDB.created_at.desc()).first()
def get_by_id(self, scope_id: str) -> Optional[ISMSScopeDB]:
"""Get scope by ID."""
return self.db.query(ISMSScopeDB).filter(ISMSScopeDB.id == scope_id).first()
def approve(
self,
scope_id: str,
approved_by: str,
effective_date: date,
review_date: date,
) -> Optional[ISMSScopeDB]:
"""Approve the ISMS scope."""
scope = self.get_by_id(scope_id)
if not scope:
return None
import hashlib
scope.status = ApprovalStatusEnum.APPROVED
scope.approved_by = approved_by
scope.approved_at = datetime.now(timezone.utc)
scope.effective_date = effective_date
scope.review_date = review_date
scope.approval_signature = hashlib.sha256(
f"{scope.scope_statement}|{approved_by}|{datetime.now(timezone.utc).isoformat()}".encode()
).hexdigest()
self.db.commit()
self.db.refresh(scope)
return scope
class ISMSPolicyRepository:
"""Repository for ISMS Policies (ISO 27001 Chapter 5.2)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
policy_id: str,
title: str,
policy_type: str,
authored_by: str,
description: Optional[str] = None,
policy_text: Optional[str] = None,
applies_to: Optional[List[str]] = None,
review_frequency_months: int = 12,
related_controls: Optional[List[str]] = None,
) -> ISMSPolicyDB:
"""Create a new ISMS policy."""
policy = ISMSPolicyDB(
id=str(uuid.uuid4()),
policy_id=policy_id,
title=title,
policy_type=policy_type,
description=description,
policy_text=policy_text,
applies_to=applies_to,
review_frequency_months=review_frequency_months,
related_controls=related_controls,
authored_by=authored_by,
status=ApprovalStatusEnum.DRAFT,
)
self.db.add(policy)
self.db.commit()
self.db.refresh(policy)
return policy
def get_by_id(self, policy_id: str) -> Optional[ISMSPolicyDB]:
"""Get policy by UUID or policy_id."""
return self.db.query(ISMSPolicyDB).filter(
(ISMSPolicyDB.id == policy_id) | (ISMSPolicyDB.policy_id == policy_id)
).first()
def get_all(
self,
policy_type: Optional[str] = None,
status: Optional[ApprovalStatusEnum] = None,
) -> List[ISMSPolicyDB]:
"""Get all policies with optional filters."""
query = self.db.query(ISMSPolicyDB)
if policy_type:
query = query.filter(ISMSPolicyDB.policy_type == policy_type)
if status:
query = query.filter(ISMSPolicyDB.status == status)
return query.order_by(ISMSPolicyDB.policy_id).all()
def get_master_policy(self) -> Optional[ISMSPolicyDB]:
"""Get the approved master ISMS policy."""
return self.db.query(ISMSPolicyDB).filter(
ISMSPolicyDB.policy_type == "master",
ISMSPolicyDB.status == ApprovalStatusEnum.APPROVED
).first()
def approve(
self,
policy_id: str,
approved_by: str,
reviewed_by: str,
effective_date: date,
) -> Optional[ISMSPolicyDB]:
"""Approve a policy."""
policy = self.get_by_id(policy_id)
if not policy:
return None
import hashlib
policy.status = ApprovalStatusEnum.APPROVED
policy.reviewed_by = reviewed_by
policy.approved_by = approved_by
policy.approved_at = datetime.now(timezone.utc)
policy.effective_date = effective_date
        import calendar
        # Add review_frequency_months calendar-safely: the previous year-only
        # arithmetic silently dropped frequencies that aren't a multiple of 12
        # and raised ValueError when effective_date was Feb 29. Clamp the day
        # so short months and leap days stay valid.
        months = effective_date.month - 1 + policy.review_frequency_months
        review_year = effective_date.year + months // 12
        review_month = months % 12 + 1
        policy.next_review_date = date(
            review_year,
            review_month,
            min(effective_date.day, calendar.monthrange(review_year, review_month)[1]),
        )
policy.approval_signature = hashlib.sha256(
f"{policy.policy_id}|{approved_by}|{datetime.now(timezone.utc).isoformat()}".encode()
).hexdigest()
self.db.commit()
self.db.refresh(policy)
return policy
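A note on the `next_review_date` computation above: `review_frequency_months // 12` floors sub-annual frequencies to zero years (`6 // 12 == 0`, so a 6-month policy would come due immediately), and constructing `date(year + n, 2, 29)` raises `ValueError` when the effective date is Feb 29. A month-based sketch that avoids both, under the assumption that monthly granularity is what the field intends (the helper name is ours, not part of the repository API):

```python
import calendar
from datetime import date

def next_review_date(effective: date, frequency_months: int) -> date:
    """Add whole months, clamping the day for short months (illustrative).

    Unlike the year arithmetic above, this honours 6-month frequencies and
    maps Feb 29 + 12 months to Feb 28 instead of raising ValueError.
    """
    total = effective.month - 1 + frequency_months
    year = effective.year + total // 12
    month = total % 12 + 1
    day = min(effective.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)
```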
class SecurityObjectiveRepository:
"""Repository for Security Objectives (ISO 27001 Chapter 6.2)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
objective_id: str,
title: str,
description: str,
category: str,
owner: str,
kpi_name: Optional[str] = None,
kpi_target: Optional[float] = None,
kpi_unit: Optional[str] = None,
target_date: Optional[date] = None,
related_controls: Optional[List[str]] = None,
) -> SecurityObjectiveDB:
"""Create a new security objective."""
objective = SecurityObjectiveDB(
id=str(uuid.uuid4()),
objective_id=objective_id,
title=title,
description=description,
category=category,
kpi_name=kpi_name,
kpi_target=kpi_target,
kpi_unit=kpi_unit,
owner=owner,
target_date=target_date,
related_controls=related_controls,
status="active",
)
self.db.add(objective)
self.db.commit()
self.db.refresh(objective)
return objective
def get_by_id(self, objective_id: str) -> Optional[SecurityObjectiveDB]:
"""Get objective by UUID or objective_id."""
return self.db.query(SecurityObjectiveDB).filter(
(SecurityObjectiveDB.id == objective_id) |
(SecurityObjectiveDB.objective_id == objective_id)
).first()
def get_all(
self,
category: Optional[str] = None,
status: Optional[str] = None,
) -> List[SecurityObjectiveDB]:
"""Get all objectives with optional filters."""
query = self.db.query(SecurityObjectiveDB)
if category:
query = query.filter(SecurityObjectiveDB.category == category)
if status:
query = query.filter(SecurityObjectiveDB.status == status)
return query.order_by(SecurityObjectiveDB.objective_id).all()
def update_progress(
self,
objective_id: str,
kpi_current: float,
) -> Optional[SecurityObjectiveDB]:
"""Update objective progress."""
objective = self.get_by_id(objective_id)
if not objective:
return None
objective.kpi_current = kpi_current
if objective.kpi_target:
objective.progress_percentage = min(100, (kpi_current / objective.kpi_target) * 100)
# Auto-mark as achieved if 100%
if objective.progress_percentage >= 100 and objective.status == "active":
objective.status = "achieved"
objective.achieved_date = date.today()
self.db.commit()
self.db.refresh(objective)
return objective
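The capping logic in `update_progress` can be isolated as a pure function for unit testing; a sketch (the function name and dict-free signature are ours, not the repository API):

```python
from typing import Optional

def kpi_progress(kpi_current: float, kpi_target: Optional[float]) -> Optional[float]:
    """Mirror of the progress capping above: None when no target is set,
    otherwise the percentage clamped to 100 (illustrative)."""
    if not kpi_target:
        return None
    return min(100.0, kpi_current / kpi_target * 100.0)
```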
class StatementOfApplicabilityRepository:
"""Repository for Statement of Applicability (SoA)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
annex_a_control: str,
annex_a_title: str,
annex_a_category: str,
is_applicable: bool = True,
applicability_justification: Optional[str] = None,
implementation_status: str = "planned",
breakpilot_control_ids: Optional[List[str]] = None,
) -> StatementOfApplicabilityDB:
"""Create a new SoA entry."""
entry = StatementOfApplicabilityDB(
id=str(uuid.uuid4()),
annex_a_control=annex_a_control,
annex_a_title=annex_a_title,
annex_a_category=annex_a_category,
is_applicable=is_applicable,
applicability_justification=applicability_justification,
implementation_status=implementation_status,
breakpilot_control_ids=breakpilot_control_ids or [],
)
self.db.add(entry)
self.db.commit()
self.db.refresh(entry)
return entry
def get_by_control(self, annex_a_control: str) -> Optional[StatementOfApplicabilityDB]:
"""Get SoA entry by Annex A control ID (e.g., 'A.5.1')."""
return self.db.query(StatementOfApplicabilityDB).filter(
StatementOfApplicabilityDB.annex_a_control == annex_a_control
).first()
def get_all(
self,
is_applicable: Optional[bool] = None,
implementation_status: Optional[str] = None,
category: Optional[str] = None,
) -> List[StatementOfApplicabilityDB]:
"""Get all SoA entries with optional filters."""
query = self.db.query(StatementOfApplicabilityDB)
if is_applicable is not None:
query = query.filter(StatementOfApplicabilityDB.is_applicable == is_applicable)
if implementation_status:
query = query.filter(StatementOfApplicabilityDB.implementation_status == implementation_status)
if category:
query = query.filter(StatementOfApplicabilityDB.annex_a_category == category)
return query.order_by(StatementOfApplicabilityDB.annex_a_control).all()
def get_statistics(self) -> Dict[str, Any]:
"""Get SoA statistics."""
entries = self.get_all()
total = len(entries)
applicable = sum(1 for e in entries if e.is_applicable)
implemented = sum(1 for e in entries if e.implementation_status == "implemented")
approved = sum(1 for e in entries if e.approved_at)
return {
"total": total,
"applicable": applicable,
"not_applicable": total - applicable,
"implemented": implemented,
"planned": sum(1 for e in entries if e.implementation_status == "planned"),
"approved": approved,
"pending_approval": total - approved,
"implementation_rate": round((implemented / applicable * 100) if applicable > 0 else 0, 1),
}
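The aggregation in `get_statistics` is easy to verify against plain dicts; a minimal sketch of the same arithmetic, detached from SQLAlchemy (names invented for illustration):

```python
def soa_statistics(entries):
    """Same counting as StatementOfApplicabilityRepository.get_statistics(),
    over plain dicts instead of ORM rows (illustrative)."""
    total = len(entries)
    applicable = sum(1 for e in entries if e["is_applicable"])
    implemented = sum(1 for e in entries if e["implementation_status"] == "implemented")
    return {
        "total": total,
        "applicable": applicable,
        "not_applicable": total - applicable,
        "implemented": implemented,
        # Rate is over applicable controls only, guarded against division by zero.
        "implementation_rate": round(implemented / applicable * 100, 1) if applicable else 0,
    }
```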

View File

@@ -1,838 +1,25 @@
"""
Repository layer for ISMS (Information Security Management System) entities.
compliance.db.isms_repository — backwards-compatibility re-export shim.
Provides CRUD operations for ISO 27001 certification-related entities:
- ISMS Scope & Context
- Policies & Objectives
- Statement of Applicability (SoA)
- Audit Findings & CAPA
- Management Reviews & Internal Audits
Phase 1 Step 5 split the 838-line ISMS repository module into two
sub-aggregate sibling modules: governance (scope, policy, objective, SoA)
and audit execution (finding, CAPA, review, internal audit, trail, readiness).
Every repository class is re-exported so existing imports continue to work.
New code SHOULD import from the sub-aggregate module directly.
"""
import uuid
from datetime import datetime, date
from typing import List, Optional, Dict, Any, Tuple
from sqlalchemy.orm import Session as DBSession
from .models import (
ISMSScopeDB, ISMSPolicyDB, SecurityObjectiveDB,
StatementOfApplicabilityDB, AuditFindingDB, CorrectiveActionDB,
ManagementReviewDB, InternalAuditDB, AuditTrailDB, ISMSReadinessCheckDB,
ApprovalStatusEnum, FindingTypeEnum, FindingStatusEnum, CAPATypeEnum
from compliance.db.isms_governance_repository import ( # noqa: F401
ISMSScopeRepository,
ISMSPolicyRepository,
SecurityObjectiveRepository,
StatementOfApplicabilityRepository,
)
from compliance.db.isms_audit_repository import ( # noqa: F401
AuditFindingRepository,
CorrectiveActionRepository,
ManagementReviewRepository,
InternalAuditRepository,
AuditTrailRepository,
ISMSReadinessCheckRepository,
)
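The shim works because re-exported names are the *same* class objects, so existing imports, `isinstance` checks, and patches stay compatible. A self-contained sketch of the pattern (the `demo_*` module names are invented for the demo):

```python
import sys
import types

# Build two tiny "aggregate" modules, then a shim module that re-exports
# from both -- the same pattern isms_repository.py uses after the split.
governance = types.ModuleType("demo_governance")
governance.PolicyRepo = type("PolicyRepo", (), {})
audit = types.ModuleType("demo_audit")
audit.FindingRepo = type("FindingRepo", (), {})
sys.modules["demo_governance"] = governance
sys.modules["demo_audit"] = audit

shim = types.ModuleType("demo_shim")
from demo_governance import PolicyRepo  # noqa: E402
from demo_audit import FindingRepo  # noqa: E402
shim.PolicyRepo, shim.FindingRepo = PolicyRepo, FindingRepo
sys.modules["demo_shim"] = shim
```

Importing through the shim or through the aggregate module yields identical objects, which is why the 173 existing tests could pass unchanged.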
class ISMSScopeRepository:
"""Repository for ISMS Scope (ISO 27001 Chapter 4.3)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
scope_statement: str,
created_by: str,
included_locations: Optional[List[str]] = None,
included_processes: Optional[List[str]] = None,
included_services: Optional[List[str]] = None,
excluded_items: Optional[List[str]] = None,
exclusion_justification: Optional[str] = None,
organizational_boundary: Optional[str] = None,
physical_boundary: Optional[str] = None,
technical_boundary: Optional[str] = None,
) -> ISMSScopeDB:
"""Create a new ISMS scope definition."""
# Supersede existing scopes
existing = self.db.query(ISMSScopeDB).filter(
ISMSScopeDB.status != ApprovalStatusEnum.SUPERSEDED
).all()
for s in existing:
s.status = ApprovalStatusEnum.SUPERSEDED
scope = ISMSScopeDB(
id=str(uuid.uuid4()),
scope_statement=scope_statement,
included_locations=included_locations,
included_processes=included_processes,
included_services=included_services,
excluded_items=excluded_items,
exclusion_justification=exclusion_justification,
organizational_boundary=organizational_boundary,
physical_boundary=physical_boundary,
technical_boundary=technical_boundary,
status=ApprovalStatusEnum.DRAFT,
created_by=created_by,
)
self.db.add(scope)
self.db.commit()
self.db.refresh(scope)
return scope
def get_current(self) -> Optional[ISMSScopeDB]:
"""Get the current (non-superseded) ISMS scope."""
return self.db.query(ISMSScopeDB).filter(
ISMSScopeDB.status != ApprovalStatusEnum.SUPERSEDED
).order_by(ISMSScopeDB.created_at.desc()).first()
def get_by_id(self, scope_id: str) -> Optional[ISMSScopeDB]:
"""Get scope by ID."""
return self.db.query(ISMSScopeDB).filter(ISMSScopeDB.id == scope_id).first()
def approve(
self,
scope_id: str,
approved_by: str,
effective_date: date,
review_date: date,
) -> Optional[ISMSScopeDB]:
"""Approve the ISMS scope."""
scope = self.get_by_id(scope_id)
if not scope:
return None
import hashlib
scope.status = ApprovalStatusEnum.APPROVED
scope.approved_by = approved_by
scope.approved_at = datetime.utcnow()
scope.effective_date = effective_date
scope.review_date = review_date
scope.approval_signature = hashlib.sha256(
f"{scope.scope_statement}|{approved_by}|{datetime.utcnow().isoformat()}".encode()
).hexdigest()
self.db.commit()
self.db.refresh(scope)
return scope
class ISMSPolicyRepository:
"""Repository for ISMS Policies (ISO 27001 Chapter 5.2)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
policy_id: str,
title: str,
policy_type: str,
authored_by: str,
description: Optional[str] = None,
policy_text: Optional[str] = None,
applies_to: Optional[List[str]] = None,
review_frequency_months: int = 12,
related_controls: Optional[List[str]] = None,
) -> ISMSPolicyDB:
"""Create a new ISMS policy."""
policy = ISMSPolicyDB(
id=str(uuid.uuid4()),
policy_id=policy_id,
title=title,
policy_type=policy_type,
description=description,
policy_text=policy_text,
applies_to=applies_to,
review_frequency_months=review_frequency_months,
related_controls=related_controls,
authored_by=authored_by,
status=ApprovalStatusEnum.DRAFT,
)
self.db.add(policy)
self.db.commit()
self.db.refresh(policy)
return policy
def get_by_id(self, policy_id: str) -> Optional[ISMSPolicyDB]:
"""Get policy by UUID or policy_id."""
return self.db.query(ISMSPolicyDB).filter(
(ISMSPolicyDB.id == policy_id) | (ISMSPolicyDB.policy_id == policy_id)
).first()
def get_all(
self,
policy_type: Optional[str] = None,
status: Optional[ApprovalStatusEnum] = None,
) -> List[ISMSPolicyDB]:
"""Get all policies with optional filters."""
query = self.db.query(ISMSPolicyDB)
if policy_type:
query = query.filter(ISMSPolicyDB.policy_type == policy_type)
if status:
query = query.filter(ISMSPolicyDB.status == status)
return query.order_by(ISMSPolicyDB.policy_id).all()
def get_master_policy(self) -> Optional[ISMSPolicyDB]:
"""Get the approved master ISMS policy."""
return self.db.query(ISMSPolicyDB).filter(
ISMSPolicyDB.policy_type == "master",
ISMSPolicyDB.status == ApprovalStatusEnum.APPROVED
).first()
def approve(
self,
policy_id: str,
approved_by: str,
reviewed_by: str,
effective_date: date,
) -> Optional[ISMSPolicyDB]:
"""Approve a policy."""
policy = self.get_by_id(policy_id)
if not policy:
return None
import hashlib
policy.status = ApprovalStatusEnum.APPROVED
policy.reviewed_by = reviewed_by
policy.approved_by = approved_by
policy.approved_at = datetime.utcnow()
policy.effective_date = effective_date
policy.next_review_date = date(
effective_date.year + (policy.review_frequency_months // 12),
effective_date.month,
effective_date.day
)
policy.approval_signature = hashlib.sha256(
f"{policy.policy_id}|{approved_by}|{datetime.utcnow().isoformat()}".encode()
).hexdigest()
self.db.commit()
self.db.refresh(policy)
return policy
class SecurityObjectiveRepository:
"""Repository for Security Objectives (ISO 27001 Chapter 6.2)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
objective_id: str,
title: str,
description: str,
category: str,
owner: str,
kpi_name: Optional[str] = None,
kpi_target: Optional[float] = None,
kpi_unit: Optional[str] = None,
target_date: Optional[date] = None,
related_controls: Optional[List[str]] = None,
) -> SecurityObjectiveDB:
"""Create a new security objective."""
objective = SecurityObjectiveDB(
id=str(uuid.uuid4()),
objective_id=objective_id,
title=title,
description=description,
category=category,
kpi_name=kpi_name,
kpi_target=kpi_target,
kpi_unit=kpi_unit,
owner=owner,
target_date=target_date,
related_controls=related_controls,
status="active",
)
self.db.add(objective)
self.db.commit()
self.db.refresh(objective)
return objective
def get_by_id(self, objective_id: str) -> Optional[SecurityObjectiveDB]:
"""Get objective by UUID or objective_id."""
return self.db.query(SecurityObjectiveDB).filter(
(SecurityObjectiveDB.id == objective_id) |
(SecurityObjectiveDB.objective_id == objective_id)
).first()
def get_all(
self,
category: Optional[str] = None,
status: Optional[str] = None,
) -> List[SecurityObjectiveDB]:
"""Get all objectives with optional filters."""
query = self.db.query(SecurityObjectiveDB)
if category:
query = query.filter(SecurityObjectiveDB.category == category)
if status:
query = query.filter(SecurityObjectiveDB.status == status)
return query.order_by(SecurityObjectiveDB.objective_id).all()
def update_progress(
self,
objective_id: str,
kpi_current: float,
) -> Optional[SecurityObjectiveDB]:
"""Update objective progress."""
objective = self.get_by_id(objective_id)
if not objective:
return None
objective.kpi_current = kpi_current
if objective.kpi_target:
objective.progress_percentage = min(100, (kpi_current / objective.kpi_target) * 100)
# Auto-mark as achieved if 100%
if objective.progress_percentage >= 100 and objective.status == "active":
objective.status = "achieved"
objective.achieved_date = date.today()
self.db.commit()
self.db.refresh(objective)
return objective
class StatementOfApplicabilityRepository:
"""Repository for Statement of Applicability (SoA)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
annex_a_control: str,
annex_a_title: str,
annex_a_category: str,
is_applicable: bool = True,
applicability_justification: Optional[str] = None,
implementation_status: str = "planned",
breakpilot_control_ids: Optional[List[str]] = None,
) -> StatementOfApplicabilityDB:
"""Create a new SoA entry."""
entry = StatementOfApplicabilityDB(
id=str(uuid.uuid4()),
annex_a_control=annex_a_control,
annex_a_title=annex_a_title,
annex_a_category=annex_a_category,
is_applicable=is_applicable,
applicability_justification=applicability_justification,
implementation_status=implementation_status,
breakpilot_control_ids=breakpilot_control_ids or [],
)
self.db.add(entry)
self.db.commit()
self.db.refresh(entry)
return entry
def get_by_control(self, annex_a_control: str) -> Optional[StatementOfApplicabilityDB]:
"""Get SoA entry by Annex A control ID (e.g., 'A.5.1')."""
return self.db.query(StatementOfApplicabilityDB).filter(
StatementOfApplicabilityDB.annex_a_control == annex_a_control
).first()
def get_all(
self,
is_applicable: Optional[bool] = None,
implementation_status: Optional[str] = None,
category: Optional[str] = None,
) -> List[StatementOfApplicabilityDB]:
"""Get all SoA entries with optional filters."""
query = self.db.query(StatementOfApplicabilityDB)
if is_applicable is not None:
query = query.filter(StatementOfApplicabilityDB.is_applicable == is_applicable)
if implementation_status:
query = query.filter(StatementOfApplicabilityDB.implementation_status == implementation_status)
if category:
query = query.filter(StatementOfApplicabilityDB.annex_a_category == category)
return query.order_by(StatementOfApplicabilityDB.annex_a_control).all()
def get_statistics(self) -> Dict[str, Any]:
"""Get SoA statistics."""
entries = self.get_all()
total = len(entries)
applicable = sum(1 for e in entries if e.is_applicable)
implemented = sum(1 for e in entries if e.implementation_status == "implemented")
approved = sum(1 for e in entries if e.approved_at)
return {
"total": total,
"applicable": applicable,
"not_applicable": total - applicable,
"implemented": implemented,
"planned": sum(1 for e in entries if e.implementation_status == "planned"),
"approved": approved,
"pending_approval": total - approved,
"implementation_rate": round((implemented / applicable * 100) if applicable > 0 else 0, 1),
}
class AuditFindingRepository:
"""Repository for Audit Findings (Major/Minor/OFI)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
finding_type: FindingTypeEnum,
title: str,
description: str,
auditor: str,
iso_chapter: Optional[str] = None,
annex_a_control: Optional[str] = None,
objective_evidence: Optional[str] = None,
owner: Optional[str] = None,
due_date: Optional[date] = None,
internal_audit_id: Optional[str] = None,
) -> AuditFindingDB:
"""Create a new audit finding."""
# Generate finding ID
year = date.today().year
existing_count = self.db.query(AuditFindingDB).filter(
AuditFindingDB.finding_id.like(f"FIND-{year}-%")
).count()
finding_id = f"FIND-{year}-{existing_count + 1:03d}"
finding = AuditFindingDB(
id=str(uuid.uuid4()),
finding_id=finding_id,
finding_type=finding_type,
iso_chapter=iso_chapter,
annex_a_control=annex_a_control,
title=title,
description=description,
objective_evidence=objective_evidence,
owner=owner,
auditor=auditor,
due_date=due_date,
internal_audit_id=internal_audit_id,
status=FindingStatusEnum.OPEN,
)
self.db.add(finding)
self.db.commit()
self.db.refresh(finding)
return finding
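The `FIND-<year>-NNN` sequence above is derived from a row *count*, which can reuse an ID if a finding is ever deleted (count drops, next ID collides with a survivor). A sketch of a max-suffix alternative, as a pure function over existing IDs (illustrative, not the repository's implementation):

```python
import re

def next_finding_id(existing_ids, year):
    """Next FIND-<year>-NNN, taken from the highest existing suffix rather
    than a row count, so deletions cannot cause ID reuse (illustrative)."""
    pat = re.compile(rf"FIND-{year}-(\d+)$")
    highest = max(
        (int(m.group(1)) for i in existing_ids if (m := pat.match(i))),
        default=0,
    )
    return f"FIND-{year}-{highest + 1:03d}"
```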
def get_by_id(self, finding_id: str) -> Optional[AuditFindingDB]:
"""Get finding by UUID or finding_id."""
return self.db.query(AuditFindingDB).filter(
(AuditFindingDB.id == finding_id) | (AuditFindingDB.finding_id == finding_id)
).first()
def get_all(
self,
finding_type: Optional[FindingTypeEnum] = None,
status: Optional[FindingStatusEnum] = None,
internal_audit_id: Optional[str] = None,
) -> List[AuditFindingDB]:
"""Get all findings with optional filters."""
query = self.db.query(AuditFindingDB)
if finding_type:
query = query.filter(AuditFindingDB.finding_type == finding_type)
if status:
query = query.filter(AuditFindingDB.status == status)
if internal_audit_id:
query = query.filter(AuditFindingDB.internal_audit_id == internal_audit_id)
return query.order_by(AuditFindingDB.identified_date.desc()).all()
def get_open_majors(self) -> List[AuditFindingDB]:
"""Get all open major findings (blocking certification)."""
return self.db.query(AuditFindingDB).filter(
AuditFindingDB.finding_type == FindingTypeEnum.MAJOR,
AuditFindingDB.status != FindingStatusEnum.CLOSED
).all()
def get_statistics(self) -> Dict[str, Any]:
"""Get finding statistics."""
findings = self.get_all()
return {
"total": len(findings),
"major": sum(1 for f in findings if f.finding_type == FindingTypeEnum.MAJOR),
"minor": sum(1 for f in findings if f.finding_type == FindingTypeEnum.MINOR),
"ofi": sum(1 for f in findings if f.finding_type == FindingTypeEnum.OFI),
"positive": sum(1 for f in findings if f.finding_type == FindingTypeEnum.POSITIVE),
"open": sum(1 for f in findings if f.status != FindingStatusEnum.CLOSED),
"closed": sum(1 for f in findings if f.status == FindingStatusEnum.CLOSED),
"blocking_certification": sum(
1 for f in findings
if f.finding_type == FindingTypeEnum.MAJOR and f.status != FindingStatusEnum.CLOSED
),
}
def close(
self,
finding_id: str,
closed_by: str,
closure_notes: str,
verification_method: Optional[str] = None,
verification_evidence: Optional[str] = None,
) -> Optional[AuditFindingDB]:
"""Close a finding after verification."""
finding = self.get_by_id(finding_id)
if not finding:
return None
finding.status = FindingStatusEnum.CLOSED
finding.closed_date = date.today()
finding.closed_by = closed_by
finding.closure_notes = closure_notes
finding.verification_method = verification_method
finding.verification_evidence = verification_evidence
finding.verified_by = closed_by
finding.verified_at = datetime.utcnow()
self.db.commit()
self.db.refresh(finding)
return finding
class CorrectiveActionRepository:
"""Repository for Corrective/Preventive Actions (CAPA)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
finding_id: str,
capa_type: CAPATypeEnum,
title: str,
description: str,
assigned_to: str,
planned_completion: date,
expected_outcome: Optional[str] = None,
effectiveness_criteria: Optional[str] = None,
) -> CorrectiveActionDB:
"""Create a new CAPA."""
# Generate CAPA ID
year = date.today().year
existing_count = self.db.query(CorrectiveActionDB).filter(
CorrectiveActionDB.capa_id.like(f"CAPA-{year}-%")
).count()
capa_id = f"CAPA-{year}-{existing_count + 1:03d}"
capa = CorrectiveActionDB(
id=str(uuid.uuid4()),
capa_id=capa_id,
finding_id=finding_id,
capa_type=capa_type,
title=title,
description=description,
expected_outcome=expected_outcome,
assigned_to=assigned_to,
planned_completion=planned_completion,
effectiveness_criteria=effectiveness_criteria,
status="planned",
)
self.db.add(capa)
# Update finding status
finding = self.db.query(AuditFindingDB).filter(AuditFindingDB.id == finding_id).first()
if finding:
finding.status = FindingStatusEnum.CORRECTIVE_ACTION_PENDING
self.db.commit()
self.db.refresh(capa)
return capa
def get_by_id(self, capa_id: str) -> Optional[CorrectiveActionDB]:
"""Get CAPA by UUID or capa_id."""
return self.db.query(CorrectiveActionDB).filter(
(CorrectiveActionDB.id == capa_id) | (CorrectiveActionDB.capa_id == capa_id)
).first()
def get_by_finding(self, finding_id: str) -> List[CorrectiveActionDB]:
"""Get all CAPAs for a finding."""
return self.db.query(CorrectiveActionDB).filter(
CorrectiveActionDB.finding_id == finding_id
).order_by(CorrectiveActionDB.planned_completion).all()
def verify(
self,
capa_id: str,
verified_by: str,
is_effective: bool,
effectiveness_notes: Optional[str] = None,
) -> Optional[CorrectiveActionDB]:
"""Verify a completed CAPA."""
capa = self.get_by_id(capa_id)
if not capa:
return None
capa.effectiveness_verified = is_effective
capa.effectiveness_verification_date = date.today()
capa.effectiveness_notes = effectiveness_notes
capa.status = "verified" if is_effective else "completed"
# If verified, check if all CAPAs for finding are verified
if is_effective:
finding = self.db.query(AuditFindingDB).filter(
AuditFindingDB.id == capa.finding_id
).first()
if finding:
unverified = self.db.query(CorrectiveActionDB).filter(
CorrectiveActionDB.finding_id == finding.id,
CorrectiveActionDB.id != capa.id,
CorrectiveActionDB.status != "verified"
).count()
if unverified == 0:
finding.status = FindingStatusEnum.VERIFICATION_PENDING
self.db.commit()
self.db.refresh(capa)
return capa
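The cascade in `verify` promotes the finding to `VERIFICATION_PENDING` only once no *other* CAPA for that finding remains unverified. That predicate can be expressed (and tested) on its own; a sketch over plain status strings (names are ours):

```python
def all_sibling_capas_verified(capa_statuses, just_verified_index):
    """True when every CAPA other than the one just verified is already
    'verified' -- the condition verify() checks before moving the finding
    to VERIFICATION_PENDING (illustrative)."""
    return all(
        status == "verified"
        for i, status in enumerate(capa_statuses)
        if i != just_verified_index
    )
```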
class ManagementReviewRepository:
"""Repository for Management Reviews (ISO 27001 Chapter 9.3)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
title: str,
review_date: date,
chairperson: str,
review_period_start: Optional[date] = None,
review_period_end: Optional[date] = None,
) -> ManagementReviewDB:
"""Create a new management review."""
# Generate review ID
year = review_date.year
quarter = (review_date.month - 1) // 3 + 1
review_id = f"MR-{year}-Q{quarter}"
# Check for duplicate
existing = self.db.query(ManagementReviewDB).filter(
ManagementReviewDB.review_id == review_id
).first()
if existing:
review_id = f"{review_id}-{str(uuid.uuid4())[:4]}"
review = ManagementReviewDB(
id=str(uuid.uuid4()),
review_id=review_id,
title=title,
review_date=review_date,
review_period_start=review_period_start,
review_period_end=review_period_end,
chairperson=chairperson,
status="draft",
)
self.db.add(review)
self.db.commit()
self.db.refresh(review)
return review
def get_by_id(self, review_id: str) -> Optional[ManagementReviewDB]:
"""Get review by UUID or review_id."""
return self.db.query(ManagementReviewDB).filter(
(ManagementReviewDB.id == review_id) | (ManagementReviewDB.review_id == review_id)
).first()
def get_latest_approved(self) -> Optional[ManagementReviewDB]:
"""Get the most recent approved management review."""
return self.db.query(ManagementReviewDB).filter(
ManagementReviewDB.status == "approved"
).order_by(ManagementReviewDB.review_date.desc()).first()
def approve(
self,
review_id: str,
approved_by: str,
next_review_date: date,
minutes_document_path: Optional[str] = None,
) -> Optional[ManagementReviewDB]:
"""Approve a management review."""
review = self.get_by_id(review_id)
if not review:
return None
review.status = "approved"
review.approved_by = approved_by
review.approved_at = datetime.utcnow()
review.next_review_date = next_review_date
review.minutes_document_path = minutes_document_path
self.db.commit()
self.db.refresh(review)
return review
class InternalAuditRepository:
"""Repository for Internal Audits (ISO 27001 Chapter 9.2)."""
def __init__(self, db: DBSession):
self.db = db
def create(
self,
title: str,
audit_type: str,
planned_date: date,
lead_auditor: str,
scope_description: Optional[str] = None,
iso_chapters_covered: Optional[List[str]] = None,
annex_a_controls_covered: Optional[List[str]] = None,
) -> InternalAuditDB:
"""Create a new internal audit."""
# Generate audit ID
year = planned_date.year
existing_count = self.db.query(InternalAuditDB).filter(
InternalAuditDB.audit_id.like(f"IA-{year}-%")
).count()
audit_id = f"IA-{year}-{existing_count + 1:03d}"
audit = InternalAuditDB(
id=str(uuid.uuid4()),
audit_id=audit_id,
title=title,
audit_type=audit_type,
scope_description=scope_description,
iso_chapters_covered=iso_chapters_covered,
annex_a_controls_covered=annex_a_controls_covered,
planned_date=planned_date,
lead_auditor=lead_auditor,
status="planned",
)
self.db.add(audit)
self.db.commit()
self.db.refresh(audit)
return audit
def get_by_id(self, audit_id: str) -> Optional[InternalAuditDB]:
"""Get audit by UUID or audit_id."""
return self.db.query(InternalAuditDB).filter(
(InternalAuditDB.id == audit_id) | (InternalAuditDB.audit_id == audit_id)
).first()
def get_latest_completed(self) -> Optional[InternalAuditDB]:
"""Get the most recent completed internal audit."""
return self.db.query(InternalAuditDB).filter(
InternalAuditDB.status == "completed"
).order_by(InternalAuditDB.actual_end_date.desc()).first()
def complete(
self,
audit_id: str,
audit_conclusion: str,
overall_assessment: str,
follow_up_audit_required: bool = False,
) -> Optional[InternalAuditDB]:
"""Complete an internal audit."""
audit = self.get_by_id(audit_id)
if not audit:
return None
audit.status = "completed"
audit.actual_end_date = date.today()
audit.report_date = date.today()
audit.audit_conclusion = audit_conclusion
audit.overall_assessment = overall_assessment
audit.follow_up_audit_required = follow_up_audit_required
self.db.commit()
self.db.refresh(audit)
return audit
class AuditTrailRepository:
"""Repository for Audit Trail entries."""
def __init__(self, db: DBSession):
self.db = db
def log(
self,
entity_type: str,
entity_id: str,
entity_name: str,
action: str,
performed_by: str,
field_changed: Optional[str] = None,
old_value: Optional[str] = None,
new_value: Optional[str] = None,
change_summary: Optional[str] = None,
) -> AuditTrailDB:
"""Log an audit trail entry."""
import hashlib
entry = AuditTrailDB(
id=str(uuid.uuid4()),
entity_type=entity_type,
entity_id=entity_id,
entity_name=entity_name,
action=action,
field_changed=field_changed,
old_value=old_value,
new_value=new_value,
change_summary=change_summary,
performed_by=performed_by,
performed_at=datetime.utcnow(),
checksum=hashlib.sha256(
f"{entity_type}|{entity_id}|{action}|{performed_by}".encode()
).hexdigest(),
)
self.db.add(entry)
self.db.commit()
self.db.refresh(entry)
return entry
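The checksum in `log` hashes only four fields (`entity_type|entity_id|action|performed_by`), so it ties an entry to *who did what to which entity* but does not cover `old_value`, `new_value`, or the timestamp. Recomputing it for verification is straightforward; a sketch (helper name is ours):

```python
import hashlib

def trail_checksum(entity_type, entity_id, action, performed_by):
    """Recompute the tamper-evidence checksum exactly as log() builds it:
    a SHA-256 over the pipe-joined four identity fields."""
    payload = f"{entity_type}|{entity_id}|{action}|{performed_by}".encode()
    return hashlib.sha256(payload).hexdigest()
```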
def get_by_entity(
self,
entity_type: str,
entity_id: str,
limit: int = 100,
) -> List[AuditTrailDB]:
"""Get audit trail for a specific entity."""
return self.db.query(AuditTrailDB).filter(
AuditTrailDB.entity_type == entity_type,
AuditTrailDB.entity_id == entity_id
).order_by(AuditTrailDB.performed_at.desc()).limit(limit).all()
def get_paginated(
self,
page: int = 1,
page_size: int = 50,
entity_type: Optional[str] = None,
entity_id: Optional[str] = None,
performed_by: Optional[str] = None,
action: Optional[str] = None,
) -> Tuple[List[AuditTrailDB], int]:
"""Get paginated audit trail with filters."""
query = self.db.query(AuditTrailDB)
if entity_type:
query = query.filter(AuditTrailDB.entity_type == entity_type)
if entity_id:
query = query.filter(AuditTrailDB.entity_id == entity_id)
if performed_by:
query = query.filter(AuditTrailDB.performed_by == performed_by)
if action:
query = query.filter(AuditTrailDB.action == action)
total = query.count()
entries = query.order_by(AuditTrailDB.performed_at.desc()).offset(
(page - 1) * page_size
).limit(page_size).all()
return entries, total
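`get_paginated` uses the standard 1-based offset/limit window; the arithmetic can be checked in isolation (the helper and its return shape are ours, for illustration):

```python
def page_slice(total_items: int, page: int, page_size: int):
    """Offset/limit window matching get_paginated(): page is 1-based.
    Returns (offset, number of rows the page will actually yield)."""
    offset = (page - 1) * page_size
    return offset, min(page_size, max(0, total_items - offset))
```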
class ISMSReadinessCheckRepository:
"""Repository for ISMS Readiness Check results."""
def __init__(self, db: DBSession):
self.db = db
def save(self, check: ISMSReadinessCheckDB) -> ISMSReadinessCheckDB:
"""Save a readiness check result."""
self.db.add(check)
self.db.commit()
self.db.refresh(check)
return check
def get_latest(self) -> Optional[ISMSReadinessCheckDB]:
"""Get the most recent readiness check."""
return self.db.query(ISMSReadinessCheckDB).order_by(
ISMSReadinessCheckDB.check_date.desc()
).first()
def get_history(self, limit: int = 10) -> List[ISMSReadinessCheckDB]:
"""Get readiness check history."""
return self.db.query(ISMSReadinessCheckDB).order_by(
ISMSReadinessCheckDB.check_date.desc()
).limit(limit).all()

File diff suppressed because it is too large.

View File

@@ -0,0 +1,134 @@
"""
Regulation & Requirement models — extracted from compliance/db/models.py.
The foundational compliance aggregate: regulations (GDPR, AI Act, CRA, ...) and
the individual requirements they contain. Re-exported from
``compliance.db.models`` for backwards compatibility.
DO NOT change __tablename__, column names, or relationship strings.
"""
import uuid
import enum
from datetime import datetime, timezone
from sqlalchemy import (
Column, String, Text, Integer, Boolean, DateTime, Date,
ForeignKey, Enum, JSON, Index,
)
from sqlalchemy.orm import relationship
from classroom_engine.database import Base
# ============================================================================
# ENUMS
# ============================================================================
class RegulationTypeEnum(str, enum.Enum):
"""Type of regulation/standard."""
EU_REGULATION = "eu_regulation" # Directly applicable EU law
EU_DIRECTIVE = "eu_directive" # Requires national implementation
DE_LAW = "de_law" # German national law
BSI_STANDARD = "bsi_standard" # BSI technical guidelines
INDUSTRY_STANDARD = "industry_standard" # ISO, OWASP, etc.
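Mixing `str` into `Enum`, as `RegulationTypeEnum` does, makes members compare equal to their raw string values and serialize as plain strings, which is what lets column filters and JSON payloads accept either form. A minimal stdlib sketch of the same pattern (class trimmed to two of the values above):

```python
import enum

class RegulationType(str, enum.Enum):
    # Same str+Enum mixin as RegulationTypeEnum; members double as strings.
    EU_REGULATION = "eu_regulation"
    DE_LAW = "de_law"
```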
# ============================================================================
# MODELS
# ============================================================================


class RegulationDB(Base):
    """
    Represents a regulation, directive, or standard.
    Examples: GDPR, AI Act, CRA, BSI-TR-03161
    """
    __tablename__ = 'compliance_regulations'

    id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    code = Column(String(20), unique=True, nullable=False, index=True)  # e.g., "GDPR", "AIACT"
    name = Column(String(200), nullable=False)  # Short name
    full_name = Column(Text)  # Full official name
    regulation_type = Column(Enum(RegulationTypeEnum), nullable=False)
    source_url = Column(String(500))  # EUR-Lex URL or similar
    local_pdf_path = Column(String(500))  # Local PDF if available
    effective_date = Column(Date)  # When it came into force
    description = Column(Text)  # Brief description
    is_active = Column(Boolean, default=True)

    # Timestamps
    created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
    updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))

    # Relationships
    requirements = relationship("RequirementDB", back_populates="regulation", cascade="all, delete-orphan")

    def __repr__(self):
        return f"<Regulation {self.code}: {self.name}>"


class RequirementDB(Base):
    """
    Individual requirement from a regulation.
    Examples: GDPR Art. 32(1)(a), AI Act Art. 9, BSI-TR O.Auth_1
    """
    __tablename__ = 'compliance_requirements'

    id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    regulation_id = Column(String(36), ForeignKey('compliance_regulations.id'), nullable=False, index=True)

    # Requirement identification
    article = Column(String(50), nullable=False)  # e.g., "Art. 32", "O.Auth_1"
    paragraph = Column(String(20))  # e.g., "(1)(a)"
    requirement_id_external = Column(String(50))  # External ID (e.g., BSI ID)
    title = Column(String(300), nullable=False)  # Requirement title
    description = Column(Text)  # Brief description
    requirement_text = Column(Text)  # Original text from regulation

    # Breakpilot-specific interpretation and implementation
    breakpilot_interpretation = Column(Text)  # How Breakpilot interprets this
    implementation_status = Column(String(30), default="not_started")  # not_started, in_progress, implemented, verified
    implementation_details = Column(Text)  # How we implemented it
    code_references = Column(JSON)  # List of {"file": "...", "line": ..., "description": "..."}
    documentation_links = Column(JSON)  # List of internal doc links

    # Evidence for auditors
    evidence_description = Column(Text)  # What evidence proves compliance
    evidence_artifacts = Column(JSON)  # List of {"type": "...", "path": "...", "description": "..."}

    # Audit-specific fields
    auditor_notes = Column(Text)  # Notes from auditor review
    audit_status = Column(String(30), default="pending")  # pending, in_review, approved, rejected
    last_audit_date = Column(DateTime)
    last_auditor = Column(String(100))
    is_applicable = Column(Boolean, default=True)  # Applicable to Breakpilot?
    applicability_reason = Column(Text)  # Why/why not applicable
    priority = Column(Integer, default=2)  # 1=Critical, 2=High, 3=Medium

    # Source document reference
    source_page = Column(Integer)  # Page number in source document
    source_section = Column(String(100))  # Section in source document

    # Timestamps
    created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
    updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))

    # Relationships
    regulation = relationship("RegulationDB", back_populates="requirements")
    control_mappings = relationship("ControlMappingDB", back_populates="requirement", cascade="all, delete-orphan")

    __table_args__ = (
        Index('ix_requirement_regulation_article', 'regulation_id', 'article'),
        Index('ix_requirement_audit_status', 'audit_status'),
        Index('ix_requirement_impl_status', 'implementation_status'),
    )

    def __repr__(self):
        return f"<Requirement {self.article} {self.paragraph or ''}>"


__all__ = ["RegulationTypeEnum", "RegulationDB", "RequirementDB"]
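A detail worth noting in the column definitions above: every UUID and timestamp default is wrapped in a `lambda`, so a fresh value is generated per inserted row. A minimal stdlib sketch (no SQLAlchemy; names are illustrative) of why the callable matters:

```python
import uuid

# A callable default is evaluated once per call (i.e. per inserted row);
# a plain value would be computed once at import time and shared by every row.
def make_id() -> str:
    return str(uuid.uuid4())      # fresh ID on each call

frozen_id = str(uuid.uuid4())     # computed once — every "row" would share it

a, b = make_id(), make_id()
assert a != b                     # distinct per call
assert len(frozen_id) == 36       # canonical 36-char UUID string
```

The same reasoning applies to `default=lambda: datetime.now(timezone.utc)`: passing `datetime.now(timezone.utc)` directly would freeze the timestamp at import time.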


@@ -0,0 +1,268 @@
"""
Compliance repositories — extracted from compliance/db/repository.py.

Phase 1 Step 5: the monolithic repository module is decomposed per
aggregate. Every repository class is re-exported from
``compliance.db.repository`` for backwards compatibility.
"""
import uuid
from datetime import datetime, date, timezone
from typing import List, Optional, Dict, Any, Tuple

from sqlalchemy.orm import Session as DBSession, selectinload, joinedload
from sqlalchemy import func, and_, or_

from compliance.db.models import (
    RegulationDB, RequirementDB, ControlDB, ControlMappingDB,
    EvidenceDB, RiskDB, AuditExportDB,
    AuditSessionDB, AuditSignOffDB, AuditResultEnum, AuditSessionStatusEnum,
    RegulationTypeEnum, ControlDomainEnum, ControlStatusEnum,
    RiskLevelEnum, EvidenceStatusEnum, ExportStatusEnum,
    ServiceModuleDB, ModuleRegulationMappingDB,
)


class RegulationRepository:
    """Repository for regulations/standards."""

    def __init__(self, db: DBSession):
        self.db = db

    def create(
        self,
        code: str,
        name: str,
        regulation_type: RegulationTypeEnum,
        full_name: Optional[str] = None,
        source_url: Optional[str] = None,
        local_pdf_path: Optional[str] = None,
        effective_date: Optional[date] = None,
        description: Optional[str] = None,
    ) -> RegulationDB:
        """Create a new regulation."""
        regulation = RegulationDB(
            id=str(uuid.uuid4()),
            code=code,
            name=name,
            full_name=full_name,
            regulation_type=regulation_type,
            source_url=source_url,
            local_pdf_path=local_pdf_path,
            effective_date=effective_date,
            description=description,
        )
        self.db.add(regulation)
        self.db.commit()
        self.db.refresh(regulation)
        return regulation

    def get_by_id(self, regulation_id: str) -> Optional[RegulationDB]:
        """Get regulation by ID."""
        return self.db.query(RegulationDB).filter(RegulationDB.id == regulation_id).first()

    def get_by_code(self, code: str) -> Optional[RegulationDB]:
        """Get regulation by code (e.g., 'GDPR')."""
        return self.db.query(RegulationDB).filter(RegulationDB.code == code).first()

    def get_all(
        self,
        regulation_type: Optional[RegulationTypeEnum] = None,
        is_active: Optional[bool] = True
    ) -> List[RegulationDB]:
        """Get all regulations with optional filters."""
        query = self.db.query(RegulationDB)
        if regulation_type:
            query = query.filter(RegulationDB.regulation_type == regulation_type)
        if is_active is not None:
            query = query.filter(RegulationDB.is_active == is_active)
        return query.order_by(RegulationDB.code).all()

    def update(self, regulation_id: str, **kwargs) -> Optional[RegulationDB]:
        """Update a regulation."""
        regulation = self.get_by_id(regulation_id)
        if not regulation:
            return None
        for key, value in kwargs.items():
            if hasattr(regulation, key):
                setattr(regulation, key, value)
        regulation.updated_at = datetime.now(timezone.utc)
        self.db.commit()
        self.db.refresh(regulation)
        return regulation

    def delete(self, regulation_id: str) -> bool:
        """Delete a regulation."""
        regulation = self.get_by_id(regulation_id)
        if not regulation:
            return False
        self.db.delete(regulation)
        self.db.commit()
        return True

    def get_active(self) -> List[RegulationDB]:
        """Get all active regulations."""
        return self.get_all(is_active=True)

    def count(self) -> int:
        """Count all regulations."""
        return self.db.query(func.count(RegulationDB.id)).scalar() or 0


class RequirementRepository:
    """Repository for requirements."""

    def __init__(self, db: DBSession):
        self.db = db

    def create(
        self,
        regulation_id: str,
        article: str,
        title: str,
        paragraph: Optional[str] = None,
        description: Optional[str] = None,
        requirement_text: Optional[str] = None,
        breakpilot_interpretation: Optional[str] = None,
        is_applicable: bool = True,
        priority: int = 2,
    ) -> RequirementDB:
        """Create a new requirement."""
        requirement = RequirementDB(
            id=str(uuid.uuid4()),
            regulation_id=regulation_id,
            article=article,
            paragraph=paragraph,
            title=title,
            description=description,
            requirement_text=requirement_text,
            breakpilot_interpretation=breakpilot_interpretation,
            is_applicable=is_applicable,
            priority=priority,
        )
        self.db.add(requirement)
        self.db.commit()
        self.db.refresh(requirement)
        return requirement

    def get_by_id(self, requirement_id: str) -> Optional[RequirementDB]:
        """Get requirement by ID with eager-loaded relationships."""
        return (
            self.db.query(RequirementDB)
            .options(
                selectinload(RequirementDB.control_mappings).selectinload(ControlMappingDB.control),
                joinedload(RequirementDB.regulation)
            )
            .filter(RequirementDB.id == requirement_id)
            .first()
        )

    def get_by_regulation(
        self,
        regulation_id: str,
        is_applicable: Optional[bool] = None
    ) -> List[RequirementDB]:
        """Get all requirements for a regulation with eager-loaded controls."""
        query = (
            self.db.query(RequirementDB)
            .options(
                selectinload(RequirementDB.control_mappings).selectinload(ControlMappingDB.control),
                joinedload(RequirementDB.regulation)
            )
            .filter(RequirementDB.regulation_id == regulation_id)
        )
        if is_applicable is not None:
            query = query.filter(RequirementDB.is_applicable == is_applicable)
        return query.order_by(RequirementDB.article, RequirementDB.paragraph).all()

    def get_by_regulation_code(self, code: str) -> List[RequirementDB]:
        """Get requirements by regulation code with eager-loaded relationships."""
        return (
            self.db.query(RequirementDB)
            .options(
                selectinload(RequirementDB.control_mappings).selectinload(ControlMappingDB.control),
                joinedload(RequirementDB.regulation)
            )
            .join(RegulationDB)
            .filter(RegulationDB.code == code)
            .order_by(RequirementDB.article, RequirementDB.paragraph)
            .all()
        )

    def get_all(self, is_applicable: Optional[bool] = None) -> List[RequirementDB]:
        """Get all requirements with optional filter and eager-loading."""
        query = (
            self.db.query(RequirementDB)
            .options(
                selectinload(RequirementDB.control_mappings).selectinload(ControlMappingDB.control),
                joinedload(RequirementDB.regulation)
            )
        )
        if is_applicable is not None:
            query = query.filter(RequirementDB.is_applicable == is_applicable)
        return query.order_by(RequirementDB.article, RequirementDB.paragraph).all()

    def get_paginated(
        self,
        page: int = 1,
        page_size: int = 50,
        regulation_code: Optional[str] = None,
        status: Optional[str] = None,
        is_applicable: Optional[bool] = None,
        search: Optional[str] = None,
    ) -> Tuple[List[RequirementDB], int]:
        """
        Get paginated requirements with eager-loaded relationships.
        Returns tuple of (items, total_count).
        """
        query = (
            self.db.query(RequirementDB)
            .options(
                selectinload(RequirementDB.control_mappings).selectinload(ControlMappingDB.control),
                joinedload(RequirementDB.regulation)
            )
        )
        # Filters
        if regulation_code:
            query = query.join(RegulationDB).filter(RegulationDB.code == regulation_code)
        if status:
            query = query.filter(RequirementDB.implementation_status == status)
        if is_applicable is not None:
            query = query.filter(RequirementDB.is_applicable == is_applicable)
        if search:
            search_term = f"%{search}%"
            query = query.filter(
                or_(
                    RequirementDB.title.ilike(search_term),
                    RequirementDB.description.ilike(search_term),
                    RequirementDB.article.ilike(search_term),
                )
            )
        # Count before pagination
        total = query.count()
        # Apply pagination and ordering
        items = (
            query
            .order_by(RequirementDB.priority.desc(), RequirementDB.article, RequirementDB.paragraph)
            .offset((page - 1) * page_size)
            .limit(page_size)
            .all()
        )
        return items, total

    def delete(self, requirement_id: str) -> bool:
        """Delete a requirement."""
        requirement = self.db.query(RequirementDB).filter(RequirementDB.id == requirement_id).first()
        if not requirement:
            return False
        self.db.delete(requirement)
        self.db.commit()
        return True

    def count(self) -> int:
        """Count all requirements."""
        return self.db.query(func.count(RequirementDB.id)).scalar() or 0

File diff suppressed because it is too large


@@ -0,0 +1,148 @@
"""
Compliance repositories — extracted from compliance/db/repository.py.

Phase 1 Step 5: the monolithic repository module is decomposed per
aggregate. Every repository class is re-exported from
``compliance.db.repository`` for backwards compatibility.
"""
import uuid
from datetime import datetime, date, timezone
from typing import List, Optional, Dict, Any, Tuple

from sqlalchemy.orm import Session as DBSession, selectinload, joinedload
from sqlalchemy import func, and_, or_

from compliance.db.models import (
    RegulationDB, RequirementDB, ControlDB, ControlMappingDB,
    EvidenceDB, RiskDB, AuditExportDB,
    AuditSessionDB, AuditSignOffDB, AuditResultEnum, AuditSessionStatusEnum,
    RegulationTypeEnum, ControlDomainEnum, ControlStatusEnum,
    RiskLevelEnum, EvidenceStatusEnum, ExportStatusEnum,
    ServiceModuleDB, ModuleRegulationMappingDB,
)


class RiskRepository:
    """Repository for risks."""

    def __init__(self, db: DBSession):
        self.db = db

    def create(
        self,
        risk_id: str,
        title: str,
        category: str,
        likelihood: int,
        impact: int,
        description: Optional[str] = None,
        mitigating_controls: Optional[List[str]] = None,
        owner: Optional[str] = None,
        treatment_plan: Optional[str] = None,
    ) -> RiskDB:
        """Create a risk."""
        inherent_risk = RiskDB.calculate_risk_level(likelihood, impact)
        risk = RiskDB(
            id=str(uuid.uuid4()),
            risk_id=risk_id,
            title=title,
            description=description,
            category=category,
            likelihood=likelihood,
            impact=impact,
            inherent_risk=inherent_risk,
            mitigating_controls=mitigating_controls or [],
            owner=owner,
            treatment_plan=treatment_plan,
        )
        self.db.add(risk)
        self.db.commit()
        self.db.refresh(risk)
        return risk

    def get_by_id(self, risk_uuid: str) -> Optional[RiskDB]:
        """Get risk by UUID."""
        return self.db.query(RiskDB).filter(RiskDB.id == risk_uuid).first()

    def get_by_risk_id(self, risk_id: str) -> Optional[RiskDB]:
        """Get risk by risk_id (e.g., 'RISK-001')."""
        return self.db.query(RiskDB).filter(RiskDB.risk_id == risk_id).first()

    def get_all(
        self,
        category: Optional[str] = None,
        status: Optional[str] = None,
        min_risk_level: Optional[RiskLevelEnum] = None,
    ) -> List[RiskDB]:
        """Get all risks with filters."""
        query = self.db.query(RiskDB)
        if category:
            query = query.filter(RiskDB.category == category)
        if status:
            query = query.filter(RiskDB.status == status)
        if min_risk_level:
            risk_order = {
                RiskLevelEnum.LOW: 1,
                RiskLevelEnum.MEDIUM: 2,
                RiskLevelEnum.HIGH: 3,
                RiskLevelEnum.CRITICAL: 4,
            }
            min_order = risk_order.get(min_risk_level, 1)
            query = query.filter(
                RiskDB.inherent_risk.in_(
                    [k for k, v in risk_order.items() if v >= min_order]
                )
            )
        return query.order_by(RiskDB.risk_id).all()

    def update(self, risk_id: str, **kwargs) -> Optional[RiskDB]:
        """Update a risk."""
        risk = self.get_by_risk_id(risk_id)
        if not risk:
            return None
        for key, value in kwargs.items():
            if hasattr(risk, key):
                setattr(risk, key, value)
        # Recalculate risk levels if likelihood/impact changed
        if 'likelihood' in kwargs or 'impact' in kwargs:
            risk.inherent_risk = RiskDB.calculate_risk_level(risk.likelihood, risk.impact)
        if 'residual_likelihood' in kwargs or 'residual_impact' in kwargs:
            if risk.residual_likelihood and risk.residual_impact:
                risk.residual_risk = RiskDB.calculate_risk_level(
                    risk.residual_likelihood, risk.residual_impact
                )
        risk.updated_at = datetime.now(timezone.utc)
        self.db.commit()
        self.db.refresh(risk)
        return risk

    def get_matrix_data(self) -> Dict[str, Any]:
        """Get data for risk matrix visualization."""
        risks = self.get_all()
        matrix = {}
        for risk in risks:
            key = f"{risk.likelihood}_{risk.impact}"
            if key not in matrix:
                matrix[key] = []
            matrix[key].append({
                "risk_id": risk.risk_id,
                "title": risk.title,
                "inherent_risk": risk.inherent_risk.value if risk.inherent_risk else None,
            })
        return {
            "matrix": matrix,
            "total_risks": len(risks),
            "by_level": {
                "critical": len([r for r in risks if r.inherent_risk == RiskLevelEnum.CRITICAL]),
                "high": len([r for r in risks if r.inherent_risk == RiskLevelEnum.HIGH]),
                "medium": len([r for r in risks if r.inherent_risk == RiskLevelEnum.MEDIUM]),
                "low": len([r for r in risks if r.inherent_risk == RiskLevelEnum.LOW]),
            }
        }
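Because `inherent_risk` is stored as an enum rather than a number, `get_all` converts the `min_risk_level` threshold into an `IN (...)` membership set via an ordering dict. The same trick in plain Python (standalone sketch; the enum mirrors `RiskLevelEnum`):

```python
import enum

class RiskLevel(str, enum.Enum):      # stand-in for RiskLevelEnum
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

RISK_ORDER = {RiskLevel.LOW: 1, RiskLevel.MEDIUM: 2, RiskLevel.HIGH: 3, RiskLevel.CRITICAL: 4}

def at_least(min_level: RiskLevel) -> set:
    """Levels that satisfy the threshold — the set the repository hands to .in_()."""
    floor = RISK_ORDER[min_level]
    return {lvl for lvl, rank in RISK_ORDER.items() if rank >= floor}

high_or_worse = at_least(RiskLevel.HIGH)      # {HIGH, CRITICAL}
```

Mapping each enum member to an integer rank keeps the ordering logic in one place instead of scattering string comparisons through the query code.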


@@ -0,0 +1,176 @@
"""
Service Module Registry models — extracted from compliance/db/models.py.

Sprint 3: registry of all Breakpilot services/modules for compliance mapping,
per-module regulation applicability, and per-module risk aggregation.
Re-exported from ``compliance.db.models`` for backwards compatibility.
DO NOT change __tablename__, column names, or relationship strings.
"""
import uuid
import enum
from datetime import datetime, timezone

from sqlalchemy import (
    Column, String, Text, Integer, Boolean, DateTime,
    ForeignKey, Enum, JSON, Index, Float,
)
from sqlalchemy.orm import relationship

from classroom_engine.database import Base

# RiskLevelEnum is re-used across aggregates; sourced here from control_models.
from compliance.db.control_models import RiskLevelEnum  # noqa: F401

# ============================================================================
# ENUMS
# ============================================================================


class ServiceTypeEnum(str, enum.Enum):
    """Type of Breakpilot service/module."""
    BACKEND = "backend"                # API/Backend services
    DATABASE = "database"              # Data storage
    AI = "ai"                          # AI/ML services
    COMMUNICATION = "communication"    # Chat/Video/Messaging
    STORAGE = "storage"                # File/Object storage
    INFRASTRUCTURE = "infrastructure"  # Load balancer, reverse proxy
    MONITORING = "monitoring"          # Logging, metrics
    SECURITY = "security"              # Auth, encryption, secrets


class RelevanceLevelEnum(str, enum.Enum):
    """Relevance level of a regulation to a service."""
    CRITICAL = "critical"  # Non-compliance = shutdown
    HIGH = "high"          # Major risk
    MEDIUM = "medium"      # Moderate risk
    LOW = "low"            # Minor risk


# ============================================================================
# MODELS
# ============================================================================


class ServiceModuleDB(Base):
    """
    Registry of all Breakpilot services/modules for compliance mapping.
    Tracks which regulations apply to which services, enabling:
    - Service-specific compliance views
    - Aggregated risk per service
    - Gap analysis by module
    """
    __tablename__ = 'compliance_service_modules'

    id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    name = Column(String(100), unique=True, nullable=False, index=True)  # e.g., "consent-service"
    display_name = Column(String(200), nullable=False)  # e.g., "Go Consent Service"
    description = Column(Text)

    # Technical details
    service_type = Column(Enum(ServiceTypeEnum), nullable=False)
    port = Column(Integer)  # Primary port (if applicable)
    technology_stack = Column(JSON)  # e.g., ["Go", "Gin", "PostgreSQL"]
    repository_path = Column(String(500))  # e.g., "/consent-service"
    docker_image = Column(String(200))  # e.g., "breakpilot-pwa-consent-service"

    # Data categories handled
    data_categories = Column(JSON)  # e.g., ["personal_data", "consent_records"]
    processes_pii = Column(Boolean, default=False)  # Handles personally identifiable info?
    processes_health_data = Column(Boolean, default=False)  # Handles special category health data?
    ai_components = Column(Boolean, default=False)  # Contains AI/ML components?

    # Status
    is_active = Column(Boolean, default=True)
    criticality = Column(String(20), default="medium")  # "critical", "high", "medium", "low"

    # Compliance aggregation
    compliance_score = Column(Float)  # Calculated score 0-100
    last_compliance_check = Column(DateTime)

    # Owner
    owner_team = Column(String(100))  # e.g., "Backend Team"
    owner_contact = Column(String(200))  # e.g., "backend@breakpilot.app"

    # Timestamps
    created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
    updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))

    # Relationships
    regulation_mappings = relationship("ModuleRegulationMappingDB", back_populates="module", cascade="all, delete-orphan")
    module_risks = relationship("ModuleRiskDB", back_populates="module", cascade="all, delete-orphan")

    __table_args__ = (
        Index('ix_module_type_active', 'service_type', 'is_active'),
    )

    def __repr__(self):
        return f"<ServiceModule {self.name}: {self.display_name}>"


class ModuleRegulationMappingDB(Base):
    """
    Maps services to applicable regulations with relevance level.
    Enables filtering: "Show all GDPR requirements for consent-service"
    """
    __tablename__ = 'compliance_module_regulations'

    id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    module_id = Column(String(36), ForeignKey('compliance_service_modules.id'), nullable=False, index=True)
    regulation_id = Column(String(36), ForeignKey('compliance_regulations.id'), nullable=False, index=True)
    relevance_level = Column(Enum(RelevanceLevelEnum), nullable=False, default=RelevanceLevelEnum.MEDIUM)
    notes = Column(Text)  # Why this regulation applies
    applicable_articles = Column(JSON)  # List of specific articles that apply

    # Timestamps
    created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
    updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))

    # Relationships
    module = relationship("ServiceModuleDB", back_populates="regulation_mappings")
    regulation = relationship("RegulationDB")

    __table_args__ = (
        Index('ix_module_regulation', 'module_id', 'regulation_id', unique=True),
    )


class ModuleRiskDB(Base):
    """
    Service-specific risks aggregated from requirements and controls.
    """
    __tablename__ = 'compliance_module_risks'

    id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    module_id = Column(String(36), ForeignKey('compliance_service_modules.id'), nullable=False, index=True)
    risk_id = Column(String(36), ForeignKey('compliance_risks.id'), nullable=False, index=True)

    # Module-specific assessment
    module_likelihood = Column(Integer)  # 1-5, may differ from global
    module_impact = Column(Integer)  # 1-5, may differ from global
    module_risk_level = Column(Enum(RiskLevelEnum))
    assessment_notes = Column(Text)  # Module-specific notes

    # Timestamps
    created_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))
    updated_at = Column(DateTime, default=lambda: datetime.now(timezone.utc), onupdate=lambda: datetime.now(timezone.utc))

    # Relationships
    module = relationship("ServiceModuleDB", back_populates="module_risks")
    risk = relationship("RiskDB")

    __table_args__ = (
        Index('ix_module_risk', 'module_id', 'risk_id', unique=True),
    )


__all__ = [
    "ServiceTypeEnum",
    "RelevanceLevelEnum",
    "ServiceModuleDB",
    "ModuleRegulationMappingDB",
    "ModuleRiskDB",
]
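The enums above mix `str` into `enum.Enum`, which is what lets raw strings from JSON payloads round-trip cleanly: members compare equal to their plain-string values, serialize as them, and unknown values fail loudly at the boundary. A quick standalone illustration of the pattern:

```python
import enum

class ServiceType(str, enum.Enum):   # same str-mixin pattern as ServiceTypeEnum
    BACKEND = "backend"
    SECURITY = "security"

# A str-mixin member IS a str, so values arriving in API payloads match directly...
assert ServiceType.BACKEND == "backend"
# ...lookup by value returns the canonical member...
assert ServiceType("security") is ServiceType.SECURITY
# ...and an unknown value raises ValueError instead of slipping through.
try:
    ServiceType("frontend")
except ValueError:
    rejected = True
```

This is why repository code like `ServiceTypeEnum(service_type)` can accept a plain string from the API layer and still get validation for free.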


@@ -0,0 +1,247 @@
"""
Compliance repositories — extracted from compliance/db/repository.py.

Phase 1 Step 5: the monolithic repository module is decomposed per
aggregate. Every repository class is re-exported from
``compliance.db.repository`` for backwards compatibility.
"""
import uuid
from datetime import datetime, date, timezone
from typing import List, Optional, Dict, Any, Tuple

from sqlalchemy.orm import Session as DBSession, selectinload, joinedload
from sqlalchemy import func, and_, or_

from compliance.db.models import (
    RegulationDB, RequirementDB, ControlDB, ControlMappingDB,
    EvidenceDB, RiskDB, AuditExportDB,
    AuditSessionDB, AuditSignOffDB, AuditResultEnum, AuditSessionStatusEnum,
    RegulationTypeEnum, ControlDomainEnum, ControlStatusEnum,
    RiskLevelEnum, EvidenceStatusEnum, ExportStatusEnum,
    ServiceModuleDB, ModuleRegulationMappingDB,
)


class ServiceModuleRepository:
    """Repository for service modules (Sprint 3)."""

    def __init__(self, db: DBSession):
        self.db = db

    def create(
        self,
        name: str,
        display_name: str,
        service_type: str,
        description: Optional[str] = None,
        port: Optional[int] = None,
        technology_stack: Optional[List[str]] = None,
        repository_path: Optional[str] = None,
        docker_image: Optional[str] = None,
        data_categories: Optional[List[str]] = None,
        processes_pii: bool = False,
        processes_health_data: bool = False,
        ai_components: bool = False,
        criticality: str = "medium",
        owner_team: Optional[str] = None,
        owner_contact: Optional[str] = None,
    ) -> "ServiceModuleDB":
        """Create a service module."""
        from .models import ServiceModuleDB, ServiceTypeEnum
        module = ServiceModuleDB(
            id=str(uuid.uuid4()),
            name=name,
            display_name=display_name,
            description=description,
            service_type=ServiceTypeEnum(service_type),
            port=port,
            technology_stack=technology_stack or [],
            repository_path=repository_path,
            docker_image=docker_image,
            data_categories=data_categories or [],
            processes_pii=processes_pii,
            processes_health_data=processes_health_data,
            ai_components=ai_components,
            criticality=criticality,
            owner_team=owner_team,
            owner_contact=owner_contact,
        )
        self.db.add(module)
        self.db.commit()
        self.db.refresh(module)
        return module

    def get_by_id(self, module_id: str) -> Optional["ServiceModuleDB"]:
        """Get module by ID."""
        from .models import ServiceModuleDB
        return self.db.query(ServiceModuleDB).filter(ServiceModuleDB.id == module_id).first()

    def get_by_name(self, name: str) -> Optional["ServiceModuleDB"]:
        """Get module by name."""
        from .models import ServiceModuleDB
        return self.db.query(ServiceModuleDB).filter(ServiceModuleDB.name == name).first()

    def get_all(
        self,
        service_type: Optional[str] = None,
        criticality: Optional[str] = None,
        processes_pii: Optional[bool] = None,
        ai_components: Optional[bool] = None,
    ) -> List["ServiceModuleDB"]:
        """Get all modules with filters."""
        from .models import ServiceModuleDB, ServiceTypeEnum
        query = self.db.query(ServiceModuleDB).filter(ServiceModuleDB.is_active)
        if service_type:
            query = query.filter(ServiceModuleDB.service_type == ServiceTypeEnum(service_type))
        if criticality:
            query = query.filter(ServiceModuleDB.criticality == criticality)
        if processes_pii is not None:
            query = query.filter(ServiceModuleDB.processes_pii == processes_pii)
        if ai_components is not None:
            query = query.filter(ServiceModuleDB.ai_components == ai_components)
        return query.order_by(ServiceModuleDB.name).all()

    def get_with_regulations(self, module_id: str) -> Optional["ServiceModuleDB"]:
        """Get module with regulation mappings loaded."""
        from .models import ServiceModuleDB, ModuleRegulationMappingDB
        from sqlalchemy.orm import selectinload
        return (
            self.db.query(ServiceModuleDB)
            .options(
                selectinload(ServiceModuleDB.regulation_mappings)
                .selectinload(ModuleRegulationMappingDB.regulation)
            )
            .filter(ServiceModuleDB.id == module_id)
            .first()
        )

    def add_regulation_mapping(
        self,
        module_id: str,
        regulation_id: str,
        relevance_level: str = "medium",
        notes: Optional[str] = None,
        applicable_articles: Optional[List[str]] = None,
    ) -> "ModuleRegulationMappingDB":
        """Add a regulation mapping to a module."""
        from .models import ModuleRegulationMappingDB, RelevanceLevelEnum
        mapping = ModuleRegulationMappingDB(
            id=str(uuid.uuid4()),
            module_id=module_id,
            regulation_id=regulation_id,
            relevance_level=RelevanceLevelEnum(relevance_level),
            notes=notes,
            applicable_articles=applicable_articles,
        )
        self.db.add(mapping)
        self.db.commit()
        self.db.refresh(mapping)
        return mapping

    def get_overview(self) -> Dict[str, Any]:
        """Get overview statistics for all modules."""
        from .models import ModuleRegulationMappingDB
        modules = self.get_all()
        total = len(modules)
        by_type = {}
        by_criticality = {}
        pii_count = 0
        ai_count = 0
        for m in modules:
            type_key = m.service_type.value if m.service_type else "unknown"
            by_type[type_key] = by_type.get(type_key, 0) + 1
            by_criticality[m.criticality] = by_criticality.get(m.criticality, 0) + 1
            if m.processes_pii:
                pii_count += 1
            if m.ai_components:
                ai_count += 1
        # Get regulation coverage
        regulation_coverage = {}
        mappings = self.db.query(ModuleRegulationMappingDB).all()
        for mapping in mappings:
            reg = mapping.regulation
            if reg:
                code = reg.code
                regulation_coverage[code] = regulation_coverage.get(code, 0) + 1
        # Calculate average compliance score.
        # Use explicit `is not None` checks: a score of 0.0 is a valid
        # (fully non-compliant) value and must not be dropped as falsy.
        scores = [m.compliance_score for m in modules if m.compliance_score is not None]
        avg_score = sum(scores) / len(scores) if scores else None
        return {
            "total_modules": total,
            "modules_by_type": by_type,
            "modules_by_criticality": by_criticality,
            "modules_processing_pii": pii_count,
            "modules_with_ai": ai_count,
            "average_compliance_score": round(avg_score, 1) if avg_score is not None else None,
            "regulations_coverage": regulation_coverage,
        }

    def seed_from_data(self, services_data: List[Dict[str, Any]], force: bool = False) -> Dict[str, int]:
        """Seed modules from service_modules.py data."""
        modules_created = 0
        mappings_created = 0
        for svc in services_data:
            # Check if module exists
            existing = self.get_by_name(svc["name"])
            if existing and not force:
                continue
            if existing and force:
                # Delete existing module (cascades to mappings)
                self.db.delete(existing)
                self.db.commit()
            # Create module
            module = self.create(
                name=svc["name"],
                display_name=svc["display_name"],
                description=svc.get("description"),
                service_type=svc["service_type"],
                port=svc.get("port"),
                technology_stack=svc.get("technology_stack"),
                repository_path=svc.get("repository_path"),
                docker_image=svc.get("docker_image"),
                data_categories=svc.get("data_categories"),
                processes_pii=svc.get("processes_pii", False),
                processes_health_data=svc.get("processes_health_data", False),
                ai_components=svc.get("ai_components", False),
                criticality=svc.get("criticality", "medium"),
                owner_team=svc.get("owner_team"),
            )
            modules_created += 1
            # Create regulation mappings
            for reg_data in svc.get("regulations", []):
                # Find regulation by code
                reg = self.db.query(RegulationDB).filter(
                    RegulationDB.code == reg_data["code"]
                ).first()
                if reg:
                    self.add_regulation_mapping(
                        module_id=module.id,
                        regulation_id=reg.id,
                        relevance_level=reg_data.get("relevance", "medium"),
                        notes=reg_data.get("notes"),
                    )
                    mappings_created += 1
        return {
            "modules_created": modules_created,
            "mappings_created": mappings_created,
        }
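One aggregation subtlety worth calling out: when averaging `compliance_score`, modules without a score should be excluded with an `is not None` check rather than truthiness, otherwise a legitimate score of `0.0` (fully non-compliant) gets silently treated as "missing". A standalone sketch of the guard (hypothetical helper, plain floats instead of ORM rows):

```python
from typing import List, Optional

def average_score(scores: List[Optional[float]]) -> Optional[float]:
    """Average the known scores; keep 0.0, skip only missing values."""
    known = [s for s in scores if s is not None]   # 0.0 survives this filter
    if not known:
        return None                                # no data at all
    return round(sum(known) / len(known), 1)

avg = average_score([80.0, 0.0, None, 70.0])       # 0.0 counts; None does not → 50.0
```

Had the filter been `if s`, the `0.0` entry would be dropped and the average inflated to 75.0.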


@@ -0,0 +1,30 @@
"""Domain layer: value objects, enums, and domain exceptions.

Pure Python — no FastAPI, no SQLAlchemy, no HTTP concerns. Upper layers depend on
this package; it depends on nothing except the standard library and small libraries
like ``pydantic`` or ``attrs``.
"""


class DomainError(Exception):
    """Base class for all domain-level errors.

    Services raise subclasses of this; the HTTP layer is responsible for mapping
    them to status codes. Never raise ``HTTPException`` from a service.
    """


class NotFoundError(DomainError):
    """Requested entity does not exist."""


class ConflictError(DomainError):
    """Operation conflicts with the current state (e.g. duplicate, stale version)."""


class ValidationError(DomainError):
    """Input failed domain-level validation (beyond what Pydantic catches)."""


class PermissionError(DomainError):
    """Caller lacks permission for the operation.

    NOTE: intentionally shadows the built-in ``PermissionError`` within this
    package; import it qualified (``domain.PermissionError``) to avoid confusion.
    """
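The docstring above promises that the HTTP layer, not the service, maps these exceptions to status codes. One hedged way to sketch that mapping (the status codes and handler shape are illustrative, not the project's actual middleware):

```python
class DomainError(Exception): ...
class NotFoundError(DomainError): ...
class ConflictError(DomainError): ...

# Most-specific entries first; DomainError last as the domain-level catch-all.
STATUS_BY_ERROR = [
    (NotFoundError, 404),
    (ConflictError, 409),
    (DomainError, 422),
]

def status_for(exc: Exception) -> int:
    """Translate a domain error to an HTTP status at the edge of the app."""
    for err_type, status in STATUS_BY_ERROR:
        if isinstance(exc, err_type):
            return status
    return 500  # non-domain errors stay an internal server error

code = status_for(NotFoundError("regulation missing"))   # → 404
```

In a FastAPI app the same table would typically live in a single `app.exception_handler(DomainError)` registration, keeping every service free of `HTTPException`.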


@@ -0,0 +1,10 @@
"""Repository layer: database access.
Each aggregate gets its own module (e.g. ``dsr_repository.py``) exposing a single
class with intent-named methods. Repositories own SQLAlchemy session usage; they
do not run business logic, and they do not import anything from
``compliance.api`` or ``compliance.services``.
Phase 1 refactor target: ``compliance.db.repository`` (1547 lines) is being
decomposed into per-aggregate modules under this package.
"""


@@ -0,0 +1,11 @@
"""Pydantic schemas, split per domain.
Phase 1 refactor target: the monolithic ``compliance.api.schemas`` module (1899 lines)
is being decomposed into one module per domain under this package. Until every domain
has been migrated, ``compliance.api.schemas`` re-exports from here so existing imports
continue to work unchanged.
New code MUST import from the specific domain module (e.g.
``from compliance.schemas.dsr import DSRRequestCreate``) rather than from
``compliance.api.schemas``.
"""


@@ -0,0 +1,63 @@
"""
AI System (AI Act) Pydantic schemas — extracted from compliance/api/schemas.py.

Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict

from pydantic import BaseModel, ConfigDict, Field

from compliance.schemas.common import (
    PaginationMeta, RegulationType, ControlType, ControlDomain,
    ControlStatus, RiskLevel, EvidenceStatus,
)

# ============================================================================
# AI System Schemas (AI Act Compliance)
# ============================================================================


class AISystemBase(BaseModel):
    name: str
    description: Optional[str] = None
    purpose: Optional[str] = None
    sector: Optional[str] = None
    classification: str = "unclassified"
    status: str = "draft"
    obligations: Optional[List[str]] = None


class AISystemCreate(AISystemBase):
    pass


class AISystemUpdate(BaseModel):
    name: Optional[str] = None
    description: Optional[str] = None
    purpose: Optional[str] = None
    sector: Optional[str] = None
    classification: Optional[str] = None
    status: Optional[str] = None
    obligations: Optional[List[str]] = None


class AISystemResponse(AISystemBase):
    id: str
    assessment_date: Optional[datetime] = None
    assessment_result: Optional[Dict[str, Any]] = None
    risk_factors: Optional[List[Dict[str, Any]]] = None
    recommendations: Optional[List[str]] = None
    created_at: datetime
    updated_at: datetime

    model_config = ConfigDict(from_attributes=True)


class AISystemListResponse(BaseModel):
    systems: List[AISystemResponse]
    total: int

@@ -0,0 +1,172 @@
"""
Audit Session Pydantic schemas — extracted from compliance/api/schemas.py.

Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
import enum
from datetime import datetime, date
from typing import Optional, List, Any, Dict

from pydantic import BaseModel, ConfigDict, Field

from compliance.schemas.common import (
    PaginationMeta, RegulationType, ControlType, ControlDomain,
    ControlStatus, RiskLevel, EvidenceStatus,
)

# ============================================================================
# Audit Session & Sign-off Schemas (Phase 3 - Sprint 3)
# ============================================================================


class AuditResult(str, enum.Enum):
    """Audit result values for sign-off (mirrors ``AuditResultEnum`` in the DB layer)."""
    COMPLIANT = "compliant"
    COMPLIANT_WITH_NOTES = "compliant_notes"
    NON_COMPLIANT = "non_compliant"
    NOT_APPLICABLE = "not_applicable"
    PENDING = "pending"


class AuditSessionStatus(str, enum.Enum):
    """Audit session status values (mirrors ``AuditSessionStatusEnum`` in the DB layer)."""
    DRAFT = "draft"
    IN_PROGRESS = "in_progress"
    COMPLETED = "completed"
    ARCHIVED = "archived"


class CreateAuditSessionRequest(BaseModel):
    """Request to create a new audit session."""
    name: str = Field(..., min_length=1, max_length=200)
    description: Optional[str] = None
    auditor_name: str = Field(..., min_length=1, max_length=100)
    auditor_email: Optional[str] = None
    auditor_organization: Optional[str] = None
    regulation_codes: Optional[List[str]] = None  # Filter by regulations


class UpdateAuditSessionRequest(BaseModel):
    """Request to update an audit session."""
    name: Optional[str] = Field(None, min_length=1, max_length=200)
    description: Optional[str] = None
    status: Optional[str] = None


class AuditSessionSummary(BaseModel):
    """Summary of an audit session for list views."""
    id: str
    name: str
    auditor_name: str
    status: str
    total_items: int
    completed_items: int
    completion_percentage: float
    created_at: datetime
    started_at: Optional[datetime] = None
    completed_at: Optional[datetime] = None

    model_config = ConfigDict(from_attributes=True)


class AuditSessionResponse(AuditSessionSummary):
    """Full response for an audit session."""
    description: Optional[str] = None
    auditor_email: Optional[str] = None
    auditor_organization: Optional[str] = None
    regulation_ids: Optional[List[str]] = None
    compliant_count: int = 0
    non_compliant_count: int = 0
    updated_at: datetime

    model_config = ConfigDict(from_attributes=True)


class AuditSessionListResponse(BaseModel):
    """List response for audit sessions."""
    sessions: List[AuditSessionSummary]
    total: int


class AuditSessionDetailResponse(AuditSessionResponse):
    """Detailed response including statistics breakdown."""
    statistics: Optional["AuditStatistics"] = None


class SignOffRequest(BaseModel):
    """Request to sign off a single requirement."""
    result: str = Field(..., description="Audit result: compliant, compliant_notes, non_compliant, not_applicable, pending")
    notes: Optional[str] = None
    sign: bool = Field(False, description="Whether to create digital signature")
class SignOffResponse(BaseModel):
"""Response for a sign-off operation."""
id: str
session_id: str
requirement_id: str
result: str
notes: Optional[str] = None
is_signed: bool
signature_hash: Optional[str] = None
signed_at: Optional[datetime] = None
signed_by: Optional[str] = None
created_at: datetime
updated_at: Optional[datetime] = None
model_config = ConfigDict(from_attributes=True)
class AuditChecklistItem(BaseModel):
"""A single item in the audit checklist."""
requirement_id: str
regulation_code: str
article: str
paragraph: Optional[str] = None
title: str
description: Optional[str] = None
# Current audit state
current_result: str = "pending" # AuditResult
notes: Optional[str] = None
is_signed: bool = False
signed_at: Optional[datetime] = None
signed_by: Optional[str] = None
# Context info
evidence_count: int = 0
controls_mapped: int = 0
implementation_status: Optional[str] = None
# Priority
priority: int = 2
class AuditStatistics(BaseModel):
"""Statistics for an audit session."""
total: int
compliant: int
compliant_with_notes: int
non_compliant: int
not_applicable: int
pending: int
completion_percentage: float
class AuditChecklistResponse(BaseModel):
"""Response for audit checklist endpoint."""
session: AuditSessionSummary
items: List[AuditChecklistItem]
pagination: PaginationMeta
statistics: AuditStatistics
class AuditChecklistFilterRequest(BaseModel):
"""Filter options for audit checklist."""
regulation_code: Optional[str] = None
result_filter: Optional[str] = None # "pending", "compliant", "non_compliant", etc.
search: Optional[str] = None
signed_only: bool = False
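`AuditStatistics` is an aggregation over the per-item `AuditResult` values. A minimal sketch of that roll-up, using a plain dataclass so it runs without Pydantic, and assuming (this is an assumption, not stated in the schemas) that an item counts as "completed" as soon as its result is anything other than pending:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AuditStats:
    # Plain-dataclass stand-in for the AuditStatistics schema.
    total: int
    compliant: int
    compliant_with_notes: int
    non_compliant: int
    not_applicable: int
    pending: int
    completion_percentage: float


def build_stats(results: List[str]) -> AuditStats:
    """Aggregate per-item AuditResult strings into session statistics."""
    counts = {k: results.count(k) for k in
              ("compliant", "compliant_notes", "non_compliant",
               "not_applicable", "pending")}
    total = len(results)
    done = total - counts["pending"]  # assumed: completed == not pending
    pct = 100.0 * done / total if total else 0.0
    return AuditStats(total, counts["compliant"], counts["compliant_notes"],
                      counts["non_compliant"], counts["not_applicable"],
                      counts["pending"], pct)


print(build_stats(["compliant", "pending", "non_compliant", "pending"]).completion_percentage)
# → 50.0
```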

View File

@@ -0,0 +1,58 @@
"""
BSI / PDF Extraction Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
# ============================================================================
# PDF Extraction Schemas
# ============================================================================
class BSIAspectResponse(BaseModel):
"""A single extracted BSI-TR Pruefaspekt (test aspect)."""
aspect_id: str
title: str
full_text: str
category: str
page_number: int
section: str
requirement_level: str
source_document: str
keywords: Optional[List[str]] = None
related_aspects: Optional[List[str]] = None
class PDFExtractionRequest(BaseModel):
"""Request for PDF extraction."""
document_code: str = Field(..., description="BSI-TR document code, e.g. BSI-TR-03161-2")
save_to_db: bool = Field(True, description="Whether to save extracted requirements to database")
force: bool = Field(False, description="Force re-extraction even if requirements exist")
class PDFExtractionResponse(BaseModel):
"""Response from PDF extraction endpoint."""
# Simple endpoint format (new /pdf/extract/{doc_code})
doc_code: Optional[str] = None
total_extracted: Optional[int] = None
saved_to_db: Optional[int] = None
aspects: Optional[List[BSIAspectResponse]] = None
# Legacy scraper endpoint format (/scraper/extract-pdf)
success: Optional[bool] = None
source_document: Optional[str] = None
total_aspects: Optional[int] = None
statistics: Optional[Dict[str, Any]] = None
requirements_created: Optional[int] = None
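`PDFExtractionResponse` deliberately unions two payload shapes, so every field is optional and consumers have to branch on which discriminating field is populated. A hedged sketch of that check on plain dicts, assuming a response populates exactly one of the two formats (the schema itself does not enforce this):

```python
from typing import Any, Dict


def is_legacy_extraction(resp: Dict[str, Any]) -> bool:
    """Tell the two PDFExtractionResponse shapes apart.

    The legacy /scraper/extract-pdf payload sets `success`, while the
    newer /pdf/extract/{doc_code} payload sets `doc_code` instead.
    """
    return resp.get("success") is not None


print(is_legacy_extraction({"success": True, "total_aspects": 12}))
# → True
print(is_legacy_extraction({"doc_code": "BSI-TR-03161-2", "total_extracted": 12}))
# → False
```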

View File

@@ -0,0 +1,79 @@
"""
Common (shared enums and pagination) Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
# ============================================================================
# Enums as strings for API
# ============================================================================
class RegulationType(str):
EU_REGULATION = "eu_regulation"
EU_DIRECTIVE = "eu_directive"
DE_LAW = "de_law"
BSI_STANDARD = "bsi_standard"
INDUSTRY_STANDARD = "industry_standard"
class ControlType(str):
PREVENTIVE = "preventive"
DETECTIVE = "detective"
CORRECTIVE = "corrective"
class ControlDomain(str):
GOVERNANCE = "gov"
PRIVACY = "priv"
IAM = "iam"
CRYPTO = "crypto"
SDLC = "sdlc"
OPS = "ops"
AI = "ai"
CRA = "cra"
AUDIT = "aud"
class ControlStatus(str):
PASS = "pass"
PARTIAL = "partial"
FAIL = "fail"
NOT_APPLICABLE = "n/a"
PLANNED = "planned"
class RiskLevel(str):
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
CRITICAL = "critical"
class EvidenceStatus(str):
VALID = "valid"
EXPIRED = "expired"
PENDING = "pending"
FAILED = "failed"
# ============================================================================
# Pagination Schemas (defined here, completed after Response classes)
# ============================================================================
class PaginationMeta(BaseModel):
"""Pagination metadata for list responses."""
page: int
page_size: int
total: int
total_pages: int
has_next: bool
has_prev: bool
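Three of `PaginationMeta`'s fields are derived from the other three. A minimal sketch of that derivation, using a plain dataclass stand-in so it runs without Pydantic:

```python
import math
from dataclasses import dataclass


@dataclass
class Pagination:
    # Plain-dataclass stand-in for the PaginationMeta schema.
    page: int
    page_size: int
    total: int
    total_pages: int
    has_next: bool
    has_prev: bool


def paginate(page: int, page_size: int, total: int) -> Pagination:
    """Derive total_pages / has_next / has_prev from the raw inputs."""
    total_pages = math.ceil(total / page_size) if page_size else 0
    return Pagination(page, page_size, total,
                      total_pages,
                      has_next=page < total_pages,
                      has_prev=page > 1)


meta = paginate(page=2, page_size=25, total=120)
print(meta.total_pages, meta.has_next, meta.has_prev)
# → 5 True True
```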

View File

@@ -0,0 +1,119 @@
"""
Control and ControlMapping Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
# ============================================================================
# Control Schemas
# ============================================================================
class ControlBase(BaseModel):
control_id: str
domain: str
control_type: str
title: str
description: Optional[str] = None
pass_criteria: str
implementation_guidance: Optional[str] = None
code_reference: Optional[str] = None
documentation_url: Optional[str] = None
is_automated: bool = False
automation_tool: Optional[str] = None
automation_config: Optional[Dict[str, Any]] = None
owner: Optional[str] = None
review_frequency_days: int = 90
class ControlCreate(ControlBase):
pass
class ControlUpdate(BaseModel):
title: Optional[str] = None
description: Optional[str] = None
pass_criteria: Optional[str] = None
implementation_guidance: Optional[str] = None
code_reference: Optional[str] = None
documentation_url: Optional[str] = None
is_automated: Optional[bool] = None
automation_tool: Optional[str] = None
automation_config: Optional[Dict[str, Any]] = None
owner: Optional[str] = None
status: Optional[str] = None
status_notes: Optional[str] = None
class ControlResponse(ControlBase):
id: str
status: str
status_notes: Optional[str] = None
last_reviewed_at: Optional[datetime] = None
next_review_at: Optional[datetime] = None
created_at: datetime
updated_at: datetime
evidence_count: Optional[int] = None
requirement_count: Optional[int] = None
model_config = ConfigDict(from_attributes=True)
class ControlListResponse(BaseModel):
controls: List[ControlResponse]
total: int
class PaginatedControlResponse(BaseModel):
"""Paginated response for controls - optimized for large datasets."""
data: List[ControlResponse]
pagination: PaginationMeta
class ControlReviewRequest(BaseModel):
status: str
status_notes: Optional[str] = None
# ============================================================================
# Control Mapping Schemas
# ============================================================================
class MappingBase(BaseModel):
requirement_id: str
control_id: str
coverage_level: str = "full"
notes: Optional[str] = None
class MappingCreate(MappingBase):
pass
class MappingResponse(MappingBase):
id: str
requirement_article: Optional[str] = None
requirement_title: Optional[str] = None
control_control_id: Optional[str] = None
control_title: Optional[str] = None
created_at: datetime
model_config = ConfigDict(from_attributes=True)
class MappingListResponse(BaseModel):
mappings: List[MappingResponse]
total: int
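`ControlResponse` pairs `last_reviewed_at`/`next_review_at` with `review_frequency_days` from the base schema. A sketch of the implied scheduling rule, under the assumption (not confirmed by the schemas) that the next review simply falls `review_frequency_days` after the last one:

```python
from datetime import datetime, timedelta


def next_review(last_reviewed_at: datetime, review_frequency_days: int = 90) -> datetime:
    """Assumed rule: next_review_at = last_reviewed_at + review_frequency_days.

    The actual service may anchor or round differently; this only
    illustrates how the two timestamp fields relate to the frequency.
    """
    return last_reviewed_at + timedelta(days=review_frequency_days)


print(next_review(datetime(2026, 1, 1)))
# → 2026-04-01 00:00:00
```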

View File

@@ -0,0 +1,195 @@
"""
Dashboard, Export, Executive Dashboard Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
from compliance.schemas.evidence import EvidenceResponse
from compliance.schemas.risk import RiskResponse
# ============================================================================
# Dashboard & Export Schemas
# ============================================================================
class DashboardResponse(BaseModel):
compliance_score: float
total_regulations: int
total_requirements: int
total_controls: int
controls_by_status: Dict[str, int]
controls_by_domain: Dict[str, Dict[str, int]]
total_evidence: int
evidence_by_status: Dict[str, int]
total_risks: int
risks_by_level: Dict[str, int]
recent_activity: List[Dict[str, Any]]
class ExportRequest(BaseModel):
export_type: str = "full" # "full", "controls_only", "evidence_only"
included_regulations: Optional[List[str]] = None
included_domains: Optional[List[str]] = None
date_range_start: Optional[date] = None
date_range_end: Optional[date] = None
class ExportResponse(BaseModel):
id: str
export_type: str
export_name: Optional[str] = None
status: str
requested_by: str
requested_at: datetime
completed_at: Optional[datetime] = None
file_path: Optional[str] = None
file_hash: Optional[str] = None
file_size_bytes: Optional[int] = None
total_controls: Optional[int] = None
total_evidence: Optional[int] = None
compliance_score: Optional[float] = None
error_message: Optional[str] = None
model_config = ConfigDict(from_attributes=True)
class ExportListResponse(BaseModel):
exports: List[ExportResponse]
total: int
# ============================================================================
# Seeding Schemas
# ============================================================================
class SeedRequest(BaseModel):
force: bool = False
class SeedResponse(BaseModel):
success: bool
message: str
counts: Dict[str, int]
class PaginatedEvidenceResponse(BaseModel):
"""Paginated response for evidence."""
data: List[EvidenceResponse]
pagination: PaginationMeta
class PaginatedRiskResponse(BaseModel):
"""Paginated response for risks."""
data: List[RiskResponse]
pagination: PaginationMeta
# ============================================================================
# Executive Dashboard Schemas (Phase 3 - Sprint 1)
# ============================================================================
class TrendDataPoint(BaseModel):
"""A single data point for trend charts."""
date: str # ISO date string
score: float
label: Optional[str] = None # Formatted date for display (e.g., "Jan 26")
class RiskSummary(BaseModel):
"""Summary of a risk for executive display."""
id: str
risk_id: str
title: str
risk_level: str # "low", "medium", "high", "critical"
owner: Optional[str] = None
status: str
category: str
impact: int
likelihood: int
class DeadlineItem(BaseModel):
"""An upcoming deadline for executive display."""
id: str
title: str
deadline: str # ISO date string
days_remaining: int
type: str # "control_review", "evidence_expiry", "audit"
status: str # "on_track", "at_risk", "overdue"
owner: Optional[str] = None
class TeamWorkloadItem(BaseModel):
"""Workload distribution for a team or person."""
name: str
pending_tasks: int
in_progress_tasks: int
completed_tasks: int
total_tasks: int
completion_rate: float
class ExecutiveDashboardResponse(BaseModel):
"""
Executive Dashboard Response
Provides a high-level overview for managers and executives:
- Traffic light status (green/yellow/red)
- Overall compliance score
- 12-month trend data
- Top 5 risks
- Upcoming deadlines
- Team workload distribution
"""
traffic_light_status: str # "green", "yellow", "red"
overall_score: float
score_trend: List[TrendDataPoint]
previous_score: Optional[float] = None
score_change: Optional[float] = None # Positive = improvement
# Counts
total_regulations: int
total_requirements: int
total_controls: int
open_risks: int
# Top items
top_risks: List[RiskSummary]
upcoming_deadlines: List[DeadlineItem]
# Workload
team_workload: List[TeamWorkloadItem]
# Last updated
last_updated: str
class ComplianceSnapshotCreate(BaseModel):
"""Request to create a compliance snapshot."""
notes: Optional[str] = None
class ComplianceSnapshotResponse(BaseModel):
"""Response for a compliance snapshot."""
id: str
snapshot_date: str
overall_score: float
scores_by_regulation: Dict[str, float]
scores_by_domain: Dict[str, float]
total_controls: int
passed_controls: int
failed_controls: int
notes: Optional[str] = None
created_at: str
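`ExecutiveDashboardResponse` derives `score_change` from `overall_score` minus `previous_score` (positive = improvement, per the field comment). The traffic light cut-offs used below are purely illustrative assumptions; the schemas do not define the service's actual thresholds:

```python
from typing import Optional, Tuple


def score_delta(overall: float, previous: Optional[float]) -> Tuple[Optional[float], str]:
    """Compute score_change and a traffic light for the executive dashboard.

    score_change = overall - previous (positive = improvement); the
    80/60 green/yellow/red thresholds are assumed for illustration only.
    """
    change = None if previous is None else round(overall - previous, 2)
    if overall >= 80.0:
        light = "green"
    elif overall >= 60.0:
        light = "yellow"
    else:
        light = "red"
    return change, light


print(score_delta(83.5, 79.0))
# → (4.5, 'green')
```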

View File

@@ -0,0 +1,66 @@
"""
Evidence Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
# ============================================================================
# Evidence Schemas
# ============================================================================
class EvidenceBase(BaseModel):
control_id: str
evidence_type: str
title: str
description: Optional[str] = None
artifact_url: Optional[str] = None
valid_from: Optional[datetime] = None
valid_until: Optional[datetime] = None
source: Optional[str] = None
ci_job_id: Optional[str] = None
class EvidenceCreate(EvidenceBase):
pass
class EvidenceResponse(EvidenceBase):
id: str
artifact_path: Optional[str] = None
artifact_hash: Optional[str] = None
file_size_bytes: Optional[int] = None
mime_type: Optional[str] = None
status: str
uploaded_by: Optional[str] = None
collected_at: datetime
created_at: datetime
model_config = ConfigDict(from_attributes=True)
class EvidenceListResponse(BaseModel):
evidence: List[EvidenceResponse]
total: int
class EvidenceCollectRequest(BaseModel):
"""Request to auto-collect evidence from CI."""
control_id: str
evidence_type: str
title: str
ci_job_id: str
artifact_url: str
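The `valid_from`/`valid_until` window on `EvidenceBase` pairs with the `valid`/`expired`/`pending` values of `EvidenceStatus`. A hedged sketch of deriving the status from that window (an assumed rule, not one the schemas spell out):

```python
from datetime import datetime
from typing import Optional


def evidence_status(valid_from: Optional[datetime],
                    valid_until: Optional[datetime],
                    now: datetime) -> str:
    """Assumed rule: evidence is 'pending' before its window opens,
    'expired' after valid_until, otherwise 'valid'. An open-ended
    bound (None) never triggers that side of the check."""
    if valid_from is not None and now < valid_from:
        return "pending"
    if valid_until is not None and now > valid_until:
        return "expired"
    return "valid"


print(evidence_status(datetime(2026, 1, 1), datetime(2026, 6, 30), datetime(2026, 7, 15)))
# → expired
```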

View File

@@ -0,0 +1,431 @@
"""
ISMS Audit Execution (Findings, CAPA, Reviews, Internal Audit, Readiness) Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
class AuditFindingBase(BaseModel):
"""Base schema for Audit Finding."""
finding_type: str # "major", "minor", "ofi", "positive"
iso_chapter: Optional[str] = None
annex_a_control: Optional[str] = None
title: str
description: str
objective_evidence: str
impact_description: Optional[str] = None
affected_processes: Optional[List[str]] = None
affected_assets: Optional[List[str]] = None
owner: Optional[str] = None
due_date: Optional[date] = None
class AuditFindingCreate(AuditFindingBase):
"""Schema for creating Audit Finding."""
audit_session_id: Optional[str] = None
internal_audit_id: Optional[str] = None
auditor: str
class AuditFindingUpdate(BaseModel):
"""Schema for updating Audit Finding."""
title: Optional[str] = None
description: Optional[str] = None
root_cause: Optional[str] = None
root_cause_method: Optional[str] = None
owner: Optional[str] = None
due_date: Optional[date] = None
status: Optional[str] = None
class AuditFindingResponse(AuditFindingBase):
"""Response schema for Audit Finding."""
id: str
finding_id: str
audit_session_id: Optional[str] = None
internal_audit_id: Optional[str] = None
root_cause: Optional[str] = None
root_cause_method: Optional[str] = None
status: str
auditor: Optional[str] = None
identified_date: date
closed_date: Optional[date] = None
verification_method: Optional[str] = None
verified_by: Optional[str] = None
verified_at: Optional[datetime] = None
closure_notes: Optional[str] = None
closed_by: Optional[str] = None
is_blocking: bool
created_at: datetime
updated_at: datetime
model_config = ConfigDict(from_attributes=True)
class AuditFindingListResponse(BaseModel):
"""List response for Audit Findings."""
findings: List[AuditFindingResponse]
total: int
major_count: int
minor_count: int
ofi_count: int
open_count: int
class AuditFindingCloseRequest(BaseModel):
"""Request to close an Audit Finding."""
closure_notes: str
closed_by: str
verification_method: str
verification_evidence: str
# --- Corrective Actions (CAPA) ---
class CorrectiveActionBase(BaseModel):
"""Base schema for Corrective Action."""
capa_type: str # "corrective", "preventive", "both"
title: str
description: str
expected_outcome: Optional[str] = None
assigned_to: str
planned_completion: date
effectiveness_criteria: Optional[str] = None
estimated_effort_hours: Optional[int] = None
resources_required: Optional[str] = None
class CorrectiveActionCreate(CorrectiveActionBase):
"""Schema for creating Corrective Action."""
finding_id: str
planned_start: Optional[date] = None
class CorrectiveActionUpdate(BaseModel):
"""Schema for updating Corrective Action."""
title: Optional[str] = None
description: Optional[str] = None
assigned_to: Optional[str] = None
planned_completion: Optional[date] = None
status: Optional[str] = None
progress_percentage: Optional[int] = None
implementation_evidence: Optional[str] = None
class CorrectiveActionResponse(CorrectiveActionBase):
"""Response schema for Corrective Action."""
id: str
capa_id: str
finding_id: str
planned_start: Optional[date] = None
actual_completion: Optional[date] = None
status: str
progress_percentage: int
approved_by: Optional[str] = None
actual_effort_hours: Optional[int] = None
implementation_evidence: Optional[str] = None
evidence_ids: Optional[List[str]] = None
effectiveness_verified: bool
effectiveness_verification_date: Optional[date] = None
effectiveness_notes: Optional[str] = None
created_at: datetime
updated_at: datetime
model_config = ConfigDict(from_attributes=True)
class CorrectiveActionListResponse(BaseModel):
"""List response for Corrective Actions."""
actions: List[CorrectiveActionResponse]
total: int
class CAPAVerifyRequest(BaseModel):
"""Request to verify CAPA effectiveness."""
verified_by: str
effectiveness_notes: str
is_effective: bool
# --- Management Review (ISO 27001 9.3) ---
class ReviewAttendee(BaseModel):
"""Single attendee in management review."""
name: str
role: str
class ReviewActionItem(BaseModel):
"""Single action item from management review."""
action: str
owner: str
due_date: date
class ManagementReviewBase(BaseModel):
"""Base schema for Management Review."""
title: str
review_date: date
review_period_start: Optional[date] = None
review_period_end: Optional[date] = None
chairperson: str
attendees: Optional[List[ReviewAttendee]] = None
class ManagementReviewCreate(ManagementReviewBase):
"""Schema for creating Management Review."""
pass
class ManagementReviewUpdate(BaseModel):
"""Schema for updating Management Review."""
# Inputs (9.3)
input_previous_actions: Optional[str] = None
input_isms_changes: Optional[str] = None
input_security_performance: Optional[str] = None
input_interested_party_feedback: Optional[str] = None
input_risk_assessment_results: Optional[str] = None
input_improvement_opportunities: Optional[str] = None
input_policy_effectiveness: Optional[str] = None
input_objective_achievement: Optional[str] = None
input_resource_adequacy: Optional[str] = None
# Outputs (9.3)
output_improvement_decisions: Optional[str] = None
output_isms_changes: Optional[str] = None
output_resource_needs: Optional[str] = None
# Action items
action_items: Optional[List[ReviewActionItem]] = None
# Assessment
isms_effectiveness_rating: Optional[str] = None
key_decisions: Optional[str] = None
status: Optional[str] = None
class ManagementReviewResponse(ManagementReviewBase):
"""Response schema for Management Review."""
id: str
review_id: str
input_previous_actions: Optional[str] = None
input_isms_changes: Optional[str] = None
input_security_performance: Optional[str] = None
input_interested_party_feedback: Optional[str] = None
input_risk_assessment_results: Optional[str] = None
input_improvement_opportunities: Optional[str] = None
input_policy_effectiveness: Optional[str] = None
input_objective_achievement: Optional[str] = None
input_resource_adequacy: Optional[str] = None
output_improvement_decisions: Optional[str] = None
output_isms_changes: Optional[str] = None
output_resource_needs: Optional[str] = None
action_items: Optional[List[ReviewActionItem]] = None
isms_effectiveness_rating: Optional[str] = None
key_decisions: Optional[str] = None
status: str
approved_by: Optional[str] = None
approved_at: Optional[datetime] = None
minutes_document_path: Optional[str] = None
next_review_date: Optional[date] = None
created_at: datetime
updated_at: datetime
model_config = ConfigDict(from_attributes=True)
class ManagementReviewListResponse(BaseModel):
"""List response for Management Reviews."""
reviews: List[ManagementReviewResponse]
total: int
class ManagementReviewApproveRequest(BaseModel):
"""Request to approve Management Review."""
approved_by: str
next_review_date: date
minutes_document_path: Optional[str] = None
# --- Internal Audit (ISO 27001 9.2) ---
class InternalAuditBase(BaseModel):
"""Base schema for Internal Audit."""
title: str
audit_type: str # "scheduled", "surveillance", "special"
scope_description: str
iso_chapters_covered: Optional[List[str]] = None
annex_a_controls_covered: Optional[List[str]] = None
processes_covered: Optional[List[str]] = None
departments_covered: Optional[List[str]] = None
criteria: Optional[str] = None
planned_date: date
lead_auditor: str
audit_team: Optional[List[str]] = None
class InternalAuditCreate(InternalAuditBase):
"""Schema for creating Internal Audit."""
pass
class InternalAuditUpdate(BaseModel):
"""Schema for updating Internal Audit."""
title: Optional[str] = None
scope_description: Optional[str] = None
actual_start_date: Optional[date] = None
actual_end_date: Optional[date] = None
auditee_representatives: Optional[List[str]] = None
status: Optional[str] = None
audit_conclusion: Optional[str] = None
overall_assessment: Optional[str] = None
class InternalAuditResponse(InternalAuditBase):
"""Response schema for Internal Audit."""
id: str
audit_id: str
actual_start_date: Optional[date] = None
actual_end_date: Optional[date] = None
auditee_representatives: Optional[List[str]] = None
status: str
total_findings: int
major_findings: int
minor_findings: int
ofi_count: int
positive_observations: int
audit_conclusion: Optional[str] = None
overall_assessment: Optional[str] = None
report_date: Optional[date] = None
report_document_path: Optional[str] = None
report_approved_by: Optional[str] = None
report_approved_at: Optional[datetime] = None
follow_up_audit_required: bool
created_at: datetime
updated_at: datetime
model_config = ConfigDict(from_attributes=True)
class InternalAuditListResponse(BaseModel):
"""List response for Internal Audits."""
audits: List[InternalAuditResponse]
total: int
class InternalAuditCompleteRequest(BaseModel):
"""Request to complete Internal Audit."""
audit_conclusion: str
overall_assessment: str # "conforming", "minor_nc", "major_nc"
follow_up_audit_required: bool
# --- ISMS Readiness Check ---
class PotentialFinding(BaseModel):
"""Potential finding from readiness check."""
check: str
status: str # "pass", "fail", "warning"
recommendation: str
iso_reference: Optional[str] = None
class ISMSReadinessCheckResponse(BaseModel):
"""Response for ISMS Readiness Check."""
id: str
check_date: datetime
triggered_by: Optional[str] = None
overall_status: str # "ready", "at_risk", "not_ready"
certification_possible: bool
# Chapter statuses
chapter_4_status: Optional[str] = None
chapter_5_status: Optional[str] = None
chapter_6_status: Optional[str] = None
chapter_7_status: Optional[str] = None
chapter_8_status: Optional[str] = None
chapter_9_status: Optional[str] = None
chapter_10_status: Optional[str] = None
# Findings
potential_majors: List[PotentialFinding]
potential_minors: List[PotentialFinding]
improvement_opportunities: List[PotentialFinding]
# Scores
readiness_score: float
documentation_score: Optional[float] = None
implementation_score: Optional[float] = None
evidence_score: Optional[float] = None
# Priority actions
priority_actions: List[str]
model_config = ConfigDict(from_attributes=True)
class ISMSReadinessCheckRequest(BaseModel):
"""Request to run ISMS Readiness Check."""
triggered_by: str = "manual"
# --- Audit Trail ---
class AuditTrailEntry(BaseModel):
"""Single audit trail entry."""
id: str
entity_type: str
entity_id: str
entity_name: Optional[str] = None
action: str
field_changed: Optional[str] = None
old_value: Optional[str] = None
new_value: Optional[str] = None
change_summary: Optional[str] = None
performed_by: str
performed_at: datetime
model_config = ConfigDict(from_attributes=True)
class AuditTrailResponse(BaseModel):
"""Response for Audit Trail query."""
entries: List[AuditTrailEntry]
total: int
pagination: PaginationMeta
# --- ISO 27001 Chapter Status Overview ---
class ISO27001ChapterStatus(BaseModel):
"""Status of a single ISO 27001 chapter."""
chapter: str
title: str
status: str # "compliant", "partial", "non_compliant", "not_started"
completion_percentage: float
open_findings: int
key_documents: List[str]
last_reviewed: Optional[datetime] = None
class ISO27001OverviewResponse(BaseModel):
"""Complete ISO 27001 status overview."""
overall_status: str # "ready", "at_risk", "not_ready", "not_started"
certification_readiness: float # 0-100
chapters: List[ISO27001ChapterStatus]
scope_approved: bool
soa_approved: bool
last_management_review: Optional[datetime] = None
last_internal_audit: Optional[datetime] = None
open_major_findings: int
open_minor_findings: int
policies_count: int
policies_approved: int
objectives_count: int
objectives_achieved: int
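`ISMSReadinessCheckResponse` maps its lists of potential findings onto a three-valued `overall_status`. The mapping below is an illustrative assumption only (the actual service rule is not visible in these schemas), but it reflects the usual certification-audit logic where any major nonconformity is blocking:

```python
def readiness_status(potential_majors: int, potential_minors: int) -> str:
    """Illustrative mapping from finding counts to overall_status.

    Assumed rule (not the service's documented behaviour): any potential
    major nonconformity means 'not_ready'; more than two minors means
    'at_risk'; otherwise 'ready'.
    """
    if potential_majors > 0:
        return "not_ready"
    if potential_minors > 2:
        return "at_risk"
    return "ready"


print(readiness_status(0, 1))
# → ready
```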

View File

@@ -0,0 +1,343 @@
"""
ISMS Governance (Scope, Context, Policy, Objective, SoA) Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
# ============================================================================
# ISO 27001 ISMS Schemas (Kapitel 4-10)
# ============================================================================
# --- Enums ---
class ApprovalStatus(str, Enum):
    DRAFT = "draft"
    UNDER_REVIEW = "under_review"
    APPROVED = "approved"
    SUPERSEDED = "superseded"

class FindingType(str, Enum):
    MAJOR = "major"
    MINOR = "minor"
    OFI = "ofi"
    POSITIVE = "positive"

class FindingStatus(str, Enum):
    OPEN = "open"
    IN_PROGRESS = "in_progress"
    CAPA_PENDING = "capa_pending"
    VERIFICATION_PENDING = "verification_pending"
    VERIFIED = "verified"
    CLOSED = "closed"

class CAPAType(str, Enum):
    CORRECTIVE = "corrective"
    PREVENTIVE = "preventive"
    BOTH = "both"

# --- ISMS Scope (ISO 27001 4.3) ---

class ISMSScopeBase(BaseModel):
    """Base schema for ISMS Scope."""
    scope_statement: str
    included_locations: Optional[List[str]] = None
    included_processes: Optional[List[str]] = None
    included_services: Optional[List[str]] = None
    excluded_items: Optional[List[str]] = None
    exclusion_justification: Optional[str] = None
    organizational_boundary: Optional[str] = None
    physical_boundary: Optional[str] = None
    technical_boundary: Optional[str] = None

class ISMSScopeCreate(ISMSScopeBase):
    """Schema for creating ISMS Scope."""
    pass

class ISMSScopeUpdate(BaseModel):
    """Schema for updating ISMS Scope."""
    scope_statement: Optional[str] = None
    included_locations: Optional[List[str]] = None
    included_processes: Optional[List[str]] = None
    included_services: Optional[List[str]] = None
    excluded_items: Optional[List[str]] = None
    exclusion_justification: Optional[str] = None
    organizational_boundary: Optional[str] = None
    physical_boundary: Optional[str] = None
    technical_boundary: Optional[str] = None

class ISMSScopeResponse(ISMSScopeBase):
    """Response schema for ISMS Scope."""
    id: str
    version: str
    status: str
    approved_by: Optional[str] = None
    approved_at: Optional[datetime] = None
    effective_date: Optional[date] = None
    review_date: Optional[date] = None
    created_at: datetime
    updated_at: datetime
    model_config = ConfigDict(from_attributes=True)

class ISMSScopeApproveRequest(BaseModel):
    """Request to approve ISMS Scope."""
    approved_by: str
    effective_date: date
    review_date: date

# --- ISMS Context (ISO 27001 4.1, 4.2) ---

class ContextIssue(BaseModel):
    """Single context issue."""
    issue: str
    impact: str
    treatment: Optional[str] = None

class InterestedParty(BaseModel):
    """Single interested party."""
    party: str
    requirements: List[str]
    relevance: str

class ISMSContextBase(BaseModel):
    """Base schema for ISMS Context."""
    internal_issues: Optional[List[ContextIssue]] = None
    external_issues: Optional[List[ContextIssue]] = None
    interested_parties: Optional[List[InterestedParty]] = None
    regulatory_requirements: Optional[List[str]] = None
    contractual_requirements: Optional[List[str]] = None
    swot_strengths: Optional[List[str]] = None
    swot_weaknesses: Optional[List[str]] = None
    swot_opportunities: Optional[List[str]] = None
    swot_threats: Optional[List[str]] = None

class ISMSContextCreate(ISMSContextBase):
    """Schema for creating ISMS Context."""
    pass

class ISMSContextResponse(ISMSContextBase):
    """Response schema for ISMS Context."""
    id: str
    version: str
    status: str
    approved_by: Optional[str] = None
    approved_at: Optional[datetime] = None
    last_reviewed_at: Optional[datetime] = None
    next_review_date: Optional[date] = None
    created_at: datetime
    updated_at: datetime
    model_config = ConfigDict(from_attributes=True)

# --- ISMS Policies (ISO 27001 5.2) ---

class ISMSPolicyBase(BaseModel):
    """Base schema for ISMS Policy."""
    policy_id: str
    title: str
    policy_type: str  # "master", "operational", "technical"
    description: Optional[str] = None
    policy_text: str
    applies_to: Optional[List[str]] = None
    review_frequency_months: int = 12
    related_controls: Optional[List[str]] = None

class ISMSPolicyCreate(ISMSPolicyBase):
    """Schema for creating ISMS Policy."""
    authored_by: str

class ISMSPolicyUpdate(BaseModel):
    """Schema for updating ISMS Policy."""
    title: Optional[str] = None
    description: Optional[str] = None
    policy_text: Optional[str] = None
    applies_to: Optional[List[str]] = None
    review_frequency_months: Optional[int] = None
    related_controls: Optional[List[str]] = None

class ISMSPolicyResponse(ISMSPolicyBase):
    """Response schema for ISMS Policy."""
    id: str
    version: str
    status: str
    authored_by: Optional[str] = None
    reviewed_by: Optional[str] = None
    approved_by: Optional[str] = None
    approved_at: Optional[datetime] = None
    effective_date: Optional[date] = None
    next_review_date: Optional[date] = None
    document_path: Optional[str] = None
    created_at: datetime
    updated_at: datetime
    model_config = ConfigDict(from_attributes=True)

class ISMSPolicyListResponse(BaseModel):
    """List response for ISMS Policies."""
    policies: List[ISMSPolicyResponse]
    total: int

class ISMSPolicyApproveRequest(BaseModel):
    """Request to approve ISMS Policy."""
    reviewed_by: str
    approved_by: str
    effective_date: date

# --- Security Objectives (ISO 27001 6.2) ---

class SecurityObjectiveBase(BaseModel):
    """Base schema for Security Objective."""
    objective_id: str
    title: str
    description: Optional[str] = None
    category: str  # "availability", "confidentiality", "integrity", "compliance"
    specific: Optional[str] = None
    measurable: Optional[str] = None
    achievable: Optional[str] = None
    relevant: Optional[str] = None
    time_bound: Optional[str] = None
    kpi_name: Optional[str] = None
    kpi_target: Optional[str] = None
    kpi_unit: Optional[str] = None
    measurement_frequency: Optional[str] = None
    owner: Optional[str] = None
    target_date: Optional[date] = None
    related_controls: Optional[List[str]] = None
    related_risks: Optional[List[str]] = None

class SecurityObjectiveCreate(SecurityObjectiveBase):
    """Schema for creating Security Objective."""
    pass

class SecurityObjectiveUpdate(BaseModel):
    """Schema for updating Security Objective."""
    title: Optional[str] = None
    description: Optional[str] = None
    kpi_current: Optional[str] = None
    progress_percentage: Optional[int] = None
    status: Optional[str] = None

class SecurityObjectiveResponse(SecurityObjectiveBase):
    """Response schema for Security Objective."""
    id: str
    kpi_current: Optional[str] = None
    status: str
    progress_percentage: int
    achieved_date: Optional[date] = None
    approved_by: Optional[str] = None
    approved_at: Optional[datetime] = None
    created_at: datetime
    updated_at: datetime
    model_config = ConfigDict(from_attributes=True)

class SecurityObjectiveListResponse(BaseModel):
    """List response for Security Objectives."""
    objectives: List[SecurityObjectiveResponse]
    total: int

# --- Statement of Applicability (SoA) ---

class SoAEntryBase(BaseModel):
    """Base schema for SoA Entry."""
    annex_a_control: str  # e.g., "A.5.1"
    annex_a_title: str
    annex_a_category: Optional[str] = None
    is_applicable: bool
    applicability_justification: str
    implementation_status: str = "planned"
    implementation_notes: Optional[str] = None
    breakpilot_control_ids: Optional[List[str]] = None
    coverage_level: str = "full"
    evidence_description: Optional[str] = None
    risk_assessment_notes: Optional[str] = None
    compensating_controls: Optional[str] = None

class SoAEntryCreate(SoAEntryBase):
    """Schema for creating SoA Entry."""
    pass

class SoAEntryUpdate(BaseModel):
    """Schema for updating SoA Entry."""
    is_applicable: Optional[bool] = None
    applicability_justification: Optional[str] = None
    implementation_status: Optional[str] = None
    implementation_notes: Optional[str] = None
    breakpilot_control_ids: Optional[List[str]] = None
    coverage_level: Optional[str] = None
    evidence_description: Optional[str] = None

class SoAEntryResponse(SoAEntryBase):
    """Response schema for SoA Entry."""
    id: str
    evidence_ids: Optional[List[str]] = None
    reviewed_by: Optional[str] = None
    reviewed_at: Optional[datetime] = None
    approved_by: Optional[str] = None
    approved_at: Optional[datetime] = None
    version: str
    created_at: datetime
    updated_at: datetime
    model_config = ConfigDict(from_attributes=True)

class SoAListResponse(BaseModel):
    """List response for SoA."""
    entries: List[SoAEntryResponse]
    total: int
    applicable_count: int
    not_applicable_count: int
    implemented_count: int
    planned_count: int

class SoAApproveRequest(BaseModel):
    """Request to approve SoA entry."""
    reviewed_by: str
    approved_by: str
# --- Audit Findings (Major/Minor/OFI) ---
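The Base/Create/Update/Response layering above follows the usual pydantic v2 PATCH pattern: an Update model with all-`Optional` fields, dumped with `exclude_unset=True` so only fields the client actually sent are applied. A minimal sketch, using a trimmed copy of `ISMSPolicyUpdate` (field set reduced for brevity; not the full schema):

```python
# Sketch: partial-update semantics of the *Update schemas above (pydantic v2).
from typing import Optional
from pydantic import BaseModel

class ISMSPolicyUpdate(BaseModel):  # trimmed copy of the schema above
    title: Optional[str] = None
    policy_text: Optional[str] = None

# Client sent only "title"; policy_text was never set.
patch = ISMSPolicyUpdate.model_validate({"title": "Access Control Policy v2"})
changes = patch.model_dump(exclude_unset=True)
# Only explicitly-sent fields survive, so unset Optionals never overwrite data.
assert changes == {"title": "Access Control Policy v2"}
```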


@@ -0,0 +1,52 @@
"""
Regulation Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
# ============================================================================
# Regulation Schemas
# ============================================================================
class RegulationBase(BaseModel):
    code: str
    name: str
    full_name: Optional[str] = None
    regulation_type: str
    source_url: Optional[str] = None
    local_pdf_path: Optional[str] = None
    effective_date: Optional[date] = None
    description: Optional[str] = None
    is_active: bool = True

class RegulationCreate(RegulationBase):
    pass

class RegulationResponse(RegulationBase):
    id: str
    created_at: datetime
    updated_at: datetime
    requirement_count: Optional[int] = None
    model_config = ConfigDict(from_attributes=True)

class RegulationListResponse(BaseModel):
    regulations: List[RegulationResponse]
    total: int
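`model_config = ConfigDict(from_attributes=True)` on the Response models is what lets them be built directly from SQLAlchemy rows. A minimal sketch (a plain object stands in for the ORM `Regulation` instance; trimmed field set):

```python
# Sketch: from_attributes lets model_validate read plain attributes (pydantic v2).
from pydantic import BaseModel, ConfigDict

class RegulationResponse(BaseModel):  # trimmed copy of the schema above
    id: str
    code: str
    model_config = ConfigDict(from_attributes=True)

class Row:  # stand-in for a SQLAlchemy-mapped Regulation row
    id = "reg-1"
    code = "GDPR"

resp = RegulationResponse.model_validate(Row())
assert resp.code == "GDPR"
```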


@@ -0,0 +1,53 @@
"""
Report generation Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
# ============================================================================
# Report Generation Schemas (Phase 3 - Sprint 3)
# ============================================================================
class GenerateReportRequest(BaseModel):
    """Request to generate an audit report."""
    session_id: str
    report_type: str = "full"  # "full", "summary", "non_compliant_only"
    include_evidence: bool = True
    include_signatures: bool = True
    language: str = "de"  # "de" or "en"

class ReportGenerationResponse(BaseModel):
    """Response for report generation."""
    report_id: str
    session_id: str
    status: str  # "pending", "generating", "completed", "failed"
    report_type: str
    file_path: Optional[str] = None
    file_size_bytes: Optional[int] = None
    created_at: datetime
    completed_at: Optional[datetime] = None
    error_message: Optional[str] = None

class ReportDownloadResponse(BaseModel):
    """Response for report download."""
    report_id: str
    filename: str
    mime_type: str
    file_size_bytes: int
    download_url: str


@@ -0,0 +1,80 @@
"""
Requirement Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
# ============================================================================
# Requirement Schemas
# ============================================================================
class RequirementBase(BaseModel):
    article: str
    paragraph: Optional[str] = None
    title: str
    description: Optional[str] = None
    requirement_text: Optional[str] = None
    breakpilot_interpretation: Optional[str] = None
    is_applicable: bool = True
    applicability_reason: Optional[str] = None
    priority: int = 2

class RequirementCreate(RequirementBase):
    regulation_id: str

class RequirementResponse(RequirementBase):
    id: str
    regulation_id: str
    regulation_code: Optional[str] = None
    # Implementation tracking
    implementation_status: Optional[str] = "not_started"
    implementation_details: Optional[str] = None
    code_references: Optional[List[Dict[str, Any]]] = None
    documentation_links: Optional[List[str]] = None
    # Evidence for auditors
    evidence_description: Optional[str] = None
    evidence_artifacts: Optional[List[Dict[str, Any]]] = None
    # Audit tracking
    auditor_notes: Optional[str] = None
    audit_status: Optional[str] = "pending"
    last_audit_date: Optional[datetime] = None
    last_auditor: Optional[str] = None
    # Source reference
    source_page: Optional[int] = None
    source_section: Optional[str] = None
    created_at: datetime
    updated_at: datetime
    model_config = ConfigDict(from_attributes=True)

class RequirementListResponse(BaseModel):
    requirements: List[RequirementResponse]
    total: int

class PaginatedRequirementResponse(BaseModel):
    """Paginated response for requirements - optimized for large datasets."""
    data: List[RequirementResponse]
    pagination: PaginationMeta


@@ -0,0 +1,79 @@
"""
Risk Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
# ============================================================================
# Risk Schemas
# ============================================================================
class RiskBase(BaseModel):
    risk_id: str
    title: str
    description: Optional[str] = None
    category: str
    likelihood: int = Field(ge=1, le=5)
    impact: int = Field(ge=1, le=5)
    mitigating_controls: Optional[List[str]] = None
    owner: Optional[str] = None
    treatment_plan: Optional[str] = None

class RiskCreate(RiskBase):
    pass

class RiskUpdate(BaseModel):
    title: Optional[str] = None
    description: Optional[str] = None
    category: Optional[str] = None
    likelihood: Optional[int] = Field(default=None, ge=1, le=5)
    impact: Optional[int] = Field(default=None, ge=1, le=5)
    residual_likelihood: Optional[int] = Field(default=None, ge=1, le=5)
    residual_impact: Optional[int] = Field(default=None, ge=1, le=5)
    mitigating_controls: Optional[List[str]] = None
    owner: Optional[str] = None
    status: Optional[str] = None
    treatment_plan: Optional[str] = None

class RiskResponse(RiskBase):
    id: str
    inherent_risk: str
    residual_likelihood: Optional[int] = None
    residual_impact: Optional[int] = None
    residual_risk: Optional[str] = None
    status: str
    identified_date: Optional[date] = None
    review_date: Optional[date] = None
    last_assessed_at: Optional[datetime] = None
    created_at: datetime
    updated_at: datetime
    model_config = ConfigDict(from_attributes=True)

class RiskListResponse(BaseModel):
    risks: List[RiskResponse]
    total: int

class RiskMatrixResponse(BaseModel):
    """Risk matrix data for visualization."""
    matrix: Dict[str, Dict[str, List[str]]]  # likelihood -> impact -> risk_ids
    risks: List[RiskResponse]
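The `Field(ge=1, le=5)` bounds on `likelihood`/`impact` reject out-of-range scores at the API boundary before they reach the repository. A quick sketch with a trimmed copy of `RiskBase` (pydantic v2; values are illustrative):

```python
# Sketch: 1-5 bounds validation on risk scores, as declared in RiskBase above.
from pydantic import BaseModel, Field, ValidationError

class RiskBase(BaseModel):  # trimmed copy of the schema above
    risk_id: str
    title: str
    category: str
    likelihood: int = Field(ge=1, le=5)
    impact: int = Field(ge=1, le=5)

try:
    RiskBase(risk_id="R-1", title="x", category="technical", likelihood=6, impact=3)
    errs = []
except ValidationError as e:
    errs = e.errors()

# Exactly the out-of-range field is reported.
assert errs[0]["loc"] == ("likelihood",)
```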


@@ -0,0 +1,121 @@
"""
Service Module Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
# ============================================================================
# Service Module Schemas (Sprint 3)
# ============================================================================
class ServiceModuleBase(BaseModel):
    """Base schema for service modules."""
    name: str
    display_name: str
    description: Optional[str] = None
    service_type: str
    port: Optional[int] = None
    technology_stack: Optional[List[str]] = None
    repository_path: Optional[str] = None
    docker_image: Optional[str] = None
    data_categories: Optional[List[str]] = None
    processes_pii: bool = False
    processes_health_data: bool = False
    ai_components: bool = False
    criticality: str = "medium"
    owner_team: Optional[str] = None
    owner_contact: Optional[str] = None

class ServiceModuleCreate(ServiceModuleBase):
    """Schema for creating a service module."""
    pass

class ServiceModuleResponse(ServiceModuleBase):
    """Response schema for service module."""
    id: str
    is_active: bool
    compliance_score: Optional[float] = None
    last_compliance_check: Optional[datetime] = None
    created_at: datetime
    updated_at: datetime
    regulation_count: Optional[int] = None
    risk_count: Optional[int] = None
    model_config = ConfigDict(from_attributes=True)

class ServiceModuleListResponse(BaseModel):
    """List response for service modules."""
    modules: List[ServiceModuleResponse]
    total: int

class ServiceModuleDetailResponse(ServiceModuleResponse):
    """Detailed response including regulations and risks."""
    regulations: Optional[List[Dict[str, Any]]] = None
    risks: Optional[List[Dict[str, Any]]] = None

class ModuleRegulationMappingBase(BaseModel):
    """Base schema for module-regulation mapping."""
    module_id: str
    regulation_id: str
    relevance_level: str = "medium"
    notes: Optional[str] = None
    applicable_articles: Optional[List[str]] = None

class ModuleRegulationMappingCreate(ModuleRegulationMappingBase):
    """Schema for creating a module-regulation mapping."""
    pass

class ModuleRegulationMappingResponse(ModuleRegulationMappingBase):
    """Response schema for module-regulation mapping."""
    id: str
    module_name: Optional[str] = None
    regulation_code: Optional[str] = None
    regulation_name: Optional[str] = None
    created_at: datetime
    model_config = ConfigDict(from_attributes=True)

class ModuleSeedRequest(BaseModel):
    """Request to seed service modules."""
    force: bool = False

class ModuleSeedResponse(BaseModel):
    """Response from seeding service modules."""
    success: bool
    message: str
    modules_created: int
    mappings_created: int

class ModuleComplianceOverview(BaseModel):
    """Overview of compliance status for all modules."""
    total_modules: int
    modules_by_type: Dict[str, int]
    modules_by_criticality: Dict[str, int]
    modules_processing_pii: int
    modules_with_ai: int
    average_compliance_score: Optional[float] = None
    regulations_coverage: Dict[str, int]  # regulation_code -> module_count


@@ -0,0 +1,71 @@
"""
TOM (Technisch-Organisatorische Maßnahmen) Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
# ============================================================================
# TOM — Technisch-Organisatorische Massnahmen (Art. 32 DSGVO)
# ============================================================================
class TOMStateResponse(BaseModel):
    tenant_id: str
    state: Dict[str, Any] = {}
    version: int = 0
    last_modified: Optional[datetime] = None
    is_new: bool = False

class TOMMeasureResponse(BaseModel):
    id: str
    tenant_id: str
    control_id: str
    name: str
    description: Optional[str] = None
    category: str
    type: str
    applicability: str = "REQUIRED"
    applicability_reason: Optional[str] = None
    implementation_status: str = "NOT_IMPLEMENTED"
    responsible_person: Optional[str] = None
    responsible_department: Optional[str] = None
    implementation_date: Optional[datetime] = None
    review_date: Optional[datetime] = None
    review_frequency: Optional[str] = None
    priority: Optional[str] = None
    complexity: Optional[str] = None
    linked_evidence: List[Any] = []
    evidence_gaps: List[Any] = []
    related_controls: Dict[str, Any] = {}
    verified_at: Optional[datetime] = None
    verified_by: Optional[str] = None
    effectiveness_rating: Optional[str] = None
    created_by: Optional[str] = None
    created_at: Optional[datetime] = None
    updated_at: Optional[datetime] = None
    model_config = ConfigDict(from_attributes=True)

class TOMStatsResponse(BaseModel):
    total: int = 0
    by_status: Dict[str, int] = {}
    by_category: Dict[str, int] = {}
    overdue_review_count: int = 0
    implemented: int = 0
    partial: int = 0
    not_implemented: int = 0
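The mutable defaults in these models (`state: Dict[str, Any] = {}`, `linked_evidence: List[Any] = []`) are safe in pydantic, which deep-copies field defaults per instance, unlike plain Python class attributes or stdlib dataclasses. A minimal sketch with a trimmed copy of `TOMStateResponse`:

```python
# Sketch: pydantic copies mutable field defaults per instance (v2 behavior).
from typing import Any, Dict
from pydantic import BaseModel

class TOMStateResponse(BaseModel):  # trimmed copy of the schema above
    tenant_id: str
    state: Dict[str, Any] = {}
    version: int = 0

a = TOMStateResponse(tenant_id="t1")
b = TOMStateResponse(tenant_id="t2")
a.state["k"] = "v"
# Each instance got its own dict; b's default is untouched.
assert b.state == {}
```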


@@ -0,0 +1,168 @@
"""
VVT (Verzeichnis von Verarbeitungstätigkeiten) Pydantic schemas — extracted from compliance/api/schemas.py.
Phase 1 Step 3: the monolithic ``compliance.api.schemas`` module is being
split per domain under ``compliance.schemas``. This module is re-exported
from ``compliance.api.schemas`` for backwards compatibility.
"""
from datetime import datetime, date
from typing import Optional, List, Any, Dict
from pydantic import BaseModel, ConfigDict, Field
from compliance.schemas.common import (
PaginationMeta, RegulationType, ControlType, ControlDomain,
ControlStatus, RiskLevel, EvidenceStatus,
)
# ============================================================================
# VVT Schemas — Verzeichnis von Verarbeitungstaetigkeiten (Art. 30 DSGVO)
# ============================================================================
class VVTOrganizationUpdate(BaseModel):
    organization_name: Optional[str] = None
    industry: Optional[str] = None
    locations: Optional[List[str]] = None
    employee_count: Optional[int] = None
    dpo_name: Optional[str] = None
    dpo_contact: Optional[str] = None
    vvt_version: Optional[str] = None
    last_review_date: Optional[date] = None
    next_review_date: Optional[date] = None
    review_interval: Optional[str] = None

class VVTOrganizationResponse(BaseModel):
    id: str
    organization_name: str
    industry: Optional[str] = None
    locations: List[Any] = []
    employee_count: Optional[int] = None
    dpo_name: Optional[str] = None
    dpo_contact: Optional[str] = None
    vvt_version: str = '1.0'
    last_review_date: Optional[date] = None
    next_review_date: Optional[date] = None
    review_interval: str = 'annual'
    created_at: datetime
    updated_at: Optional[datetime] = None
    model_config = ConfigDict(from_attributes=True)

class VVTActivityCreate(BaseModel):
    vvt_id: str
    name: str
    description: Optional[str] = None
    purposes: List[str] = []
    legal_bases: List[str] = []
    data_subject_categories: List[str] = []
    personal_data_categories: List[str] = []
    recipient_categories: List[str] = []
    third_country_transfers: List[Any] = []
    retention_period: Dict[str, Any] = {}
    tom_description: Optional[str] = None
    business_function: Optional[str] = None
    systems: List[str] = []
    deployment_model: Optional[str] = None
    data_sources: List[Any] = []
    data_flows: List[Any] = []
    protection_level: str = 'MEDIUM'
    dpia_required: bool = False
    structured_toms: Dict[str, Any] = {}
    status: str = 'DRAFT'
    responsible: Optional[str] = None
    owner: Optional[str] = None
    last_reviewed_at: Optional[datetime] = None
    next_review_at: Optional[datetime] = None
    created_by: Optional[str] = None
    dsfa_id: Optional[str] = None

class VVTActivityUpdate(BaseModel):
    name: Optional[str] = None
    description: Optional[str] = None
    purposes: Optional[List[str]] = None
    legal_bases: Optional[List[str]] = None
    data_subject_categories: Optional[List[str]] = None
    personal_data_categories: Optional[List[str]] = None
    recipient_categories: Optional[List[str]] = None
    third_country_transfers: Optional[List[Any]] = None
    retention_period: Optional[Dict[str, Any]] = None
    tom_description: Optional[str] = None
    business_function: Optional[str] = None
    systems: Optional[List[str]] = None
    deployment_model: Optional[str] = None
    data_sources: Optional[List[Any]] = None
    data_flows: Optional[List[Any]] = None
    protection_level: Optional[str] = None
    dpia_required: Optional[bool] = None
    structured_toms: Optional[Dict[str, Any]] = None
    status: Optional[str] = None
    responsible: Optional[str] = None
    owner: Optional[str] = None
    last_reviewed_at: Optional[datetime] = None
    next_review_at: Optional[datetime] = None
    created_by: Optional[str] = None
    dsfa_id: Optional[str] = None

class VVTActivityResponse(BaseModel):
    id: str
    vvt_id: str
    name: str
    description: Optional[str] = None
    purposes: List[Any] = []
    legal_bases: List[Any] = []
    data_subject_categories: List[Any] = []
    personal_data_categories: List[Any] = []
    recipient_categories: List[Any] = []
    third_country_transfers: List[Any] = []
    retention_period: Dict[str, Any] = {}
    tom_description: Optional[str] = None
    business_function: Optional[str] = None
    systems: List[Any] = []
    deployment_model: Optional[str] = None
    data_sources: List[Any] = []
    data_flows: List[Any] = []
    protection_level: str = 'MEDIUM'
    dpia_required: bool = False
    structured_toms: Dict[str, Any] = {}
    status: str = 'DRAFT'
    responsible: Optional[str] = None
    owner: Optional[str] = None
    last_reviewed_at: Optional[datetime] = None
    next_review_at: Optional[datetime] = None
    created_by: Optional[str] = None
    dsfa_id: Optional[str] = None
    created_at: datetime
    updated_at: Optional[datetime] = None
    model_config = ConfigDict(from_attributes=True)

class VVTStatsResponse(BaseModel):
    total: int
    by_status: Dict[str, int]
    by_business_function: Dict[str, int]
    dpia_required_count: int
    third_country_count: int
    draft_count: int
    approved_count: int
    overdue_review_count: int = 0

class VVTAuditLogEntry(BaseModel):
    id: str
    action: str
    entity_type: str
    entity_id: Optional[str] = None
    changed_by: Optional[str] = None
    old_values: Optional[Dict[str, Any]] = None
    new_values: Optional[Dict[str, Any]] = None
    created_at: datetime
    model_config = ConfigDict(from_attributes=True)


@@ -16,7 +16,7 @@ Uses reportlab for PDF generation (lightweight, no external dependencies).
 import io
 import logging
-from datetime import datetime
+from datetime import datetime, timezone
 from typing import Dict, List, Any, Optional, Tuple
 from sqlalchemy.orm import Session
@@ -255,7 +255,7 @@ class AuditPDFGenerator:
         doc.build(story)
         # Generate filename
-        date_str = datetime.utcnow().strftime('%Y%m%d')
+        date_str = datetime.now(timezone.utc).strftime('%Y%m%d')
         filename = f"audit_report_{session.name.replace(' ', '_')}_{date_str}.pdf"
         return buffer.getvalue(), filename
@@ -429,7 +429,7 @@ class AuditPDFGenerator:
         story.append(Spacer(1, 30*mm))
         gen_label = 'Generiert am' if language == 'de' else 'Generated on'
         story.append(Paragraph(
-            f"{gen_label}: {datetime.utcnow().strftime('%d.%m.%Y %H:%M')} UTC",
+            f"{gen_label}: {datetime.now(timezone.utc).strftime('%d.%m.%Y %H:%M')} UTC",
             self.styles['Footer']
         ))
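The `datetime.utcnow()` → `datetime.now(timezone.utc)` change that runs through these diffs is not cosmetic: `utcnow()` (deprecated since Python 3.12) returns a *naive* datetime, while `now(timezone.utc)` returns a timezone-aware one, which also affects how timestamps serialize. A minimal sketch:

```python
# Sketch: naive vs aware UTC timestamps, the behavior behind this migration.
from datetime import datetime, timezone

naive = datetime.utcnow()            # naive: tzinfo is None (deprecated in 3.12+)
aware = datetime.now(timezone.utc)   # aware: tzinfo carries the UTC offset

assert naive.tzinfo is None
assert aware.tzinfo is timezone.utc
assert aware.isoformat().endswith("+00:00")  # naive.isoformat() has no offset
```

Mixing the two is also hazardous: comparing or subtracting a naive and an aware datetime raises `TypeError`, so a codebase is best migrated wholesale, as these commits do.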


@@ -11,7 +11,7 @@ Sprint 6: CI/CD Evidence Collection (2026-01-18)
 """
 import logging
-from datetime import datetime
+from datetime import datetime, timezone
 from typing import Dict, List, Optional
 from dataclasses import dataclass
 from enum import Enum
@@ -140,7 +140,7 @@ class AutoRiskUpdater:
         if new_status != old_status:
             control.status = ControlStatusEnum(new_status)
             control.status_notes = self._generate_status_notes(scan_result)
-            control.updated_at = datetime.utcnow()
+            control.updated_at = datetime.now(timezone.utc)
             control_updated = True
             logger.info(f"Control {scan_result.control_id} status changed: {old_status} -> {new_status}")
@@ -225,7 +225,7 @@ class AutoRiskUpdater:
             source="ci_pipeline",
             ci_job_id=scan_result.ci_job_id,
             status=EvidenceStatusEnum.VALID,
-            valid_from=datetime.utcnow(),
+            valid_from=datetime.now(timezone.utc),
             collected_at=scan_result.timestamp,
         )
@@ -298,8 +298,8 @@ class AutoRiskUpdater:
             risk_updated = True
         if risk_updated:
-            risk.last_assessed_at = datetime.utcnow()
-            risk.updated_at = datetime.utcnow()
+            risk.last_assessed_at = datetime.now(timezone.utc)
+            risk.updated_at = datetime.now(timezone.utc)
             affected_risks.append(risk.risk_id)
             logger.info(f"Updated risk {risk.risk_id} due to control {control.control_id} status change")
@@ -354,7 +354,7 @@ class AutoRiskUpdater:
         try:
             ts = datetime.fromisoformat(timestamp.replace('Z', '+00:00'))
         except (ValueError, AttributeError):
-            ts = datetime.utcnow()
+            ts = datetime.now(timezone.utc)
         # Determine scan type from evidence_type
         scan_type = ScanType.SAST  # Default

@@ -16,7 +16,7 @@ import os
 import shutil
 import tempfile
 import zipfile
-from datetime import datetime, date
+from datetime import datetime, date, timezone
 from pathlib import Path
 from typing import Dict, List, Optional, Any
@@ -98,7 +98,7 @@ class AuditExportGenerator:
         export_record.file_hash = file_hash
         export_record.file_size_bytes = file_size
         export_record.status = ExportStatusEnum.COMPLETED
-        export_record.completed_at = datetime.utcnow()
+        export_record.completed_at = datetime.now(timezone.utc)
         # Calculate statistics
         stats = self._calculate_statistics(


@@ -11,7 +11,7 @@ Similar pattern to edu-search and zeugnisse-crawler.
 import logging
 import re
-from datetime import datetime
+from datetime import datetime, timezone
 from typing import Dict, List, Any, Optional
 from enum import Enum
@@ -198,7 +198,7 @@ class RegulationScraperService:
     async def scrape_all(self) -> Dict[str, Any]:
         """Scrape all known regulation sources."""
         self.status = ScraperStatus.RUNNING
-        self.stats["last_run"] = datetime.utcnow().isoformat()
+        self.stats["last_run"] = datetime.now(timezone.utc).isoformat()
         results = {
             "success": [],

@@ -11,7 +11,7 @@ Reports include:
 """
 import logging
-from datetime import datetime, date, timedelta
+from datetime import datetime, date, timedelta, timezone
 from typing import Dict, List, Any, Optional
 from enum import Enum
@@ -75,7 +75,7 @@ class ComplianceReportGenerator:
         report = {
             "report_metadata": {
-                "generated_at": datetime.utcnow().isoformat(),
+                "generated_at": datetime.now(timezone.utc).isoformat(),
                 "period": period.value,
                 "as_of_date": as_of_date.isoformat(),
                 "date_range_start": date_range["start"].isoformat(),
@@ -415,7 +415,7 @@ class ComplianceReportGenerator:
         evidence_stats = self.evidence_repo.get_statistics()
         return {
-            "generated_at": datetime.utcnow().isoformat(),
+            "generated_at": datetime.now(timezone.utc).isoformat(),
             "compliance_score": stats.get("compliance_score", 0),
             "controls": {
                 "total": stats.get("total", 0),


@@ -8,7 +8,7 @@ Run with: pytest backend/compliance/tests/test_audit_routes.py -v
 import pytest
 import hashlib
-from datetime import datetime
+from datetime import datetime, timezone
 from unittest.mock import MagicMock
 from uuid import uuid4
@@ -78,7 +78,7 @@ def sample_session():
         completed_items=0,
         compliant_count=0,
         non_compliant_count=0,
-        created_at=datetime.utcnow(),
+        created_at=datetime.now(timezone.utc),
     )
@@ -94,7 +94,7 @@ def sample_signoff(sample_session, sample_requirement):
         signature_hash=None,
         signed_at=None,
         signed_by=None,
-        created_at=datetime.utcnow(),
+        created_at=datetime.now(timezone.utc),
     )
@@ -214,7 +214,7 @@ class TestAuditSessionLifecycle:
         assert sample_session.status == AuditSessionStatusEnum.DRAFT
         sample_session.status = AuditSessionStatusEnum.IN_PROGRESS
-        sample_session.started_at = datetime.utcnow()
+        sample_session.started_at = datetime.now(timezone.utc)
         assert sample_session.status == AuditSessionStatusEnum.IN_PROGRESS
         assert sample_session.started_at is not None
@@ -231,7 +231,7 @@ class TestAuditSessionLifecycle:
         sample_session.status = AuditSessionStatusEnum.IN_PROGRESS
         sample_session.status = AuditSessionStatusEnum.COMPLETED
-        sample_session.completed_at = datetime.utcnow()
+        sample_session.completed_at = datetime.now(timezone.utc)
         assert sample_session.status == AuditSessionStatusEnum.COMPLETED
         assert sample_session.completed_at is not None
@@ -353,7 +353,7 @@ class TestSignOff:
     def test_signoff_with_signature_creates_hash(self, sample_session, sample_requirement):
         """Signing off with signature should create SHA-256 hash."""
         result = AuditResultEnum.COMPLIANT
-        timestamp = datetime.utcnow().isoformat()
+        timestamp = datetime.now(timezone.utc).isoformat()
         data = f"{result.value}|{sample_requirement.id}|{sample_session.auditor_name}|{timestamp}"
         signature_hash = hashlib.sha256(data.encode()).hexdigest()
@@ -382,7 +382,7 @@ class TestSignOff:
         # First sign-off should trigger auto-start
         sample_session.status = AuditSessionStatusEnum.IN_PROGRESS
-        sample_session.started_at = datetime.utcnow()
+        sample_session.started_at = datetime.now(timezone.utc)
         assert sample_session.status == AuditSessionStatusEnum.IN_PROGRESS
@@ -390,7 +390,7 @@ class TestSignOff:
         """Updating an existing sign-off should work."""
         sample_signoff.result = AuditResultEnum.NON_COMPLIANT
         sample_signoff.notes = "Updated: needs improvement"
-        sample_signoff.updated_at = datetime.utcnow()
+        sample_signoff.updated_at = datetime.now(timezone.utc)
         assert sample_signoff.result == AuditResultEnum.NON_COMPLIANT
         assert "Updated" in sample_signoff.notes
@@ -423,7 +423,7 @@ class TestGetSignOff:
         # With signature
         sample_signoff.signature_hash = "abc123"
-        sample_signoff.signed_at = datetime.utcnow()
+        sample_signoff.signed_at = datetime.now(timezone.utc)
         sample_signoff.signed_by = "Test Auditor"
         assert sample_signoff.signature_hash == "abc123"


@@ -4,7 +4,7 @@ Tests for the AutoRiskUpdater Service.
Sprint 6: CI/CD Evidence Collection & Automatic Risk Updates (2026-01-18)
"""
from datetime import datetime
from datetime import datetime, timezone
from unittest.mock import MagicMock
from ..services.auto_risk_updater import (
@@ -188,7 +188,7 @@ class TestGenerateAlerts:
scan_result = ScanResult(
scan_type=ScanType.DEPENDENCY,
tool="Trivy",
timestamp=datetime.utcnow(),
timestamp=datetime.now(timezone.utc),
commit_sha="abc123",
branch="main",
control_id="SDLC-002",
@@ -209,7 +209,7 @@ class TestGenerateAlerts:
scan_result = ScanResult(
scan_type=ScanType.SAST,
tool="Semgrep",
-timestamp=datetime.utcnow(),
+timestamp=datetime.now(timezone.utc),
commit_sha="def456",
branch="main",
control_id="SDLC-001",
@@ -228,7 +228,7 @@ class TestGenerateAlerts:
scan_result = ScanResult(
scan_type=ScanType.CONTAINER,
tool="Trivy",
-timestamp=datetime.utcnow(),
+timestamp=datetime.now(timezone.utc),
commit_sha="ghi789",
branch="main",
control_id="SDLC-006",
@@ -247,7 +247,7 @@ class TestGenerateAlerts:
scan_result = ScanResult(
scan_type=ScanType.SAST,
tool="Semgrep",
-timestamp=datetime.utcnow(),
+timestamp=datetime.now(timezone.utc),
commit_sha="jkl012",
branch="main",
control_id="SDLC-001",
@@ -369,7 +369,7 @@ class TestScanResult:
result = ScanResult(
scan_type=ScanType.DEPENDENCY,
tool="Trivy",
-timestamp=datetime.utcnow(),
+timestamp=datetime.now(timezone.utc),
commit_sha="xyz789",
branch="develop",
control_id="SDLC-002",


@@ -8,7 +8,7 @@ Run with: pytest compliance/tests/test_compliance_routes.py -v
"""
import pytest
-from datetime import datetime
+from datetime import datetime, timezone
from unittest.mock import MagicMock
from uuid import uuid4
@@ -41,8 +41,8 @@ def sample_regulation():
name="Datenschutz-Grundverordnung",
full_name="Verordnung (EU) 2016/679",
is_active=True,
-created_at=datetime.utcnow(),
-updated_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
+updated_at=datetime.now(timezone.utc),
)
@@ -57,8 +57,8 @@ def sample_requirement(sample_regulation):
description="Personenbezogene Daten duerfen nur verarbeitet werden, wenn eine Rechtsgrundlage vorliegt.",
priority=4,
is_applicable=True,
-created_at=datetime.utcnow(),
-updated_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
+updated_at=datetime.now(timezone.utc),
)
@@ -74,8 +74,8 @@ def sample_ai_system():
classification=AIClassificationEnum.UNCLASSIFIED,
status=AISystemStatusEnum.DRAFT,
obligations=[],
-created_at=datetime.utcnow(),
-updated_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
+updated_at=datetime.now(timezone.utc),
)
@@ -96,8 +96,8 @@ class TestCreateRequirement:
description="Geeignete technische Massnahmen",
priority=3,
is_applicable=True,
-created_at=datetime.utcnow(),
-updated_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
+updated_at=datetime.now(timezone.utc),
)
assert req.regulation_id == sample_regulation.id
@@ -196,7 +196,7 @@ class TestUpdateRequirement:
def test_update_audit_status_sets_audit_date(self, sample_requirement):
"""Updating audit_status should set last_audit_date."""
sample_requirement.audit_status = "compliant"
-sample_requirement.last_audit_date = datetime.utcnow()
+sample_requirement.last_audit_date = datetime.now(timezone.utc)
assert sample_requirement.audit_status == "compliant"
assert sample_requirement.last_audit_date is not None
@@ -287,7 +287,7 @@ class TestAISystemCRUD:
def test_update_ai_system_with_assessment(self, sample_ai_system):
"""After assessment, system should have assessment_date and result."""
-sample_ai_system.assessment_date = datetime.utcnow()
+sample_ai_system.assessment_date = datetime.now(timezone.utc)
sample_ai_system.assessment_result = {
"overall_risk": "high",
"risk_factors": [{"factor": "education sector", "severity": "high"}],


@@ -15,7 +15,7 @@ Run with: pytest backend/compliance/tests/test_isms_routes.py -v
"""
import pytest
-from datetime import datetime, date
+from datetime import datetime, date, timezone
from unittest.mock import MagicMock
from uuid import uuid4
@@ -56,7 +56,7 @@ def sample_scope():
status=ApprovalStatusEnum.DRAFT,
version="1.0",
created_by="admin@breakpilot.de",
-created_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
)
@@ -65,7 +65,7 @@ def sample_approved_scope(sample_scope):
"""Create an approved ISMS scope for testing."""
sample_scope.status = ApprovalStatusEnum.APPROVED
sample_scope.approved_by = "ceo@breakpilot.de"
-sample_scope.approved_at = datetime.utcnow()
+sample_scope.approved_at = datetime.now(timezone.utc)
sample_scope.effective_date = date.today()
sample_scope.review_date = date(date.today().year + 1, date.today().month, date.today().day)
sample_scope.approval_signature = "sha256_signature_hash"
@@ -88,7 +88,7 @@ def sample_policy():
authored_by="iso@breakpilot.de",
status=ApprovalStatusEnum.DRAFT,
version="1.0",
-created_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
)
@@ -116,7 +116,7 @@ def sample_objective():
related_controls=["OPS-003"],
status="active",
progress_percentage=0.0,
-created_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
)
@@ -136,7 +136,7 @@ def sample_soa_entry():
coverage_level="full",
evidence_description="ISMS Policy v2.0, signed by CEO",
version="1.0",
-created_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
)
@@ -158,7 +158,7 @@ def sample_finding():
identified_date=date.today(),
due_date=date(2026, 3, 31),
status=FindingStatusEnum.OPEN,
-created_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
)
@@ -178,7 +178,7 @@ def sample_major_finding():
identified_date=date.today(),
due_date=date(2026, 2, 28),
status=FindingStatusEnum.OPEN,
-created_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
)
@@ -198,7 +198,7 @@ def sample_capa(sample_finding):
planned_completion=date(2026, 2, 15),
effectiveness_criteria="Document approved and distributed to audit team",
status="planned",
-created_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
)
@@ -219,7 +219,7 @@ def sample_management_review():
{"name": "ISO", "role": "ISMS Manager"},
],
status="draft",
-created_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
)
@@ -239,7 +239,7 @@ def sample_internal_audit():
lead_auditor="internal.auditor@breakpilot.de",
audit_team=["internal.auditor@breakpilot.de", "qa@breakpilot.de"],
status="planned",
-created_at=datetime.utcnow(),
+created_at=datetime.now(timezone.utc),
)
@@ -502,7 +502,7 @@ class TestISMSReadinessCheck:
"""Readiness check should identify potential major findings."""
check = ISMSReadinessCheckDB(
id=str(uuid4()),
-check_date=datetime.utcnow(),
+check_date=datetime.now(timezone.utc),
triggered_by="admin@breakpilot.de",
overall_status="not_ready",
certification_possible=False,
@@ -532,7 +532,7 @@ class TestISMSReadinessCheck:
"""Readiness check should show status for each ISO chapter."""
check = ISMSReadinessCheckDB(
id=str(uuid4()),
-check_date=datetime.utcnow(),
+check_date=datetime.now(timezone.utc),
triggered_by="admin@breakpilot.de",
overall_status="ready",
certification_possible=True,
@@ -606,7 +606,7 @@ class TestAuditTrail:
entity_name="ISMS Scope v1.0",
action="approve",
performed_by="ceo@breakpilot.de",
-performed_at=datetime.utcnow(),
+performed_at=datetime.now(timezone.utc),
checksum="sha256_hash",
)
@@ -630,7 +630,7 @@ class TestAuditTrail:
new_value="approved",
change_summary="Policy approved by CEO",
performed_by="ceo@breakpilot.de",
-performed_at=datetime.utcnow(),
+performed_at=datetime.now(timezone.utc),
checksum="sha256_hash",
)


@@ -5,7 +5,7 @@ Kommuniziert mit dem Consent Management Service für GDPR-Compliance
import httpx
import jwt
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
from typing import Optional, List, Dict, Any
from dataclasses import dataclass
from enum import Enum
@@ -44,8 +44,8 @@ def generate_jwt_token(
"user_id": user_id,
"email": email,
"role": role,
-"exp": datetime.utcnow() + timedelta(hours=expires_hours),
-"iat": datetime.utcnow(),
+"exp": datetime.now(timezone.utc) + timedelta(hours=expires_hours),
+"iat": datetime.now(timezone.utc),
}
return jwt.encode(payload, JWT_SECRET, algorithm="HS256")
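For illustration, the HS256 signing that PyJWT performs under the hood can be sketched with the standard library alone. The helper names here (`b64url`, `encode_hs256`) are hypothetical, and note that on the wire `exp`/`iat` are numeric Unix timestamps; PyJWT converts aware datetime objects to these for you:

```python
import base64
import hashlib
import hmac
import json
from datetime import datetime, timedelta, timezone

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64 (RFC 7515).
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_hs256(payload: dict, secret: str) -> str:
    # header.payload.signature, each segment base64url-encoded.
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

now = datetime.now(timezone.utc)
token = encode_hs256(
    {"user_id": "u-1", "iat": int(now.timestamp()), "exp": int((now + timedelta(hours=24)).timestamp())},
    "dev-secret",  # hypothetical secret for the sketch
)
assert token.count(".") == 2
```

A different secret produces a different signature segment, which is what makes tampering detectable on verification.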

File diff suppressed because it is too large.


@@ -0,0 +1,25 @@
#!/usr/bin/env python3
"""Regenerate the OpenAPI baseline.
Run this ONLY when you have intentionally made an additive API change and want
the contract test to pick up the new baseline. Removing or renaming anything is
a breaking change and requires updating every consumer in the same change set.
Usage:
python tests/contracts/regenerate_baseline.py
"""
from __future__ import annotations
import json
import sys
from pathlib import Path
THIS_DIR = Path(__file__).parent
REPO_ROOT = THIS_DIR.parent.parent # backend-compliance/
sys.path.insert(0, str(REPO_ROOT))
from main import app # type: ignore[import-not-found] # noqa: E402
out = THIS_DIR / "openapi.baseline.json"
out.write_text(json.dumps(app.openapi(), indent=2, sort_keys=True) + "\n")
print(f"wrote {out}")
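The `indent=2, sort_keys=True` combination in the script above is what makes the checked-in baseline diff-stable: serialization becomes deterministic regardless of dict insertion order, so regenerating the file only produces a git diff when the schema actually changed. A quick demonstration:

```python
import json

# Same content, different insertion order:
a = json.dumps({"paths": {"/b": {}, "/a": {}}}, indent=2, sort_keys=True)
b = json.dumps({"paths": {"/a": {}, "/b": {}}}, indent=2, sort_keys=True)

# sort_keys normalizes key order, so the serialized bytes are identical.
assert a == b
```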


@@ -0,0 +1,102 @@
"""OpenAPI contract test.
This test pins the public HTTP contract of backend-compliance. It loads the
FastAPI app, extracts the live OpenAPI schema, and compares it against a
checked-in baseline at ``tests/contracts/openapi.baseline.json``.
Rules:
- Adding new paths/operations/fields → OK (additive change).
- Removing a path, changing a method, changing a status code, removing or
renaming a response/request field → FAIL. Such changes require updating
every consumer (admin-compliance, developer-portal, SDKs) in the same
change, then regenerating the baseline with:
python tests/contracts/regenerate_baseline.py
and explaining the contract change in the PR description.
The baseline is missing on first run — the test prints the command to create
it and skips. This is intentional: Phase 1 step 1 generates it fresh from the
current app state before any refactoring begins.
"""
from __future__ import annotations
import json
from pathlib import Path
from typing import Any
import pytest
BASELINE_PATH = Path(__file__).parent / "openapi.baseline.json"
def _load_live_schema() -> dict[str, Any]:
"""Import the FastAPI app and extract its OpenAPI schema.
Kept inside the function so that test collection does not fail if the app
has import-time side effects that aren't satisfied in the test env.
"""
from main import app # type: ignore[import-not-found]
return app.openapi()
def _collect_operations(schema: dict[str, Any]) -> dict[str, dict[str, Any]]:
"""Return a flat {"METHOD /path": operation} map for diffing."""
out: dict[str, dict[str, Any]] = {}
for path, methods in schema.get("paths", {}).items():
for method, op in methods.items():
if method.lower() in {"get", "post", "put", "patch", "delete", "options", "head"}:
out[f"{method.upper()} {path}"] = op
return out
@pytest.mark.contract
def test_openapi_no_breaking_changes() -> None:
if not BASELINE_PATH.exists():
pytest.skip(
f"Baseline missing. Run: python {Path(__file__).parent}/regenerate_baseline.py"
)
baseline = json.loads(BASELINE_PATH.read_text())
live = _load_live_schema()
baseline_ops = _collect_operations(baseline)
live_ops = _collect_operations(live)
# 1. No operation may disappear.
removed = sorted(set(baseline_ops) - set(live_ops))
assert not removed, (
f"Breaking change: {len(removed)} operation(s) removed from public API:\n "
+ "\n ".join(removed)
)
# 2. For operations that exist in both, response status codes must be a superset.
for key, baseline_op in baseline_ops.items():
live_op = live_ops[key]
baseline_codes = set((baseline_op.get("responses") or {}).keys())
live_codes = set((live_op.get("responses") or {}).keys())
missing = baseline_codes - live_codes
assert not missing, (
f"Breaking change: {key} no longer returns status code(s) {sorted(missing)}"
)
# 3. Required request-body fields may not be added (would break existing clients).
for key, baseline_op in baseline_ops.items():
live_op = live_ops[key]
base_req = _required_body_fields(baseline_op)
live_req = _required_body_fields(live_op)
new_required = live_req - base_req
assert not new_required, (
f"Breaking change: {key} added required request field(s) {sorted(new_required)}"
)
def _required_body_fields(op: dict[str, Any]) -> set[str]:
rb = op.get("requestBody") or {}
content = rb.get("content") or {}
for media in content.values():
schema = media.get("schema") or {}
return set(schema.get("required") or [])
return set()
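The removed-operation check above can be exercised on toy schemas. This standalone sketch reimplements the `_collect_operations` flattening (slightly simplified — fewer HTTP methods) to show how a deleted operation surfaces in the diff:

```python
from typing import Any

def collect_operations(schema: dict[str, Any]) -> dict[str, dict[str, Any]]:
    # Flatten {path: {method: op}} into {"METHOD /path": op} for set arithmetic.
    out: dict[str, dict[str, Any]] = {}
    for path, methods in schema.get("paths", {}).items():
        for method, op in methods.items():
            if method.lower() in {"get", "post", "put", "patch", "delete"}:
                out[f"{method.upper()} {path}"] = op
    return out

baseline = {"paths": {"/items": {"get": {}, "post": {}}}}
live = {"paths": {"/items": {"get": {}}}}  # POST was removed — a breaking change

removed = sorted(set(collect_operations(baseline)) - set(collect_operations(live)))
assert removed == ["POST /items"]
```

Set difference against the baseline keys is all the test needs: anything present in the baseline but absent live is, by definition, a removal.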


@@ -10,7 +10,7 @@ import pytest
import uuid
import os
import sys
-from datetime import datetime
+from datetime import datetime, timezone
from unittest.mock import MagicMock
from fastapi import FastAPI
@@ -51,7 +51,7 @@ _RawSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
@event.listens_for(engine, "connect")
def _register_sqlite_functions(dbapi_conn, connection_record):
"""Register PostgreSQL-compatible functions for SQLite."""
-dbapi_conn.create_function("NOW", 0, lambda: datetime.utcnow().isoformat())
+dbapi_conn.create_function("NOW", 0, lambda: datetime.now(timezone.utc).isoformat())
TENANT_ID = "default"
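The `create_function` shim in the conftest can be tried directly against the stdlib `sqlite3` module, outside SQLAlchemy — the event listener above ultimately makes the same DB-API call on each new connection:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
# SQLite has no NOW(); register a Python-side function so
# PostgreSQL-style SQL keeps working in tests.
conn.create_function("NOW", 0, lambda: datetime.now(timezone.utc).isoformat())

(value,) = conn.execute("SELECT NOW()").fetchone()
# The aware datetime serializes with an explicit UTC offset.
assert value.endswith("+00:00")
conn.close()
```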


@@ -6,7 +6,7 @@ Pattern: app.dependency_overrides[get_db] for FastAPI DI.
import uuid
import os
import sys
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
import pytest
from fastapi import FastAPI
@@ -75,7 +75,7 @@ def db_session():
def _create_dsr_in_db(db, **kwargs):
"""Helper to create a DSR directly in DB."""
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
defaults = {
"tenant_id": uuid.UUID(TENANT_ID),
"request_number": f"DSR-2026-{str(uuid.uuid4())[:6].upper()}",
@@ -241,8 +241,8 @@ class TestListDSR:
assert len(data["requests"]) == 2
def test_list_overdue_only(self, db_session):
-_create_dsr_in_db(db_session, deadline_at=datetime.utcnow() - timedelta(days=5), status="processing")
-_create_dsr_in_db(db_session, deadline_at=datetime.utcnow() + timedelta(days=20), status="processing")
+_create_dsr_in_db(db_session, deadline_at=datetime.now(timezone.utc) - timedelta(days=5), status="processing")
+_create_dsr_in_db(db_session, deadline_at=datetime.now(timezone.utc) + timedelta(days=20), status="processing")
resp = client.get("/api/compliance/dsr?overdue_only=true", headers=HEADERS)
assert resp.status_code == 200
@@ -339,7 +339,7 @@ class TestDSRStats:
_create_dsr_in_db(db_session, status="intake", request_type="access")
_create_dsr_in_db(db_session, status="processing", request_type="erasure")
_create_dsr_in_db(db_session, status="completed", request_type="access",
-completed_at=datetime.utcnow())
+completed_at=datetime.now(timezone.utc))
resp = client.get("/api/compliance/dsr/stats", headers=HEADERS)
assert resp.status_code == 200
@@ -561,9 +561,9 @@ class TestDeadlineProcessing:
def test_process_deadlines_with_overdue(self, db_session):
_create_dsr_in_db(db_session, status="processing",
-deadline_at=datetime.utcnow() - timedelta(days=5))
+deadline_at=datetime.now(timezone.utc) - timedelta(days=5))
_create_dsr_in_db(db_session, status="processing",
-deadline_at=datetime.utcnow() + timedelta(days=20))
+deadline_at=datetime.now(timezone.utc) + timedelta(days=20))
resp = client.post("/api/compliance/dsr/deadlines/process", headers=HEADERS)
assert resp.status_code == 200
@@ -609,7 +609,7 @@ class TestDSRTemplates:
subject="Bestaetigung",
body_html="<p>Test</p>",
status="published",
-published_at=datetime.utcnow(),
+published_at=datetime.now(timezone.utc),
)
db_session.add(v)
db_session.commit()
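The overdue filter these fixtures exercise reduces to an aware-datetime comparison against "now". A minimal sketch with a hypothetical `DSR` dataclass mirroring the fixture fields (not the real ORM model):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DSR:  # hypothetical stand-in for the DB row used in the tests
    deadline_at: datetime
    status: str

now = datetime.now(timezone.utc)
requests = [
    DSR(now - timedelta(days=5), "processing"),   # 5 days past deadline
    DSR(now + timedelta(days=20), "processing"),  # not yet due
]

# overdue_only=true: still open and past its deadline.
overdue = [r for r in requests if r.status == "processing" and r.deadline_at < now]
assert len(overdue) == 1
```

Because both sides of the comparison are timezone-aware, the filter is unambiguous; comparing an aware datetime against a naive one would raise `TypeError`, which is one practical payoff of this migration.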


@@ -7,7 +7,7 @@ Consent widerrufen, Statistiken.
import pytest
from unittest.mock import MagicMock, patch
-from datetime import datetime
+from datetime import datetime, timezone
import uuid
@@ -25,7 +25,7 @@ def make_catalog(tenant_id='test-tenant'):
rec.tenant_id = tenant_id
rec.selected_data_point_ids = ['dp-001', 'dp-002']
rec.custom_data_points = []
-rec.updated_at = datetime.utcnow()
+rec.updated_at = datetime.now(timezone.utc)
return rec
@@ -34,7 +34,7 @@ def make_company(tenant_id='test-tenant'):
rec.id = uuid.uuid4()
rec.tenant_id = tenant_id
rec.data = {'company_name': 'Test GmbH', 'email': 'datenschutz@test.de'}
-rec.updated_at = datetime.utcnow()
+rec.updated_at = datetime.now(timezone.utc)
return rec
@@ -47,7 +47,7 @@ def make_cookies(tenant_id='test-tenant'):
{'id': 'analytics', 'name': 'Analyse', 'isRequired': False, 'defaultEnabled': False},
]
rec.config = {'position': 'bottom', 'style': 'bar'}
-rec.updated_at = datetime.utcnow()
+rec.updated_at = datetime.now(timezone.utc)
return rec
@@ -58,13 +58,13 @@ def make_consent(tenant_id='test-tenant', user_id='user-001', data_point_id='dp-
rec.user_id = user_id
rec.data_point_id = data_point_id
rec.granted = granted
-rec.granted_at = datetime.utcnow()
+rec.granted_at = datetime.now(timezone.utc)
rec.revoked_at = None
rec.consent_version = '1.0'
rec.source = 'website'
rec.ip_address = None
rec.user_agent = None
-rec.created_at = datetime.utcnow()
+rec.created_at = datetime.now(timezone.utc)
return rec
@@ -263,7 +263,7 @@ class TestConsentDB:
user_id='user-001',
data_point_id='dp-marketing',
granted=True,
-granted_at=datetime.utcnow(),
+granted_at=datetime.now(timezone.utc),
consent_version='1.0',
source='website',
)
@@ -276,13 +276,13 @@ class TestConsentDB:
consent = make_consent()
assert consent.revoked_at is None
-consent.revoked_at = datetime.utcnow()
+consent.revoked_at = datetime.now(timezone.utc)
assert consent.revoked_at is not None
def test_cannot_revoke_already_revoked(self):
"""Should not be possible to revoke an already revoked consent."""
consent = make_consent()
-consent.revoked_at = datetime.utcnow()
+consent.revoked_at = datetime.now(timezone.utc)
# Simulate the guard logic from the route
already_revoked = consent.revoked_at is not None
@@ -315,7 +315,7 @@ class TestConsentStats:
make_consent(user_id='user-2', data_point_id='dp-1', granted=True),
]
# Revoke one
-consents[1].revoked_at = datetime.utcnow()
+consents[1].revoked_at = datetime.now(timezone.utc)
total = len(consents)
active = sum(1 for c in consents if c.granted and not c.revoked_at)
@@ -334,7 +334,7 @@ class TestConsentStats:
make_consent(user_id='user-2', granted=True),
make_consent(user_id='user-3', granted=True),
]
-consents[2].revoked_at = datetime.utcnow() # user-3 revoked
+consents[2].revoked_at = datetime.now(timezone.utc) # user-3 revoked
unique_users = len(set(c.user_id for c in consents))
users_with_active = len(set(c.user_id for c in consents if c.granted and not c.revoked_at))
@@ -501,7 +501,7 @@ class TestConsentHistoryTracking:
from compliance.db.einwilligungen_models import EinwilligungenConsentHistoryDB
consent = make_consent()
-consent.revoked_at = datetime.utcnow()
+consent.revoked_at = datetime.now(timezone.utc)
entry = EinwilligungenConsentHistoryDB(
consent_id=consent.id,
tenant_id=consent.tenant_id,
@@ -516,7 +516,7 @@ class TestConsentHistoryTracking:
entry_id = _uuid.uuid4()
consent_id = _uuid.uuid4()
-now = datetime.utcnow()
+now = datetime.now(timezone.utc)
row = {
"id": str(entry_id),

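The stats logic exercised in `TestConsentStats` is a pair of comprehensions over the granted/revoked flags. A self-contained sketch with a hypothetical `Consent` dataclass standing in for the fixture records:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Consent:  # hypothetical stand-in for the make_consent() fixture
    user_id: str
    granted: bool
    revoked_at: Optional[datetime] = None

consents = [
    Consent("user-1", True),
    Consent("user-2", True, revoked_at=datetime.now(timezone.utc)),  # revoked
    Consent("user-3", True),
]

# A consent counts as active only if granted AND never revoked.
active = sum(1 for c in consents if c.granted and not c.revoked_at)
users_with_active = len({c.user_id for c in consents if c.granted and not c.revoked_at})
assert (active, users_with_active) == (2, 2)
```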
Some files were not shown because too many files have changed in this diff.